Spring Cloud provides tools for developers to quickly build some of the common patterns in distributed systems (e.g. configuration management, service discovery, circuit breakers, intelligent routing, micro-proxy, control bus). Coordination of distributed systems leads to boilerplate patterns, and using Spring Cloud developers can quickly stand up services and applications that implement those patterns. They work well in any distributed environment, including the developer’s own laptop, bare metal data centres, and managed platforms such as Cloud Foundry.

Release Train Version: Hoxton.SR3

Supported Boot Version: 2.2.5.RELEASE

1. Features

Spring Cloud focuses on providing a good out-of-box experience for typical use cases and an extensibility mechanism to cover others.

  • Distributed/versioned configuration

  • Service registration and discovery

  • Routing

  • Service-to-service calls

  • Load balancing

  • Circuit Breakers

  • Distributed messaging

2. Release Train Versions

Table 1. Release Train Project Versions
Project Name                    Project Version

spring-cloud-build              2.2.1.RELEASE
spring-cloud-commons            2.2.1.RELEASE
spring-cloud-function           3.0.1.RELEASE
spring-cloud-stream             Horsham.SR1
spring-cloud-aws                2.2.1.RELEASE
spring-cloud-bus                2.2.0.RELEASE
spring-cloud-task               2.2.2.RELEASE
spring-cloud-config             2.2.1.RELEASE
spring-cloud-netflix            2.2.1.RELEASE
spring-cloud-cloudfoundry       2.2.0.RELEASE
spring-cloud-kubernetes         1.1.1.RELEASE
spring-cloud-openfeign          2.2.1.RELEASE
spring-cloud-consul             2.2.1.RELEASE
spring-cloud-gateway            2.2.1.RELEASE
spring-cloud-security           2.2.0.RELEASE
spring-cloud-sleuth             2.2.1.RELEASE
spring-cloud-zookeeper          2.2.0.RELEASE
spring-cloud-contract           2.2.1.RELEASE
spring-cloud-gcp                1.2.1.RELEASE
spring-cloud-vault              2.2.1.RELEASE
spring-cloud-circuitbreaker     1.0.0.RELEASE
spring-cloud-cli                2.2.1.RELEASE

3. Cloud Native Applications

Cloud Native is a style of application development that encourages easy adoption of best practices in the areas of continuous delivery and value-driven development. A related discipline is that of building 12-factor Applications, in which development practices are aligned with delivery and operations goals — for instance, by using declarative programming and management and monitoring. Spring Cloud facilitates these styles of development in a number of specific ways. The starting point is a set of features to which all components in a distributed system need easy access.

Many of those features are covered by Spring Boot, on which Spring Cloud builds. Some more features are delivered by Spring Cloud as two libraries: Spring Cloud Context and Spring Cloud Commons. Spring Cloud Context provides utilities and special services for the ApplicationContext of a Spring Cloud application (bootstrap context, encryption, refresh scope, and environment endpoints). Spring Cloud Commons is a set of abstractions and common classes used in different Spring Cloud implementations (such as Spring Cloud Netflix and Spring Cloud Consul).

If you get an exception due to "Illegal key size" and you use Sun’s JDK, you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files. Extract the files into the JDK/jre/lib/security folder of whichever version of JRE/JDK x64/x86 you use.

Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, you can find the source code and issue trackers for the project at {docslink}[github].

3.1. Spring Cloud Context: Application Context Services

Spring Boot has an opinionated view of how to build an application with Spring. For instance, it has conventional locations for common configuration files and has endpoints for common management and monitoring tasks. Spring Cloud builds on top of that and adds a few features that many components in a system would use or occasionally need.

3.1.1. The Bootstrap Application Context

A Spring Cloud application operates by creating a “bootstrap” context, which is a parent context for the main application. This context is responsible for loading configuration properties from the external sources and for decrypting properties in the local external configuration files. The two contexts share an Environment, which is the source of external properties for any Spring application. By default, bootstrap properties (not bootstrap.properties but properties that are loaded during the bootstrap phase) are added with high precedence, so they cannot be overridden by local configuration.

The bootstrap context uses a different convention for locating external configuration than the main application context. Instead of application.yml (or .properties), you can use bootstrap.yml, keeping the external configuration for bootstrap and main context nicely separate. The following listing shows an example:

Example 1. bootstrap.yml
spring:
  application:
    name: foo
  cloud:
    config:
      uri: ${SPRING_CONFIG_URI:http://localhost:8888}

If your application needs any application-specific configuration from the server, it is a good idea to set the spring.application.name (in bootstrap.yml or application.yml). For the property spring.application.name to be used as the application’s context ID, you must set it in bootstrap.[properties | yml].

If you want to retrieve specific profile configuration, you should also set spring.profiles.active in bootstrap.[properties | yml].

You can disable the bootstrap process completely by setting spring.cloud.bootstrap.enabled=false (for example, in system properties).

3.1.2. Application Context Hierarchies

If you build an application context from SpringApplication or SpringApplicationBuilder, the Bootstrap context is added as a parent to that context. It is a feature of Spring that child contexts inherit property sources and profiles from their parent, so the “main” application context contains additional property sources, compared to building the same context without Spring Cloud Config. The additional property sources are:

  • “bootstrap”: If any PropertySourceLocators are found in the bootstrap context and if they have non-empty properties, an optional CompositePropertySource appears with high priority. An example would be properties from the Spring Cloud Config Server. See “Customizing the Bootstrap Property Sources” for how to customize the contents of this property source.

  • “applicationConfig: [classpath:bootstrap.yml]” (and related files if Spring profiles are active): If you have a bootstrap.yml (or .properties), those properties are used to configure the bootstrap context. Then they get added to the child context when its parent is set. They have lower precedence than the application.yml (or .properties) and any other property sources that are added to the child as a normal part of the process of creating a Spring Boot application. See “Changing the Location of Bootstrap Properties” for how to customize the contents of these property sources.

Because of the ordering rules of property sources, the “bootstrap” entries take precedence. However, note that these do not contain any data from bootstrap.yml, which has very low precedence but can be used to set defaults.

You can extend the context hierarchy by setting the parent context of any ApplicationContext you create — for example, by using its own interface or with the SpringApplicationBuilder convenience methods (parent(), child() and sibling()). The bootstrap context is the parent of the most senior ancestor that you create yourself. Every context in the hierarchy has its own “bootstrap” (possibly empty) property source to avoid promoting values inadvertently from parents down to their descendants. If there is a config server, every context in the hierarchy can also (in principle) have a different spring.application.name and, hence, a different remote property source. Normal Spring application context behavior rules apply to property resolution: properties from a child context override those in the parent, by name and also by property source name. (If the child has a property source with the same name as the parent, the value from the parent is not included in the child).

Note that the SpringApplicationBuilder lets you share an Environment amongst the whole hierarchy, but that is not the default. Thus, sibling contexts (in particular) do not need to have the same profiles or property sources, even though they may share common values with their parent.
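The following sketch shows one way to build such a hierarchy (ParentConfiguration and ChildConfiguration are hypothetical @Configuration classes; the bootstrap context becomes the parent of the most senior context created here):

public class HierarchyApplication {

    public static void main(String[] args) {
        // The child context inherits property sources and profiles
        // from the parent context created for ParentConfiguration.
        new SpringApplicationBuilder(ParentConfiguration.class)
                .web(WebApplicationType.NONE)
                .child(ChildConfiguration.class)
                .run(args);
    }
}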

3.1.3. Changing the Location of Bootstrap Properties

The bootstrap.yml (or .properties) location can be specified by setting spring.cloud.bootstrap.name (default: bootstrap), spring.cloud.bootstrap.location (default: empty) or spring.cloud.bootstrap.additional-location (default: empty) — for example, in System properties.

Those properties behave like the spring.config.* variants with the same name. With spring.cloud.bootstrap.location the default locations are replaced and only the specified ones are used. To add locations to the list of default ones, spring.cloud.bootstrap.additional-location could be used. In fact, they are used to set up the bootstrap ApplicationContext by setting those properties in its Environment. If there is an active profile (from spring.profiles.active or through the Environment API in the context you are building), properties in that profile get loaded as well, the same as in a regular Spring Boot app — for example, from bootstrap-development.properties for a development profile.
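For example, the following command (myapp.jar and the custom location are placeholders) renames the bootstrap file and adds a search location through system properties:

$ java -Dspring.cloud.bootstrap.name=cloud \
       -Dspring.cloud.bootstrap.additional-location=classpath:/custom/ \
       -jar myapp.jar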

3.1.4. Overriding the Values of Remote Properties

The property sources that are added to your application by the bootstrap context are often “remote” (from example, from Spring Cloud Config Server). By default, they cannot be overridden locally. If you want to let your applications override the remote properties with their own system properties or config files, the remote property source has to grant it permission by setting spring.cloud.config.allowOverride=true (it does not work to set this locally). Once that flag is set, two finer-grained settings control the location of the remote properties in relation to system properties and the application’s local configuration:

  • spring.cloud.config.overrideNone=true: Override from any local property source.

  • spring.cloud.config.overrideSystemProperties=false: Only system properties, command line arguments, and environment variables (but not the local config files) should override the remote settings.
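For example, a remote property source (such as a file served by the Config Server) could grant overrides as follows; the combination of flags shown here is only illustrative:

spring:
  cloud:
    config:
      allowOverride: true
      overrideNone: false
      overrideSystemProperties: false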

3.1.5. Customizing the Bootstrap Configuration

The bootstrap context can be set to do anything you like by adding entries to /META-INF/spring.factories under a key named org.springframework.cloud.bootstrap.BootstrapConfiguration. This holds a comma-separated list of Spring @Configuration classes that are used to create the context. Any beans that you want to be available to the main application context for autowiring can be created here. There is a special contract for @Beans of type ApplicationContextInitializer. If you want to control the startup sequence, you can mark classes with the @Order annotation (the default order is last).

When adding custom BootstrapConfiguration, be careful that the classes you add are not @ComponentScanned by mistake into your “main” application context, where they might not be needed. Use a separate package name for boot configuration classes and make sure that name is not already covered by your @ComponentScan or @SpringBootApplication annotated configuration classes.

The bootstrap process ends by injecting initializers into the main SpringApplication instance (which is the normal Spring Boot startup sequence, whether it runs as a standalone application or is deployed in an application server). First, a bootstrap context is created from the classes found in spring.factories. Then, all @Beans of type ApplicationContextInitializer are added to the main SpringApplication before it is started.
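As a sketch, a custom bootstrap configuration could be registered as follows (CustomBootstrapConfiguration and MyService are hypothetical). In META-INF/spring.factories:

org.springframework.cloud.bootstrap.BootstrapConfiguration=\
com.example.CustomBootstrapConfiguration

The configuration class itself:

@Configuration
public class CustomBootstrapConfiguration {

    // Beans created here live in the bootstrap context and are available
    // for autowiring in the main application context.
    @Bean
    public MyService myService() {
        return new MyService();
    }
}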

3.1.6. Customizing the Bootstrap Property Sources

The default property source for external configuration added by the bootstrap process is the Spring Cloud Config Server, but you can add additional sources by adding beans of type PropertySourceLocator to the bootstrap context (through spring.factories). For instance, you can insert additional properties from a different server or from a database.

As an example, consider the following custom locator:

@Configuration
public class CustomPropertySourceLocator implements PropertySourceLocator {

    @Override
    public PropertySource<?> locate(Environment environment) {
        return new MapPropertySource("customProperty",
                Collections.<String, Object>singletonMap("property.from.sample.custom.source", "worked as intended"));
    }

}

The Environment that is passed in is the one for the ApplicationContext about to be created — in other words, the one for which we supply additional property sources. It already has its normal Spring Boot-provided property sources, so you can use those to locate a property source specific to this Environment (for example, by keying it on spring.application.name, as is done in the default Spring Cloud Config Server property source locator).

If you create a jar with this class in it and then add a META-INF/spring.factories containing the following setting, the customProperty PropertySource appears in any application that includes that jar on its classpath:

org.springframework.cloud.bootstrap.BootstrapConfiguration=sample.custom.CustomPropertySourceLocator

3.1.7. Logging Configuration

If you use Spring Boot to configure log settings, you should place this configuration in bootstrap.[yml | properties] if you would like it to apply to all events.

For Spring Cloud to initialize logging configuration properly, you cannot use a custom prefix. For example, using custom.logging.logpath is not recognized by Spring Cloud when initializing the logging system.

3.1.8. Environment Changes

The application listens for an EnvironmentChangeEvent and reacts to the change in a couple of standard ways (additional ApplicationListeners can be added as @Beans in the normal way). When an EnvironmentChangeEvent is observed, it has a list of key values that have changed, and the application uses those to:

  • Re-bind any @ConfigurationProperties beans in the context.

  • Set the logger levels for any properties in logging.level.*.

Note that the Spring Cloud Config Client does not, by default, poll for changes in the Environment. Generally, we would not recommend that approach for detecting changes (although you could set it up with a @Scheduled annotation). If you have a scaled-out client application, it is better to broadcast the EnvironmentChangeEvent to all the instances instead of having them polling for changes (for example, by using the Spring Cloud Bus).

The EnvironmentChangeEvent covers a large class of refresh use cases, as long as you can actually make a change to the Environment and publish the event (note that those APIs are public and part of core Spring). You can verify that the changes are bound to @ConfigurationProperties beans by visiting the /configprops endpoint (a standard Spring Boot Actuator feature). For instance, a DataSource can have its maxPoolSize changed at runtime (the default DataSource created by Spring Boot is a @ConfigurationProperties bean) and grow capacity dynamically. Re-binding @ConfigurationProperties does not cover another large class of use cases, where you need more control over the refresh and where you need a change to be atomic over the whole ApplicationContext. To address those concerns, we have @RefreshScope.
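For example, the following sketch adds a listener that logs which keys changed (the class name is hypothetical):

@Component
public class EnvironmentChangeLogger implements ApplicationListener<EnvironmentChangeEvent> {

    @Override
    public void onApplicationEvent(EnvironmentChangeEvent event) {
        // The event carries the keys of the properties that changed.
        System.out.println("Configuration changed: " + event.getKeys());
    }
}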

3.1.9. Refresh Scope

When there is a configuration change, a Spring @Bean that is marked as @RefreshScope gets special treatment. This feature addresses the problem of stateful beans that get their configuration injected only when they are initialized. For instance, if a DataSource has open connections when the database URL is changed through the Environment, you probably want the holders of those connections to be able to complete what they are doing. Then, the next time something borrows a connection from the pool, it gets one with the new URL.

Sometimes, it might even be mandatory to apply the @RefreshScope annotation on some beans that can be only initialized once. If a bean is “immutable”, you have to either annotate the bean with @RefreshScope or specify the classname under the property key: spring.cloud.refresh.extra-refreshable.

If you have a DataSource bean that is a HikariDataSource, it cannot be refreshed. HikariDataSource is the default value for spring.cloud.refresh.never-refreshable. Choose a different DataSource implementation if you need it to be refreshed.

Refresh scope beans are lazy proxies that initialize when they are used (that is, when a method is called), and the scope acts as a cache of initialized values. To force a bean to re-initialize on the next method call, you must invalidate its cache entry.

The RefreshScope is a bean in the context and has a public refreshAll() method to refresh all beans in the scope by clearing the target cache. The /refresh endpoint exposes this functionality (over HTTP or JMX). To refresh an individual bean by name, there is also a refresh(String) method.

To expose the /refresh endpoint, you need to add following configuration to your application:

management:
  endpoints:
    web:
      exposure:
        include: refresh

@RefreshScope works (technically) on a @Configuration class, but it might lead to surprising behavior. For example, it does not mean that all the @Beans defined in that class are themselves in @RefreshScope. Specifically, anything that depends on those beans cannot rely on them being updated when a refresh is initiated, unless it is itself in @RefreshScope. In that case, it is rebuilt on a refresh and its dependencies are re-injected. At that point, they are re-initialized from the refreshed @Configuration.

3.1.10. Encryption and Decryption

Spring Cloud has an Environment pre-processor for decrypting property values locally. It follows the same rules as the Spring Cloud Config Server and has the same external configuration through encrypt.*. Thus, you can use encrypted values in the form of {cipher}*, and, as long as there is a valid key, they are decrypted before the main application context gets the Environment settings. To use the encryption features in an application, you need to include Spring Security RSA in your classpath (Maven co-ordinates: org.springframework.security:spring-security-rsa), and you also need the full strength JCE extensions in your JVM.
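For example, an encrypted value can be placed in a local configuration file, as follows (the ciphertext shown is a placeholder rather than real encryptor output):

my:
  secret: '{cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ'

Note the quotes around the value: they are needed in YAML so that the braces are not interpreted as a YAML mapping.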

If you get an exception due to "Illegal key size" and you use Sun’s JDK, you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files. Extract the files into the JDK/jre/lib/security folder of whichever version of JRE/JDK x64/x86 you use.

3.1.11. Endpoints

For a Spring Boot Actuator application, some additional management endpoints are available. You can use:

  • POST to /actuator/env to update the Environment and rebind @ConfigurationProperties and log levels.

  • /actuator/refresh to reload the bootstrap context and refresh the @RefreshScope beans.

  • /actuator/restart to close the ApplicationContext and restart it (disabled by default).

  • /actuator/pause and /actuator/resume for calling the Lifecycle methods (stop() and start() on the ApplicationContext).

If you disable the /actuator/restart endpoint then the /actuator/pause and /actuator/resume endpoints will also be disabled since they are just a special case of /actuator/restart.

3.2. Spring Cloud Commons: Common Abstractions

Patterns such as service discovery, load balancing, and circuit breakers lend themselves to a common abstraction layer that can be consumed by all Spring Cloud clients, independent of the implementation (for example, discovery with Eureka or Consul).

3.2.1. The @EnableDiscoveryClient Annotation

Spring Cloud Commons provides the @EnableDiscoveryClient annotation. This looks for implementations of the DiscoveryClient and ReactiveDiscoveryClient interfaces via META-INF/spring.factories. Implementations of the discovery client add a configuration class to spring.factories under the org.springframework.cloud.client.discovery.EnableDiscoveryClient key. Examples of DiscoveryClient implementations include Spring Cloud Netflix Eureka, Spring Cloud Consul Discovery, and Spring Cloud Zookeeper Discovery.

Spring Cloud will provide both the blocking and reactive service discovery clients by default. You can disable the blocking and/or reactive clients easily by setting spring.cloud.discovery.blocking.enabled=false or spring.cloud.discovery.reactive.enabled=false. To completely disable service discovery you just need to set spring.cloud.discovery.enabled=false.

By default, implementations of DiscoveryClient auto-register the local Spring Boot server with the remote discovery server. This behavior can be disabled by setting autoRegister=false in @EnableDiscoveryClient.

@EnableDiscoveryClient is no longer required. You can put a DiscoveryClient implementation on the classpath to cause the Spring Boot application to register with the service discovery server.

Health Indicator

Commons creates a Spring Boot HealthIndicator that DiscoveryClient implementations can participate in by implementing DiscoveryHealthIndicator. To disable the composite HealthIndicator, set spring.cloud.discovery.client.composite-indicator.enabled=false. A generic HealthIndicator based on DiscoveryClient is auto-configured (DiscoveryClientHealthIndicator). To disable it, set spring.cloud.discovery.client.health-indicator.enabled=false. To disable the description field of the DiscoveryClientHealthIndicator, set spring.cloud.discovery.client.health-indicator.include-description=false. Otherwise, it can bubble up as the description of the rolled up HealthIndicator.

Ordering DiscoveryClient instances

DiscoveryClient interface extends Ordered. This is useful when using multiple discovery clients, as it allows you to define the order of the returned discovery clients, similar to how you can order the beans loaded by a Spring application. By default, the order of any DiscoveryClient is set to 0. If you want to set a different order for your custom DiscoveryClient implementations, you just need to override the getOrder() method so that it returns the value that is suitable for your setup. Apart from this, you can use properties to set the order of the DiscoveryClient implementations provided by Spring Cloud, among others ConsulDiscoveryClient, EurekaDiscoveryClient and ZookeeperDiscoveryClient. In order to do it, you just need to set the spring.cloud.{clientIdentifier}.discovery.order (or eureka.client.order for Eureka) property to the desired value.
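For example (the values are illustrative), the Consul and Eureka clients could be ordered as follows:

spring.cloud.consul.discovery.order=1
eureka.client.order=2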

SimpleDiscoveryClient

If there is no Service-Registry-backed DiscoveryClient on the classpath, a SimpleDiscoveryClient instance that uses properties to get information about services and instances is used.

The information about the available instances should be passed via properties in the following format: spring.cloud.discovery.client.simple.instances.service1[0].uri=http://s11:8080, where spring.cloud.discovery.client.simple.instances is the common prefix, service1 stands for the ID of the service in question, [0] indicates the index number of the instance (indexes start with 0), and the value of uri is the actual URI under which the instance is available.
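For example, two instances of a hypothetical service1 could be declared as follows (the host names are illustrative):

spring.cloud.discovery.client.simple.instances.service1[0].uri=http://s11:8080
spring.cloud.discovery.client.simple.instances.service1[1].uri=http://s12:8080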

3.2.2. ServiceRegistry

Commons now provides a ServiceRegistry interface that provides methods such as register(Registration) and deregister(Registration), which let you provide custom registered services. Registration is a marker interface.

The following example shows the ServiceRegistry in use:

@Configuration
@EnableDiscoveryClient(autoRegister=false)
public class MyConfiguration {
    private ServiceRegistry registry;

    public MyConfiguration(ServiceRegistry registry) {
        this.registry = registry;
    }

    // called through some external process, such as an event or a custom actuator endpoint
    public void register() {
        Registration registration = constructRegistration();
        this.registry.register(registration);
    }
}

Each ServiceRegistry implementation has its own Registration implementation.

  • ZookeeperRegistration used with ZookeeperServiceRegistry

  • EurekaRegistration used with EurekaServiceRegistry

  • ConsulRegistration used with ConsulServiceRegistry

If you are using the ServiceRegistry interface, you need to pass the correct Registration implementation for the ServiceRegistry implementation you are using.

ServiceRegistry Auto-Registration

By default, the ServiceRegistry implementation auto-registers the running service. To disable that behavior, you can:

  • Set @EnableDiscoveryClient(autoRegister=false) to permanently disable auto-registration.

  • Set spring.cloud.service-registry.auto-registration.enabled=false to disable the behavior through configuration.

ServiceRegistry Auto-Registration Events

There are two events that are fired when a service auto-registers. The first event, called InstancePreRegisteredEvent, is fired before the service is registered. The second event, called InstanceRegisteredEvent, is fired after the service is registered. You can register one or more ApplicationListeners to listen to and react to these events.

These events will not be fired if the spring.cloud.service-registry.auto-registration.enabled property is set to false.

Service Registry Actuator Endpoint

Spring Cloud Commons provides a /service-registry actuator endpoint. This endpoint relies on a Registration bean in the Spring Application Context. Calling /service-registry with GET returns the status of the Registration. Using POST to the same endpoint with a JSON body changes the status of the current Registration to the new value. The JSON body has to include the status field with the preferred value. Please see the documentation of the ServiceRegistry implementation you use for the allowed values when updating the status and the values returned for the status. For instance, Eureka’s supported statuses are UP, DOWN, OUT_OF_SERVICE, and UNKNOWN.
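For example, assuming the endpoint is exposed over HTTP and the application listens on port 8080 (both assumptions), the status could be changed as follows:

$ curl -X POST -H "Content-Type: application/json" \
    -d '{"status":"OUT_OF_SERVICE"}' \
    localhost:8080/actuator/service-registry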

3.2.3. Spring RestTemplate as a Load Balancer Client

You can configure a RestTemplate to use a Load-balancer client. To create a load-balanced RestTemplate, create a RestTemplate @Bean and use the @LoadBalanced qualifier, as the following example shows:

@Configuration
public class MyConfiguration {

    @LoadBalanced
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

public class MyClass {
    @Autowired
    private RestTemplate restTemplate;

    public String doOtherStuff() {
        String results = restTemplate.getForObject("http://stores/stores", String.class);
        return results;
    }
}

A RestTemplate bean is no longer created through auto-configuration. Individual applications must create it.

The URI needs to use a virtual host name (that is, a service name, not a host name). The Ribbon client is used to create a full physical address. See {githubroot}/spring-cloud-netflix/blob/master/spring-cloud-netflix-ribbon/src/main/java/org/springframework/cloud/netflix/ribbon/RibbonAutoConfiguration.java[RibbonAutoConfiguration] for the details of how the RestTemplate is set up.

To use a load-balanced RestTemplate, you need to have a load-balancer implementation in your classpath. The recommended implementation is BlockingLoadBalancerClient. Add Spring Cloud LoadBalancer starter to your project in order to use it. The RibbonLoadBalancerClient also can be used, but it’s now under maintenance and we do not recommend adding it to new projects.

By default, if you have both RibbonLoadBalancerClient and BlockingLoadBalancerClient, to preserve backward compatibility, RibbonLoadBalancerClient is used. To override it, you can set the spring.cloud.loadbalancer.ribbon.enabled property to false.

3.2.4. Spring WebClient as a Load Balancer Client

You can configure WebClient to automatically use a load-balancer client. To create a load-balanced WebClient, create a WebClient.Builder @Bean and use the @LoadBalanced qualifier, as follows:

@Configuration
public class MyConfiguration {

    @Bean
    @LoadBalanced
    public WebClient.Builder loadBalancedWebClientBuilder() {
        return WebClient.builder();
    }
}

public class MyClass {
    @Autowired
    private WebClient.Builder webClientBuilder;

    public Mono<String> doOtherStuff() {
        return webClientBuilder.build().get().uri("http://stores/stores")
                        .retrieve().bodyToMono(String.class);
    }
}

The URI needs to use a virtual host name (that is, a service name, not a host name). The Ribbon client or Spring Cloud LoadBalancer is used to create a full physical address.

If you want to use a @LoadBalanced WebClient.Builder, you need to have a load balancer implementation in the classpath. We recommend that you add the Spring Cloud LoadBalancer starter to your project. Then, ReactiveLoadBalancer is used underneath. Alternatively, this functionality also works with spring-cloud-starter-netflix-ribbon, but the request is handled by a non-reactive LoadBalancerClient under the hood. Additionally, spring-cloud-starter-netflix-ribbon is already in maintenance mode, so we do not recommend adding it to new projects. If you have both spring-cloud-starter-loadbalancer and spring-cloud-starter-netflix-ribbon in your classpath, Ribbon is used by default. To switch to Spring Cloud LoadBalancer, set the spring.cloud.loadbalancer.ribbon.enabled property to false.

Retrying Failed Requests

A load-balanced RestTemplate can be configured to retry failed requests. By default, this logic is disabled. You can enable it by adding Spring Retry to your application’s classpath. The load-balanced RestTemplate honors some of the Ribbon configuration values related to retrying failed requests. You can use client.ribbon.MaxAutoRetries, client.ribbon.MaxAutoRetriesNextServer, and client.ribbon.OkToRetryOnAllOperations properties. If you would like to disable the retry logic with Spring Retry on the classpath, you can set spring.cloud.loadbalancer.retry.enabled=false. See the Ribbon documentation for a description of what these properties do.

If you would like to implement a BackOffPolicy in your retries, you need to create a bean of type LoadBalancedRetryFactory and override the createBackOffPolicy method:

@Configuration
public class MyConfiguration {
    @Bean
    LoadBalancedRetryFactory retryFactory() {
        return new LoadBalancedRetryFactory() {
            @Override
            public BackOffPolicy createBackOffPolicy(String service) {
                return new ExponentialBackOffPolicy();
            }
        };
    }
}

client in the preceding examples should be replaced with your Ribbon client’s name.

If you want to add one or more RetryListener implementations to your retry functionality, you need to create a bean of type LoadBalancedRetryListenerFactory and return the RetryListener array you would like to use for a given service, as the following example shows:

@Configuration
public class MyConfiguration {
    @Bean
    LoadBalancedRetryListenerFactory retryListenerFactory() {
        return new LoadBalancedRetryListenerFactory() {
            @Override
            public RetryListener[] createRetryListeners(String service) {
                return new RetryListener[]{new RetryListener() {
                    @Override
                    public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) {
                        // TODO: Do your business...
                        return true;
                    }

                    @Override
                    public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
                        // TODO: Do your business...
                    }

                    @Override
                    public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
                        // TODO: Do your business...
                    }
                }};
            }
        };
    }
}

3.2.5. Multiple RestTemplate Objects

If you want a RestTemplate that is not load-balanced, create a RestTemplate bean and inject it. To access the load-balanced RestTemplate, use the @LoadBalanced qualifier when you create your @Bean, as the following example shows:

@Configuration
public class MyConfiguration {

    @LoadBalanced
    @Bean
    RestTemplate loadBalanced() {
        return new RestTemplate();
    }

    @Primary
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

public class MyClass {
    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    @LoadBalanced
    private RestTemplate loadBalanced;

    public String doOtherStuff() {
        return loadBalanced.getForObject("http://stores/stores", String.class);
    }

    public String doStuff() {
        return restTemplate.getForObject("http://example.com", String.class);
    }
}

Notice the use of the @Primary annotation on the plain RestTemplate declaration in the preceding example to disambiguate the unqualified @Autowired injection.

If you see errors such as java.lang.IllegalArgumentException: Can not set org.springframework.web.client.RestTemplate field com.my.app.Foo.restTemplate to com.sun.proxy.$Proxy89, try injecting RestOperations or setting spring.aop.proxyTargetClass=true.

3.2.6. Multiple WebClient Objects

If you want a WebClient that is not load-balanced, create a WebClient bean and inject it. To access the load-balanced WebClient, use the @LoadBalanced qualifier when you create your @Bean, as the following example shows:

@Configuration
public class MyConfiguration {

    @LoadBalanced
    @Bean
    WebClient.Builder loadBalanced() {
        return WebClient.builder();
    }

    @Primary
    @Bean
    WebClient.Builder webClient() {
        return WebClient.builder();
    }
}

public class MyClass {
    @Autowired
    private WebClient.Builder webClientBuilder;

    @Autowired
    @LoadBalanced
    private WebClient.Builder loadBalanced;

    public Mono<String> doOtherStuff() {
        return loadBalanced.build().get().uri("http://stores/stores")
                        .retrieve().bodyToMono(String.class);
    }

    public Mono<String> doStuff() {
        return webClientBuilder.build().get().uri("http://example.com")
                        .retrieve().bodyToMono(String.class);
    }
}

3.2.7. Spring WebFlux WebClient as a Load Balancer Client

Spring WebFlux can work with both reactive and non-reactive WebClient configurations, as the following topics describe:

Spring WebFlux WebClient with ReactorLoadBalancerExchangeFilterFunction

You can configure WebClient to use the ReactiveLoadBalancer. If you add Spring Cloud LoadBalancer starter to your project and if spring-webflux is on the classpath, ReactorLoadBalancerExchangeFilterFunction is auto-configured. The following example shows how to configure a WebClient to use reactive load-balancer:

public class MyClass {
    @Autowired
    private ReactorLoadBalancerExchangeFilterFunction lbFunction;

    public Mono<String> doOtherStuff() {
        return WebClient.builder().baseUrl("http://stores")
            .filter(lbFunction)
            .build()
            .get()
            .uri("/stores")
            .retrieve()
            .bodyToMono(String.class);
    }
}

The URI needs to use a virtual host name (that is, a service name, not a host name). The ReactorLoadBalancer is used to create a full physical address.

By default, if you have spring-cloud-netflix-ribbon in your classpath, LoadBalancerExchangeFilterFunction is used to maintain backward compatibility. To use ReactorLoadBalancerExchangeFilterFunction, set the spring.cloud.loadbalancer.ribbon.enabled property to false.

Spring WebFlux WebClient with a Non-reactive Load Balancer Client

If you do not have the Spring Cloud LoadBalancer starter in your project but you do have spring-cloud-starter-netflix-ribbon, you can still use WebClient with LoadBalancerClient. If spring-webflux is on the classpath, LoadBalancerExchangeFilterFunction is auto-configured. Note, however, that this uses a non-reactive client under the hood. The following example shows how to configure a WebClient to use a load-balancer:

public class MyClass {
    @Autowired
    private LoadBalancerExchangeFilterFunction lbFunction;

    public Mono<String> doOtherStuff() {
        return WebClient.builder().baseUrl("http://stores")
            .filter(lbFunction)
            .build()
            .get()
            .uri("/stores")
            .retrieve()
            .bodyToMono(String.class);
    }
}

The URI needs to use a virtual host name (that is, a service name, not a host name). The LoadBalancerClient is used to create a full physical address.

WARN: This approach is now deprecated. We suggest that you use WebFlux with the reactive load-balancer instead.

3.2.8. Ignore Network Interfaces

Sometimes, it is useful to ignore certain named network interfaces so that they can be excluded from Service Discovery registration (for example, when running in a Docker container). A list of regular expressions can be set to cause the desired network interfaces to be ignored. The following configuration ignores the docker0 interface and all interfaces that start with veth:

Example 2. application.yml
spring:
  cloud:
    inetutils:
      ignoredInterfaces:
        - docker0
        - veth.*

You can also force the use of only specified network addresses by using a list of regular expressions, as the following example shows:

Example 3. bootstrap.yml
spring:
  cloud:
    inetutils:
      preferredNetworks:
        - 192.168
        - 10.0

You can also force the use of only site-local addresses, as the following example shows:

Example 4. application.yml
spring:
  cloud:
    inetutils:
      useOnlySiteLocalInterfaces: true

See Inet4Address.isSiteLocalAddress() for more details about what constitutes a site-local address.

3.2.9. HTTP Client Factories

Spring Cloud Commons provides beans for creating both Apache HTTP clients (ApacheHttpClientFactory) and OK HTTP clients (OkHttpClientFactory). The OkHttpClientFactory bean is created only if the OK HTTP jar is on the classpath. In addition, Spring Cloud Commons provides beans for creating the connection managers used by both clients: ApacheHttpClientConnectionManagerFactory for the Apache HTTP client and OkHttpClientConnectionPoolFactory for the OK HTTP client. If you would like to customize how the HTTP clients are created in downstream projects, you can provide your own implementation of these beans. In addition, if you provide a bean of type HttpClientBuilder or OkHttpClient.Builder, the default factories use these builders as the basis for the builders returned to downstream projects. You can also disable the creation of these beans by setting spring.cloud.httpclientfactories.apache.enabled or spring.cloud.httpclientfactories.ok.enabled to false.
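As a sketch, providing your own HttpClientBuilder bean lets the default Apache factory use it as the basis for the clients it creates (the particular customization shown is only illustrative):

@Configuration
public class MyHttpClientConfiguration {

    @Bean
    public HttpClientBuilder apacheHttpClientBuilder() {
        // The default ApacheHttpClientFactory builds on this builder.
        return HttpClientBuilder.create().disableCookieManagement();
    }
}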

3.2.10. Enabled Features

Spring Cloud Commons provides a /features actuator endpoint. This endpoint returns features available on the classpath and whether they are enabled. The information returned includes the feature type, name, version, and vendor.

Feature types

There are two types of 'features': abstract and named.

Abstract features are features for which an interface or abstract class is defined and for which an implementation is created, such as DiscoveryClient, LoadBalancerClient, or LockService. The abstract class or interface is used to find a bean of that type in the context. The version displayed is bean.getClass().getPackage().getImplementationVersion().

Named features are features that do not have a particular class they implement. These features include “Circuit Breaker”, “API Gateway”, “Spring Cloud Bus”, and others. These features require a name and a bean type.

Declaring features

Any module can declare any number of HasFeatures beans, as the following examples show:

@Bean
public HasFeatures commonsFeatures() {
  return HasFeatures.abstractFeatures(DiscoveryClient.class, LoadBalancerClient.class);
}

@Bean
public HasFeatures consulFeatures() {
  return HasFeatures.namedFeatures(
    new NamedFeature("Spring Cloud Bus", ConsulBusAutoConfiguration.class),
    new NamedFeature("Circuit Breaker", HystrixCommandAspect.class));
}

@Bean
HasFeatures localFeatures() {
  return HasFeatures.builder()
      .abstractFeature(Something.class)
      .namedFeature(new NamedFeature("Some Other Feature", Someother.class))
      .abstractFeature(Somethingelse.class)
      .build();
}

Each of these beans should go in an appropriately guarded @Configuration.

3.2.11. Spring Cloud Compatibility Verification

Because some users have had problems setting up a Spring Cloud application, we added a compatibility verification mechanism. It breaks application startup if your current setup is not compatible with Spring Cloud requirements and produces a report showing exactly what went wrong.

At the moment we verify which version of Spring Boot is added to your classpath.

Example of a report

***************************
APPLICATION FAILED TO START
***************************

Description:

Your project setup is incompatible with our requirements due to following reasons:

- Spring Boot [2.1.0.RELEASE] is not compatible with this Spring Cloud release train


Action:

Consider applying the following actions:

- Change Spring Boot version to one of the following versions [1.2.x, 1.3.x] .
You can find the latest Spring Boot versions here [https://spring.io/projects/spring-boot#learn].
If you want to learn more about the Spring Cloud Release train compatibility, you can visit this page [https://spring.io/projects/spring-cloud#overview] and check the [Release Trains] section.

To disable this feature, set spring.cloud.compatibility-verifier.enabled to false. If you want to override the compatible Spring Boot versions, set the spring.cloud.compatibility-verifier.compatible-boot-versions property to a comma-separated list of compatible Spring Boot versions.
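For example (the version list is illustrative):

spring.cloud.compatibility-verifier.enabled=false
# or, instead of disabling the check, widen the accepted versions:
spring.cloud.compatibility-verifier.compatible-boot-versions=2.2.x,2.3.x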

3.3. Spring Cloud LoadBalancer

Spring Cloud provides its own client-side load-balancer abstraction and implementation. For the load-balancing mechanism, the ReactiveLoadBalancer interface has been added, and a round-robin-based implementation has been provided for it. To get instances to select from, a reactive ServiceInstanceListSupplier is used. Currently, we support a service-discovery-based implementation of ServiceInstanceListSupplier that retrieves available instances from Service Discovery by using a Discovery Client available on the classpath.

3.3.1. Spring Cloud LoadBalancer integrations

In order to make it easy to use Spring Cloud LoadBalancer, we provide ReactorLoadBalancerExchangeFilterFunction that can be used with WebClient and BlockingLoadBalancerClient that works with RestTemplate. You can see more information and examples of usage in the following sections:

3.3.2. Spring Cloud LoadBalancer Caching

Apart from the basic ServiceInstanceListSupplier implementation that retrieves instances via DiscoveryClient each time it has to choose an instance, we provide two caching implementations.

Caffeine-backed LoadBalancer Cache Implementation

If you have com.github.ben-manes.caffeine:caffeine on the classpath, a Caffeine-based implementation is used. See the LoadBalancerCacheConfiguration section for information on how to configure it.

If you are using Caffeine, you can also override the default Caffeine Cache setup for the LoadBalancer by passing your own Caffeine Specification in the spring.cloud.loadbalancer.cache.caffeine.spec property.

WARN: Passing your own Caffeine specification will override any other LoadBalancerCache settings, including General LoadBalancer Cache Configuration fields, such as ttl and capacity.
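For example, the following (illustrative) specification sets the initial capacity and the expiry in Caffeine’s own syntax:

spring.cloud.loadbalancer.cache.caffeine.spec=initialCapacity=500,expireAfterWrite=5s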

Default LoadBalancer Cache Implementation

If you do not have Caffeine in the classpath, the DefaultLoadBalancerCache, which comes automatically with spring-cloud-starter-loadbalancer, will be used. See the LoadBalancerCacheConfiguration section for information on how to configure it.

To use Caffeine instead of the default cache, add the com.github.ben-manes.caffeine:caffeine dependency to the classpath.

LoadBalancer Cache Configuration

You can set your own ttl value (the time after write after which entries expire), expressed as a Duration, by passing a String compliant with the Spring Boot String-to-Duration converter syntax as the value of the spring.cloud.loadbalancer.cache.ttl property. You can also set your own LoadBalancer cache initial capacity by setting the value of the spring.cloud.loadbalancer.cache.capacity property.

The default setup includes ttl set to 30 seconds and the default initialCapacity is 256.

You can also altogether disable loadBalancer caching by setting the value of spring.cloud.loadbalancer.cache.enabled to false.
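For example (the values are illustrative):

# tune the default cache
spring.cloud.loadbalancer.cache.ttl=15s
spring.cloud.loadbalancer.cache.capacity=512
# or disable caching altogether
spring.cloud.loadbalancer.cache.enabled=false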

Although the basic, non-cached, implementation is useful for prototyping and testing, it’s much less efficient than the cached versions, so we recommend always using the cached version in production.

3.3.3. Zone-Based Load-Balancing

To enable zone-based load-balancing, we provide the ZonePreferenceServiceInstanceListSupplier. We use DiscoveryClient-specific zone configuration (for example, eureka.instance.metadata-map.zone) to pick the zone that the client tries to filter available service instances for.

You can also override the DiscoveryClient-specific zone setup by setting the value of the spring.cloud.loadbalancer.zone property.

For the time being, only the Eureka Discovery Client is instrumented to set the LoadBalancer zone. For other discovery clients, set the spring.cloud.loadbalancer.zone property. More instrumentations are coming shortly.

To determine the zone of a retrieved ServiceInstance, we check the value under the "zone" key in its metadata map.

The ZonePreferenceServiceInstanceListSupplier filters retrieved instances and only returns the ones within the same zone. If the zone is null or there are no instances within the same zone, it returns all the retrieved instances.

In order to use the zone-based load-balancing approach, you will have to instantiate a ZonePreferenceServiceInstanceListSupplier bean in a custom configuration.

We use delegates to work with ServiceInstanceListSupplier beans. We suggest passing a DiscoveryClientServiceInstanceListSupplier delegate in the constructor of ZonePreferenceServiceInstanceListSupplier and, in turn, wrapping the latter with a CachingServiceInstanceListSupplier to leverage LoadBalancer caching mechanism.

You could use this sample configuration to set it up:

public class CustomLoadBalancerConfiguration {

    @Bean
    public ServiceInstanceListSupplier discoveryClientServiceInstanceListSupplier(
            ReactiveDiscoveryClient discoveryClient, Environment environment,
            LoadBalancerZoneConfig zoneConfig,
            ApplicationContext context) {
        DiscoveryClientServiceInstanceListSupplier firstDelegate = new DiscoveryClientServiceInstanceListSupplier(
                discoveryClient, environment);
        ZonePreferenceServiceInstanceListSupplier delegate = new ZonePreferenceServiceInstanceListSupplier(firstDelegate,
                zoneConfig);
        ObjectProvider<LoadBalancerCacheManager> cacheManagerProvider = context
                .getBeanProvider(LoadBalancerCacheManager.class);
        if (cacheManagerProvider.getIfAvailable() != null) {
            return new CachingServiceInstanceListSupplier(delegate,
                    cacheManagerProvider.getIfAvailable());
        }
        return delegate;
    }
}

3.3.4. Instance Health-Check for LoadBalancer

It is possible to enable a scheduled HealthCheck for the LoadBalancer. The HealthCheckServiceInstanceListSupplier is provided for that. It regularly verifies if the instances provided by a delegate ServiceInstanceListSupplier are still alive and only returns the healthy instances, unless there are none - then it returns all the retrieved instances.

This mechanism is particularly helpful when using the SimpleDiscoveryClient. For clients backed by an actual Service Registry, it is not necessary, because we already get healthy instances after querying the external Service Discovery.

The HealthCheckServiceInstanceListSupplier uses InstanceHealthChecker to verify if the instances are alive. We provide a default PingHealthChecker instance. It uses WebClient to execute requests against the health endpoint of the instance. You can also provide your own implementation of InstanceHealthChecker instead.

HealthCheckServiceInstanceListSupplier uses properties prefixed with spring.cloud.loadbalancer.healthcheck. You can set the initialDelay and interval for the scheduler.

For the PingHealthChecker, you can set the default path for the healthcheck URL by setting the value of the spring.cloud.loadbalancer.healthcheck.path.default. You can also set a specific value for any given service by setting the value of the spring.cloud.loadbalancer.healthcheck.path.[SERVICE_ID], substituting the [SERVICE_ID] with the correct ID of your service. If the path is not set, /actuator/health is used by default.
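For example (the service ID and values are illustrative; the property names follow the prefix described above):

spring.cloud.loadbalancer.healthcheck.initial-delay=0
spring.cloud.loadbalancer.healthcheck.interval=30s
spring.cloud.loadbalancer.healthcheck.path.default=/actuator/health
spring.cloud.loadbalancer.healthcheck.path.my-service=/health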

In order to use the health-check scheduler approach, you will have to instantiate a HealthCheckServiceInstanceListSupplier bean in a custom configuration.

We use delegates to work with ServiceInstanceListSupplier beans. We suggest passing a DiscoveryClientServiceInstanceListSupplier delegate in the constructor of HealthCheckServiceInstanceListSupplier and, in turn, wrapping the latter with a CachingServiceInstanceListSupplier to leverage LoadBalancer caching mechanism.

You could use this sample configuration to set it up:

public class CustomLoadBalancerConfiguration {

    @Bean
    public ServiceInstanceListSupplier discoveryClientServiceInstanceListSupplier(
            ReactiveDiscoveryClient discoveryClient, Environment environment,
            LoadBalancerProperties loadBalancerProperties,
            ApplicationContext context,
            InstanceHealthChecker healthChecker) {
        DiscoveryClientServiceInstanceListSupplier firstDelegate = new DiscoveryClientServiceInstanceListSupplier(
                discoveryClient, environment);
        HealthCheckServiceInstanceListSupplier delegate = new HealthCheckServiceInstanceListSupplier(firstDelegate,
                loadBalancerProperties, healthChecker);
        ObjectProvider<LoadBalancerCacheManager> cacheManagerProvider = context
                .getBeanProvider(LoadBalancerCacheManager.class);
        if (cacheManagerProvider.getIfAvailable() != null) {
            return new CachingServiceInstanceListSupplier(delegate,
                    cacheManagerProvider.getIfAvailable());
        }
        return delegate;
    }
}

3.3.5. Spring Cloud LoadBalancer Starter

We also provide a starter that allows you to easily add Spring Cloud LoadBalancer in a Spring Boot app. In order to use it, just add org.springframework.cloud:spring-cloud-starter-loadbalancer to your Spring Cloud dependencies in your build file.

Spring Cloud LoadBalancer starter includes Spring Boot Caching and Evictor.

If you have both Ribbon and Spring Cloud LoadBalancer in the classpath, in order to maintain backward compatibility, Ribbon-based implementations are used by default. To switch to using Spring Cloud LoadBalancer under the hood, make sure you set the property spring.cloud.loadbalancer.ribbon.enabled to false.

3.3.6. Passing Your Own Spring Cloud LoadBalancer Configuration

You can also use the @LoadBalancerClient annotation to pass your own load-balancer client configuration, passing the name of the load-balancer client and the configuration class, as follows:

@Configuration
@LoadBalancerClient(value = "stores", configuration = CustomLoadBalancerConfiguration.class)
public class MyConfiguration {

    @Bean
    @LoadBalanced
    public WebClient.Builder loadBalancedWebClientBuilder() {
        return WebClient.builder();
    }
}

You can use this feature to instantiate different implementations of ServiceInstanceListSupplier or ReactorLoadBalancer, either written by you, or provided by us as alternatives (for example ZonePreferenceServiceInstanceListSupplier) to override the default setup.

You can see an example of a custom configuration here.

The annotation value argument (stores in the example above) specifies the service ID of the service to which requests should be sent with the given custom configuration.

You can also pass multiple configurations (for more than one load-balancer client) through the @LoadBalancerClients annotation, as the following example shows:

@Configuration
@LoadBalancerClients({@LoadBalancerClient(value = "stores", configuration = StoresLoadBalancerClientConfiguration.class), @LoadBalancerClient(value = "customers", configuration = CustomersLoadBalancerClientConfiguration.class)})
public class MyConfiguration {

    @Bean
    @LoadBalanced
    public WebClient.Builder loadBalancedWebClientBuilder() {
        return WebClient.builder();
    }
}

3.4. Spring Cloud Circuit Breaker

3.4.1. Introduction

Spring Cloud Circuit breaker provides an abstraction across different circuit breaker implementations. It provides a consistent API to use in your applications, letting you, the developer, choose the circuit breaker implementation that best fits your needs for your application.

Supported Implementations

Spring Cloud supports the following circuit-breaker implementations:

3.4.2. Core Concepts

To create a circuit breaker in your code, you can use the CircuitBreakerFactory API. When you include a Spring Cloud Circuit Breaker starter on your classpath, a bean that implements this API is automatically created for you. The following listing shows a simple example of how to use this API:

@Service
public static class DemoControllerService {
    private RestTemplate rest;
    private CircuitBreakerFactory cbFactory;

    public DemoControllerService(RestTemplate rest, CircuitBreakerFactory cbFactory) {
        this.rest = rest;
        this.cbFactory = cbFactory;
    }

    public String slow() {
        return cbFactory.create("slow").run(() -> rest.getForObject("/slow", String.class), throwable -> "fallback");
    }

}

The CircuitBreakerFactory.create API creates an instance of a class called CircuitBreaker. The run method takes a Supplier and a Function. The Supplier is the code that you are going to wrap in a circuit breaker. The Function is the fallback that is executed if the circuit breaker is tripped. The function is passed the Throwable that caused the fallback to be triggered. You can optionally exclude the fallback if you do not want to provide one.

Circuit Breakers In Reactive Code

If Project Reactor is on the class path, you can also use ReactiveCircuitBreakerFactory for your reactive code. The following example shows how to do so:

@Service
public static class DemoControllerService {
    private ReactiveCircuitBreakerFactory cbFactory;
    private WebClient webClient;


    public DemoControllerService(WebClient webClient, ReactiveCircuitBreakerFactory cbFactory) {
        this.webClient = webClient;
        this.cbFactory = cbFactory;
    }

    public Mono<String> slow() {
        return webClient.get().uri("/slow").retrieve().bodyToMono(String.class).transform(
                it -> cbFactory.create("slow").run(it, throwable -> Mono.just("fallback")));
    }
}

The ReactiveCircuitBreakerFactory.create API creates an instance of a class called ReactiveCircuitBreaker. The run method takes a Mono or a Flux and wraps it in a circuit breaker. You can optionally provide a fallback Function, which is called if the circuit breaker is tripped and is passed the Throwable that caused the failure.

3.4.3. Configuration

You can configure your circuit breakers by creating beans of type Customizer. The Customizer interface has a single method (called customize) that takes the Object to customize.

For detailed information on how to customize a given implementation see the following documentation:

Some CircuitBreaker implementations, such as Resilience4JCircuitBreaker, call the customize method every time CircuitBreaker#run is called, which can be inefficient. In that case, you can use the Customizer#once method. It is useful where calling customize many times does not make sense, for example, when consuming Resilience4j’s events.

The following example shows how each io.github.resilience4j.circuitbreaker.CircuitBreaker can consume events:

Customizer.once(circuitBreaker -> {
  circuitBreaker.getEventPublisher()
    .onStateTransition(event -> log.info("{}: {}", event.getCircuitBreakerName(), event.getStateTransition()));
}, CircuitBreaker::getName)

3.5. Configuration Properties

To see the list of all Spring Cloud Commons related configuration properties please check the Appendix page.

4. Spring Cloud Config

Hoxton.SR3

Spring Cloud Config provides server-side and client-side support for externalized configuration in a distributed system. With the Config Server, you have a central place to manage external properties for applications across all environments. The concepts on both client and server map identically to the Spring Environment and PropertySource abstractions, so they fit very well with Spring applications but can be used with any application running in any language. As an application moves through the deployment pipeline from dev to test and into production, you can manage the configuration between those environments and be certain that applications have everything they need to run when they migrate. The default implementation of the server storage backend uses git, so it easily supports labelled versions of configuration environments as well as being accessible to a wide range of tooling for managing the content. It is easy to add alternative implementations and plug them in with Spring configuration.

4.1. Quick Start

This quick start walks through using both the server and the client of Spring Cloud Config.

First, start the server, as follows:

$ cd spring-cloud-config-server
$ ../mvnw spring-boot:run

The server is a Spring Boot application, so you can run it from your IDE if you prefer to do so (the main class is ConfigServerApplication).

Next try out a client, as follows:

$ curl localhost:8888/foo/development
{"name":"foo","label":"master","propertySources":[
  {"name":"https://github.com/scratches/config-repo/foo-development.properties","source":{"bar":"spam"}},
  {"name":"https://github.com/scratches/config-repo/foo.properties","source":{"foo":"bar"}}
]}

The default strategy for locating property sources is to clone a git repository (at spring.cloud.config.server.git.uri) and use it to initialize a mini SpringApplication. The mini-application’s Environment is used to enumerate property sources and publish them at a JSON endpoint.

The HTTP service has resources in the following form:

/{application}/{profile}[/{label}]
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
/{application}-{profile}.properties
/{label}/{application}-{profile}.properties

where application is injected as the spring.config.name in the SpringApplication (what is normally application in a regular Spring Boot app), profile is an active profile (or a comma-separated list of profiles), and label is an optional git label (defaults to master).
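
For example, with a hypothetical application foo, profile development, and git branch mybranch, the following requests against a locally running server would all resolve configuration:

$ curl localhost:8888/foo/development
$ curl localhost:8888/foo/development/mybranch
$ curl localhost:8888/foo-development.yml
$ curl localhost:8888/mybranch/foo-development.properties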

Spring Cloud Config Server pulls configuration for remote clients from various sources. The following example gets configuration from a git repository (which must be provided):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo

Other sources include any JDBC-compatible database, Subversion, Hashicorp Vault, CredHub, and local filesystems.

4.1.1. Client Side Usage

To use these features in an application, you can build it as a Spring Boot application that depends on spring-cloud-config-client (for an example, see the test cases for the config-client or the sample application). The most convenient way to add the dependency is with a Spring Boot starter org.springframework.cloud:spring-cloud-starter-config. There is also a parent pom and BOM (spring-cloud-starter-parent) for Maven users and a Spring IO version management properties file for Gradle and Spring CLI users. The following example shows a typical Maven configuration:

pom.xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>{spring-boot-docs-version}</version>
    <relativePath /> <!-- lookup parent from repository -->
</parent>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>{spring-cloud-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

<!-- repositories also needed for snapshots and milestones -->

Now you can create a standard Spring Boot application, such as the following HTTP server:

@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

When this HTTP server runs, it picks up the external configuration from the default local config server (if it is running) on port 8888. To modify the startup behavior, you can change the location of the config server by using bootstrap.properties (similar to application.properties but for the bootstrap phase of an application context), as shown in the following example:

spring.cloud.config.uri: http://myconfigserver.com

By default, if no application name is set, application will be used. To modify the name, the following property can be added to the bootstrap.properties file:

spring.application.name: myapp
When setting the property ${spring.application.name}, do not prefix your app name with the reserved word application- to prevent issues resolving the correct property source.

The bootstrap properties show up in the /env endpoint as a high-priority property source, as shown in the following example.

$ curl localhost:8080/env
{
  "profiles":[],
  "configService:https://github.com/spring-cloud-samples/config-repo/bar.properties":{"foo":"bar"},
  "servletContextInitParams":{},
  "systemProperties":{...},
  ...
}

A property source called configService:<URL of remote repository>/<file name> contains the foo property with a value of bar and is the highest priority.

The URL in the property source name is the git repository, not the config server URL.

4.2. Spring Cloud Config Server

Spring Cloud Config Server provides an HTTP resource-based API for external configuration (name-value pairs or equivalent YAML content). The server is embeddable in a Spring Boot application, by using the @EnableConfigServer annotation. Consequently, the following application is a config server:

ConfigServer.java
@SpringBootApplication
@EnableConfigServer
public class ConfigServer {
  public static void main(String[] args) {
    SpringApplication.run(ConfigServer.class, args);
  }
}

Like all Spring Boot applications, it runs on port 8080 by default, but you can switch it to the more conventional port 8888 in various ways. The easiest, which also sets a default configuration repository, is by launching it with spring.config.name=configserver (there is a configserver.yml in the Config Server jar). Another is to use your own application.properties, as shown in the following example:

application.properties
server.port: 8888
spring.cloud.config.server.git.uri: file://${user.home}/config-repo

where ${user.home}/config-repo is a git repository containing YAML and properties files.

On Windows, you need an extra "/" in the file URL if it is absolute with a drive prefix (for example, file:///${user.home}/config-repo).

The following listing shows a recipe for creating the git repository in the preceding example:

$ cd $HOME
$ mkdir config-repo
$ cd config-repo
$ git init .
$ echo info.foo: bar > application.properties
$ git add -A .
$ git commit -m "Add application.properties"

Using the local filesystem for your git repository is intended for testing only. You should use a server to host your configuration repositories in production.

The initial clone of your configuration repository can be quick and efficient if you keep only text files in it. If you store binary files, especially large ones, you may experience delays on the first request for configuration or encounter out-of-memory errors in the server.

4.2.1. Environment Repository

Where should you store the configuration data for the Config Server? The strategy that governs this behaviour is the EnvironmentRepository, serving Environment objects. This Environment is a shallow copy of the domain from the Spring Environment (including propertySources as the main feature). The Environment resources are parametrized by three variables:

  • {application}, which maps to spring.application.name on the client side.

  • {profile}, which maps to spring.profiles.active on the client (comma-separated list).

  • {label}, which is a server side feature labelling a "versioned" set of config files.

Repository implementations generally behave like a Spring Boot application, loading configuration files from a spring.config.name equal to the {application} parameter, and spring.profiles.active equal to the {profiles} parameter. Precedence rules for profiles are also the same as in a regular Spring Boot application: Active profiles take precedence over defaults, and, if there are multiple profiles, the last one wins (similar to adding entries to a Map).

The following sample client application has this bootstrap configuration:

bootstrap.yml
spring:
  application:
    name: foo
  profiles:
    active: dev,mysql

(As usual with a Spring Boot application, these properties could also be set by environment variables or command line arguments).

If the repository is file-based, the server creates an Environment from application.yml (shared between all clients) and foo.yml (with foo.yml taking precedence). If the YAML files have documents inside them that point to Spring profiles, those are applied with higher precedence (in order of the profiles listed). If there are profile-specific YAML (or properties) files, these are also applied with higher precedence than the defaults. Higher precedence translates to a PropertySource listed earlier in the Environment. (These same rules apply in a standalone Spring Boot application.)

You can set spring.cloud.config.server.accept-empty to false so that the server returns an HTTP 404 status if the application is not found. By default, this flag is set to true.

Git Backend

The default implementation of EnvironmentRepository uses a Git backend, which is very convenient for managing upgrades and physical environments and for auditing changes. To change the location of the repository, you can set the spring.cloud.config.server.git.uri configuration property in the Config Server (for example in application.yml). If you set it with a file: prefix, it should work from a local repository so that you can get started quickly and easily without a server. However, in that case, the server operates directly on the local repository without cloning it (it does not matter if it is not bare because the Config Server never makes changes to the "remote" repository). To scale the Config Server up and make it highly available, you need to have all instances of the server pointing to the same repository, so only a shared file system would work. Even in that case, it is better to use the ssh: protocol for a shared filesystem repository, so that the server can clone it and use a local working copy as a cache.

This repository implementation maps the {label} parameter of the HTTP resource to a git label (commit id, branch name, or tag). If the git branch or tag name contains a slash (/), then the label in the HTTP URL should instead be specified with the special string (_) (to avoid ambiguity with other URL paths). For example, if the label is foo/bar, replacing the slash would result in the following label: foo(_)bar. The inclusion of the special string (_) can also be applied to the {application} parameter. If you use a command-line client such as curl, be careful with the brackets in the URL — you should escape them from the shell with single quotes ('').
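
For example, to request configuration for a hypothetical application myapp in the default profile from a branch named foo/bar, the request could look as follows (note the single quotes protecting the parentheses from the shell):

$ curl 'localhost:8888/myapp/default/foo(_)bar'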

Skipping SSL Certificate Validation

The configuration server’s validation of the Git server’s SSL certificate can be disabled by setting the git.skipSslValidation property to true (default is false).

spring:
  cloud:
    config:
      server:
        git:
          uri: https://example.com/my/repo
          skipSslValidation: true
Setting HTTP Connection Timeout

You can configure the time, in seconds, that the configuration server will wait to acquire an HTTP connection. Use the git.timeout property.

spring:
  cloud:
    config:
      server:
        git:
          uri: https://example.com/my/repo
          timeout: 4
Placeholders in Git URI

Spring Cloud Config Server supports a git repository URL with placeholders for the {application} and {profile} (and {label} if you need it, but remember that the label is applied as a git label anyway). So you can support a “one repository per application” policy by using a structure similar to the following:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/myorg/{application}

You can also support a “one repository per profile” policy by using a similar pattern but with {profile}.

Additionally, using the special string "(_)" within your {application} parameters can enable support for multiple organizations, as shown in the following example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/{application}

where {application} is provided at request time in the following format: organization(_)application.

Pattern Matching and Multiple Repositories

Spring Cloud Config also includes support for more complex requirements with pattern matching on the application and profile name. The pattern format is a comma-separated list of {application}/{profile} names with wildcards (note that a pattern beginning with a wildcard may need to be quoted), as shown in the following example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          repos:
            simple: https://github.com/simple/config-repo
            special:
              pattern: special*/dev*,*special*/dev*
              uri: https://github.com/special/config-repo
            local:
              pattern: local*
              uri: file:/home/configsvc/config-repo

If {application}/{profile} does not match any of the patterns, it uses the default URI defined under spring.cloud.config.server.git.uri. In the above example, for the “simple” repository, the pattern is simple/* (it only matches one application named simple in all profiles). The “local” repository matches all application names beginning with local in all profiles (the /* suffix is added automatically to any pattern that does not have a profile matcher).

The “one-liner” short cut used in the “simple” example can be used only if the only property to be set is the URI. If you need to set anything else (credentials, pattern, and so on) you need to use the full form.

The pattern property in the repo is actually an array, so you can use a YAML array (or [0], [1], etc. suffixes in properties files) to bind to multiple patterns. You may need to do so if you are going to run apps with multiple profiles, as shown in the following example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          repos:
            development:
              pattern:
                - '*/development'
                - '*/staging'
              uri: https://github.com/development/config-repo
            staging:
              pattern:
                - '*/qa'
                - '*/production'
              uri: https://github.com/staging/config-repo
Spring Cloud guesses that a pattern containing a profile that does not end in * implies that you actually want to match a list of profiles starting with this pattern (so */staging is a shortcut for ["*/staging", "*/staging,*"], and so on). This is common where, for instance, you need to run applications in the “development” profile locally but also the “cloud” profile remotely.

Every repository can also optionally store config files in sub-directories, and patterns to search for those directories can be specified as searchPaths. The following example shows a config file at the top level:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          searchPaths: foo,bar*

In the preceding example, the server searches for config files in the top level and in the foo/ sub-directory and also any sub-directory whose name begins with bar.

By default, the server clones remote repositories when configuration is first requested. The server can be configured to clone the repositories at startup, as shown in the following top-level example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://git/common/config-repo.git
          repos:
            team-a:
                pattern: team-a-*
                cloneOnStart: true
                uri: https://git/team-a/config-repo.git
            team-b:
                pattern: team-b-*
                cloneOnStart: false
                uri: https://git/team-b/config-repo.git
            team-c:
                pattern: team-c-*
                uri: https://git/team-a/config-repo.git

In the preceding example, the server clones team-a’s config-repo on startup, before it accepts any requests. All other repositories are not cloned until configuration from the repository is requested.

Setting a repository to be cloned when the Config Server starts up can help to identify a misconfigured configuration source (such as an invalid repository URI) quickly, while the Config Server is starting up. With cloneOnStart not enabled for a configuration source, the Config Server may start successfully with a misconfigured or invalid configuration source and not detect an error until an application requests configuration from that configuration source.
Authentication

To use HTTP basic authentication on the remote repository, add the username and password properties separately (not in the URL), as shown in the following example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          username: trolley
          password: strongpassword

If you do not use HTTPS and user credentials, SSH should also work out of the box when you store keys in the default directories (~/.ssh) and the URI points to an SSH location, such as [email protected]:configuration/cloud-configuration. It is important that an entry for the Git server be present in the ~/.ssh/known_hosts file and that it is in ssh-rsa format. Other formats (such as ecdsa-sha2-nistp256) are not supported. To avoid surprises, you should ensure that only one entry is present in the known_hosts file for the Git server and that it matches the URL you provided to the config server. If you use a hostname in the URL, you want to have exactly that (not the IP) in the known_hosts file. The repository is accessed by using JGit, so any documentation you find on that should be applicable. HTTPS proxy settings can be set in ~/.git/config or (in the same way as for any other JVM process) with system properties (-Dhttps.proxyHost and -Dhttps.proxyPort).
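
For example, the proxy settings could be supplied as JVM system properties when starting the server, as in the following sketch (the host, port, and jar name are placeholders):

$ java -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128 -jar config-server.jar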

If you do not know where your ~/.git directory is, use git config --global to manipulate the settings (for example, git config --global http.sslVerify false).

JGit requires RSA keys in PEM format. Below is an example ssh-keygen (from openssh) command that will generate a key in the correct format:

ssh-keygen -m PEM -t rsa -b 4096 -f ~/config_server_deploy_key.rsa

Warning: When working with SSH keys, the expected ssh private-key must begin with -----BEGIN RSA PRIVATE KEY-----. If the key starts with -----BEGIN OPENSSH PRIVATE KEY----- then the RSA key will not load when spring-cloud-config server is started. The error looks like:

- Error in object 'spring.cloud.config.server.git': codes [PrivateKeyIsValid.spring.cloud.config.server.git,PrivateKeyIsValid]; arguments [org.springframework.context.support.DefaultMessageSourceResolvable: codes [spring.cloud.config.server.git.,]; arguments []; default message []]; default message [Property 'spring.cloud.config.server.git.privateKey' is not a valid private key]

To correct the above error the RSA key must be converted to PEM format. An example using openssh is provided above for generating a new key in the appropriate format.
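
If you have an existing key in OPENSSH format, ssh-keygen can usually rewrite it in place as PEM (this modifies the file, so keep a backup). The following sketch converts a hypothetical existing key:

$ ssh-keygen -p -m PEM -f ~/.ssh/config_server_key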

Authentication with AWS CodeCommit

Spring Cloud Config Server also supports AWS CodeCommit authentication. AWS CodeCommit uses an authentication helper when using Git from the command line. This helper is not used with the JGit library, so a JGit CredentialProvider for AWS CodeCommit is created if the Git URI matches the AWS CodeCommit pattern: https://git-codecommit.${AWS_REGION}.amazonaws.com/${repopath}.

If you provide a username and password with an AWS CodeCommit URI, they must be the AWS accessKeyId and secretAccessKey that provide access to the repository. If you do not specify a username and password, the accessKeyId and secretAccessKey are retrieved by using the AWS Default Credential Provider Chain.

If your Git URI matches the CodeCommit URI pattern (shown earlier), you must provide valid AWS credentials in the username and password or in one of the locations supported by the default credential provider chain. AWS EC2 instances may use IAM Roles for EC2 Instances.

The aws-java-sdk-core jar is an optional dependency. If the aws-java-sdk-core jar is not on your classpath, the AWS Code Commit credential provider is not created, regardless of the git server URI.
Authentication with Google Cloud Source

Spring Cloud Config Server also supports authenticating against Google Cloud Source repositories.

If your Git URI uses the http or https protocol and the domain name is source.developers.google.com, the Google Cloud Source credentials provider will be used. A Google Cloud Source repository URI has the format source.developers.google.com/p/${GCP_PROJECT}/r/${REPO}. To obtain the URI for your repository, click on "Clone" in the Google Cloud Source UI, and select "Manually generated credentials". Do not generate any credentials; simply copy the displayed URI.

The Google Cloud Source credentials provider will use Google Cloud Platform application default credentials. See Google Cloud SDK documentation on how to create application default credentials for a system. This approach will work for user accounts in dev environments and for service accounts in production environments.

com.google.auth:google-auth-library-oauth2-http is an optional dependency. If the google-auth-library-oauth2-http jar is not on your classpath, the Google Cloud Source credential provider is not created, regardless of the git server URI.
Git SSH configuration using properties

By default, the JGit library used by Spring Cloud Config Server uses SSH configuration files such as ~/.ssh/known_hosts and /etc/ssh/ssh_config when connecting to Git repositories by using an SSH URI. In cloud environments such as Cloud Foundry, the local filesystem may be ephemeral or not easily accessible. For those cases, SSH configuration can be set by using Java properties. In order to activate property-based SSH configuration, the spring.cloud.config.server.git.ignoreLocalSshSettings property must be set to true, as shown in the following example:

  spring:
    cloud:
      config:
        server:
          git:
            uri: [email protected]:team/repo1.git
            ignoreLocalSshSettings: true
            hostKey: someHostKey
            hostKeyAlgorithm: ssh-rsa
            privateKey: |
                         -----BEGIN RSA PRIVATE KEY-----
                         MIIEpgIBAAKCAQEAx4UbaDzY5xjW6hc9jwN0mX33XpTDVW9WqHp5AKaRbtAC3DqX
                         IXFMPgw3K45jxRb93f8tv9vL3rD9CUG1Gv4FM+o7ds7FRES5RTjv2RT/JVNJCoqF
                         ol8+ngLqRZCyBtQN7zYByWMRirPGoDUqdPYrj2yq+ObBBNhg5N+hOwKjjpzdj2Ud
                         1l7R+wxIqmJo1IYyy16xS8WsjyQuyC0lL456qkd5BDZ0Ag8j2X9H9D5220Ln7s9i
                         oezTipXipS7p7Jekf3Ywx6abJwOmB0rX79dV4qiNcGgzATnG1PkXxqt76VhcGa0W
                         DDVHEEYGbSQ6hIGSh0I7BQun0aLRZojfE3gqHQIDAQABAoIBAQCZmGrk8BK6tXCd
                         fY6yTiKxFzwb38IQP0ojIUWNrq0+9Xt+NsypviLHkXfXXCKKU4zUHeIGVRq5MN9b
                         BO56/RrcQHHOoJdUWuOV2qMqJvPUtC0CpGkD+valhfD75MxoXU7s3FK7yjxy3rsG
                         EmfA6tHV8/4a5umo5TqSd2YTm5B19AhRqiuUVI1wTB41DjULUGiMYrnYrhzQlVvj
                         5MjnKTlYu3V8PoYDfv1GmxPPh6vlpafXEeEYN8VB97e5x3DGHjZ5UrurAmTLTdO8
                         +AahyoKsIY612TkkQthJlt7FJAwnCGMgY6podzzvzICLFmmTXYiZ/28I4BX/mOSe
                         pZVnfRixAoGBAO6Uiwt40/PKs53mCEWngslSCsh9oGAaLTf/XdvMns5VmuyyAyKG
                         ti8Ol5wqBMi4GIUzjbgUvSUt+IowIrG3f5tN85wpjQ1UGVcpTnl5Qo9xaS1PFScQ
                         xrtWZ9eNj2TsIAMp/svJsyGG3OibxfnuAIpSXNQiJPwRlW3irzpGgVx/AoGBANYW
                         dnhshUcEHMJi3aXwR12OTDnaLoanVGLwLnkqLSYUZA7ZegpKq90UAuBdcEfgdpyi
                         PhKpeaeIiAaNnFo8m9aoTKr+7I6/uMTlwrVnfrsVTZv3orxjwQV20YIBCVRKD1uX
                         VhE0ozPZxwwKSPAFocpyWpGHGreGF1AIYBE9UBtjAoGBAI8bfPgJpyFyMiGBjO6z
                         FwlJc/xlFqDusrcHL7abW5qq0L4v3R+FrJw3ZYufzLTVcKfdj6GelwJJO+8wBm+R
                         gTKYJItEhT48duLIfTDyIpHGVm9+I1MGhh5zKuCqIhxIYr9jHloBB7kRm0rPvYY4
                         VAykcNgyDvtAVODP+4m6JvhjAoGBALbtTqErKN47V0+JJpapLnF0KxGrqeGIjIRV
                         cYA6V4WYGr7NeIfesecfOC356PyhgPfpcVyEztwlvwTKb3RzIT1TZN8fH4YBr6Ee
                         KTbTjefRFhVUjQqnucAvfGi29f+9oE3Ei9f7wA+H35ocF6JvTYUsHNMIO/3gZ38N
                         CPjyCMa9AoGBAMhsITNe3QcbsXAbdUR00dDsIFVROzyFJ2m40i4KCRM35bC/BIBs
                         q0TY3we+ERB40U8Z2BvU61QuwaunJ2+uGadHo58VSVdggqAo0BSkH58innKKt96J
                         69pcVH/4rmLbXdcmNYGm6iu+MlPQk4BUZknHSmVHIFdJ0EPupVaQ8RHT
                         -----END RSA PRIVATE KEY-----

The following table describes the SSH configuration properties.

Table 2. SSH Configuration Properties
Property Name Remarks

ignoreLocalSshSettings

If true, use property-based instead of file-based SSH config. Must be set as spring.cloud.config.server.git.ignoreLocalSshSettings, not inside a repository definition.

privateKey

Valid SSH private key. Must be set if ignoreLocalSshSettings is true and Git URI is SSH format.

hostKey

Valid SSH host key. Must be set if hostKeyAlgorithm is also set.

hostKeyAlgorithm

One of ssh-dss, ssh-rsa, ecdsa-sha2-nistp256, ecdsa-sha2-nistp384, or ecdsa-sha2-nistp521. Must be set if hostKey is also set.

strictHostKeyChecking

true or false. If false, ignore errors with host key.

knownHostsFile

Location of custom .known_hosts file.

preferredAuthentications

Override the server authentication method order. This should allow for evading login prompts if the server has keyboard-interactive authentication before the publickey method.

Placeholders in Git Search Paths

Spring Cloud Config Server also supports a search path with placeholders for the {application} and {profile} (and {label} if you need it), as shown in the following example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          searchPaths: '{application}'

The preceding listing causes a search of the repository for files in a directory with the same name as the application (as well as in the top level). Wildcards are also valid in a search path with placeholders (any matching directory is included in the search).

Force pull in Git Repositories

As mentioned earlier, Spring Cloud Config Server makes a clone of the remote git repository. If the local copy becomes dirty (for example, if folder content is changed by an OS process), Spring Cloud Config Server may be unable to update the local copy from the remote repository.

To solve this issue, there is a force-pull property that makes Spring Cloud Config Server force pull from the remote repository if the local copy is dirty, as shown in the following example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          force-pull: true

If you have a multiple-repositories configuration, you can configure the force-pull property per repository, as shown in the following example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://git/common/config-repo.git
          force-pull: true
          repos:
            team-a:
                pattern: team-a-*
                uri: https://git/team-a/config-repo.git
                force-pull: true
            team-b:
                pattern: team-b-*
                uri: https://git/team-b/config-repo.git
                force-pull: true
            team-c:
                pattern: team-c-*
                uri: https://git/team-a/config-repo.git
The default value for the force-pull property is false.
Deleting untracked branches in Git Repositories

Because Spring Cloud Config Server keeps a clone of the remote git repository, checking out a branch to the local repository (for example, when fetching properties by label) keeps that branch forever, or until the next server restart (which creates a new local repository). As a result, a remote branch may be deleted while its local copy remains available for fetching. If a Spring Cloud Config Server client service starts with --spring.cloud.config.label=deletedRemoteBranch,master, it fetches properties from the deletedRemoteBranch local branch, not from master.

To keep local repository branches clean and up to date with the remote, you can set the deleteUntrackedBranches property. It makes Spring Cloud Config Server force delete untracked branches from the local repository, as shown in the following example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          deleteUntrackedBranches: true
The default value for the deleteUntrackedBranches property is false.
Git Refresh Rate

You can control how often the config server will fetch updated configuration data from your Git backend by using spring.cloud.config.server.git.refreshRate. The value of this property is specified in seconds. By default the value is 0, meaning the config server will fetch updated configuration from the Git repo every time it is requested.
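
For example, the following sketch makes the server re-fetch from the Git backend at most once every 60 seconds (an arbitrary example value):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          refreshRate: 60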

Version Control Backend Filesystem Use
With VCS-based backends (git, svn), files are checked out or cloned to the local filesystem. By default, they are put in the system temporary directory with a prefix of config-repo-. On linux, for example, it could be /tmp/config-repo-<randomid>. Some operating systems routinely clean out temporary directories. This can lead to unexpected behavior, such as missing properties. To avoid this problem, change the directory that Config Server uses by setting spring.cloud.config.server.git.basedir or spring.cloud.config.server.svn.basedir to a directory that does not reside in the system temp structure.
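
For example, the following sketch places the local clone under /opt/config-repo-cache (a placeholder path outside the system temp structure):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          basedir: /opt/config-repo-cache
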
File System Backend

There is also a “native” profile in the Config Server that does not use Git but loads the config files from the local classpath or file system (any static URL you want to point to with spring.cloud.config.server.native.searchLocations). To use the native profile, launch the Config Server with spring.profiles.active=native.

Remember to use the file: prefix for file resources (the default without a prefix is usually the classpath). As with any Spring Boot configuration, you can embed ${}-style environment placeholders, but remember that absolute paths in Windows require an extra / (for example, /${user.home}/config-repo).
The default value of the searchLocations is identical to a local Spring Boot application (that is, [classpath:/, classpath:/config, file:./, file:./config]). This does not expose the application.properties from the server to all clients, because any property sources present in the server are removed before being sent to the client.
A filesystem backend is great for getting started quickly and for testing. To use it in production, you need to be sure that the file system is reliable and shared across all instances of the Config Server.

The search locations can contain placeholders for {application}, {profile}, and {label}. In this way, you can segregate the directories in the path and choose a strategy that makes sense for you (such as subdirectory per application or subdirectory per profile).

If you do not use placeholders in the search locations, this repository also appends the {label} parameter of the HTTP resource to a suffix on the search path, so properties files are loaded from each search location and a subdirectory with the same name as the label (the labelled properties take precedence in the Spring Environment). Thus, the default behaviour with no placeholders is the same as adding a search location ending with /{label}/. For example, file:/tmp/config is the same as file:/tmp/config,file:/tmp/config/{label}. This behavior can be disabled by setting spring.cloud.config.server.native.addLabelLocations=false.
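
Putting this together, a minimal native-profile configuration (with a placeholder search location) might look like the following:

spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          searchLocations: file:///opt/config/{application}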

Vault Backend

Spring Cloud Config Server also supports Vault as a backend.

Vault is a tool for securely accessing secrets. A secret is anything to which you want to tightly control access, such as API keys, passwords, certificates, and other sensitive information. Vault provides a unified interface to any secret while providing tight access control and recording a detailed audit log.

For more information on Vault, see the Vault quick start guide.

To enable the config server to use a Vault backend, you can run your config server with the vault profile. For example, in your config server’s application.properties, you can add spring.profiles.active=vault.

By default, the config server assumes that your Vault server runs at 127.0.0.1:8200. It also assumes that the name of backend is secret and the key is application. All of these defaults can be configured in your config server’s application.properties. The following table describes configurable Vault properties:

Name Default Value

host

127.0.0.1

port

8200

scheme

http

backend

secret

defaultKey

application

profileSeparator

,

kvVersion

1

skipSslValidation

false

timeout

5

namespace

null

All of the properties in the preceding table must be prefixed with spring.cloud.config.server.vault or placed in the correct Vault section of a composite configuration.

All configurable properties can be found in org.springframework.cloud.config.server.environment.VaultEnvironmentProperties.

Vault 0.10.0 introduced a versioned key-value backend (k/v backend version 2) that exposes a different API than earlier versions. It now requires a data/ between the mount path and the actual context path and wraps secrets in a data object. Setting spring.cloud.config.server.vault.kv-version=2 takes this into account.

Optionally, there is support for the Vault Enterprise X-Vault-Namespace header. To have it sent to Vault, set the namespace property.
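
As an illustration, the following sketch overrides several of the defaults listed above (the hostname is a placeholder):

spring:
  profiles:
    active: vault
  cloud:
    config:
      server:
        vault:
          host: vault.example.com
          port: 8200
          scheme: https
          backend: secret
          kvVersion: 2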

With your config server running, you can make HTTP requests to the server to retrieve values from the Vault backend. To do so, you need a token for your Vault server.

First, place some data in your Vault, as shown in the following example:

$ vault kv put secret/application foo=bar baz=bam
$ vault kv put secret/myapp foo=myappsbar

Second, make an HTTP request to your config server to retrieve the values, as shown in the following example:

$ curl -X "GET" "http://localhost:8888/myapp/default" -H "X-Config-Token: yourtoken"

You should see a response similar to the following:

{
   "name":"myapp",
   "profiles":[
      "default"
   ],
   "label":null,
   "version":null,
   "state":null,
   "propertySources":[
      {
         "name":"vault:myapp",
         "source":{
            "foo":"myappsbar"
         }
      },
      {
         "name":"vault:application",
         "source":{
            "baz":"bam",
            "foo":"bar"
         }
      }
   ]
}

The default way for a client to provide the necessary authentication to let Config Server talk to Vault is to set the X-Config-Token header. However, you can instead omit the header and configure the authentication in the server, by setting the same configuration properties as Spring Cloud Vault. The property to set is spring.cloud.config.server.vault.authentication. It should be set to one of the supported authentication methods. You may also need to set other properties specific to the authentication method you use, by using the same property names as documented for spring.cloud.vault but instead using the spring.cloud.config.server.vault prefix. See the Spring Cloud Vault Reference Guide for more detail.

If you omit the X-Config-Token header and use a server property to set the authentication, the Config Server application needs an additional dependency on Spring Vault to enable the additional authentication options. See the Spring Vault Reference Guide for how to add that dependency.
Multiple Properties Sources

When using Vault, you can provide your applications with multiple properties sources. For example, assume you have written data to the following paths in Vault:

secret/myApp,dev
secret/myApp
secret/application,dev
secret/application

Properties written to secret/application are available to all applications using the Config Server. An application with the name, myApp, would have any properties written to secret/myApp and secret/application available to it. When myApp has the dev profile enabled, properties written to all of the above paths would be available to it, with properties in the first path in the list taking priority over the others.

Accessing Backends Through a Proxy

The configuration server can access a Git or Vault backend through an HTTP or HTTPS proxy. This behavior is controlled for either Git or Vault by settings under proxy.http and proxy.https. These settings are per repository, so if you are using a composite environment repository you must configure proxy settings for each backend in the composite individually. If using a network which requires separate proxy servers for HTTP and HTTPS URLs, you can configure both the HTTP and the HTTPS proxy settings for a single backend.

The following table describes the proxy configuration properties for both HTTP and HTTPS proxies. All of these properties must be prefixed by proxy.http or proxy.https.

Table 3. Proxy Configuration Properties
Property Name Remarks

host

The host of the proxy.

port

The port with which to access the proxy.

nonProxyHosts

Any hosts which the configuration server should access outside the proxy. If values are provided for both proxy.http.nonProxyHosts and proxy.https.nonProxyHosts, the proxy.http value will be used.

username

The username with which to authenticate to the proxy. If values are provided for both proxy.http.username and proxy.https.username, the proxy.http value will be used.

password

The password with which to authenticate to the proxy. If values are provided for both proxy.http.password and proxy.https.password, the proxy.http value will be used.

The following configuration uses an HTTPS proxy to access a Git repository.

spring:
  profiles:
    active: git
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          proxy:
            https:
              host: my-proxy.host.io
              password: myproxypassword
              port: '3128'
              username: myproxyusername
              nonProxyHosts: example.com
Sharing Configuration With All Applications

Sharing configuration between all applications varies according to which approach you take, as described in the following topics:

File Based Repositories

With file-based (git, svn, and native) repositories, resources with file names in application* (application.properties, application.yml, application-*.properties, and so on) are shared between all client applications. You can use resources with these file names to configure global defaults and have them be overridden by application-specific files as necessary.

The property overrides feature can also be used to set global defaults, with placeholders letting applications override them locally.

With the “native” profile (a local file system backend), you should use an explicit search location that is not part of the server’s own configuration. Otherwise, the application* resources in the default search locations get removed because they are part of the server.
Vault Server

When using Vault as a backend, you can share configuration with all applications by placing configuration in secret/application. For example, if you run the following Vault command, all applications using the config server will have the properties foo and baz available to them:

$ vault write secret/application foo=bar baz=bam
CredHub Server

When using CredHub as a backend, you can share configuration with all applications by placing configuration in /application/ or by placing it in the default profile for the application. For example, if you run the following CredHub command, all applications using the config server will have the properties shared.color1 and shared.color2 available to them:

credhub set --name "/application/profile/master/shared" --type=json
value: {"shared.color1": "blue", "shared.color": "red"}
credhub set --name "/my-app/default/master/more-shared" --type=json
value: {"shared.word1": "hello", "shared.word2": "world"}
JDBC Backend

Spring Cloud Config Server supports JDBC (relational database) as a backend for configuration properties. You can enable this feature by adding spring-jdbc to the classpath and using the jdbc profile or by adding a bean of type JdbcEnvironmentRepository. If you include the right dependencies on the classpath (see the user guide for more details on that), Spring Boot configures a data source.

The database needs to have a table called PROPERTIES with columns called APPLICATION, PROFILE, and LABEL (with the usual Environment meaning), plus KEY and VALUE for the key and value pairs in Properties style. All fields are of type String in Java, so you can make them VARCHAR of whatever length you need. Property values behave in the same way as they would if they came from Spring Boot properties files named {application}-{profile}.properties, including all the encryption and decryption, which will be applied as post-processing steps (that is, not in the repository implementation directly).
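
A minimal sketch of enabling the jdbc profile with a placeholder datasource might look like the following (the datasource settings are standard Spring Boot properties):

spring:
  profiles:
    active: jdbc
  datasource:
    url: jdbc:postgresql://localhost/config
    username: configuser
    password: configpass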

Redis Backend

Spring Cloud Config Server supports Redis as a backend for configuration properties. You can enable this feature by adding a dependency to Spring Data Redis.

pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
</dependencies>

The following configuration uses Spring Data RedisTemplate to access Redis. You can use spring.redis.* properties to override the default connection settings.

spring:
  profiles:
    active: redis
  redis:
    host: redis
    port: 16379

The properties should be stored as fields in a hash. The name of the hash should be the same as the spring.application.name property or a combination of spring.application.name and spring.profiles.active[n].

HMSET sample-app server.port "8100" sample.topic.name "test" test.property1 "property1"

After executing the command shown above, the hash should contain the following keys and values:

HGETALL sample-app
{
  "server.port": "8100",
  "sample.topic.name": "test",
  "test.property1": "property1"
}
When no profile is specified default will be used.
AWS S3 Backend

Spring Cloud Config Server supports AWS S3 as a backend for configuration properties. You can enable this feature by adding a dependency to the AWS Java SDK For Amazon S3.

pom.xml
<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3</artifactId>
    </dependency>
</dependencies>

The following configuration uses the AWS S3 client to access configuration files. You can use spring.cloud.config.server.awss3.* properties to select the bucket where your configuration is stored.

spring:
  profiles:
    active: awss3
  cloud:
    config:
      server:
        awss3:
          region: us-east-1
          bucket: bucket1

It is also possible to override the standard endpoint of your S3 service by specifying an AWS URL with spring.cloud.config.server.awss3.endpoint. This allows support for beta regions of S3 and other S3-compatible storage APIs.

Credentials are found using the Default AWS Credential Provider Chain. Versioned and encrypted buckets are supported without further configuration.

Configuration files are stored in your bucket as {application}-{profile}.properties, {application}-{profile}.yml or {application}-{profile}.json. An optional label can be provided to specify a directory path to the file.

When no profile is specified default will be used.
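
For example, with the bucket1 bucket configured above, you might upload configuration files with the AWS CLI, as in the following sketch (the myapp file names and mylabel directory are placeholders):

$ aws s3 cp myapp-default.properties s3://bucket1/
$ aws s3 cp myapp-default.properties s3://bucket1/mylabel/
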
CredHub Backend

Spring Cloud Config Server supports CredHub as a backend for configuration properties. You can enable this feature by adding a dependency to Spring CredHub.

pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.credhub</groupId>
        <artifactId>spring-credhub-starter</artifactId>
    </dependency>
</dependencies>

The following configuration uses mutual TLS to access a CredHub:

spring:
  profiles:
    active: credhub
  cloud:
    config:
      server:
        credhub:
          url: https://credhub:8844

The properties should be stored as JSON, such as:

credhub set --name "/demo-app/default/master/toggles" --type=json
value: {"toggle.button": "blue", "toggle.link": "red"}
credhub set --name "/demo-app/default/master/abs" --type=json
value: {"marketing.enabled": true, "external.enabled": false}

All client applications with the name spring.cloud.config.name=demo-app will have the following properties available to them:

{
    toggle.button: "blue",
    toggle.link: "red",
    marketing.enabled: true,
    external.enabled: false
}
When no profile is specified, default is used; when no label is specified, master is used as the default value. NOTE: Values added to application will be shared by all the applications.
OAuth 2.0

You can authenticate with OAuth 2.0 using UAA as a provider.

pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-oauth2-client</artifactId>
    </dependency>
</dependencies>

The following configuration uses OAuth 2.0 and UAA to access a CredHub:

spring:
  profiles:
    active: credhub
  cloud:
    config:
      server:
        credhub:
          url: https://credhub:8844
          oauth2:
            registration-id: credhub-client
  security:
    oauth2:
      client:
        registration:
          credhub-client:
            provider: uaa
            client-id: credhub_config_server
            client-secret: asecret
            authorization-grant-type: client_credentials
        provider:
          uaa:
            token-uri: https://uaa:8443/oauth/token
The UAA client-id used should have credhub.read as a scope.
Composite Environment Repositories

In some scenarios, you may wish to pull configuration data from multiple environment repositories. To do so, you can enable the composite profile in your configuration server’s application properties or YAML file. If, for example, you want to pull configuration data from a Subversion repository as well as two Git repositories, you can set the following properties for your configuration server:

spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
        -
          type: svn
          uri: file:///path/to/svn/repo
        -
          type: git
          uri: file:///path/to/rex/git/repo
        -
          type: git
          uri: file:///path/to/walter/git/repo

Using this configuration, precedence is determined by the order in which repositories are listed under the composite key. In the above example, the Subversion repository is listed first, so a value found in the Subversion repository will override values found for the same property in one of the Git repositories. A value found in the rex Git repository will be used before a value found for the same property in the walter Git repository.

If you want to pull configuration data only from repositories that are each of distinct types, you can enable the corresponding profiles, rather than the composite profile, in your configuration server’s application properties or YAML file. If, for example, you want to pull configuration data from a single Git repository and a single HashiCorp Vault server, you can set the following properties for your configuration server:

spring:
  profiles:
    active: git, vault
  cloud:
    config:
      server:
        git:
          uri: file:///path/to/git/repo
          order: 2
        vault:
          host: 127.0.0.1
          port: 8200
          order: 1

Using this configuration, precedence can be determined by an order property. You can use the order property to specify the priority order for all your repositories. The lower the numerical value of the order property, the higher priority it has. The priority order of a repository helps resolve any potential conflicts between repositories that contain values for the same properties.

If your composite environment includes a Vault server as in the previous example, you must include a Vault token in every request made to the configuration server. See Vault Backend.
Any type of failure when retrieving values from an environment repository results in a failure for the entire composite environment.
When using a composite environment, it is important that all repositories contain the same labels. If you have an environment similar to those in the preceding examples and you request configuration data with the master label but the Subversion repository does not contain a branch called master, the entire request fails.
Custom Composite Environment Repositories

In addition to using one of the environment repositories from Spring Cloud, you can also provide your own EnvironmentRepository bean to be included as part of a composite environment. To do so, your bean must implement the EnvironmentRepository interface. If you want to control the priority of your custom EnvironmentRepository within the composite environment, you should also implement the Ordered interface and override the getOrder method. If you do not implement the Ordered interface, your EnvironmentRepository is given the lowest priority.
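
A minimal sketch of such a bean (the class name and property values are illustrative) might look like the following:

public class CustomEnvironmentRepository implements EnvironmentRepository, Ordered {

    @Override
    public Environment findOne(String application, String profile, String label) {
        // Build an Environment by hand; a real implementation would read
        // from whatever backing store it wraps
        Environment environment = new Environment(application, profile);
        Map<String, Object> properties = new HashMap<>();
        properties.put("demo.source", "custom-repository");
        environment.add(new PropertySource("customPropertySource", properties));
        return environment;
    }

    @Override
    public int getOrder() {
        // A lower value gives this repository higher priority in the composite
        return Ordered.LOWEST_PRECEDENCE;
    }
}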

Property Overrides

The Config Server has an “overrides” feature that lets the operator provide configuration properties to all applications. The overridden properties cannot be accidentally changed by the application with the normal Spring Boot hooks. To declare overrides, add a map of name-value pairs to spring.cloud.config.server.overrides, as shown in the following example:

spring:
  cloud:
    config:
      server:
        overrides:
          foo: bar

The preceding example causes all applications that are config clients to read foo=bar, independent of their own configuration.

A configuration system cannot force an application to use configuration data in any particular way. Consequently, overrides are not enforceable. However, they do provide useful default behavior for Spring Cloud Config clients.
Normally, Spring environment placeholders with ${} can be escaped (and resolved on the client) by using backslash (\) to escape the $ or the {. For example, \${app.foo:bar} resolves to bar, unless the app provides its own app.foo.
In YAML, you do not need to escape the backslash itself. However, in properties files, you do need to escape the backslash, when you configure the overrides on the server.

You can change the priority of all overrides in the client to be more like default values, letting applications supply their own values in environment variables or System properties, by setting the spring.cloud.config.overrideNone=true flag (the default is false) in the remote repository.

4.2.2. Health Indicator

Config Server comes with a Health Indicator that checks whether the configured EnvironmentRepository is working. By default, it asks the EnvironmentRepository for an application named app, the default profile, and the default label provided by the EnvironmentRepository implementation.

You can configure the Health Indicator to check more applications along with custom profiles and custom labels, as shown in the following example:

spring:
  cloud:
    config:
      server:
        health:
          repositories:
            myservice:
              label: mylabel
            myservice-dev:
              name: myservice
              profiles: development

You can disable the Health Indicator by setting spring.cloud.config.server.health.enabled=false.

4.2.3. Security

You can secure your Config Server in any way that makes sense to you (from physical network security to OAuth2 bearer tokens), because Spring Security and Spring Boot offer support for many security arrangements.

To use the default Spring Boot-configured HTTP Basic security, include Spring Security on the classpath (for example, through spring-boot-starter-security). The default is a username of user and a randomly generated password. A random password is not useful in practice, so we recommend you configure the password (by setting spring.security.user.password) and encrypt it (see below for instructions on how to do that).

4.2.4. Encryption and Decryption

To use the encryption and decryption features you need the full-strength JCE installed in your JVM (it is not included by default). You can download the “Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files” from Oracle and follow the installation instructions (essentially, you need to replace the two policy files in the JRE lib/security directory with the ones that you downloaded).

If the remote property sources contain encrypted content (values starting with {cipher}), they are decrypted before sending to clients over HTTP. The main advantage of this setup is that the property values need not be in plain text when they are “at rest” (for example, in a git repository). If a value cannot be decrypted, it is removed from the property source and an additional property is added with the same key but prefixed with invalid and a value that means “not applicable” (usually <n/a>). This is largely to prevent cipher text being used as a password and accidentally leaking.

If you set up a remote config repository for config client applications, it might contain an application.yml similar to the following:

application.yml
spring:
  datasource:
    username: dbuser
    password: '{cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ'

Encrypted values in a .properties file must not be wrapped in quotes. Otherwise, the value is not decrypted. The following example shows values that would work:

application.properties
spring.datasource.username: dbuser
spring.datasource.password: {cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ

You can safely push this plain text to a shared git repository, and the secret password remains protected.

The server also exposes /encrypt and /decrypt endpoints (on the assumption that these are secured and only accessed by authorized agents). If you edit a remote config file, you can use the Config Server to encrypt values by POSTing to the /encrypt endpoint, as shown in the following example:

$ curl localhost:8888/encrypt -d mysecret
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
If the value you encrypt has characters in it that need to be URL encoded, you should use the --data-urlencode option to curl to make sure they are encoded properly.
Be sure not to include any of the curl command statistics in the encrypted value. Outputting the value to a file can help avoid this problem.

The inverse operation is also available through /decrypt (provided the server is configured with a symmetric key or a full key pair), as shown in the following example:

$ curl localhost:8888/decrypt -d 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret
If you are testing with curl, then use --data-urlencode (instead of -d) or set an explicit Content-Type: text/plain to make sure curl encodes the data correctly when there are special characters ('+' is particularly tricky).

Take the encrypted value and add the {cipher} prefix before you put it in the YAML or properties file and before you commit and push it to a remote (potentially insecure) store.

The /encrypt and /decrypt endpoints also both accept paths in the form of /*/{application}/{profiles}, which can be used to control cryptography on a per-application (name) and per-profile basis when clients call into the main environment resource.

To control the cryptography in this granular way, you must also provide a @Bean of type TextEncryptorLocator that creates a different encryptor per name and profiles. The one that is provided by default does not do so (all encryptions use the same key).
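
The following is a minimal sketch of such a bean. The keys map passed to the locator includes name and profiles entries for the current request (plus any {name:value} prefixes from the cipher text); the per-application password scheme shown here is purely an illustrative assumption:

import org.springframework.cloud.config.server.encryption.TextEncryptorLocator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.crypto.encrypt.Encryptors;

@Configuration
public class PerApplicationEncryptorConfiguration {

    @Bean
    public TextEncryptorLocator textEncryptorLocator() {
        // "name" and "profiles" identify the requesting application
        return keys -> {
            String application = keys.get("name");
            // derive a per-application encryptor; "deadbeef" is a sample hex salt
            return Encryptors.text("secret-" + application, "deadbeef");
        };
    }
}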

The spring command line client (with Spring Cloud CLI extensions installed) can also be used to encrypt and decrypt, as shown in the following example:

$ spring encrypt mysecret --key foo
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
$ spring decrypt --key foo 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret

To use a key in a file (such as an RSA public key for encryption), prepend the key value with "@" and provide the file path, as shown in the following example:

$ spring encrypt mysecret --key @${HOME}/.ssh/id_rsa.pub
AQAjPgt3eFZQXwt8tsHAVv/QHiY5sI2dRcR+...
The --key argument is mandatory (despite having a -- prefix).

4.2.5. Key Management

The Config Server can use a symmetric (shared) key or an asymmetric one (RSA key pair). The asymmetric choice is superior in terms of security, but it is often more convenient to use a symmetric key since it is a single property value to configure in the bootstrap.properties.

To configure a symmetric key, you need to set encrypt.key to a secret String (or use the ENCRYPT_KEY environment variable to keep it out of plain-text configuration files).

You cannot configure an asymmetric key using encrypt.key.

To configure an asymmetric key, use a keystore (for example, one created by the keytool utility that comes with the JDK). The keystore properties are encrypt.keyStore.*, with * equal to one of the following:

Property                     Description

encrypt.keyStore.location    Contains a Resource location

encrypt.keyStore.password    Holds the password that unlocks the keystore

encrypt.keyStore.alias       Identifies which key in the store to use

encrypt.keyStore.type        The type of KeyStore to create. Defaults to jks.

The encryption is done with the public key, and a private key is needed for decryption. Thus, in principle, you can configure only the public key in the server if you want to only encrypt (and are prepared to decrypt the values yourself locally with the private key). In practice, you might not want to decrypt locally, because it spreads the key management process around all the clients, instead of concentrating it in the server. On the other hand, it can be a useful option if your config server is relatively insecure and only a handful of clients need the encrypted properties.

4.2.6. Creating a Key Store for Testing

To create a keystore for testing, you can use a command resembling the following:

$ keytool -genkeypair -alias mytestkey -keyalg RSA \
  -dname "CN=Web Server,OU=Unit,O=Organization,L=City,S=State,C=US" \
  -keypass changeme -keystore server.jks -storepass letmein
When using JDK 11 or above, you may get the following warning from the preceding command. In that case, make sure the keypass and storepass values match:
Warning:  Different store and key passwords not supported for PKCS12 KeyStores. Ignoring user-specified -keypass value.

Put the server.jks file in the classpath (for instance) and then, in your bootstrap.yml, for the Config Server, create the following settings:

encrypt:
  keyStore:
    location: classpath:/server.jks
    password: letmein
    alias: mytestkey
    secret: changeme

4.2.7. Using Multiple Keys and Key Rotation

In addition to the {cipher} prefix in encrypted property values, the Config Server looks for zero or more {name:value} prefixes before the start of the (Base64 encoded) cipher text. The keys are passed to a TextEncryptorLocator, which can do whatever logic it needs to locate a TextEncryptor for the cipher. If you have configured a keystore (encrypt.keyStore.location), the default locator looks for keys with aliases supplied by the key prefix, with cipher text resembling the following:

foo:
  bar: '{cipher}{key:testkey}...'

The locator looks for a key named "testkey". A secret can also be supplied by using a {secret:…​} value in the prefix. However, if it is not supplied, the default is to use the keystore password (which is what you get when you build a keystore and do not specify a secret). If you do supply a secret, you should also encrypt the secret using a custom SecretLocator.

When the keys are being used only to encrypt a few bytes of configuration data (that is, they are not being used elsewhere), key rotation is hardly ever necessary on cryptographic grounds. However, you might occasionally need to change the keys (for example, in the event of a security breach). In that case, all the clients would need to change their source config files (for example, in git) and use a new {key:…​} prefix in all the ciphers. Note that the clients need to first check that the key alias is available in the Config Server keystore.

If you want to let the Config Server handle all encryption as well as decryption, the {name:value} prefixes can also be added as plain text posted to the /encrypt endpoint.

4.2.8. Serving Encrypted Properties

Sometimes you want the clients to decrypt the configuration locally, instead of doing it in the server. In that case, if you provide the encrypt.* configuration to locate a key, you can still have /encrypt and /decrypt endpoints, but you need to explicitly switch off the decryption of outgoing properties by placing spring.cloud.config.server.encrypt.enabled=false in bootstrap.[yml|properties]. If you do not care about the endpoints, it should work if you do not configure either the key or the enabled flag.

4.3. Serving Alternative Formats

The default JSON format from the environment endpoints is perfect for consumption by Spring applications, because it maps directly onto the Environment abstraction. If you prefer, you can consume the same data as YAML or Java properties by adding a suffix (".yml", ".yaml" or ".properties") to the resource path. This can be useful for consumption by applications that do not care about the structure of the JSON endpoints or the extra metadata they provide (for example, an application that is not using Spring might benefit from the simplicity of this approach).

The YAML and properties representations have an additional flag (provided as a boolean query parameter called resolvePlaceholders) to signal that placeholders in the source documents (in the standard Spring ${…​} form) should be resolved in the output before rendering, where possible. This is a useful feature for consumers that do not know about the Spring placeholder conventions.
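
For example, assuming an application named myapp (a placeholder) and a server running locally, the following requests return the same configuration as properties, with and without placeholder resolution:

$ curl localhost:8888/myapp-default.properties
$ curl 'localhost:8888/myapp-default.properties?resolvePlaceholders=true'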

There are limitations in using the YAML or properties formats, mainly in relation to the loss of metadata. For example, the JSON is structured as an ordered list of property sources, with names that correlate with the source. The YAML and properties forms are coalesced into a single map, even if the origin of the values has multiple sources, and the names of the original source files are lost. Also, the YAML representation is not necessarily a faithful representation of the YAML source in a backing repository either. It is constructed from a list of flat property sources, and assumptions have to be made about the form of the keys.

4.4. Serving Plain Text

Instead of using the Environment abstraction (or one of the alternative representations of it in YAML or properties format), your applications might need generic plain-text configuration files that are tailored to their environment. The Config Server provides these through an additional endpoint at /{application}/{profile}/{label}/{path}, where application, profile, and label have the same meaning as the regular environment endpoint, but path is a path to a file name (such as log.xml). The source files for this endpoint are located in the same way as for the environment endpoints. The same search path is used for properties and YAML files. However, instead of aggregating all matching resources, only the first one to match is returned.

After a resource is located, placeholders in the normal format (${…​}) are resolved by using the effective Environment for the supplied application name, profile, and label. In this way, the resource endpoint is tightly integrated with the environment endpoints.

As with the source files for environment configuration, the profile is used to resolve the file name. So, if you want a profile-specific file, /*/development/*/logback.xml can be resolved by a file called logback-development.xml (in preference to logback.xml).
If you do not want to supply the label and let the server use the default label, you can supply a useDefaultLabel request parameter. Consequently, the preceding example for the default profile could be /sample/default/nginx.conf?useDefaultLabel.

At present, Spring Cloud Config can serve plain text for git, SVN, native backends, and AWS S3. The support for git, SVN, and native backends is identical. AWS S3 works a bit differently. The following sections show how each one works:

4.4.1. Git, SVN, and Native Backends

Consider the following example for a Git or SVN repository or a native backend:

application.yml
nginx.conf

The nginx.conf might resemble the following listing:

server {
    listen              80;
    server_name         ${nginx.server.name};
}

application.yml might resemble the following listing:

nginx:
  server:
    name: example.com
---
spring:
  profiles: development
nginx:
  server:
    name: develop.com

The /sample/default/master/nginx.conf resource might be as follows:

server {
    listen              80;
    server_name         example.com;
}

/sample/development/master/nginx.conf might be as follows:

server {
    listen              80;
    server_name         develop.com;
}

4.4.2. AWS S3

To enable serving plain text for AWS S3, the Config Server application needs to include a dependency on Spring Cloud AWS. For details on how to set up that dependency, see the Spring Cloud AWS Reference Guide. Then you need to configure Spring Cloud AWS, as described in the same guide.

4.4.3. Decrypting Plain Text

By default, encrypted values in plain text files are not decrypted. To enable decryption for plain text files, set spring.cloud.config.server.encrypt.enabled=true and spring.cloud.config.server.encrypt.plainTextEncrypt=true in bootstrap.[yml|properties].
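
Expressed as YAML, that configuration looks like the following:

bootstrap.yml
spring:
  cloud:
    config:
      server:
        encrypt:
          enabled: true
          plainTextEncrypt: true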

Decrypting plain text files is only supported for YAML, JSON, and properties file extensions.

If this feature is enabled, and an unsupported file extension is requested, any encrypted values in the file are not decrypted.

4.5. Embedding the Config Server

The Config Server runs best as a standalone application. However, if need be, you can embed it in another application. To do so, use the @EnableConfigServer annotation. An optional property named spring.cloud.config.server.bootstrap can be useful in this case. It is a flag to indicate whether the server should configure itself from its own remote repository. By default, the flag is off, because it can delay startup. However, when embedded in another application, it makes sense to initialize the same way as any other application. When setting spring.cloud.config.server.bootstrap to true, you must also use a composite environment repository configuration, as shown in the following example:

spring:
  application:
    name: configserver
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
          - type: native
            search-locations: ${HOME}/Desktop/config
        bootstrap: true
If you use the bootstrap flag, the config server needs to have its name and repository URI configured in bootstrap.yml.

To change the location of the server endpoints, you can (optionally) set spring.cloud.config.server.prefix (for example, /config), to serve the resources under a prefix. The prefix should start but not end with a /. It is applied to the @RequestMappings in the Config Server (that is, underneath the Spring Boot server.servletPath and server.contextPath prefixes).

If you want to read the configuration for an application directly from the backend repository (instead of from the config server), you basically want an embedded config server with no endpoints. You can switch off the endpoints entirely by not using the @EnableConfigServer annotation (set spring.cloud.config.server.bootstrap=true).

4.6. Push Notifications and Spring Cloud Bus

Many source code repository providers (such as Github, Gitlab, Gitea, Gitee, Gogs, or Bitbucket) notify you of changes in a repository through a webhook. You can configure the webhook through the provider’s user interface as a URL and a set of events in which you are interested. For instance, Github uses a POST to the webhook with a JSON body containing a list of commits and a header (X-Github-Event) set to push. If you add a dependency on the spring-cloud-config-monitor library and activate the Spring Cloud Bus in your Config Server, then a /monitor endpoint is enabled.

When the webhook is activated, the Config Server sends a RefreshRemoteApplicationEvent targeted at the applications it thinks might have changed. The change detection strategy can be customized. However, by default, it looks for changes in files that match the application name (for example, foo.properties is targeted at the foo application, while application.properties is targeted at all applications). The strategy to use when you want to override the behavior is PropertyPathNotificationExtractor, which accepts the request headers and body as parameters and returns a list of file paths that changed.
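
The following is a minimal sketch of such an extractor, assuming a hypothetical provider that sends an X-My-Event header and a JSON body with a changed array of paths (both names are made up for illustration). Returning null lets other extractors try:

import java.util.List;
import java.util.Map;

import org.springframework.cloud.config.monitor.PropertyPathNotification;
import org.springframework.cloud.config.monitor.PropertyPathNotificationExtractor;
import org.springframework.util.MultiValueMap;

public class MyPathNotificationExtractor implements PropertyPathNotificationExtractor {

    @Override
    @SuppressWarnings("unchecked")
    public PropertyPathNotification extract(MultiValueMap<String, String> headers,
            Map<String, Object> payload) {
        if (headers.containsKey("X-My-Event")) {
            // "changed" is the hypothetical provider's list of modified file paths
            List<String> paths = (List<String>) payload.get("changed");
            if (paths != null) {
                return new PropertyPathNotification(paths.toArray(new String[0]));
            }
        }
        return null;
    }
}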

The default configuration works out of the box with Github, Gitlab, Gitea, Gitee, Gogs or Bitbucket. In addition to the JSON notifications from Github, Gitlab, Gitee, or Bitbucket, you can trigger a change notification by POSTing to /monitor with form-encoded body parameters in the pattern of path={application}. Doing so broadcasts to applications matching the {application} pattern (which can contain wildcards).

The RefreshRemoteApplicationEvent is transmitted only if the spring-cloud-bus is activated in both the Config Server and in the client application.
The default configuration also detects filesystem changes in local git repositories. In that case, the webhook is not used. However, as soon as you edit a config file, a refresh is broadcast.

4.7. Spring Cloud Config Client

A Spring Boot application can take immediate advantage of the Spring Config Server (or other external property sources provided by the application developer). It also picks up some additional useful features related to Environment change events.

4.7.1. Config First Bootstrap

The default behavior for any application that has the Spring Cloud Config Client on the classpath is as follows: When a config client starts, it binds to the Config Server (through the spring.cloud.config.uri bootstrap configuration property) and initializes Spring Environment with remote property sources.

The net result of this behavior is that all client applications that want to consume the Config Server need a bootstrap.yml (or an environment variable) with the server address set in spring.cloud.config.uri (it defaults to "http://localhost:8888").
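
For example (the server address shown is a placeholder for your own deployment):

bootstrap.yml
spring:
  application:
    name: myapp
  cloud:
    config:
      uri: http://configserver.example.com:8888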

4.7.2. Discovery First Bootstrap

If you use a DiscoveryClient implementation, such as Spring Cloud Netflix and Eureka Service Discovery or Spring Cloud Consul, you can have the Config Server register with the Discovery Service. However, in the default “Config First” mode, clients cannot take advantage of the registration.

If you prefer to use DiscoveryClient to locate the Config Server, you can do so by setting spring.cloud.config.discovery.enabled=true (the default is false). The net result of doing so is that client applications all need a bootstrap.yml (or an environment variable) with the appropriate discovery configuration. For example, with Spring Cloud Netflix, you need to define the Eureka server address (for example, in eureka.client.serviceUrl.defaultZone). The price for using this option is an extra network round trip on startup, to locate the service registration. The benefit is that, as long as the Discovery Service is a fixed point, the Config Server can change its coordinates. The default service ID is configserver, but you can change that on the client by setting spring.cloud.config.discovery.serviceId (and on the server, in the usual way for a service, such as by setting spring.application.name).
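
For example, a minimal sketch with Eureka (the Eureka address is a placeholder, and the service ID shown is the default):

bootstrap.yml
spring:
  cloud:
    config:
      discovery:
        enabled: true
        service-id: configserver
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/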

The discovery client implementations all support some kind of metadata map (for example, we have eureka.instance.metadataMap for Eureka). Some additional properties of the Config Server may need to be configured in its service registration metadata so that clients can connect correctly. If the Config Server is secured with HTTP Basic, you can configure the credentials as user and password. Also, if the Config Server has a context path, you can set configPath. For example, the following YAML file is for a Config Server that is a Eureka client:

bootstrap.yml
eureka:
  instance:
    ...
    metadataMap:
      user: osufhalskjrtl
      password: lviuhlszvaorhvlo5847
      configPath: /config

4.7.3. Config Client Fail Fast

In some cases, you may want to fail startup of a service if it cannot connect to the Config Server. If this is the desired behavior, set the bootstrap configuration property spring.cloud.config.fail-fast=true to make the client halt with an Exception.

4.7.4. Config Client Retry

If you expect that the config server may occasionally be unavailable when your application starts, you can make it keep trying after a failure. First, you need to set spring.cloud.config.fail-fast=true. Then you need to add spring-retry and spring-boot-starter-aop to your classpath. The default behavior is to retry six times with an initial backoff interval of 1000ms and an exponential multiplier of 1.1 for subsequent backoffs. You can configure these properties (and others) by setting the spring.cloud.config.retry.* configuration properties.
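
For example, the following sketch spells out the defaults just described (tune the values to your environment):

bootstrap.yml
spring:
  cloud:
    config:
      fail-fast: true
      retry:
        initial-interval: 1000
        multiplier: 1.1
        max-attempts: 6
        max-interval: 2000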

To take full control of the retry behavior, add a @Bean of type RetryOperationsInterceptor with an ID of configServerRetryInterceptor. Spring Retry has a RetryInterceptorBuilder that supports creating one.
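
A minimal sketch of such a bean follows (the backoff numbers are arbitrary examples). Note that, like the custom RestTemplate shown later in this chapter, it may need to be registered as bootstrap configuration to take effect:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.interceptor.RetryInterceptorBuilder;
import org.springframework.retry.interceptor.RetryOperationsInterceptor;

@Configuration
public class ConfigClientRetryConfiguration {

    @Bean(name = "configServerRetryInterceptor")
    public RetryOperationsInterceptor configServerRetryInterceptor() {
        // up to 10 attempts, starting at 2s and backing off by 1.5x up to 10s
        return RetryInterceptorBuilder.stateless()
                .backOffOptions(2000, 1.5, 10000)
                .maxAttempts(10)
                .build();
    }
}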

4.7.5. Locating Remote Configuration Resources

The Config Service serves property sources from /{application}/{profile}/{label}, where the default bindings in the client app are as follows:

  • "name" = ${spring.application.name}

  • "profile" = ${spring.profiles.active} (actually Environment.getActiveProfiles())

  • "label" = "master"

When setting the property ${spring.application.name}, do not prefix your app name with the reserved word application-, to prevent issues resolving the correct property source.

You can override all of them by setting spring.cloud.config.* (where * is name, profile or label). The label is useful for rolling back to previous versions of configuration. With the default Config Server implementation, it can be a git label, branch name, or commit ID. Label can also be provided as a comma-separated list. In that case, the items in the list are tried one by one until one succeeds. This behavior can be useful when working on a feature branch. For instance, you might want to align the config label with your branch but make it optional (in that case, use spring.cloud.config.label=myfeature,develop).
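
For example, the following sketch pins the name and profile and uses the comma-separated label fallback just described:

bootstrap.yml
spring:
  cloud:
    config:
      name: myservice
      profile: development
      label: myfeature,develop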

4.7.6. Specifying Multiple URLs for the Config Server

To ensure high availability when you have multiple instances of Config Server deployed and expect one or more instances to be unavailable from time to time, you can either specify multiple URLs (as a comma-separated list under the spring.cloud.config.uri property) or have all your instances register in a Service Registry like Eureka (if using Discovery First Bootstrap mode). Note that doing so ensures high availability only when the Config Server is not running (that is, when the application has exited) or when a connection timeout has occurred. For example, if the Config Server returns a 500 (Internal Server Error) response or the Config Client receives a 401 from the Config Server (due to bad credentials or other causes), the Config Client does not try to fetch properties from other URLs. An error of that kind indicates a user issue rather than an availability problem.

If you use HTTP basic security on your Config Server, it is currently possible to support per-Config Server auth credentials only if you embed the credentials in each URL you specify under the spring.cloud.config.uri property. If you use any other kind of security mechanism, you cannot (currently) support per-Config Server authentication and authorization.

4.7.7. Configuring Timeouts

If you want to configure timeout thresholds, the following properties are available (a combined example appears after the list):

  • Read timeouts can be configured by using the property spring.cloud.config.request-read-timeout.

  • Connection timeouts can be configured by using the property spring.cloud.config.request-connect-timeout.
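
Both values are in milliseconds, as in the following sketch (the numbers are examples):

bootstrap.yml
spring:
  cloud:
    config:
      request-read-timeout: 5000
      request-connect-timeout: 3000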

4.7.8. Security

If you use HTTP Basic security on the server, clients need to know the password (and username if it is not the default). You can specify the username and password through the config server URI or via separate username and password properties, as shown in the following example:

bootstrap.yml
spring:
  cloud:
    config:
     uri: https://user:secret@myconfig.mycompany.com

The following example shows an alternate way to pass the same information:

bootstrap.yml
spring:
  cloud:
    config:
     uri: https://myconfig.mycompany.com
     username: user
     password: secret

The spring.cloud.config.password and spring.cloud.config.username values override anything that is provided in the URI.

If you deploy your apps on Cloud Foundry, the best way to provide the password is through service credentials (such as in the URI, since it does not need to be in a config file). The following example works locally and for a user-provided service on Cloud Foundry named configserver:

bootstrap.yml
spring:
  cloud:
    config:
     uri: ${vcap.services.configserver.credentials.uri:http://user:password@localhost:8888}

If you use another form of security, you might need to provide a RestTemplate to the ConfigServicePropertySourceLocator (for example, by grabbing it in the bootstrap context and injecting it).

Health Indicator

The Config Client supplies a Spring Boot Health Indicator that attempts to load configuration from the Config Server. The health indicator can be disabled by setting health.config.enabled=false. The response is also cached for performance reasons. The default cache time to live is 5 minutes. To change that value, set the health.config.time-to-live property (in milliseconds).
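
For example, to keep the indicator enabled but cache its result for one minute:

application.yml
health:
  config:
    enabled: true
    time-to-live: 60000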

Providing A Custom RestTemplate

In some cases, you might need to customize the requests made to the config server from the client. Typically, doing so involves passing special Authorization headers to authenticate requests to the server. To provide a custom RestTemplate:

  1. Create a new configuration bean with an implementation of PropertySourceLocator, as shown in the following example:

CustomConfigServiceBootstrapConfiguration.java
@Configuration
public class CustomConfigServiceBootstrapConfiguration {

    @Bean
    public ConfigServicePropertySourceLocator configServicePropertySourceLocator() {
        // configClientProperties() and customRestTemplate() are helper methods
        // that you supply yourself; the latter builds the RestTemplate carrying
        // your custom headers or interceptors
        ConfigClientProperties clientProperties = configClientProperties();
        ConfigServicePropertySourceLocator configServicePropertySourceLocator =
                new ConfigServicePropertySourceLocator(clientProperties);
        configServicePropertySourceLocator.setRestTemplate(customRestTemplate(clientProperties));
        return configServicePropertySourceLocator;
    }
}
For a simplified approach to adding Authorization headers, the spring.cloud.config.headers.* property can be used instead.
  2. In resources/META-INF, create a file called spring.factories and specify your custom configuration, as shown in the following example:

spring.factories
org.springframework.cloud.bootstrap.BootstrapConfiguration = com.my.config.client.CustomConfigServiceBootstrapConfiguration
Vault

When using Vault as a backend to your config server, the client needs to supply a token for the server to retrieve values from Vault. This token can be provided within the client by setting spring.cloud.config.token in bootstrap.yml, as shown in the following example:

bootstrap.yml
spring:
  cloud:
    config:
      token: YourVaultToken

4.7.9. Nested Keys In Vault

Vault supports the ability to nest keys in a value stored in Vault, as shown in the following example:

echo -n '{"appA": {"secret": "appAsecret"}, "bar": "baz"}' | vault write secret/myapp -

This command writes a JSON object to your Vault. To access these values in Spring, you would use the traditional dot (.) notation, as shown in the following example:

@Value("${appA.secret}")
String name = "World";

The preceding code sets the value of the name variable to appAsecret.

5. Spring Cloud Netflix

Hoxton.SR3

This project provides Netflix OSS integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with battle-tested Netflix components. The patterns provided include Service Discovery (Eureka), Circuit Breaker (Hystrix), Intelligent Routing (Zuul) and Client Side Load Balancing (Ribbon).

5.1. Service Discovery: Eureka Clients

Service Discovery is one of the key tenets of a microservice-based architecture. Trying to hand-configure each client or some form of convention can be difficult to do and can be brittle. Eureka is the Netflix Service Discovery Server and Client. The server can be configured and deployed to be highly available, with each server replicating state about the registered services to the others.

5.1.1. How to Include Eureka Client

To include the Eureka Client in your project, use the starter with a group ID of org.springframework.cloud and an artifact ID of spring-cloud-starter-netflix-eureka-client. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

5.1.2. Registering with Eureka

When a client registers with Eureka, it provides meta-data about itself — such as host, port, health indicator URL, home page, and other details. Eureka receives heartbeat messages from each instance belonging to a service. If the heartbeat fails over a configurable timetable, the instance is normally removed from the registry.

The following example shows a minimal Eureka client application:

@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(WebApplicationType.SERVLET).run(args);
    }

}

Note that the preceding example shows a normal Spring Boot application. By having spring-cloud-starter-netflix-eureka-client on the classpath, your application automatically registers with the Eureka Server. Configuration is required to locate the Eureka server, as shown in the following example:

application.yml
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

In the preceding example, defaultZone is a magic string fallback value that provides the service URL for any client that does not express a preference (in other words, it is a useful default).

The defaultZone property is case sensitive and requires camel case because the serviceUrl property is a Map<String, String>. Therefore, the defaultZone property does not follow the normal Spring Boot snake-case convention of default-zone.

The default application name (that is, the service ID), virtual host, and non-secure port (taken from the Environment) are ${spring.application.name}, ${spring.application.name} and ${server.port}, respectively.

Having spring-cloud-starter-netflix-eureka-client on the classpath makes the app into both a Eureka “instance” (that is, it registers itself) and a “client” (it can query the registry to locate other services). The instance behaviour is driven by eureka.instance.* configuration keys, but the defaults are fine if you ensure that your application has a value for spring.application.name (this is the default for the Eureka service ID or VIP).

See EurekaInstanceConfigBean and EurekaClientConfigBean for more details on the configurable options.

To disable the Eureka Discovery Client, you can set eureka.client.enabled to false. Eureka Discovery Client will also be disabled when spring.cloud.discovery.enabled is set to false.

5.1.3. Authenticating with the Eureka Server

HTTP basic authentication is automatically added to your eureka client if one of the eureka.client.serviceUrl.defaultZone URLs has credentials embedded in it (curl style, as follows: user:password@localhost:8761/eureka). For more complex needs, you can create a @Bean of type DiscoveryClientOptionalArgs and inject ClientFilter instances into it, all of which are applied to the calls from the client to the server.

Because of a limitation in Eureka, it is not possible to support per-server basic auth credentials, so only the first set that are found is used.

5.1.4. Status Page and Health Indicator

The status page and health indicators for a Eureka instance default to /info and /health respectively, which are the default locations of useful endpoints in a Spring Boot Actuator application. You need to change these, even for an Actuator application, if you use a non-default context path or servlet path (such as server.servletPath=/custom). The following example shows the default values for the two settings:

application.yml
eureka:
  instance:
    statusPageUrlPath: ${server.servletPath}/info
    healthCheckUrlPath: ${server.servletPath}/health

These links show up in the metadata that is consumed by clients and are used in some scenarios to decide whether to send requests to your application, so it is helpful if they are accurate.

In Dalston it was also required to set the status and health check URLs when changing that management context path. This requirement was removed beginning in Edgware.

5.1.5. Registering a Secure Application

If your app wants to be contacted over HTTPS, you can set two flags in the EurekaInstanceConfig (a YAML equivalent follows the list):

  • eureka.instance.[nonSecurePortEnabled]=[false]

  • eureka.instance.[securePortEnabled]=[true]
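
Expressed as YAML, the two flags look like the following:

application.yml
eureka:
  instance:
    nonSecurePortEnabled: false
    securePortEnabled: true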

Doing so makes Eureka publish instance information that shows an explicit preference for secure communication. The Spring Cloud DiscoveryClient always returns a URI starting with https for a service configured this way. Similarly, when a service is configured this way, the Eureka (native) instance information has a secure health check URL.

Because of the way Eureka works internally, it still publishes a non-secure URL for the status and home pages unless you also override those explicitly. You can use placeholders to configure the eureka instance URLs, as shown in the following example:

application.yml
eureka:
  instance:
    statusPageUrl: https://${eureka.hostname}/info
    healthCheckUrl: https://${eureka.hostname}/health
    homePageUrl: https://${eureka.hostname}/

(Note that ${eureka.hostname} is a native placeholder only available in later versions of Eureka. You could achieve the same thing with Spring placeholders as well — for example, by using ${eureka.instance.hostName}.)

If your application runs behind a proxy, and the SSL termination is in the proxy (for example, if you run in Cloud Foundry or other platforms as a service), then you need to ensure that the proxy “forwarded” headers are intercepted and handled by the application. If the Tomcat container embedded in a Spring Boot application has explicit configuration for the X-Forwarded-* headers, this happens automatically. If the links rendered by your app to itself are wrong (the wrong host, port, or protocol), that is a sign that you got this configuration wrong.

5.1.6. Eureka’s Health Checks

By default, Eureka uses the client heartbeat to determine if a client is up. Unless specified otherwise, the Discovery Client does not propagate the current health check status of the application, per the Spring Boot Actuator. Consequently, after successful registration, Eureka always announces that the application is in 'UP' state. This behavior can be altered by enabling Eureka health checks, which results in propagating application status to Eureka. As a consequence, every other application does not send traffic to applications in states other than 'UP'. The following example shows how to enable health checks for the client:

application.yml
eureka:
  client:
    healthcheck:
      enabled: true
eureka.client.healthcheck.enabled=true should only be set in application.yml. Setting the value in bootstrap.yml causes undesirable side effects, such as registering in Eureka with an UNKNOWN status.

If you require more control over the health checks, consider implementing your own com.netflix.appinfo.HealthCheckHandler.
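
The following is a minimal sketch of such a handler; the downstream check is a hypothetical stand-in for your own logic:

import com.netflix.appinfo.HealthCheckHandler;
import com.netflix.appinfo.InstanceInfo.InstanceStatus;
import org.springframework.stereotype.Component;

@Component
public class CustomHealthCheckHandler implements HealthCheckHandler {

    @Override
    public InstanceStatus getStatus(InstanceStatus currentStatus) {
        // replace with a real application-specific check
        boolean healthy = checkDownstreamDependencies();
        return healthy ? InstanceStatus.UP : InstanceStatus.DOWN;
    }

    private boolean checkDownstreamDependencies() {
        return true;
    }
}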

5.1.7. Eureka Metadata for Instances and Clients

It is worth spending a bit of time understanding how the Eureka metadata works, so you can use it in a way that makes sense in your platform. There is standard metadata for information such as hostname, IP address, port numbers, the status page, and health check. These are published in the service registry and used by clients to contact the services in a straightforward way. Additional metadata can be added to the instance registration in the eureka.instance.metadataMap, and this metadata is accessible in the remote clients. In general, additional metadata does not change the behavior of the client, unless the client is made aware of the meaning of the metadata. There are a couple of special cases, described later in this document, where Spring Cloud already assigns meaning to the metadata map.

Using Eureka on Cloud Foundry

Cloud Foundry has a global router so that all instances of the same app have the same hostname (other PaaS solutions with a similar architecture have the same arrangement). This is not necessarily a barrier to using Eureka. However, if you use the router (recommended or even mandatory, depending on the way your platform was set up), you need to explicitly set the hostname and port numbers (secure or non-secure) so that they use the router. You might also want to use instance metadata so that you can distinguish between the instances on the client (for example, in a custom load balancer). By default, the eureka.instance.instanceId is vcap.application.instance_id, as shown in the following example:

application.yml
eureka:
  instance:
    hostname: ${vcap.application.uris[0]}
    nonSecurePort: 80

Depending on the way the security rules are set up in your Cloud Foundry instance, you might be able to register and use the IP address of the host VM for direct service-to-service calls. This feature is not yet available on Pivotal Web Services (PWS).

Using Eureka on AWS

If the application is planned to be deployed to an AWS cloud, the Eureka instance must be configured to be AWS-aware. You can do so by customizing the EurekaInstanceConfigBean as follows:

@Bean
@Profile("!default")
public EurekaInstanceConfigBean eurekaInstanceConfig(InetUtils inetUtils) {
  EurekaInstanceConfigBean b = new EurekaInstanceConfigBean(inetUtils);
  AmazonInfo info = AmazonInfo.Builder.newBuilder().autoBuild("eureka");
  b.setDataCenterInfo(info);
  return b;
}
Changing the Eureka Instance ID

A vanilla Netflix Eureka instance is registered with an ID that is equal to its host name (that is, there is only one service per host). Spring Cloud Eureka provides a sensible default, which is defined as follows:

${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instance_id:${server.port}}

An example is myhost:myappname:8080.

By using Spring Cloud, you can override this value by providing a unique identifier in eureka.instance.instanceId, as shown in the following example:

application.yml
eureka:
  instance:
    instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}

With the metadata shown in the preceding example and multiple service instances deployed on localhost, the random value is inserted there to make the instance unique. In Cloud Foundry, the vcap.application.instance_id is populated automatically in a Spring Boot application, so the random value is not needed.

5.1.8. Using the EurekaClient

Once you have an application that is a discovery client, you can use it to discover service instances from the Eureka Server. One way to do so is to use the native com.netflix.discovery.EurekaClient (as opposed to the Spring Cloud DiscoveryClient), as shown in the following example:

@Autowired
private EurekaClient discoveryClient;

public String serviceUrl() {
    InstanceInfo instance = discoveryClient.getNextServerFromEureka("STORES", false);
    return instance.getHomePageUrl();
}

Do not use the EurekaClient in a @PostConstruct method or in a @Scheduled method (or anywhere where the ApplicationContext might not be started yet). It is initialized in a SmartLifecycle (with phase=0), so the earliest you can rely on it being available is in another SmartLifecycle with a higher phase.

EurekaClient without Jersey

By default, EurekaClient uses Jersey for HTTP communication. If you wish to avoid the Jersey dependencies, you can exclude them from your build. Spring Cloud auto-configures a transport client based on Spring RestTemplate. The following example shows Jersey being excluded:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    <exclusions>
        <exclusion>
            <groupId>com.sun.jersey</groupId>
            <artifactId>jersey-client</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.sun.jersey</groupId>
            <artifactId>jersey-core</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.sun.jersey.contribs</groupId>
            <artifactId>jersey-apache-client4</artifactId>
        </exclusion>
    </exclusions>
</dependency>

5.1.9. Alternatives to the Native Netflix EurekaClient

You need not use the raw Netflix EurekaClient. Also, it is usually more convenient to use it behind a wrapper of some sort. Spring Cloud has support for Feign (a REST client builder) and Spring RestTemplate through the logical Eureka service identifiers (VIPs) instead of physical URLs. To configure Ribbon with a fixed list of physical servers, you can set <client>.ribbon.listOfServers to a comma-separated list of physical addresses (or hostnames), where <client> is the ID of the client.
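
For example, for a hypothetical client named stores:

application.yml
stores:
  ribbon:
    listOfServers: host1.example.com:8080,host2.example.com:8080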

You can also use the org.springframework.cloud.client.discovery.DiscoveryClient, which provides a simple API (not specific to Netflix) for discovery clients, as shown in the following example:

@Autowired
private DiscoveryClient discoveryClient;

public String serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && !list.isEmpty()) {
        // getUri() returns a java.net.URI; convert it to match the String return type
        return list.get(0).getUri().toString();
    }
    return null;
}

5.1.10. Why Is It so Slow to Register a Service?

Being an instance also involves a periodic heartbeat to the registry (through the client’s serviceUrl) with a default duration of 30 seconds. A service is not available for discovery by clients until the instance, the server, and the client all have the same metadata in their local cache (so it could take 3 heartbeats). You can change the period by setting eureka.instance.leaseRenewalIntervalInSeconds. Setting it to a value of less than 30 speeds up the process of getting clients connected to other services. In production, it is probably better to stick with the default, because of internal computations in the server that make assumptions about the lease renewal period.
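
For example, to speed up registration in a development environment (keeping the production caveat above in mind):

application.yml
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 10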

5.1.11. Zones

If you have deployed Eureka clients to multiple zones, you may prefer that those clients use services within the same zone before trying services in another zone. To set that up, you need to configure your Eureka clients correctly.

First, you need to make sure you have Eureka servers deployed to each zone and that they are peers of each other. See the section on zones and regions for more information.

Next, you need to tell Eureka which zone your service is in. You can do so by using the metadataMap property. For example, if service 1 is deployed to both zone 1 and zone 2, you need to set the following Eureka properties in service 1:

Service 1 in Zone 1

eureka.instance.metadataMap.zone = zone1
eureka.client.preferSameZoneEureka = true

Service 1 in Zone 2

eureka.instance.metadataMap.zone = zone2
eureka.client.preferSameZoneEureka = true

5.1.12. Refreshing Eureka Clients

By default, the EurekaClient bean is refreshable, meaning the Eureka client properties can be changed and refreshed. When a refresh occurs, clients are unregistered from the Eureka server, and there might be a brief moment when all instances of a given service are unavailable. One way to prevent this is to disable the ability to refresh Eureka clients. To do so, set eureka.client.refresh.enable=false.

5.1.13. Using Eureka with Spring Cloud LoadBalancer

We offer support for the Spring Cloud LoadBalancer ZonePreferenceServiceInstanceListSupplier. The zone value from the Eureka instance metadata (eureka.instance.metadataMap.zone) is used for setting the value of the spring.cloud.loadbalancer.zone property, which is used to filter service instances by zone.

If that is missing and if the spring.cloud.loadbalancer.eureka.approximateZoneFromHostname flag is set to true, it can use the domain name from the server hostname as a proxy for the zone.

If there is no other source of zone data, then a guess is made, based on the client configuration (as opposed to the instance configuration). We take eureka.client.availabilityZones, which is a map from region name to a list of zones, and pull out the first zone for the instance’s own region (that is, the eureka.client.region, which defaults to "us-east-1", for compatibility with native Netflix).

5.2. Service Discovery: Eureka Server

This section describes how to set up a Eureka server.

5.2.1. How to Include Eureka Server

To include Eureka Server in your project, use the starter with a group ID of org.springframework.cloud and an artifact ID of spring-cloud-starter-netflix-eureka-server. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

If your project already uses Thymeleaf as its template engine, the Freemarker templates of the Eureka server may not be loaded correctly. In this case it is necessary to configure the template loader manually:
application.yml
spring:
  freemarker:
    template-loader-path: classpath:/templates/
    prefer-file-system-access: false

5.2.2. How to Run a Eureka Server

The following example shows a minimal Eureka server:

@SpringBootApplication
@EnableEurekaServer
public class Application {

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(WebApplicationType.SERVLET).run(args);
    }

}

The server has a home page with a UI and HTTP API endpoints for the normal Eureka functionality under /eureka/*.

The following links have some Eureka background reading: flux capacitor and google group discussion.

Due to Gradle’s dependency resolution rules and the lack of a parent bom feature, depending on spring-cloud-starter-netflix-eureka-server can cause failures on application startup. To remedy this issue, add the Spring Boot Gradle plugin and import the Spring Cloud starter parent bom, as follows:

build.gradle
buildscript {
  dependencies {
    classpath("org.springframework.boot:spring-boot-gradle-plugin:{spring-boot-docs-version}")
  }
}

apply plugin: "org.springframework.boot"
apply plugin: "io.spring.dependency-management"

dependencyManagement {
  imports {
    mavenBom "org.springframework.cloud:spring-cloud-dependencies:{spring-cloud-version}"
  }
}

5.2.3. High Availability, Zones and Regions

The Eureka server does not have a back end store, but the service instances in the registry all have to send heartbeats to keep their registrations up to date (so this can be done in memory). Clients also have an in-memory cache of Eureka registrations (so they do not have to go to the registry for every request to a service).

By default, every Eureka server is also a Eureka client and requires (at least one) service URL to locate a peer. If you do not provide it, the service runs and works, but it fills your logs with a lot of noise about not being able to register with the peer.

See also below for details of Ribbon support on the client side for Zones and Regions.

5.2.4. Standalone Mode

The combination of the two caches (client and server) and the heartbeats make a standalone Eureka server fairly resilient to failure, as long as there is some sort of monitor or elastic runtime (such as Cloud Foundry) keeping it alive. In standalone mode, you might prefer to switch off the client side behavior so that it does not keep trying and failing to reach its peers. The following example shows how to switch off the client-side behavior:

application.yml (Standalone Eureka Server)
server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

Notice that the serviceUrl is pointing to the same host as the local instance.

5.2.5. Peer Awareness

Eureka can be made even more resilient and available by running multiple instances and asking them to register with each other. In fact, this is the default behavior, so all you need to do to make it work is add a valid serviceUrl to a peer, as shown in the following example:

application.yml (Two Peer Aware Eureka Servers)
---
spring:
  profiles: peer1
eureka:
  instance:
    hostname: peer1
  client:
    serviceUrl:
      defaultZone: https://peer2/eureka/

---
spring:
  profiles: peer2
eureka:
  instance:
    hostname: peer2
  client:
    serviceUrl:
      defaultZone: https://peer1/eureka/

In the preceding example, we have a YAML file that can be used to run the same server on two hosts (peer1 and peer2) by running it in different Spring profiles. You could use this configuration to test the peer awareness on a single host (there is not much value in doing that in production) by manipulating /etc/hosts to resolve the host names. In fact, the eureka.instance.hostname is not needed if you are running on a machine that knows its own hostname (by default, it is looked up by using java.net.InetAddress).

You can add multiple peers to a system, and, as long as they are all connected to each other by at least one edge, they synchronize the registrations amongst themselves. If the peers are physically separated (inside a data center or between multiple data centers), then the system can, in principle, survive “split-brain” type failures. The following example shows a configuration for three peers:

application.yml (Three Peer Aware Eureka Servers)
eureka:
  client:
    serviceUrl:
      defaultZone: https://peer1/eureka/,https://peer2/eureka/,https://peer3/eureka/

---
spring:
  profiles: peer1
eureka:
  instance:
    hostname: peer1

---
spring:
  profiles: peer2
eureka:
  instance:
    hostname: peer2

---
spring:
  profiles: peer3
eureka:
  instance:
    hostname: peer3

5.2.6. When to Prefer IP Address

In some cases, it is preferable for Eureka to advertise the IP addresses of services rather than the hostname. Set eureka.instance.preferIpAddress to true and, when the application registers with Eureka, it uses its IP address rather than its hostname.

If the hostname cannot be determined by Java, then the IP address is sent to Eureka. The only explicit way of setting the hostname is by setting the eureka.instance.hostname property. You can set your hostname at runtime by using an environment variable (for example, eureka.instance.hostname=${HOST_NAME}).
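
For example:

application.yml
eureka:
  instance:
    preferIpAddress: true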

5.2.7. Securing The Eureka Server

You can secure your Eureka server simply by adding Spring Security to your server’s classpath via spring-boot-starter-security. By default, when Spring Security is on the classpath, it requires that a valid CSRF token be sent with every request to the app. Eureka clients do not generally possess a valid cross-site request forgery (CSRF) token, so you need to disable this requirement for the /eureka/** endpoints. For example:

@EnableWebSecurity
class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().ignoringAntMatchers("/eureka/**");
        super.configure(http);
    }
}

For more information on CSRF see the Spring Security documentation.

A demo Eureka Server can be found in the Spring Cloud Samples repo.

5.2.8. Disabling Ribbon with Eureka Server and Client starters

spring-cloud-starter-netflix-eureka-server and spring-cloud-starter-netflix-eureka-client both come with spring-cloud-starter-netflix-ribbon. Since the Ribbon load balancer is now in maintenance mode, we suggest switching to the Spring Cloud LoadBalancer, which is also included in the Eureka starters, instead.

To do that, set the value of the spring.cloud.loadbalancer.ribbon.enabled property to false.
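
For example:

application.yml
spring:
  cloud:
    loadbalancer:
      ribbon:
        enabled: false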

You can then also exclude ribbon-related dependencies from Eureka starters in your build files, like so:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-ribbon</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.netflix.ribbon</groupId>
            <artifactId>ribbon-eureka</artifactId>
        </exclusion>
    </exclusions>
</dependency>

5.2.9. JDK 11 Support

The JAXB modules which the Eureka server depends upon were removed in JDK 11. If you intend to use JDK 11 when running a Eureka server you must include these dependencies in your POM or Gradle file.

<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
</dependency>

5.3. Circuit Breaker: Spring Cloud Circuit Breaker With Hystrix

5.3.1. Disabling Spring Cloud Circuit Breaker Hystrix

You can disable the auto-configuration by setting spring.cloud.circuitbreaker.hystrix.enabled to false.

5.3.2. Configuring Hystrix Circuit Breakers

Default Configuration

To provide a default configuration for all of your circuit breakers, create a Customizer bean that is passed a HystrixCircuitBreakerFactory or ReactiveHystrixCircuitBreakerFactory. The configureDefault method can be used to provide a default configuration.

@Bean
public Customizer<HystrixCircuitBreakerFactory> defaultConfig() {
    return factory -> factory.configureDefault(id -> HystrixCommand.Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey(id))
            .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
            .withExecutionTimeoutInMilliseconds(4000)));
}
Reactive Example
@Bean
public Customizer<ReactiveHystrixCircuitBreakerFactory> defaultConfig() {
    return factory -> factory.configureDefault(id -> HystrixObservableCommand.Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey(id))
            .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                    .withExecutionTimeoutInMilliseconds(4000)));
}
Specific Circuit Breaker Configuration

Similarly to providing a default configuration, you can create a Customizer bean that is passed a HystrixCircuitBreakerFactory:

@Bean
public Customizer<HystrixCircuitBreakerFactory> customizer() {
    return factory -> factory.configure(builder -> builder.commandProperties(
                    HystrixCommandProperties.Setter().withExecutionTimeoutInMilliseconds(2000)), "foo", "bar");
}
Reactive Example
@Bean
public Customizer<ReactiveHystrixCircuitBreakerFactory> customizer() {
    return factory -> factory.configure(builder -> builder.commandProperties(
                    HystrixCommandProperties.Setter().withExecutionTimeoutInMilliseconds(2000)), "foo", "bar");
}

5.4. Circuit Breaker: Hystrix Clients

Netflix has created a library called Hystrix that implements the circuit breaker pattern. In a microservice architecture, it is common to have multiple layers of service calls, as shown in the following example:

Hystrix
Figure 1. Microservice Graph

A service failure in the lower level of services can cause cascading failure all the way up to the user. When calls to a particular service exceed circuitBreaker.requestVolumeThreshold (default: 20 requests) and the failure percentage is greater than circuitBreaker.errorThresholdPercentage (default: >50%) in a rolling window defined by metrics.rollingStats.timeInMilliseconds (default: 10 seconds), the circuit opens and the call is not made. In cases of error and an open circuit, a fallback can be provided by the developer.

HystrixFallback
Figure 2. Hystrix fallback prevents cascading failures

Having an open circuit stops cascading failures and allows overwhelmed or failing services time to recover. The fallback can be another Hystrix protected call, static data, or a sensible empty value. Fallbacks may be chained so that the first fallback makes some other business call, which in turn falls back to static data.

5.4.1. How to Include Hystrix

To include Hystrix in your project, use the starter with a group ID of org.springframework.cloud and an artifact ID of spring-cloud-starter-netflix-hystrix. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

The following example shows a minimal Spring Boot application with a Hystrix circuit breaker:

@SpringBootApplication
@EnableCircuitBreaker
public class Application {

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(WebApplicationType.SERVLET).run(args);
    }

}

@Component
public class StoreIntegration {

    @HystrixCommand(fallbackMethod = "defaultStores")
    public Object getStores(Map<String, Object> parameters) {
        //do stuff that might fail
    }

    public Object defaultStores(Map<String, Object> parameters) {
        return /* something useful */;
    }
}

The @HystrixCommand is provided by a Netflix contrib library called “javanica”. Spring Cloud automatically wraps Spring beans with that annotation in a proxy that is connected to the Hystrix circuit breaker. The circuit breaker calculates when to open and close the circuit and what to do in case of a failure.

To configure the @HystrixCommand, you can use the commandProperties attribute with a list of @HystrixProperty annotations. See the Hystrix wiki for details on the properties available.

5.4.2. Propagating the Security Context or Using Spring Scopes

If you want some thread local context to propagate into a @HystrixCommand, the default declaration does not work, because it executes the command in a thread pool (in case of timeouts). You can switch Hystrix to use the same thread as the caller through configuration or directly in the annotation, by asking it to use a different “Isolation Strategy”. The following example demonstrates setting the thread in the annotation:

@HystrixCommand(fallbackMethod = "stubMyService",
    commandProperties = {
      @HystrixProperty(name="execution.isolation.strategy", value="SEMAPHORE")
    }
)
...

The same thing applies if you are using @SessionScope or @RequestScope. If you encounter a runtime exception that says it cannot find the scoped context, you need to use the same thread.

You also have the option to set the hystrix.shareSecurityContext property to true. Doing so auto-configures a Hystrix concurrency strategy plugin hook to transfer the SecurityContext from your main thread to the one used by the Hystrix command. Hystrix does not let multiple Hystrix concurrency strategies be registered, so an extension mechanism is available: declare your own HystrixConcurrencyStrategy as a Spring bean. Spring Cloud looks for your implementation within the Spring context and wraps it inside its own plugin.
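
A minimal sketch of such a bean, assuming a hypothetical TenantContext thread-local holder (any thread-local state your application needs could be copied the same way):

@Bean
public HystrixConcurrencyStrategy tenantAwareConcurrencyStrategy() {
    return new HystrixConcurrencyStrategy() {
        @Override
        public <T> Callable<T> wrapCallable(Callable<T> callable) {
            // capture the caller's thread-local state ...
            String tenant = TenantContext.get(); // TenantContext is a hypothetical holder
            // ... and restore it on the Hystrix thread around the actual call
            return () -> {
                TenantContext.set(tenant);
                try {
                    return callable.call();
                }
                finally {
                    TenantContext.clear();
                }
            };
        }
    };
}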

5.4.3. Health Indicator

The state of the connected circuit breakers is also exposed in the /health endpoint of the calling application, as shown in the following example:

{
    "hystrix": {
        "openCircuitBreakers": [
            "StoreIntegration::getStoresByLocationLink"
        ],
        "status": "CIRCUIT_OPEN"
    },
    "status": "UP"
}

5.4.4. Hystrix Metrics Stream

To enable the Hystrix metrics stream, include a dependency on spring-boot-starter-actuator and set management.endpoints.web.exposure.include: hystrix.stream. Doing so exposes /actuator/hystrix.stream as a management endpoint, as shown in the following example:

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
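
The exposure property mentioned above can then be set in configuration, as in this minimal sketch:

application.yml
management:
  endpoints:
    web:
      exposure:
        include: hystrix.stream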

5.5. Circuit Breaker: Hystrix Dashboard

One of the main benefits of Hystrix is the set of metrics it gathers about each HystrixCommand. The Hystrix Dashboard displays the health of each circuit breaker in an efficient manner.

Hystrix
Figure 3. Hystrix Dashboard

5.6. Hystrix Timeouts And Ribbon Clients

When using Hystrix commands that wrap Ribbon clients, you want to make sure your Hystrix timeout is configured to be longer than the configured Ribbon timeout, including any potential retries that might be made. For example, if your Ribbon connection timeout is one second and the Ribbon client might retry the request three times, then your Hystrix timeout should be slightly more than three seconds.
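
As a sketch (all values are illustrative), the effective Ribbon timeout is roughly (ConnectTimeout + ReadTimeout) × (MaxAutoRetries + 1) × (MaxAutoRetriesNextServer + 1); with the values below, that is (1000 + 1000) × 2 × 2 = 8000 milliseconds, so the Hystrix timeout is set above it:

application.yml
ribbon:
  ConnectTimeout: 1000
  ReadTimeout: 1000
  MaxAutoRetries: 1
  MaxAutoRetriesNextServer: 1
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 10000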

5.6.1. How to Include the Hystrix Dashboard

To include the Hystrix Dashboard in your project, use the starter with a group ID of org.springframework.cloud and an artifact ID of spring-cloud-starter-netflix-hystrix-dashboard. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

To run the Hystrix Dashboard, annotate your Spring Boot main class with @EnableHystrixDashboard. Then visit /hystrix and point the dashboard to an individual instance’s /hystrix.stream endpoint in a Hystrix client application.
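
A minimal sketch of such a dashboard application (the class name is arbitrary):

@SpringBootApplication
@EnableHystrixDashboard
public class HystrixDashboardApplication {

    public static void main(String[] args) {
        SpringApplication.run(HystrixDashboardApplication.class, args);
    }

}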

When connecting to a /hystrix.stream endpoint that uses HTTPS, the certificate used by the server must be trusted by the JVM. If the certificate is not trusted, you must import the certificate into the JVM in order for the Hystrix Dashboard to make a successful connection to the stream endpoint.

5.6.2. Turbine

Looking at an individual instance’s Hystrix data is not very useful in terms of the overall health of the system. Turbine is an application that aggregates all of the relevant /hystrix.stream endpoints into a combined /turbine.stream for use in the Hystrix Dashboard. Individual instances are located through Eureka. Running Turbine requires annotating your main class with the @EnableTurbine annotation (for example, by using spring-cloud-starter-netflix-turbine to set up the classpath). All of the documented configuration properties from the Turbine 1 wiki apply. The only difference is that the turbine.instanceUrlSuffix does not need the port prepended, as this is handled automatically unless turbine.instanceInsertPort=false.

By default, Turbine looks for the /hystrix.stream endpoint on a registered instance by looking up its hostName and port entries in Eureka and then appending /hystrix.stream to it. If the instance’s metadata contains management.port, it is used instead of the port value for the /hystrix.stream endpoint. By default, the metadata entry called management.port is equal to the management.port configuration property. It can be overridden, though, with the following configuration:
eureka:
  instance:
    metadata-map:
      management.port: ${management.port:8081}

The turbine.appConfig configuration key is a list of Eureka serviceIds that Turbine uses to look up instances. The turbine stream is then used in the Hystrix dashboard with a URL similar to the following:

https://my.turbine.server:8080/turbine.stream?cluster=CLUSTERNAME

The cluster parameter can be omitted if the name is default. The cluster parameter must match an entry in turbine.aggregator.clusterConfig. Values returned from Eureka are upper-case. Consequently, the following example works if there is an application called customers registered with Eureka:

turbine:
  aggregator:
    clusterConfig: CUSTOMERS
  appConfig: customers

If you need to customize which cluster names should be used by Turbine (because you do not want to store cluster names in turbine.aggregator.clusterConfig configuration), provide a bean of type TurbineClustersProvider.
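
A sketch of such a bean, assuming the provider contract is a single getClusterNames() method returning the cluster names (the fixed list here is illustrative):

@Bean
public TurbineClustersProvider clustersProvider() {
    return new TurbineClustersProvider() {
        @Override
        public List<String> getClusterNames() {
            // a real provider might read these from a database or another service
            return Arrays.asList("SYSTEM", "USER");
        }
    };
}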

The clusterName can be customized by a SpEL expression in turbine.clusterNameExpression, with root as an instance of InstanceInfo. The default value is appName, which means that the Eureka serviceId becomes the cluster key (that is, the InstanceInfo for customers has an appName of CUSTOMERS). A different example is turbine.clusterNameExpression=aSGName, which gets the cluster name from the AWS ASG name. The following listing shows another example:

turbine:
  aggregator:
    clusterConfig: SYSTEM,USER
  appConfig: customers,stores,ui,admin
  clusterNameExpression: metadata['cluster']

In the preceding example, the cluster name from four services is pulled from their metadata map and is expected to have values that include SYSTEM and USER.

To use the “default” cluster for all apps, you need a string literal expression (with single quotes and escaped with double quotes if it is in YAML as well):

turbine:
  appConfig: customers,stores
  clusterNameExpression: "'default'"

Spring Cloud provides a spring-cloud-starter-netflix-turbine that has all the dependencies you need to get a Turbine server running. To add Turbine, create a Spring Boot application and annotate it with @EnableTurbine.

By default, Spring Cloud lets Turbine use the host and port to allow multiple processes per host, per cluster. If you want the native Netflix behavior built into Turbine to not allow multiple processes per host, per cluster (the key to the instance ID is the hostname), set turbine.combineHostPort=false.
Clusters Endpoint

In some situations, it might be useful for other applications to know what clusters have been configured in Turbine. To support this, you can use the /clusters endpoint, which returns a JSON array of all the configured clusters.

GET /clusters
[
  {
    "name": "RACES",
    "link": "http://localhost:8383/turbine.stream?cluster=RACES"
  },
  {
    "name": "WEB",
    "link": "http://localhost:8383/turbine.stream?cluster=WEB"
  }
]

This endpoint can be disabled by setting turbine.endpoints.clusters.enabled to false.

5.6.3. Turbine Stream

In some environments (such as in a PaaS setting), the classic Turbine model of pulling metrics from all the distributed Hystrix commands does not work. In that case, you might want to have your Hystrix commands push metrics to Turbine. Spring Cloud enables that with messaging. To do so on the client, add a dependency to spring-cloud-netflix-hystrix-stream and the spring-cloud-starter-stream-* of your choice. See the Spring Cloud Stream documentation for details on the brokers and how to configure the client credentials. It should work out of the box for a local broker.

On the server side, create a Spring Boot application and annotate it with @EnableTurbineStream. The Turbine Stream server requires the use of Spring WebFlux, so spring-boot-starter-webflux needs to be included in your project. By default, spring-boot-starter-webflux is included when adding spring-cloud-starter-netflix-turbine-stream to your application.

You can then point the Hystrix Dashboard to the Turbine Stream Server instead of individual Hystrix streams. If Turbine Stream is running on port 8989 on myhost, then put myhost:8989 in the stream input field in the Hystrix Dashboard. Circuits are prefixed by their respective serviceId, followed by a dot (.), and then the circuit name.

Spring Cloud provides a spring-cloud-starter-netflix-turbine-stream that has all the dependencies you need to get a Turbine Stream server running. You can then add the Stream binder of your choice — such as spring-cloud-starter-stream-rabbit.

Turbine Stream server also supports the cluster parameter. Unlike Turbine server, Turbine Stream uses eureka serviceIds as cluster names and these are not configurable.

If the Turbine Stream server is running on port 8989 on my.turbine.server and you have two eureka serviceIds, customers and products, in your environment, the following URLs will be available on your Turbine Stream server. The default and empty cluster names provide all the metrics that the Turbine Stream server receives.

https://my.turbine.server:8989/turbine.stream?cluster=customers
https://my.turbine.server:8989/turbine.stream?cluster=products
https://my.turbine.server:8989/turbine.stream?cluster=default
https://my.turbine.server:8989/turbine.stream

So, you can use eureka serviceIds as cluster names for your Turbine dashboard (or any compatible dashboard). You don’t need to configure any properties like turbine.appConfig, turbine.clusterNameExpression and turbine.aggregator.clusterConfig for your Turbine Stream server.

The Turbine Stream server gathers all metrics from the configured input channel with Spring Cloud Stream. This means that it does not gather Hystrix metrics actively from each instance; it can provide only the metrics that were already gathered into the input channel by each instance.

5.7. Client Side Load Balancer: Ribbon

Ribbon is a client-side load balancer that gives you a lot of control over the behavior of HTTP and TCP clients. Feign already uses Ribbon, so, if you use @FeignClient, this section also applies.

A central concept in Ribbon is that of the named client. Each load balancer is part of an ensemble of components that work together to contact a remote server on demand, and the ensemble has a name that you give it as an application developer (for example, by using the @FeignClient annotation). On demand, Spring Cloud creates a new ensemble as an ApplicationContext for each named client by using RibbonClientConfiguration. This contains (amongst other things) an ILoadBalancer, a RestClient, and a ServerListFilter.

5.7.1. How to Include Ribbon

To include Ribbon in your project, use the starter with a group ID of org.springframework.cloud and an artifact ID of spring-cloud-starter-netflix-ribbon. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

5.7.2. Customizing the Ribbon Client

You can configure some bits of a Ribbon client by using external properties in <client>.ribbon.*, which is similar to using the Netflix APIs natively, except that you can use Spring Boot configuration files. The native options can be inspected as static fields in CommonClientConfigKey (part of ribbon-core).
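
For example, a sketch that tunes connection handling for a client named users (the keys correspond to CommonClientConfigKey fields; the values are illustrative):

application.yml
users:
  ribbon:
    ConnectTimeout: 1000
    ReadTimeout: 3000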

Spring Cloud also lets you take full control of the client by declaring additional configuration (on top of the RibbonClientConfiguration) using @RibbonClient, as shown in the following example:

@Configuration
@RibbonClient(name = "custom", configuration = CustomConfiguration.class)
public class TestConfiguration {
}

In this case, the client is composed from the components already in RibbonClientConfiguration, together with any in CustomConfiguration (where the latter generally overrides the former).

The CustomConfiguration class must be a @Configuration class, but take care that it is not in a @ComponentScan for the main application context. Otherwise, it is shared by all the @RibbonClients. If you use @ComponentScan (or @SpringBootApplication), you need to take steps to avoid it being included (for instance, you can put it in a separate, non-overlapping package or specify the packages to scan explicitly in the @ComponentScan).

The following table shows the beans that Spring Cloud Netflix provides by default for Ribbon:

Bean Type                  Bean Name                Class Name

IClientConfig              ribbonClientConfig       DefaultClientConfigImpl
IRule                      ribbonRule               ZoneAvoidanceRule
IPing                      ribbonPing               DummyPing
ServerList<Server>         ribbonServerList         ConfigurationBasedServerList
ServerListFilter<Server>   ribbonServerListFilter   ZonePreferenceServerListFilter
ILoadBalancer              ribbonLoadBalancer       ZoneAwareLoadBalancer
ServerListUpdater          ribbonServerListUpdater  PollingServerListUpdater

Creating a bean of one of those types and placing it in a @RibbonClient configuration (such as FooConfiguration, shown in the following example) lets you override each one of the beans described:

@Configuration(proxyBeanMethods = false)
protected static class FooConfiguration {

    @Bean
    public ZonePreferenceServerListFilter serverListFilter() {
        ZonePreferenceServerListFilter filter = new ZonePreferenceServerListFilter();
        filter.setZone("myTestZone");
        return filter;
    }

    @Bean
    public IPing ribbonPing() {
        return new PingUrl();
    }

}

The configuration in the preceding example replaces the default IPing with a PingUrl and provides a custom serverListFilter.

5.7.3. Customizing the Default for All Ribbon Clients

A default configuration can be provided for all Ribbon Clients by using the @RibbonClients annotation and registering a default configuration, as shown in the following example:

@RibbonClients(defaultConfiguration = DefaultRibbonConfig.class)
public class RibbonClientDefaultConfigurationTestsConfig {

    public static class BazServiceList extends ConfigurationBasedServerList {

        public BazServiceList(IClientConfig config) {
            super.initWithNiwsConfig(config);
        }

    }

}

@Configuration(proxyBeanMethods = false)
class DefaultRibbonConfig {

    @Bean
    public IRule ribbonRule() {
        return new BestAvailableRule();
    }

    @Bean
    public IPing ribbonPing() {
        return new PingUrl();
    }

    @Bean
    public ServerList<Server> ribbonServerList(IClientConfig config) {
        return new RibbonClientDefaultConfigurationTestsConfig.BazServiceList(config);
    }

    @Bean
    public ServerListSubsetFilter serverListFilter() {
        ServerListSubsetFilter filter = new ServerListSubsetFilter();
        return filter;
    }

}

5.7.4. Customizing the Ribbon Client by Setting Properties

Starting with version 1.2.0, Spring Cloud Netflix supports customizing Ribbon clients by setting properties that are compatible with the Ribbon documentation.

This lets you change behavior at start up time in different environments.

The following list shows the supported properties:

  • <clientName>.ribbon.NFLoadBalancerClassName: Should implement ILoadBalancer

  • <clientName>.ribbon.NFLoadBalancerRuleClassName: Should implement IRule

  • <clientName>.ribbon.NFLoadBalancerPingClassName: Should implement IPing

  • <clientName>.ribbon.NIWSServerListClassName: Should implement ServerList

  • <clientName>.ribbon.NIWSServerListFilterClassName: Should implement ServerListFilter

Classes defined in these properties have precedence over beans defined by using @RibbonClient(configuration=MyRibbonConfig.class) and the defaults provided by Spring Cloud Netflix.

To set the IRule for a service name called users, you could set the following properties:

application.yml
users:
  ribbon:
    NIWSServerListClassName: com.netflix.loadbalancer.ConfigurationBasedServerList
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.WeightedResponseTimeRule

See the Ribbon documentation for implementations provided by Ribbon.

5.7.5. Using Ribbon with Eureka

When Eureka is used in conjunction with Ribbon (that is, both are on the classpath), the ribbonServerList is overridden with an extension of DiscoveryEnabledNIWSServerList, which populates the list of servers from Eureka. It also replaces the IPing interface with NIWSDiscoveryPing, which delegates to Eureka to determine if a server is up. The ServerList that is installed by default is a DomainExtractingServerList. Its purpose is to make metadata available to the load balancer without using AWS AMI metadata (which is what Netflix relies on). By default, the server list is constructed with “zone” information, as provided in the instance metadata (so, on the remote clients, set eureka.instance.metadataMap.zone). If that is missing and if the approximateZoneFromHostname flag is set, it can use the domain name from the server hostname as a proxy for the zone. Once the zone information is available, it can be used in a ServerListFilter. By default, it is used to locate a server in the same zone as the client, because the default is a ZonePreferenceServerListFilter. By default, the zone of the client is determined in the same way as the remote instances (that is, through eureka.instance.metadataMap.zone).

The orthodox “archaius” way to set the client zone is through a configuration property called "@zone". If it is available, Spring Cloud uses that in preference to all other settings (note that the key must be quoted in YAML configuration).
If there is no other source of zone data, then a guess is made, based on the client configuration (as opposed to the instance configuration). We take eureka.client.availabilityZones, which is a map from region name to a list of zones, and pull out the first zone for the instance’s own region (that is, the eureka.client.region, which defaults to "us-east-1", for compatibility with native Netflix).
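
Putting the zone settings together, a sketch of the client-side configuration (the zone and region names are illustrative):

application.yml
eureka:
  instance:
    metadata-map:
      zone: zone1
  client:
    region: us-east-1
    availability-zones:
      us-east-1: zone1,zone2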

5.7.6. Example: How to Use Ribbon Without Eureka

Eureka is a convenient way to abstract the discovery of remote servers so that you do not have to hard code their URLs in clients. However, if you prefer not to use Eureka, Ribbon and Feign also work. Suppose you have declared a @RibbonClient for "stores", and Eureka is not in use (and not even on the classpath). The Ribbon client defaults to a configured server list. You can supply the configuration as follows:

application.yml
stores:
  ribbon:
    listOfServers: example.com,google.com

5.7.7. Example: Disable Eureka Use in Ribbon

Setting the ribbon.eureka.enabled property to false explicitly disables the use of Eureka in Ribbon, as shown in the following example:

application.yml
ribbon:
  eureka:
   enabled: false

5.7.8. Using the Ribbon API Directly

You can also use the LoadBalancerClient directly, as shown in the following example:

public class MyClass {
    @Autowired
    private LoadBalancerClient loadBalancer;

    public void doStuff() {
        ServiceInstance instance = loadBalancer.choose("stores");
        URI storesUri = URI.create(String.format("https://%s:%s", instance.getHost(), instance.getPort()));
        // ... do something with the URI
    }
}

5.7.9. Caching of Ribbon Configuration

Each Ribbon named client has a corresponding child application context that Spring Cloud maintains. This application context is lazily loaded on the first request to the named client. This lazy loading behavior can be changed to instead eagerly load these child application contexts at startup by specifying the names of the Ribbon clients, as shown in the following example:

application.yml
ribbon:
  eager-load:
    enabled: true
    clients: client1, client2, client3

5.7.10. How to Configure Hystrix Thread Pools

If you change zuul.ribbonIsolationStrategy to THREAD, the thread isolation strategy for Hystrix is used for all routes. In that case, the HystrixThreadPoolKey is set to RibbonCommand as the default. It means that HystrixCommands for all routes are executed in the same Hystrix thread pool. This behavior can be changed with the following configuration:

application.yml
zuul:
  threadPool:
    useSeparateThreadPools: true

The preceding example results in HystrixCommands being executed in the Hystrix thread pool for each route.

In this case, the default HystrixThreadPoolKey is the same as the service ID for each route. To add a prefix to HystrixThreadPoolKey, set zuul.threadPool.threadPoolKeyPrefix to the value that you want to add, as shown in the following example:

application.yml
zuul:
  threadPool:
    useSeparateThreadPools: true
    threadPoolKeyPrefix: zuulgw

5.7.11. How to Provide a Key to Ribbon’s IRule

If you need to provide your own IRule implementation to handle a special routing requirement like a “canary” test, pass some information to the choose method of IRule.

com.netflix.loadbalancer.IRule.java
public interface IRule{
    public Server choose(Object key);
         :

You can provide some information that is used by your IRule implementation to choose a target server, as shown in the following example:

RequestContext.getCurrentContext()
              .set(FilterConstants.LOAD_BALANCER_KEY, "canary-test");

If you put any object into the RequestContext with a key of FilterConstants.LOAD_BALANCER_KEY, it is passed to the choose method of the IRule implementation. The code shown in the preceding example must be executed before RibbonRoutingFilter is executed. Zuul’s pre filter is the best place to do that. You can access HTTP headers and query parameters through the RequestContext in pre filter, so it can be used to determine the LOAD_BALANCER_KEY that is passed to Ribbon. If you do not put any value with LOAD_BALANCER_KEY in RequestContext, null is passed as a parameter of the choose method.
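
A sketch of such a pre filter, assuming the key is derived from a hypothetical X-Canary request header (PRE_TYPE and PRE_DECORATION_FILTER_ORDER are static imports from FilterConstants):

public class LoadBalancerKeyFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return PRE_TYPE;
    }

    @Override
    public int filterOrder() {
        return PRE_DECORATION_FILTER_ORDER + 1; // still a pre filter, so it runs before routing
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        // derive the load balancer key from a request header (hypothetical convention)
        String key = ctx.getRequest().getHeader("X-Canary");
        if (key != null) {
            ctx.set(FilterConstants.LOAD_BALANCER_KEY, key);
        }
        return null;
    }
}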

5.8. External Configuration: Archaius

Archaius is the Netflix client-side configuration library. It is the library used by all of the Netflix OSS components for configuration. Archaius is an extension of the Apache Commons Configuration project. It allows updates to configuration by either polling a source for changes or by letting a source push changes to the client. Archaius uses Dynamic<Type>Property classes as handles to properties, as shown in the following example:

Archaius Example
class ArchaiusTest {
    DynamicStringProperty myprop = DynamicPropertyFactory
            .getInstance()
            .getStringProperty("my.prop", "default"); // a default value is required

    void doSomething() {
        OtherClass.someMethod(myprop.get());
    }
}

Archaius has its own set of configuration files and loading priorities. Spring applications should generally not use Archaius directly, but the need to configure the Netflix tools natively remains. Spring Cloud has a Spring Environment Bridge so that Archaius can read properties from the Spring Environment. This bridge allows Spring Boot projects to use the normal configuration toolchain while letting them configure the Netflix tools as documented (for the most part).

5.9. Router and Filter: Zuul

Routing is an integral part of a microservice architecture. For example, / may be mapped to your web application, /api/users to the user service, and /api/shop to the shop service. Zuul is a JVM-based router and server-side load balancer from Netflix.

Netflix uses Zuul for the following:

  • Authentication

  • Insights

  • Stress Testing

  • Canary Testing

  • Dynamic Routing

  • Service Migration

  • Load Shedding

  • Security

  • Static Response handling

  • Active/Active traffic management

Zuul’s rule engine lets rules and filters be written in essentially any JVM language, with built-in support for Java and Groovy.

The configuration property zuul.max.host.connections has been replaced by two new properties, zuul.host.maxTotalConnections and zuul.host.maxPerRouteConnections, which default to 200 and 20 respectively.
The default Hystrix isolation pattern (ExecutionIsolationStrategy) for all routes is SEMAPHORE. zuul.ribbonIsolationStrategy can be changed to THREAD if that isolation pattern is preferred.

5.9.1. How to Include Zuul

To include Zuul in your project, use the starter with a group ID of org.springframework.cloud and an artifact ID of spring-cloud-starter-netflix-zuul. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

5.9.2. Embedded Zuul Reverse Proxy

Spring Cloud has created an embedded Zuul proxy to ease the development of a common use case where a UI application wants to make proxy calls to one or more back end services. This feature is useful for a user interface to proxy to the back end services it requires, avoiding the need to manage CORS and authentication concerns independently for all the back ends.

To enable it, annotate a Spring Boot main class with @EnableZuulProxy. Doing so causes local calls to be forwarded to the appropriate service. By convention, a service with an ID of users receives requests from the proxy located at /users (with the prefix stripped). The proxy uses Ribbon to locate an instance to which to forward through discovery. All requests are executed in a hystrix command, so failures appear in Hystrix metrics. Once the circuit is open, the proxy does not try to contact the service.

The Zuul starter does not include a discovery client, so, for routes based on service IDs, you need to provide one of those on the classpath as well (Eureka is one choice).

To skip having a service automatically added, set zuul.ignored-services to a list of service ID patterns. If a service matches a pattern that is ignored but is also included in the explicitly configured routes map, it is unignored, as shown in the following example:

application.yml
 zuul:
  ignoredServices: '*'
  routes:
    users: /myusers/**

In the preceding example, all services are ignored, except for users.

To augment or change the proxy routes, you can add external configuration, as follows:

application.yml
 zuul:
  routes:
    users: /myusers/**

The preceding example means that HTTP calls to /myusers get forwarded to the users service (for example /myusers/101 is forwarded to /101).

To get more fine-grained control over a route, you can specify the path and the serviceId independently, as follows:

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users_service

The preceding example means that HTTP calls to /myusers get forwarded to the users_service service. The route must have a path that can be specified as an ant-style pattern, so /myusers/* only matches one level, but /myusers/** matches hierarchically.

The location of the back end can be specified as either a serviceId (for a service from discovery) or a url (for a physical location), as shown in the following example:

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      url: https://example.com/users_service

These simple url-routes do not get executed as a HystrixCommand, nor do they load-balance multiple URLs with Ribbon. To achieve those goals, you can specify a serviceId with a static list of servers, as follows:

application.yml
zuul:
  routes:
    echo:
      path: /myusers/**
      serviceId: myusers-service
      stripPrefix: true

hystrix:
  command:
    myusers-service:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: ...

myusers-service:
  ribbon:
    NIWSServerListClassName: com.netflix.loadbalancer.ConfigurationBasedServerList
    listOfServers: https://example1.com,http://example2.com
    ConnectTimeout: 1000
    ReadTimeout: 3000
    MaxTotalHttpConnections: 500
    MaxConnectionsPerHost: 100

Another method is specifying a service-route and configuring a Ribbon client for the serviceId (doing so requires disabling Eureka support in Ribbon — see above for more information), as shown in the following example:

application.yml
zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users

ribbon:
  eureka:
    enabled: false

users:
  ribbon:
    listOfServers: example.com,google.com

You can provide a convention between serviceId and routes by using the regexmapper. It uses regular-expression named groups to extract variables from serviceId and inject them into a route pattern, as shown in the following example:

ApplicationConfiguration.java
@Bean
public PatternServiceRouteMapper serviceRouteMapper() {
    return new PatternServiceRouteMapper(
        "(?<name>^.+)-(?<version>v.+$)",
        "${version}/${name}");
}

The preceding example means that a serviceId of myusers-v1 is mapped to route /v1/myusers/**. Any regular expression is accepted, but all named groups must be present in both servicePattern and routePattern. If servicePattern does not match a serviceId, the default behavior is used. In the preceding example, a serviceId of myusers is mapped to the "/myusers/**" route (with no version detected). This feature is disabled by default and only applies to discovered services.

To add a prefix to all mappings, set zuul.prefix to a value, such as /api. By default, the proxy prefix is stripped from the request before the request is forwarded (you can switch this behavior off with zuul.stripPrefix=false). You can also switch off the stripping of the service-specific prefix from individual routes, as shown in the following example:

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      stripPrefix: false
zuul.stripPrefix only applies to the prefix set in zuul.prefix. It does not have any effect on prefixes defined within a given route’s path.

In the preceding example, requests to /myusers/101 are forwarded to /myusers/101 on the users service.

The zuul.routes entries actually bind to an object of type ZuulProperties. If you look at the properties of that object, you can see that it also has a retryable flag. Set that flag to true to have the Ribbon client automatically retry failed requests. You can also set that flag to true when you need to modify the parameters of the retry operations that use the Ribbon client configuration.
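
For example, a sketch that turns on retries for a single route (the service and path are illustrative):

application.yml
zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users
      retryable: true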

By default, the X-Forwarded-Host header is added to the forwarded requests. To turn it off, set zuul.addProxyHeaders = false. By default, the prefix path is stripped, and the request to the back end picks up a X-Forwarded-Prefix header (/myusers in the examples shown earlier).

If you set a default route (/), an application with @EnableZuulProxy can act as a standalone server. For example, zuul.routes.home: / would route all traffic ("/**") to the "home" service.

If more fine-grained ignoring is needed, you can specify specific patterns to ignore. These patterns are evaluated at the start of the route location process, which means prefixes should be included in the pattern to warrant a match. Ignored patterns span all services and supersede any other route specification. The following example shows how to create ignored patterns:

application.yml
 zuul:
  ignoredPatterns: /**/admin/**
  routes:
    users: /myusers/**

The preceding example means that all calls (such as /myusers/101) are forwarded to /101 on the users service. However, calls including /admin/ do not resolve.

If you need your routes to have their order preserved, you need to use a YAML file, as the ordering is lost when using a properties file. The following example shows such a YAML file:
application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
    legacy:
      path: /**

If you were to use a properties file, the legacy path might end up in front of the users path, rendering the users path unreachable.

5.9.3. Zuul Http Client

The default HTTP client used by Zuul is now backed by the Apache HTTP Client instead of the deprecated Ribbon RestClient. To use RestClient or okhttp3.OkHttpClient, set ribbon.restclient.enabled=true or ribbon.okhttp.enabled=true, respectively. If you would like to customize the Apache HTTP client or the OK HTTP client, provide a bean of type CloseableHttpClient or OkHttpClient.
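
A sketch of providing such a bean for the Apache client (the connection limits are illustrative):

@Configuration
public class ZuulHttpClientConfiguration {

    @Bean
    public CloseableHttpClient httpClient() {
        // replaces the client Zuul builds by default
        return HttpClients.custom()
                .setMaxConnTotal(500)
                .setMaxConnPerRoute(100)
                .build();
    }

}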

5.9.4. Cookies and Sensitive Headers

You can share headers between services in the same system, but you probably do not want sensitive headers leaking downstream into external servers. You can specify a list of ignored headers as part of the route configuration. Cookies play a special role, because they have well defined semantics in browsers, and they are always to be treated as sensitive. If the consumer of your proxy is a browser, then cookies for downstream services also cause problems for the user, because they all get jumbled up together (all downstream services look like they come from the same place).

If you are careful with the design of your services, (for example, if only one of the downstream services sets cookies), you might be able to let them flow from the back end all the way up to the caller. Also, if your proxy sets cookies and all your back-end services are part of the same system, it can be natural to simply share them (and, for instance, use Spring Session to link them up to some shared state). Other than that, any cookies that get set by downstream services are likely to be not useful to the caller, so it is recommended that you make (at least) Set-Cookie and Cookie into sensitive headers for routes that are not part of your domain. Even for routes that are part of your domain, try to think carefully about what it means before letting cookies flow between them and the proxy.

The sensitive headers can be configured as a comma-separated list per route, as shown in the following example:

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      sensitiveHeaders: Cookie,Set-Cookie,Authorization
      url: https://downstream
This is the default value for sensitiveHeaders, so you need not set it unless you want it to be different. This is new in Spring Cloud Netflix 1.1 (in 1.0, the user had no control over headers, and all cookies flowed in both directions).

The sensitiveHeaders are a blacklist, and the default is not empty. Consequently, to make Zuul send all headers (except the ignored ones), you must explicitly set it to the empty list. Doing so is necessary if you want to pass cookie or authorization headers to your back end. The following example shows how to use sensitiveHeaders:

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      sensitiveHeaders:
      url: https://downstream

You can also set sensitive headers, by setting zuul.sensitiveHeaders. If sensitiveHeaders is set on a route, it overrides the global sensitiveHeaders setting.
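
For example, a global sketch that treats only cookies as sensitive, letting Authorization flow downstream:

application.yml
zuul:
  sensitiveHeaders: Cookie,Set-Cookie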

5.9.5. Ignored Headers

In addition to the route-sensitive headers, you can set a global value called zuul.ignoredHeaders for values (both request and response) that should be discarded during interactions with downstream services. By default, if Spring Security is not on the classpath, these are empty. Otherwise, they are initialized to a set of well known “security” headers (for example, involving caching) as specified by Spring Security. The assumption in this case is that the downstream services might add these headers, too, but we want the values from the proxy. To not discard these well known security headers when Spring Security is on the classpath, you can set zuul.ignoreSecurityHeaders to false. Doing so can be useful if you disabled the HTTP Security response headers in Spring Security and want the values provided by downstream services.

5.9.6. Management Endpoints

By default, if you use @EnableZuulProxy with the Spring Boot Actuator, you enable two additional endpoints:

  • Routes

  • Filters

Routes Endpoint

A GET to the routes endpoint at /routes returns a list of the mapped routes:

GET /routes
{
  "/stores/**": "http://localhost:8081"
}

Additional route details can be requested by adding the ?format=details query string to /routes. Doing so produces the following output:

GET /routes?format=details
{
  "/stores/**": {
    "id": "stores",
    "fullPath": "/stores/**",
    "location": "http://localhost:8081",
    "path": "/**",
    "prefix": "/stores",
    "retryable": false,
    "customSensitiveHeaders": false,
    "prefixStripped": true
  }
}

A POST to /routes forces a refresh of the existing routes (for example, when there have been changes in the service catalog). You can disable this endpoint by setting endpoints.routes.enabled to false.

The routes should respond automatically to changes in the service catalog, but the POST to /routes is a way to force the change to happen immediately.
Filters Endpoint

A GET to the filters endpoint at /filters returns a map of Zuul filters by type. For each filter type in the map, you get a list of all the filters of that type, along with their details.

5.9.7. Strangulation Patterns and Local Forwards

A common pattern when migrating an existing application or API is to “strangle” old endpoints, slowly replacing them with different implementations. The Zuul proxy is a useful tool for this because you can use it to handle all traffic from the clients of the old endpoints but redirect some of the requests to new ones.

The following example shows the configuration details for a “strangle” scenario:

application.yml
 zuul:
  routes:
    first:
      path: /first/**
      url: https://first.example.com
    second:
      path: /second/**
      url: forward:/second
    third:
      path: /third/**
      url: forward:/3rd
    legacy:
      path: /**
      url: https://legacy.example.com

In the preceding example, we are strangling the “legacy” application, which is mapped to all requests that do not match one of the other patterns. Paths in /first/** have been extracted into a new service with an external URL. Paths in /second/** are forwarded so that they can be handled locally (for example, with a normal Spring @RequestMapping). Paths in /third/** are also forwarded but with a different prefix (/third/foo is forwarded to /3rd/foo).

The ignored patterns aren’t completely ignored; they just are not handled by the proxy (so they are also effectively forwarded locally).

5.9.8. Uploading Files through Zuul

If you use @EnableZuulProxy, you can use the proxy paths to upload files and it should work, so long as the files are small. For large files there is an alternative path that bypasses the Spring DispatcherServlet (to avoid multipart processing) in "/zuul/*". In other words, if you have zuul.routes.customers=/customers/**, then you can POST large files to /zuul/customers/*. The servlet path is externalized via zuul.servletPath. If the proxy route takes you through a Ribbon load balancer, extremely large files also require elevated timeout settings, as shown in the following example:

application.yml
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000
ribbon:
  ConnectTimeout: 3000
  ReadTimeout: 60000

Note that, for streaming to work with large files, you need to use chunked encoding in the request (which some browsers do not do by default), as shown in the following example:

$ curl -v -H "Transfer-Encoding: chunked" \
    -F "file=@mylarge.iso" localhost:9999/zuul/simple/file

5.9.9. Query String Encoding

When processing the incoming request, query params are decoded so that they can be available for possible modifications in Zuul filters. They are then re-encoded when the back end request is rebuilt in the route filters. The result can be different than the original input if (for example) it was encoded with Javascript’s encodeURIComponent() method. While this causes no issues in most cases, some web servers can be picky with the encoding of complex query strings.

To force the original encoding of the query string, it is possible to pass a special flag to ZuulProperties so that the query string is taken as is with the HttpServletRequest::getQueryString method, as shown in the following example:

application.yml
 zuul:
  forceOriginalQueryStringEncoding: true
This special flag works only with SimpleHostRoutingFilter. Also, you lose the ability to easily override query parameters with RequestContext.getCurrentContext().setRequestQueryParams(someOverriddenParameters), because the query string is now fetched directly from the original HttpServletRequest.

5.9.10. Request URI Encoding

When processing the incoming request, the request URI is decoded before it is matched to the routes. The request URI is then re-encoded when the back end request is rebuilt in the route filters. This can cause some unexpected behavior if your URI includes the encoded "/" character.

To use the original request URI, it is possible to pass a special flag to ZuulProperties so that the URI is taken as-is with the HttpServletRequest::getRequestURI method, as shown in the following example:

application.yml
 zuul:
  decodeUrl: false
If you are overriding the request URI by using the requestURI RequestContext attribute and this flag is set to false, the URL set in the request context is not encoded. It is your responsibility to make sure the URL is already encoded.

5.9.11. Plain Embedded Zuul

If you use @EnableZuulServer (instead of @EnableZuulProxy), you can also run a Zuul server without proxying or selectively switch on parts of the proxying platform. Any beans that you add to the application of type ZuulFilter are installed automatically (as they are with @EnableZuulProxy) but without any of the proxy filters being added automatically.

In that case, the routes into the Zuul server are still specified by configuring "zuul.routes.*", but there is no service discovery and no proxying. Consequently, the "serviceId" and "url" settings are ignored. The following example maps all paths in "/api/**" to the Zuul filter chain:

application.yml
 zuul:
  routes:
    api: /api/**

5.9.12. Disable Zuul Filters

Zuul for Spring Cloud comes with a number of ZuulFilter beans enabled by default in both proxy and server mode. See the Zuul filters package for the list of filters that you can enable. If you want to disable one, set zuul.<SimpleClassName>.<filterType>.disable=true. By convention, the package after filters is the Zuul filter type. For example to disable org.springframework.cloud.netflix.zuul.filters.post.SendResponseFilter, set zuul.SendResponseFilter.post.disable=true.
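
Following that convention, a sketch that disables the SendResponseFilter mentioned above:

application.yml
zuul:
  SendResponseFilter:
    post:
      disable: true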

5.9.13. Providing Hystrix Fallbacks For Routes

When a circuit for a given route in Zuul is tripped, you can provide a fallback response by creating a bean of type FallbackProvider. Within this bean, you need to specify the route ID the fallback is for and provide a ClientHttpResponse to return as a fallback. The following example shows a relatively simple FallbackProvider implementation:

class MyFallbackProvider implements FallbackProvider {

    @Override
    public String getRoute() {
        return "customers";
    }

    @Override
    public ClientHttpResponse fallbackResponse(String route, final Throwable cause) {
        if (cause instanceof HystrixTimeoutException) {
            return response(HttpStatus.GATEWAY_TIMEOUT);
        } else {
            return response(HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }

    private ClientHttpResponse response(final HttpStatus status) {
        return new ClientHttpResponse() {
            @Override
            public HttpStatus getStatusCode() throws IOException {
                return status;
            }

            @Override
            public int getRawStatusCode() throws IOException {
                return status.value();
            }

            @Override
            public String getStatusText() throws IOException {
                return status.getReasonPhrase();
            }

            @Override
            public void close() {
            }

            @Override
            public InputStream getBody() throws IOException {
                return new ByteArrayInputStream("fallback".getBytes());
            }

            @Override
            public HttpHeaders getHeaders() {
                HttpHeaders headers = new HttpHeaders();
                headers.setContentType(MediaType.APPLICATION_JSON);
                return headers;
            }
        };
    }
}

The following example shows how the route configuration for the previous example might appear:

zuul:
  routes:
    customers: /customers/**

If you would like to provide a default fallback for all routes, you can create a bean of type FallbackProvider and have the getRoute method return * or null, as shown in the following example:

class MyFallbackProvider implements FallbackProvider {
    @Override
    public String getRoute() {
        return "*";
    }

    @Override
    public ClientHttpResponse fallbackResponse(String route, Throwable throwable) {
        return new ClientHttpResponse() {
            @Override
            public HttpStatus getStatusCode() throws IOException {
                return HttpStatus.OK;
            }

            @Override
            public int getRawStatusCode() throws IOException {
                return 200;
            }

            @Override
            public String getStatusText() throws IOException {
                return "OK";
            }

            @Override
            public void close() {

            }

            @Override
            public InputStream getBody() throws IOException {
                return new ByteArrayInputStream("fallback".getBytes());
            }

            @Override
            public HttpHeaders getHeaders() {
                HttpHeaders headers = new HttpHeaders();
                headers.setContentType(MediaType.APPLICATION_JSON);
                return headers;
            }
        };
    }
}

5.9.14. Zuul Timeouts

If you want to configure the socket timeouts and read timeouts for requests proxied through Zuul, you have two options, based on your configuration:

  • If Zuul uses service discovery, you need to configure these timeouts with the ribbon.ReadTimeout and ribbon.SocketTimeout Ribbon properties.

  • If you have configured Zuul routes by specifying URLs, you need to use zuul.host.connect-timeout-millis and zuul.host.socket-timeout-millis (see the sketch after this list).
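
The following sketch shows both variants side by side (the values are illustrative):

application.yml
# discovery-based routes: timeouts come from Ribbon
ribbon:
  ReadTimeout: 5000
  SocketTimeout: 5000

# URL-based routes: timeouts come from the zuul.host properties
zuul:
  host:
    connect-timeout-millis: 5000
    socket-timeout-millis: 10000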

5.9.15. Rewriting the Location header

If Zuul is fronting a web application, you may need to re-write the Location header when the web application redirects through an HTTP status code of 3XX. Otherwise, the browser redirects to the web application’s URL instead of the Zuul URL. You can configure a LocationRewriteFilter Zuul filter to re-write the Location header to Zuul’s URL. It also adds back the stripped global and route-specific prefixes. The following example adds a filter by using a Spring configuration file:

import org.springframework.cloud.netflix.zuul.filters.post.LocationRewriteFilter;
...

@Configuration
@EnableZuulProxy
public class ZuulConfig {
    @Bean
    public LocationRewriteFilter locationRewriteFilter() {
        return new LocationRewriteFilter();
    }
}
Use this filter carefully. The filter acts on the Location header of ALL 3XX response codes, which may not be appropriate in all scenarios, such as when redirecting the user to an external URL.

5.9.16. Enabling Cross Origin Requests

By default, Zuul routes all Cross-Origin Requests (CORS) to the services. If you want Zuul instead to handle these requests, you can do so by providing a custom WebMvcConfigurer bean:

@Bean
public WebMvcConfigurer corsConfigurer() {
    return new WebMvcConfigurer() {
        public void addCorsMappings(CorsRegistry registry) {
            registry.addMapping("/path-1/**")
                    .allowedOrigins("https://allowed-origin.com")
                    .allowedMethods("GET", "POST");
        }
    };
}

In the preceding example, we allow GET and POST methods from allowed-origin.com to send cross-origin requests to the endpoints that start with path-1. You can apply the CORS configuration to a specific path pattern or globally to the whole application by using a /** mapping. You can customize the allowedOrigins, allowedMethods, allowedHeaders, exposedHeaders, allowCredentials, and maxAge properties through this configuration.

5.9.17. Metrics

Zuul provides metrics under the Actuator metrics endpoint for any failures that might occur while routing requests. You can view these metrics by hitting /actuator/metrics. The metrics have a name with the format ZUUL::EXCEPTION:errorCause:statusCode.

5.9.18. Zuul Developer Guide

For a general overview of how Zuul works, see the Zuul Wiki.

The Zuul Servlet

Zuul is implemented as a Servlet. For the general cases, Zuul is embedded into the Spring Dispatch mechanism. This lets Spring MVC be in control of the routing. In this case, Zuul buffers requests. If there is a need to go through Zuul without buffering requests (for example, for large file uploads), the Servlet is also installed outside of the Spring Dispatcher. By default, the servlet has an address of /zuul. This path can be changed with the zuul.servlet-path property.

Zuul RequestContext

To pass information between filters, Zuul uses a RequestContext. Its data is held in a ThreadLocal specific to each request. Information about where to route requests, errors, and the actual HttpServletRequest and HttpServletResponse are stored there. The RequestContext extends ConcurrentHashMap, so anything can be stored in the context. FilterConstants contains the keys used by the filters installed by Spring Cloud Netflix (more on these later).

@EnableZuulProxy vs. @EnableZuulServer

Spring Cloud Netflix installs a number of filters, depending on which annotation was used to enable Zuul. @EnableZuulProxy is a superset of @EnableZuulServer. In other words, @EnableZuulProxy contains all the filters installed by @EnableZuulServer. The additional filters in the “proxy” enable routing functionality. If you want a “blank” Zuul, you should use @EnableZuulServer.

@EnableZuulServer Filters

@EnableZuulServer creates a SimpleRouteLocator that loads route definitions from Spring Boot configuration files.

The following filters are installed (as normal Spring Beans):

  • Pre filters:

    • ServletDetectionFilter: Detects whether the request is through the Spring Dispatcher. Sets a boolean with a key of FilterConstants.IS_DISPATCHER_SERVLET_REQUEST_KEY.

    • FormBodyWrapperFilter: Parses form data and re-encodes it for downstream requests.

    • DebugFilter: If the debug request parameter is set, sets RequestContext.setDebugRouting() and RequestContext.setDebugRequest() to true.

  • Route filters:

    • SendForwardFilter: Forwards requests by using the Servlet RequestDispatcher. The forwarding location is stored in the RequestContext attribute, FilterConstants.FORWARD_TO_KEY. This is useful for forwarding to endpoints in the current application.

  • Post filters:

    • SendResponseFilter: Writes responses from proxied requests to the current response.

  • Error filters:

    • SendErrorFilter: Forwards to /error (by default) if RequestContext.getThrowable() is not null. You can change the default forwarding path (/error) by setting the error.path property.

@EnableZuulProxy Filters

@EnableZuulProxy creates a DiscoveryClientRouteLocator that loads route definitions from a DiscoveryClient (such as Eureka) as well as from properties. A route is created for each serviceId from the DiscoveryClient. As new services are added, the routes are refreshed.

In addition to the filters described earlier, the following filters are installed (as normal Spring Beans):

  • Pre filters:

    • PreDecorationFilter: Determines where and how to route, depending on the supplied RouteLocator. It also sets various proxy-related headers for downstream requests.

  • Route filters:

    • RibbonRoutingFilter: Uses Ribbon, Hystrix, and pluggable HTTP clients to send requests. Service IDs are found in the RequestContext attribute, FilterConstants.SERVICE_ID_KEY. This filter can use different HTTP clients:

      • Apache HttpClient: The default client.

      • Squareup OkHttpClient v3: Enabled by having the com.squareup.okhttp3:okhttp library on the classpath and setting ribbon.okhttp.enabled=true.

      • Netflix Ribbon HTTP client: Enabled by setting ribbon.restclient.enabled=true. This client has limitations, including that it does not support the PATCH method, but it also has built-in retry.

    • SimpleHostRoutingFilter: Sends requests to predetermined URLs through an Apache HttpClient. URLs are found in RequestContext.getRouteHost().

Custom Zuul Filter Examples

Most of the following "How to Write" examples are included in the Sample Zuul Filters project. There are also examples of manipulating the request or response body in that repository.

This section includes the following examples:

  • How to Write a Pre Filter

  • How to Write a Route Filter

  • How to Write a Post Filter

How to Write a Pre Filter

Pre filters set up data in the RequestContext for use in filters downstream. The main use case is to set information required for route filters. The following example shows a Zuul pre filter:

public class QueryParamPreFilter extends ZuulFilter {
    @Override
    public int filterOrder() {
        return PRE_DECORATION_FILTER_ORDER - 1; // run before PreDecoration
    }

    @Override
    public String filterType() {
        return PRE_TYPE;
    }

    @Override
    public boolean shouldFilter() {
        RequestContext ctx = RequestContext.getCurrentContext();
        return !ctx.containsKey(FORWARD_TO_KEY) // a filter has already forwarded
                && !ctx.containsKey(SERVICE_ID_KEY); // a filter has already determined serviceId
    }
    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        if (request.getParameter("sample") != null) {
            // put the serviceId in `RequestContext`
            ctx.put(SERVICE_ID_KEY, request.getParameter("sample"));
        }
        return null;
    }
}

The preceding filter populates SERVICE_ID_KEY from the sample request parameter. In practice, you should not do that kind of direct mapping. Instead, the service ID should be looked up from the value of sample.

Now that SERVICE_ID_KEY is populated, PreDecorationFilter does not run and RibbonRoutingFilter runs.

If you want to route to a full URL, call ctx.setRouteHost(url) instead.

To modify the path to which routing filters forward, set the REQUEST_URI_KEY.

How to Write a Route Filter

Route filters run after pre filters and make requests to other services. Much of the work here is to translate request and response data to and from the model required by the client. The following example shows a Zuul route filter:

public class OkHttpRoutingFilter extends ZuulFilter {
    @Autowired
    private ProxyRequestHelper helper;

    @Override
    public String filterType() {
        return ROUTE_TYPE;
    }

    @Override
    public int filterOrder() {
        return SIMPLE_HOST_ROUTING_FILTER_ORDER - 1;
    }

    @Override
    public boolean shouldFilter() {
        return RequestContext.getCurrentContext().getRouteHost() != null
                && RequestContext.getCurrentContext().sendZuulResponse();
    }

    @Override
    public Object run() {
        try {
            OkHttpClient httpClient = new OkHttpClient.Builder()
                    // customize
                    .build();

            RequestContext context = RequestContext.getCurrentContext();
            HttpServletRequest request = context.getRequest();

            String method = request.getMethod();

            String uri = this.helper.buildZuulRequestURI(request);

            // copy the incoming headers onto the OkHttp request
            Headers.Builder headers = new Headers.Builder();
            Enumeration<String> headerNames = request.getHeaderNames();
            while (headerNames.hasMoreElements()) {
                String name = headerNames.nextElement();
                Enumeration<String> values = request.getHeaders(name);

                while (values.hasMoreElements()) {
                    String value = values.nextElement();
                    headers.add(name, value);
                }
            }

            InputStream inputStream = request.getInputStream();

            RequestBody requestBody = null;
            if (inputStream != null && HttpMethod.permitsRequestBody(method)) {
                MediaType mediaType = null;
                if (headers.get("Content-Type") != null) {
                    mediaType = MediaType.parse(headers.get("Content-Type"));
                }
                requestBody = RequestBody.create(mediaType, StreamUtils.copyToByteArray(inputStream));
            }

            Request.Builder builder = new Request.Builder()
                    .headers(headers.build())
                    .url(uri)
                    .method(method, requestBody);

            Response response = httpClient.newCall(builder.build()).execute();

            LinkedMultiValueMap<String, String> responseHeaders = new LinkedMultiValueMap<>();

            for (Map.Entry<String, List<String>> entry : response.headers().toMultimap().entrySet()) {
                responseHeaders.put(entry.getKey(), entry.getValue());
            }

            this.helper.setResponse(response.code(), response.body().byteStream(),
                    responseHeaders);
            context.setRouteHost(null); // prevent SimpleHostRoutingFilter from running
            return null;
        }
        catch (IOException e) {
            // the servlet and OkHttp calls above can throw IOException;
            // rethrow so that the Zuul error filters handle the failure
            throw new RuntimeException(e);
        }
    }
}

The preceding filter translates Servlet request information into OkHttp3 request information, executes an HTTP request, and translates OkHttp3 response information to the Servlet response.

How to Write a Post Filter

Post filters typically manipulate the response. The following filter adds a random UUID as the X-Sample header:

public class AddResponseHeaderFilter extends ZuulFilter {
    @Override
    public String filterType() {
        return POST_TYPE;
    }

    @Override
    public int filterOrder() {
        return SEND_RESPONSE_FILTER_ORDER - 1;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() {
        RequestContext context = RequestContext.getCurrentContext();
        HttpServletResponse servletResponse = context.getResponse();
        servletResponse.addHeader("X-Sample", UUID.randomUUID().toString());
        return null;
    }
}
Other manipulations, such as transforming the response body, are much more complex and computationally intensive.
How Zuul Errors Work

If an exception is thrown during any portion of the Zuul filter lifecycle, the error filters are executed. The SendErrorFilter is only run if RequestContext.getThrowable() is not null. It then sets specific javax.servlet.error.* attributes in the request and forwards the request to the Spring Boot error page.
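
The following is a minimal sketch of a custom error filter that logs the captured throwable before SendErrorFilter runs (it assumes static imports of ERROR_TYPE and SEND_ERROR_FILTER_ORDER from FilterConstants; the filter name and logging are illustrative):

public class LoggingErrorFilter extends ZuulFilter {

    private static final Log log = LogFactory.getLog(LoggingErrorFilter.class);

    @Override
    public String filterType() {
        return ERROR_TYPE;
    }

    @Override
    public int filterOrder() {
        return SEND_ERROR_FILTER_ORDER - 1; // run before SendErrorFilter
    }

    @Override
    public boolean shouldFilter() {
        // run only when an earlier filter has recorded a throwable
        return RequestContext.getCurrentContext().getThrowable() != null;
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        log.error("Zuul filter chain failed", ctx.getThrowable());
        return null;
    }
}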

Zuul Eager Application Context Loading

Zuul internally uses Ribbon for calling the remote URLs. By default, Ribbon clients are lazily loaded by Spring Cloud on first call. This behavior can be changed for Zuul by using the following configuration, which results in eager loading of the child Ribbon-related application contexts at application startup time. The following example shows how to enable eager loading:

application.yml
zuul:
  ribbon:
    eager-load:
      enabled: true

5.10. Polyglot support with Sidecar

Do you have non-JVM languages with which you want to take advantage of Eureka, Ribbon, and Config Server? The Spring Cloud Netflix Sidecar was inspired by Netflix Prana. It includes an HTTP API to get all of the instances (by host and port) for a given service. You can also proxy service calls through an embedded Zuul proxy that gets its route entries from Eureka. The Spring Cloud Config Server can be accessed directly through host lookup or through the Zuul Proxy. The non-JVM application should implement a health check so the Sidecar can report to Eureka whether the app is up or down.

To include Sidecar in your project, use the dependency with a group ID of org.springframework.cloud and an artifact ID of spring-cloud-netflix-sidecar.

To enable the Sidecar, create a Spring Boot application with @EnableSidecar. This annotation includes @EnableCircuitBreaker, @EnableDiscoveryClient, and @EnableZuulProxy. Run the resulting application on the same host as the non-JVM application.

To configure the sidecar, add sidecar.port and sidecar.health-uri to application.yml. The sidecar.port property is the port on which the non-JVM application listens. This is so the Sidecar can properly register the application with Eureka. The sidecar.secure-port-enabled option provides a way to enable the secure port for traffic. The sidecar.health-uri is a URI accessible on the non-JVM application that mimics a Spring Boot health indicator. It should return a JSON document that resembles the following:

health-uri-document
{
  "status":"UP"
}

The following application.yml example shows sample configuration for a Sidecar application:

application.yml
server:
  port: 5678
spring:
  application:
    name: sidecar

sidecar:
  port: 8000
  health-uri: http://localhost:8000/health.json

The API for the DiscoveryClient.getInstances() method is /hosts/{serviceId}. The following example response for /hosts/customers returns two instances on different hosts:

/hosts/customers
[
    {
        "host": "myhost",
        "port": 9000,
        "uri": "https://myhost:9000",
        "serviceId": "CUSTOMERS",
        "secure": false
    },
    {
        "host": "myhost2",
        "port": 9000,
        "uri": "https://myhost2:9000",
        "serviceId": "CUSTOMERS",
        "secure": false
    }
]

This API is accessible to the non-JVM application (if the sidecar is on port 5678) at localhost:5678/hosts/{serviceId}.

The Zuul proxy automatically adds routes for each service known in Eureka to /<serviceId>, so the customers service is available at /customers. The non-JVM application can access the customer service at localhost:5678/customers (assuming the sidecar is listening on port 5678).

If the Config Server is registered with Eureka, the non-JVM application can access it through the Zuul proxy. If the serviceId of the ConfigServer is configserver and the Sidecar is on port 5678, then it can be accessed at localhost:5678/configserver.

Non-JVM applications can take advantage of the Config Server’s ability to return YAML documents. For example, a call to sidecar.local.spring.io:5678/configserver/default-master.yml might result in a YAML document resembling the following:

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  password: password
info:
  description: Spring Cloud Samples
  url: https://github.com/spring-cloud-samples

To enable the health check request to accept all certificates when using HTTPS, set sidecar.accept-all-ssl-certificates to true.

5.11. Retrying Failed Requests

Spring Cloud Netflix offers a variety of ways to make HTTP requests. You can use a load balanced RestTemplate, Ribbon, or Feign. No matter how you choose to create your HTTP requests, there is always a chance that a request may fail. When a request fails, you may want to have the request retried automatically. To do so when using Spring Cloud Netflix, you need to include Spring Retry on your application’s classpath. When Spring Retry is present, load-balanced RestTemplates, Feign, and Zuul automatically retry any failed requests (assuming your configuration allows doing so).

5.11.1. BackOff Policies

By default, no backoff policy is used when retrying requests. If you would like to configure a backoff policy, you need to create a bean of type LoadBalancedRetryFactory and override the createBackOffPolicy method for a given service, as shown in the following example:

@Configuration
public class MyConfiguration {
    @Bean
    LoadBalancedRetryFactory retryFactory() {
        return new LoadBalancedRetryFactory() {
            @Override
            public BackOffPolicy createBackOffPolicy(String service) {
                return new ExponentialBackOffPolicy();
            }
        };
    }
}

5.11.2. Configuration

When you use Ribbon with Spring Retry, you can control the retry functionality by configuring certain Ribbon properties. To do so, set the client.ribbon.MaxAutoRetries, client.ribbon.MaxAutoRetriesNextServer, and client.ribbon.OkToRetryOnAllOperations properties. See the Ribbon documentation for a description of what these properties do.

Enabling client.ribbon.OkToRetryOnAllOperations includes retrying POST requests, which can have an impact on the server’s resources, due to the buffering of the request body.
The property names are case-sensitive. Since some of these properties are defined in the Netflix Ribbon project, they use Pascal case, while the ones from Spring Cloud use camel case.
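
For example, the following sketch configures retries for a hypothetical Ribbon client named users (the values shown are illustrative):

users:
  ribbon:
    MaxAutoRetries: 1
    MaxAutoRetriesNextServer: 2
    OkToRetryOnAllOperations: false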

In addition, you may want to retry requests when certain status codes are returned in the response. You can list the response codes you would like the Ribbon client to retry by setting the clientName.ribbon.retryableStatusCodes property, as shown in the following example:

clientName:
  ribbon:
    retryableStatusCodes: 404,502

You can also create a bean of type LoadBalancedRetryPolicy and implement the retryableStatusCode method to determine whether to retry a request for a given status code.

Zuul

You can turn off Zuul’s retry functionality by setting zuul.retryable to false. You can also disable retry functionality on a route-by-route basis by setting zuul.routes.routename.retryable to false.

5.12. HTTP Clients

Spring Cloud Netflix automatically creates the HTTP client used by Ribbon, Feign, and Zuul for you. However, you can also provide your own HTTP clients customized as you need them to be. To do so, you can create a bean of type CloseableHttpClient if you are using the Apache HTTP Client or OkHttpClient if you are using OK HTTP.

When you create your own HTTP client, you are also responsible for implementing the correct connection management strategies for these clients. Doing so improperly can result in resource management issues.
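
As a minimal sketch, an Apache HTTP client bean with explicit connection pooling might look as follows (the pool sizes are illustrative only):

@Configuration
public class MyHttpClientConfiguration {

    @Bean
    public CloseableHttpClient httpClient() {
        // a pooling connection manager so that connections are reused across requests
        PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(200);
        connectionManager.setDefaultMaxPerRoute(50);
        return HttpClients.custom()
                .setConnectionManager(connectionManager)
                .build();
    }
}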

5.13. Modules In Maintenance Mode

Placing a module in maintenance mode means that the Spring Cloud team will no longer be adding new features to the module. We will fix blocker bugs and security issues, and we will also consider and review small pull requests from the community.

We intend to continue to support these modules for a period of at least a year from the general availability of the Greenwich release train.

The following Spring Cloud Netflix modules and corresponding starters will be placed into maintenance mode:

  • spring-cloud-netflix-archaius

  • spring-cloud-netflix-hystrix-contract

  • spring-cloud-netflix-hystrix-dashboard

  • spring-cloud-netflix-hystrix-stream

  • spring-cloud-netflix-hystrix

  • spring-cloud-netflix-ribbon

  • spring-cloud-netflix-turbine-stream

  • spring-cloud-netflix-turbine

  • spring-cloud-netflix-zuul

This does not include the Eureka or concurrency-limits modules.

5.14. Configuration properties

To see the list of all Spring Cloud Netflix related configuration properties please check the Appendix page.

6. Spring Cloud OpenFeign

Hoxton.SR3

This project provides OpenFeign integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms.

6.1. Declarative REST Client: Feign

Feign is a declarative web service client. It makes writing web service clients easier. To use Feign, create an interface and annotate it. It has pluggable annotation support, including Feign annotations and JAX-RS annotations. Feign also supports pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Spring Cloud integrates Ribbon and Eureka, as well as Spring Cloud LoadBalancer, to provide a load-balanced HTTP client when using Feign.

6.1.1. How to Include Feign

To include Feign in your project use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-openfeign. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

Example spring boot app

@SpringBootApplication
@EnableFeignClients
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}
StoreClient.java
@FeignClient("stores")
public interface StoreClient {
    @RequestMapping(method = RequestMethod.GET, value = "/stores")
    List<Store> getStores();

    @RequestMapping(method = RequestMethod.POST, value = "/stores/{storeId}", consumes = "application/json")
    Store update(@PathVariable("storeId") Long storeId, Store store);
}

In the @FeignClient annotation the String value ("stores" above) is an arbitrary client name, which is used to create either a Ribbon load-balancer (see below for details of Ribbon support) or Spring Cloud LoadBalancer. You can also specify a URL using the url attribute (absolute value or just a hostname). The name of the bean in the application context is the fully qualified name of the interface. To specify your own alias value you can use the qualifier value of the @FeignClient annotation.

The load-balancer client above will want to discover the physical addresses for the "stores" service. If your application is a Eureka client then it will resolve the service in the Eureka service registry. If you don’t want to use Eureka, you can simply configure a list of servers in your external configuration (see above for example).

In order to maintain backward compatibility, Ribbon is used as the default load-balancer implementation. However, Spring Cloud Netflix Ribbon is now in maintenance mode, so we recommend using Spring Cloud LoadBalancer instead. To do this, set the value of spring.cloud.loadbalancer.ribbon.enabled to false.
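
For example, in application.yml:

spring:
  cloud:
    loadbalancer:
      ribbon:
        enabled: false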

6.1.2. Overriding Feign Defaults

A central concept in Spring Cloud’s Feign support is that of the named client. Each feign client is part of an ensemble of components that work together to contact a remote server on demand, and the ensemble has a name that you give it as an application developer using the @FeignClient annotation. Spring Cloud creates a new ensemble as an ApplicationContext on demand for each named client using FeignClientsConfiguration. This contains (amongst other things) a feign.Decoder, a feign.Encoder, and a feign.Contract. It is possible to override the name of that ensemble by using the contextId attribute of the @FeignClient annotation.

Spring Cloud lets you take full control of the feign client by declaring additional configuration (on top of the FeignClientsConfiguration) using @FeignClient. Example:

@FeignClient(name = "stores", configuration = FooConfiguration.class)
public interface StoreClient {
    //..
}

In this case the client is composed from the components already in FeignClientsConfiguration together with any in FooConfiguration (where the latter will override the former).

FooConfiguration does not need to be annotated with @Configuration. However, if it is, then take care to exclude it from any @ComponentScan that would otherwise include this configuration as it will become the default source for feign.Decoder, feign.Encoder, feign.Contract, etc., when specified. This can be avoided by putting it in a separate, non-overlapping package from any @ComponentScan or @SpringBootApplication, or it can be explicitly excluded in @ComponentScan.
The serviceId attribute is now deprecated in favor of the name attribute.
Using the contextId attribute of the @FeignClient annotation, in addition to changing the name of the ApplicationContext ensemble, overrides the alias of the client name, and the value is also used as part of the name of the configuration bean created for that client.
Previously, using the url attribute did not require the name attribute. Using name is now required.

Placeholders are supported in the name and url attributes.

@FeignClient(name = "${feign.name}", url = "${feign.url}")
public interface StoreClient {
    //..
}

Spring Cloud Netflix provides the following beans by default for feign (BeanType beanName: ClassName):

  • Decoder feignDecoder: ResponseEntityDecoder (which wraps a SpringDecoder)

  • Encoder feignEncoder: SpringEncoder

  • Logger feignLogger: Slf4jLogger

  • Contract feignContract: SpringMvcContract

  • Feign.Builder feignBuilder: HystrixFeign.Builder

  • Client feignClient: if Ribbon is in the classpath and is enabled it is a LoadBalancerFeignClient, otherwise if Spring Cloud LoadBalancer is in the classpath, FeignBlockingLoadBalancerClient is used. If none of them is in the classpath, the default feign client is used.

spring-cloud-starter-openfeign contains both spring-cloud-starter-netflix-ribbon and spring-cloud-starter-loadbalancer.

The OkHttpClient and ApacheHttpClient feign clients can be used by setting feign.okhttp.enabled or feign.httpclient.enabled to true, respectively, and having them on the classpath. You can customize the HTTP client used by providing a bean of either org.apache.http.impl.client.CloseableHttpClient when using Apache or okhttp3.OkHttpClient when using OK HTTP.

Spring Cloud Netflix does not provide the following beans by default for feign, but still looks up beans of these types from the application context to create the feign client:

  • Logger.Level

  • Retryer

  • ErrorDecoder

  • Request.Options

  • Collection<RequestInterceptor>

  • SetterFactory

  • QueryMapEncoder

Creating a bean of one of those types and placing it in a @FeignClient configuration (such as FooConfiguration above) allows you to override each one of the beans described. Example:

@Configuration
public class FooConfiguration {
    @Bean
    public Contract feignContract() {
        return new feign.Contract.Default();
    }

    @Bean
    public BasicAuthRequestInterceptor basicAuthRequestInterceptor() {
        return new BasicAuthRequestInterceptor("user", "password");
    }
}

This replaces the SpringMvcContract with feign.Contract.Default and adds a RequestInterceptor to the collection of request interceptors.

@FeignClient also can be configured using configuration properties.

application.yml

feign:
  client:
    config:
      feignName:
        connectTimeout: 5000
        readTimeout: 5000
        loggerLevel: full
        errorDecoder: com.example.SimpleErrorDecoder
        retryer: com.example.SimpleRetryer
        requestInterceptors:
          - com.example.FooRequestInterceptor
          - com.example.BarRequestInterceptor
        decode404: false
        encoder: com.example.SimpleEncoder
        decoder: com.example.SimpleDecoder
        contract: com.example.SimpleContract

Default configurations can be specified in the @EnableFeignClients attribute defaultConfiguration in a similar manner as described above. The difference is that this configuration will apply to all feign clients.
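
For example (a sketch that assumes a DefaultFeignConfiguration class exists in your application):

@SpringBootApplication
@EnableFeignClients(defaultConfiguration = DefaultFeignConfiguration.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}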

If you prefer using configuration properties to configure all @FeignClient clients, you can create configuration properties with the default feign name.

application.yml

feign:
  client:
    config:
      default:
        connectTimeout: 5000
        readTimeout: 5000
        loggerLevel: basic

If you create both a @Configuration bean and configuration properties, the configuration properties win and override the @Configuration values. If you want to change the priority to @Configuration, set feign.client.default-to-properties to false.

If you need to use ThreadLocal-bound variables in your RequestInterceptors, you need to either set the thread isolation strategy for Hystrix to SEMAPHORE or disable Hystrix in Feign.

application.yml

# To disable Hystrix in Feign
feign:
  hystrix:
    enabled: false

# To set thread isolation to SEMAPHORE
hystrix:
  command:
    default:
      execution:
        isolation:
          strategy: SEMAPHORE

If you want to create multiple feign clients with the same name or url, so that they point to the same server but each with a different custom configuration, you must use the contextId attribute of @FeignClient to avoid a name collision between those configuration beans.

@FeignClient(contextId = "fooClient", name = "stores", configuration = FooConfiguration.class)
public interface FooClient {
    //..
}
@FeignClient(contextId = "barClient", name = "stores", configuration = BarConfiguration.class)
public interface BarClient {
    //..
}

6.1.3. Creating Feign Clients Manually

In some cases it might be necessary to customize your Feign Clients in a way that is not possible using the methods above. In this case you can create Clients using the Feign Builder API. Below is an example which creates two Feign Clients with the same interface but configures each one with a separate request interceptor.

@Import(FeignClientsConfiguration.class)
class FooController {

    private FooClient fooClient;

    private FooClient adminClient;

    @Autowired
    public FooController(Decoder decoder, Encoder encoder, Client client, Contract contract) {
        this.fooClient = Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .contract(contract)
                .requestInterceptor(new BasicAuthRequestInterceptor("user", "user"))
                .target(FooClient.class, "https://PROD-SVC");

        this.adminClient = Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .contract(contract)
                .requestInterceptor(new BasicAuthRequestInterceptor("admin", "admin"))
                .target(FooClient.class, "https://PROD-SVC");
    }
}
In the above example FeignClientsConfiguration.class is the default configuration provided by Spring Cloud Netflix.
PROD-SVC is the name of the service the Clients will be making requests to.
The Feign Contract object defines what annotations and values are valid on interfaces. The autowired Contract bean provides support for Spring MVC annotations, instead of the default Feign native annotations.

6.1.4. Feign Hystrix Support

If Hystrix is on the classpath and feign.hystrix.enabled=true, Feign wraps all methods with a circuit breaker. Returning a com.netflix.hystrix.HystrixCommand is also available. This lets you use reactive patterns (with a call to .toObservable() or .observe()) or asynchronous use (with a call to .queue()).

To disable Hystrix support on a per-client basis create a vanilla Feign.Builder with the "prototype" scope, e.g.:

@Configuration
public class FooConfiguration {
    @Bean
    @Scope("prototype")
    public Feign.Builder feignBuilder() {
        return Feign.builder();
    }
}
Prior to the Spring Cloud Dalston release, if Hystrix was on the classpath Feign would have wrapped all methods in a circuit breaker by default. This default behavior was changed in Spring Cloud Dalston in favor for an opt-in approach.

6.1.5. Feign Hystrix Fallbacks

Hystrix supports the notion of a fallback: a default code path that is executed when the circuit is open or there is an error. To enable fallbacks for a given @FeignClient, set the fallback attribute to the class name that implements the fallback. You also need to declare your implementation as a Spring bean.

@FeignClient(name = "hello", fallback = HystrixClientFallback.class)
protected interface HystrixClient {
    @RequestMapping(method = RequestMethod.GET, value = "/hello")
    Hello iFailSometimes();
}

static class HystrixClientFallback implements HystrixClient {
    @Override
    public Hello iFailSometimes() {
        return new Hello("fallback");
    }
}

If one needs access to the cause that made the fallback trigger, one can use the fallbackFactory attribute inside @FeignClient.

@FeignClient(name = "hello", fallbackFactory = HystrixClientFallbackFactory.class)
protected interface HystrixClient {
    @RequestMapping(method = RequestMethod.GET, value = "/hello")
    Hello iFailSometimes();
}

@Component
static class HystrixClientFallbackFactory implements FallbackFactory<HystrixClient> {
    @Override
    public HystrixClient create(Throwable cause) {
        return new HystrixClient() {
            @Override
            public Hello iFailSometimes() {
                return new Hello("fallback; reason was: " + cause.getMessage());
            }
        };
    }
}
There is a limitation with the implementation of fallbacks in Feign and how Hystrix fallbacks work. Fallbacks are currently not supported for methods that return com.netflix.hystrix.HystrixCommand and rx.Observable.

6.1.6. Feign and @Primary

When using Feign with Hystrix fallbacks, there are multiple beans in the ApplicationContext of the same type. This will cause @Autowired to not work because there isn’t exactly one bean, or one marked as primary. To work around this, Spring Cloud Netflix marks all Feign instances as @Primary, so Spring Framework will know which bean to inject. In some cases, this may not be desirable. To turn off this behavior set the primary attribute of @FeignClient to false.

@FeignClient(name = "hello", primary = false)
public interface HelloClient {
    // methods here
}

6.1.7. Feign Inheritance Support

Feign supports boilerplate APIs via single-inheritance interfaces. This allows grouping common operations into convenient base interfaces.

UserService.java
public interface UserService {

    @RequestMapping(method = RequestMethod.GET, value ="/users/{id}")
    User getUser(@PathVariable("id") long id);
}
UserResource.java
@RestController
public class UserResource implements UserService {

}
UserClient.java
package project.user;

@FeignClient("users")
public interface UserClient extends UserService {

}
It is generally not advisable to share an interface between a server and a client. It introduces tight coupling, and also actually doesn’t work with Spring MVC in its current form (method parameter mapping is not inherited).

6.1.8. Feign request/response compression

You may consider enabling the request or response GZIP compression for your Feign requests. You can do this by enabling one of the properties:

feign.compression.request.enabled=true
feign.compression.response.enabled=true

Feign request compression gives you settings similar to what you may set for your web server:

feign.compression.request.enabled=true
feign.compression.request.mime-types=text/xml,application/xml,application/json
feign.compression.request.min-request-size=2048

These properties allow you to be selective about the compressed media types and minimum request threshold length.

For HTTP clients other than OkHttpClient, the default gzip decoder can be enabled to decode gzip responses in UTF-8 encoding:

feign.compression.response.enabled=true
feign.compression.response.useGzipDecoder=true

6.1.9. Feign logging

A logger is created for each Feign client. By default, the name of the logger is the full class name of the interface used to create the Feign client. Feign logging responds only to the DEBUG level.

application.yml
logging.level.project.user.UserClient: DEBUG

The Logger.Level object, which you may configure per client, tells Feign how much to log. Choices are:

  • NONE, No logging (DEFAULT).

  • BASIC, Log only the request method and URL and the response status code and execution time.

  • HEADERS, Log the basic information along with request and response headers.

  • FULL, Log the headers, body, and metadata for both requests and responses.

For example, the following would set the Logger.Level to FULL:

@Configuration
public class FooConfiguration {
    @Bean
    Logger.Level feignLoggerLevel() {
        return Logger.Level.FULL;
    }
}

6.1.10. Feign @QueryMap support

The OpenFeign @QueryMap annotation provides support for POJOs to be used as GET parameter maps. Unfortunately, the default OpenFeign QueryMap annotation is incompatible with Spring because it lacks a value property.

Spring Cloud OpenFeign provides an equivalent @SpringQueryMap annotation, which is used to annotate a POJO or Map parameter as a query parameter map.

For example, the Params class defines parameters param1 and param2:

// Params.java
public class Params {
    private String param1;
    private String param2;

    // [Getters and setters omitted for brevity]
}

The following feign client uses the Params class by using the @SpringQueryMap annotation:

@FeignClient("demo")
public interface DemoTemplate {

    @GetMapping(path = "/demo")
    String demoEndpoint(@SpringQueryMap Params params);
}

If you need more control over the generated query parameter map, you can implement a custom QueryMapEncoder bean.
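
A minimal sketch of such a bean, assuming the BeanQueryMapEncoder shipped with Feign suits your needs:

@Configuration
public class QueryMapEncoderConfiguration {

    @Bean
    public QueryMapEncoder queryMapEncoder() {
        // encodes POJOs by their bean properties (getters) rather than raw fields
        return new BeanQueryMapEncoder();
    }
}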

6.1.11. HATEOAS support

Spring provides some APIs to create REST representations that follow the HATEOAS principle, Spring Hateoas and Spring Data REST.

If your project uses the org.springframework.boot:spring-boot-starter-hateoas starter or the org.springframework.boot:spring-boot-starter-data-rest starter, Feign HATEOAS support is enabled by default.

When HATEOAS support is enabled, Feign clients are allowed to serialize and deserialize HATEOAS representation models: EntityModel, CollectionModel and PagedModel.

@FeignClient("demo")
public interface DemoTemplate {

    @GetMapping(path = "/stores")
    CollectionModel<Store> getStores();
}

6.1.12. Troubleshooting

Early Initialization Errors

Depending on how you are using your Feign clients you may see initialization errors when starting your application. To work around this problem you can use an ObjectProvider when autowiring your client.

@Autowired
ObjectProvider<TestFeignClient> testFeignClient;
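
The client can then be resolved lazily, when it is first needed. For example (doSomething() is a hypothetical method on the client):

// resolve the client only when it is actually used
TestFeignClient client = this.testFeignClient.getObject();
client.doSomething();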

6.2. Configuration properties

To see the list of all OpenFeign related configuration properties please check the Appendix page.

7. Spring Cloud Bus

Spring Cloud Bus links the nodes of a distributed system with a lightweight message broker. This broker can then be used to broadcast state changes (such as configuration changes) or other management instructions. A key idea is that the bus is like a distributed actuator for a Spring Boot application that is scaled out. However, it can also be used as a communication channel between apps. This project provides starters for either an AMQP broker or Kafka as the transport.

Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, please find the source code and issue trackers in the project at github.

7.1. Quick Start

Spring Cloud Bus works by adding Spring Boot autoconfiguration if it detects itself on the classpath. To enable the bus, add spring-cloud-starter-bus-amqp or spring-cloud-starter-bus-kafka to your dependency management. Spring Cloud takes care of the rest. Make sure the broker (RabbitMQ or Kafka) is available and configured. When running on localhost, you need not do anything. If you run remotely, use Spring Cloud Connectors or Spring Boot conventions to define the broker credentials, as shown in the following example for Rabbit:

application.yml
spring:
  rabbitmq:
    host: mybroker.com
    port: 5672
    username: user
    password: secret

The bus currently supports sending messages to all nodes listening or all nodes for a particular service (as defined by Eureka). The /bus/* actuator namespace has some HTTP endpoints. Currently, two are implemented. The first, /bus/env, sends key/value pairs to update each node’s Spring Environment. The second, /bus/refresh, reloads each application’s configuration, as though they had all been pinged on their /refresh endpoint.

The Spring Cloud Bus starters cover Rabbit and Kafka, because those are the two most common implementations. However, Spring Cloud Stream is quite flexible, and the binder works with spring-cloud-bus.

7.2. Bus Endpoints

Spring Cloud Bus provides two endpoints, /actuator/bus-refresh and /actuator/bus-env, that correspond to individual actuator endpoints in Spring Cloud Commons: /actuator/refresh and /actuator/env, respectively.

7.2.1. Bus Refresh Endpoint

The /actuator/bus-refresh endpoint clears the RefreshScope cache and rebinds @ConfigurationProperties. See the Refresh Scope documentation for more information.

To expose the /actuator/bus-refresh endpoint, you need to add following configuration to your application:

management.endpoints.web.exposure.include=bus-refresh

7.2.2. Bus Env Endpoint

The /actuator/bus-env endpoint updates each instance’s environment with the specified key/value pair across multiple instances.

To expose the /actuator/bus-env endpoint, you need to add following configuration to your application:

management.endpoints.web.exposure.include=bus-env

The /actuator/bus-env endpoint accepts POST requests with the following shape:

{
    "name": "key1",
    "value": "value1"
}

7.3. Addressing an Instance

Each instance of the application has a service ID, whose value can be set with spring.cloud.bus.id and is expected to be a colon-separated list of identifiers, in order from least specific to most specific. The default value is constructed from the environment as a combination of the spring.application.name and server.port (or spring.application.index, if set). The default value of the ID is constructed in the form app:index:id, where:

  • app is the vcap.application.name, if it exists, or spring.application.name

  • index is the vcap.application.instance_index, if it exists, spring.application.index, local.server.port, server.port, or 0 (in that order).

  • id is the vcap.application.instance_id, if it exists, or a random value.

The HTTP endpoints accept a “destination” path parameter, such as /bus-refresh/customers:9000, where destination is a service ID. If the ID is owned by an instance on the bus, it processes the message, and all other instances ignore it.

7.4. Addressing All Instances of a Service

The “destination” parameter is used in a Spring PathMatcher (with the path separator as a colon — :) to determine if an instance processes the message. Using the example from earlier, /bus-env/customers:** targets all instances of the “customers” service regardless of the rest of the service ID.

7.5. Service ID Must Be Unique

The bus tries twice to eliminate processing an event — once from the original ApplicationEvent and once from the queue. To do so, it checks the sending service ID against the current service ID. If multiple instances of a service have the same ID, events are not processed. When running on a local machine, each service is on a different port, and that port is part of the ID. Cloud Foundry supplies an index to differentiate. To ensure that the ID is unique outside Cloud Foundry, set spring.application.index to something unique for each instance of a service.

7.6. Customizing the Message Broker

Spring Cloud Bus uses Spring Cloud Stream to broadcast the messages. So, to get messages to flow, you need only include the binder implementation of your choice in the classpath. There are convenient starters for the bus with AMQP (RabbitMQ) and Kafka (spring-cloud-starter-bus-[amqp|kafka]). Generally speaking, Spring Cloud Stream relies on Spring Boot autoconfiguration conventions for configuring middleware. For instance, the AMQP broker address can be changed with spring.rabbitmq.* configuration properties. Spring Cloud Bus has a handful of native configuration properties in spring.cloud.bus.* (for example, spring.cloud.bus.destination is the name of the topic to use as the external middleware). Normally, the defaults suffice.
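
For example, to send bus messages on a topic other than the default (the topic name shown is illustrative), you could set:

spring:
  cloud:
    bus:
      destination: my-bus-topic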

To learn more about how to customize the message broker settings, consult the Spring Cloud Stream documentation.

7.7. Tracing Bus Events

Bus events (subclasses of RemoteApplicationEvent) can be traced by setting spring.cloud.bus.trace.enabled=true. If you do so, the Spring Boot TraceRepository (if it is present) shows each event sent and all the acks from each service instance. The following example comes from the /trace endpoint:

[
  {
    "timestamp": "2015-11-26T10:24:44.411+0000",
    "info": {
      "signal": "spring.cloud.bus.ack",
      "type": "RefreshRemoteApplicationEvent",
      "id": "c4d374b7-58ea-4928-a312-31984def293b",
      "origin": "stores:8081",
      "destination": "*:**"
    }
  },
  {
    "timestamp": "2015-11-26T10:24:41.864+0000",
    "info": {
      "signal": "spring.cloud.bus.sent",
      "type": "RefreshRemoteApplicationEvent",
      "id": "c4d374b7-58ea-4928-a312-31984def293b",
      "origin": "customers:9000",
      "destination": "*:**"
    }
  },
  {
    "timestamp": "2015-11-26T10:24:41.862+0000",
    "info": {
      "signal": "spring.cloud.bus.ack",
      "type": "RefreshRemoteApplicationEvent",
      "id": "c4d374b7-58ea-4928-a312-31984def293b",
      "origin": "customers:9000",
      "destination": "*:**"
    }
  }
]

The preceding trace shows that a RefreshRemoteApplicationEvent was sent from customers:9000, broadcast to all services, and received (acked) by customers:9000 and stores:8081.

To handle the ack signals yourself, you could add an @EventListener for the AckRemoteApplicationEvent and SentApplicationEvent types to your app (and enable tracing). Alternatively, you could tap into the TraceRepository and mine the data from there.
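
A minimal sketch of such a listener (the logging is illustrative; a real application might store the acks or forward them to a tracing service):

@Configuration
public class AckTracingConfiguration {

    @EventListener
    public void onAck(AckRemoteApplicationEvent event) {
        System.out.println("Ack from " + event.getOriginService()
                + " for event " + event.getId());
    }
}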

Any Bus application can trace acks. However, sometimes, it is useful to do this in a central service that can do more complex queries on the data or forward it to a specialized tracing service.

7.8. Broadcasting Your Own Events

The Bus can carry any event of type RemoteApplicationEvent. The default transport is JSON, and the deserializer needs to know which types are going to be used ahead of time. To register a new type, you must put it in a subpackage of org.springframework.cloud.bus.event.

To customize the event name, you can use @JsonTypeName on your custom class or rely on the default strategy, which is to use the simple name of the class.

Both the producer and the consumer need access to the class definition.
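
For example, the following sketch defines a hypothetical UserDeletedRemoteEvent in a subpackage of org.springframework.cloud.bus.event, so that it is registered with the deserializer automatically:

package org.springframework.cloud.bus.event.acme;

public class UserDeletedRemoteEvent extends RemoteApplicationEvent {

    private String userId;

    // the no-arg constructor is needed for JSON deserialization
    protected UserDeletedRemoteEvent() {
    }

    public UserDeletedRemoteEvent(Object source, String originService,
            String destinationService, String userId) {
        super(source, originService, destinationService);
        this.userId = userId;
    }

    public String getUserId() {
        return this.userId;
    }

    public void setUserId(String userId) {
        this.userId = userId;
    }
}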

7.8.1. Registering events in custom packages

If you cannot or do not want to use a subpackage of org.springframework.cloud.bus.event for your custom events, you must specify which packages to scan for events of type RemoteApplicationEvent by using the @RemoteApplicationEventScan annotation. Packages specified with @RemoteApplicationEventScan include subpackages.

For example, consider the following custom event, called MyEvent:

package com.acme;

public class MyEvent extends RemoteApplicationEvent {
    ...
}

You can register that event with the deserializer in the following way:

package com.acme;

@Configuration
@RemoteApplicationEventScan
public class BusConfiguration {
    ...
}

Without specifying a value, the package of the class where @RemoteApplicationEventScan is used is registered. In this example, com.acme is registered by using the package of BusConfiguration.

You can also explicitly specify the packages to scan by using the value, basePackages or basePackageClasses properties on @RemoteApplicationEventScan, as shown in the following example:

package com.acme;

@Configuration
//@RemoteApplicationEventScan({"com.acme", "foo.bar"})
//@RemoteApplicationEventScan(basePackages = {"com.acme", "foo.bar", "fizz.buzz"})
@RemoteApplicationEventScan(basePackageClasses = BusConfiguration.class)
public class BusConfiguration {
    ...
}

All of the preceding examples of @RemoteApplicationEventScan are equivalent, in that the com.acme package is registered by explicitly specifying the packages on @RemoteApplicationEventScan.

You can specify multiple base packages to scan.

7.9. Configuration properties

To see the list of all Bus related configuration properties please check the Appendix page.

8. Spring Cloud Sleuth

Adrian Cole, Spencer Gibb, Marcin Grzejszczak, Dave Syer, Jay Bryant

Hoxton.SR3

8.1. Introduction

Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud.

8.1.1. Terminology

Spring Cloud Sleuth borrows Dapper’s terminology.

Span: The basic unit of work. For example, sending an RPC is a new span, as is sending a response to an RPC. Spans are identified by a unique 64-bit ID for the span and another 64-bit ID for the trace the span is a part of. Spans also have other data, such as descriptions, timestamped events, key-value annotations (tags), the ID of the span that caused them, and process IDs (normally IP addresses).

Spans can be started and stopped, and they keep track of their timing information. Once you create a span, you must stop it at some point in the future.

The initial span that starts a trace is called a root span. The value of the ID of that span is equal to the trace ID.

Trace: A set of spans forming a tree-like structure. For example, if you run a distributed big-data store, a trace might be formed by a PUT request.

Annotation: Used to record the existence of an event in time. With Brave instrumentation, we no longer need to set special events for Zipkin to understand who the client and server are, where the request started, and where it ended. For learning purposes, however, we mark these events to highlight what kind of an action took place.

  • cs: Client Sent. The client has made a request. This annotation indicates the start of the span.

  • sr: Server Received: The server side got the request and started processing it. Subtracting the cs timestamp from this timestamp reveals the network latency.

  • ss: Server Sent. Annotated upon completion of request processing (when the response got sent back to the client). Subtracting the sr timestamp from this timestamp reveals the time needed by the server side to process the request.

  • cr: Client Received. Signifies the end of the span. The client has successfully received the response from the server side. Subtracting the cs timestamp from this timestamp reveals the whole time needed by the client to receive the response from the server.

The following image shows how Span and Trace look in a system, together with the Zipkin annotations:

Trace Info propagation

Each color of a note signifies a span (there are seven spans - from A to G). Consider the following note:

Trace Id = X
Span Id = D
Client Sent

This note indicates that the current span has Trace Id set to X and Span Id set to D. Also, the Client Sent event took place.

The following image shows how parent-child relationships of spans look:

Parent child relationship

8.1.2. Purpose

The following sections refer to the example shown in the preceding image.

Distributed Tracing with Zipkin

This example has seven spans. If you go to traces in Zipkin, you can see this number in the second trace, as shown in the following image:

Traces

However, if you pick a particular trace, you can see four spans, as shown in the following image:

Traces Info propagation
When you pick a particular trace, you see merged spans. That means that, if there were two spans sent to Zipkin with Server Received and Server Sent or Client Received and Client Sent annotations, they are presented as a single span.

Why is there a difference between the seven and four spans in this case?

  • One span comes from the http:/start span. It has the Server Received (sr) and Server Sent (ss) annotations.

  • Two spans come from the RPC call from service1 to service2 to the http:/foo endpoint. The Client Sent (cs) and Client Received (cr) events took place on the service1 side. Server Received (sr) and Server Sent (ss) events took place on the service2 side. These two spans form one logical span related to an RPC call.

  • Two spans come from the RPC call from service2 to service3 to the http:/bar endpoint. The Client Sent (cs) and Client Received (cr) events took place on the service2 side. The Server Received (sr) and Server Sent (ss) events took place on the service3 side. These two spans form one logical span related to an RPC call.

  • Two spans come from the RPC call from service2 to service4 to the http:/baz endpoint. The Client Sent (cs) and Client Received (cr) events took place on the service2 side. Server Received (sr) and Server Sent (ss) events took place on the service4 side. These two spans form one logical span related to an RPC call.

So, if we count the physical spans, we have one from http:/start, two from service1 calling service2, two from service2 calling service3, and two from service2 calling service4. In sum, we have a total of seven spans.

Logically, we see the information of four total Spans because we have one span related to the incoming request to service1 and three spans related to RPC calls.

Visualizing errors

Zipkin lets you visualize errors in your trace. When an exception is thrown and not caught, we set proper tags on the span, which Zipkin can then properly colorize. You can see, in the list of traces, one trace that is red. That appears because an exception was thrown.

If you click that trace, you see a similar picture, as follows:

Error Traces

If you then click on one of the spans, you see the following:

Error Traces Info propagation

The span shows the reason for the error and the whole stack trace related to it.

Distributed Tracing with Brave

Starting with version 2.0.0, Spring Cloud Sleuth uses Brave as the tracing library. Consequently, Sleuth no longer takes care of storing the context but delegates that work to Brave.

Due to the fact that Sleuth had different naming and tagging conventions than Brave, we decided to follow Brave’s conventions from now on. However, if you want to use the legacy Sleuth approaches, you can set the spring.sleuth.http.legacy.enabled property to true.

Live examples
Zipkin deployed on Pivotal Web Services
Click the Pivotal Web Services icon to see it live!

The dependency graph in Zipkin should resemble the following image:

Dependencies
Log correlation

When using grep to read the logs of those four applications by scanning for a trace ID equal to (for example) 2485ec27856c56f4, you get output resembling the following:

service1.log:2016-02-26 11:15:47.561  INFO [service1,2485ec27856c56f4,2485ec27856c56f4,true] 68058 --- [nio-8081-exec-1] i.s.c.sleuth.docs.service1.Application   : Hello from service1. Calling service2
service2.log:2016-02-26 11:15:47.710  INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application   : Hello from service2. Calling service3 and then service4
service3.log:2016-02-26 11:15:47.895  INFO [service3,2485ec27856c56f4,1210be13194bfe5,true] 68060 --- [nio-8083-exec-1] i.s.c.sleuth.docs.service3.Application   : Hello from service3
service2.log:2016-02-26 11:15:47.924  INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application   : Got response from service3 [Hello from service3]
service4.log:2016-02-26 11:15:48.134  INFO [service4,2485ec27856c56f4,1b1845262ffba49d,true] 68061 --- [nio-8084-exec-1] i.s.c.sleuth.docs.service4.Application   : Hello from service4
service2.log:2016-02-26 11:15:48.156  INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application   : Got response from service4 [Hello from service4]
service1.log:2016-02-26 11:15:48.182  INFO [service1,2485ec27856c56f4,2485ec27856c56f4,true] 68058 --- [nio-8081-exec-1] i.s.c.sleuth.docs.service1.Application   : Got response from service2 [Hello from service2, response from service3 [Hello from service3] and from service4 [Hello from service4]]

If you use a log aggregating tool (such as Kibana, Splunk, and others), you can order the events that took place. An example from Kibana would resemble the following image:

Log correlation with Kibana

If you want to use Logstash, the following listing shows the Grok pattern for Logstash:

filter {
  # pattern matching logback pattern
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}\s+---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
  }
  date {
    match => ["timestamp", "ISO8601"]
  }
  mutate {
    remove_field => ["timestamp"]
  }
}
If you want to use Grok together with the logs from Cloud Foundry, you have to use the following pattern:
filter {
  # pattern matching logback pattern
  grok {
    match => { "message" => "(?m)OUT\s+%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}\s+---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
  }
  date {
    match => ["timestamp", "ISO8601"]
  }
  mutate {
    remove_field => ["timestamp"]
  }
}
JSON Logback with Logstash

Often, you do not want to store your logs in a text file but in a JSON file that Logstash can immediately pick up. To do so, you have to do the following (for readability, we pass the dependencies in the groupId:artifactId:version notation).

Dependencies Setup

  1. Ensure that Logback is on the classpath (ch.qos.logback:logback-core).

  2. Add the Logstash Logback encoder. For example, to use version 4.6, add net.logstash.logback:logstash-logback-encoder:4.6.

Logback Setup

Consider the following example of a Logback configuration file (named logback-spring.xml).

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <!-- Example for logging into the build folder of your project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}"/>​

    <!-- You can override this to have a custom pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Appender to log to console -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <!-- Minimum logging level to be presented in the console logs-->
            <level>DEBUG</level>
        </filter>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender to log to file -->
    <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender to log to file in a JSON format -->
    <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "parent": "%X{X-B3-ParentSpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "baggage": "%X{key:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="console"/>
        <!-- uncomment this to have also JSON logs -->
        <!--<appender-ref ref="logstash"/>-->
        <!--<appender-ref ref="flatfile"/>-->
    </root>
</configuration>

That Logback configuration file:

  • Can log information from the application in a JSON format to a build/${spring.application.name}.json file.

  • Has two appenders, the JSON (logstash) one and the flat log file (flatfile) one, commented out in the root logger; only the console appender is active by default.

  • Has the same logging pattern as the one presented in the previous section.

If you use a custom logback-spring.xml, you must pass the spring.application.name in the bootstrap rather than the application property file. Otherwise, your custom logback file does not properly read the property.
Propagating Span Context

The span context is the state that must get propagated to any child spans across process boundaries. Part of the Span Context is the Baggage. The trace and span IDs are a required part of the span context. Baggage is an optional part.

Baggage is a set of key:value pairs stored in the span context. Baggage travels together with the trace and is attached to every span. Spring Cloud Sleuth understands that a header is baggage-related if the HTTP header is prefixed with baggage- and, for messaging, it starts with baggage_.

There is currently no limitation of the count or size of baggage items. However, keep in mind that too many can decrease system throughput or increase RPC latency. In extreme cases, too much baggage can crash the application, due to exceeding transport-level message or header capacity.

The following example shows setting baggage on a span:

Span initialSpan = this.tracer.nextSpan().name("span").start();
ExtraFieldPropagation.set(initialSpan.context(), "foo", "bar");
ExtraFieldPropagation.set(initialSpan.context(), "UPPER_CASE", "someValue");
Baggage versus Span Tags

Baggage travels with the trace (every child span contains the baggage of its parent). Zipkin has no knowledge of baggage and does not receive that information.

Starting from Sleuth 2.0.0, you have to pass the baggage key names explicitly in your project configuration. Read more about that setup here.

Tags are attached to a specific span. In other words, they are presented only for that particular span. However, you can search by tag to find the trace, assuming a span having the searched tag value exists.

If you want to be able to look up a span based on baggage, you should add a corresponding entry as a tag in the root span.

The span must be in scope.

The following listing shows integration tests that use baggage:

The setup
spring.sleuth:
  baggage-keys:
    - baz
    - bizarrecase
  propagation-keys:
    - foo
    - upper_case
The code
initialSpan.tag("foo",
        ExtraFieldPropagation.get(initialSpan.context(), "foo"));
initialSpan.tag("UPPER_CASE",
        ExtraFieldPropagation.get(initialSpan.context(), "UPPER_CASE"));

8.1.3. Adding Sleuth to the Project

This section addresses how to add Sleuth to your project with either Maven or Gradle.

To ensure that your application name is properly displayed in Zipkin, set the spring.application.name property in bootstrap.yml.
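
For example, in bootstrap.yml (my-service is an illustrative name):

spring:
  application:
    name: my-service
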
Only Sleuth (log correlation)

If you want to use only Spring Cloud Sleuth without the Zipkin integration, add the spring-cloud-starter-sleuth module to your project.

The following example shows how to add Sleuth with Maven:

Maven
<dependencyManagement> (1)
      <dependencies>
          <dependency>
              <groupId>org.springframework.cloud</groupId>
              <artifactId>spring-cloud-dependencies</artifactId>
              <version>${release.train.version}</version>
              <type>pom</type>
              <scope>import</scope>
          </dependency>
      </dependencies>
</dependencyManagement>

<dependency> (2)
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
1 We recommend that you add the dependency management through the Spring BOM so that you need not manage versions yourself.
2 Add the dependency to spring-cloud-starter-sleuth.

The following example shows how to add Sleuth with Gradle:

Gradle
dependencyManagement { (1)
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${releaseTrainVersion}"
    }
}

dependencies { (2)
    compile "org.springframework.cloud:spring-cloud-starter-sleuth"
}
1 We recommend that you add the dependency management through the Spring BOM so that you need not manage versions yourself.
2 Add the dependency to spring-cloud-starter-sleuth.
Sleuth with Zipkin via HTTP

If you want both Sleuth and Zipkin, add the spring-cloud-starter-zipkin dependency.

The following example shows how to do so for Maven:

Maven
<dependencyManagement> (1)
      <dependencies>
          <dependency>
              <groupId>org.springframework.cloud</groupId>
              <artifactId>spring-cloud-dependencies</artifactId>
              <version>${release.train.version}</version>
              <type>pom</type>
              <scope>import</scope>
          </dependency>
      </dependencies>
</dependencyManagement>

<dependency> (2)
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
1 We recommend that you add the dependency management through the Spring BOM so that you need not manage versions yourself.
2 Add the dependency to spring-cloud-starter-zipkin.

The following example shows how to do so for Gradle:

Gradle
dependencyManagement { (1)
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${releaseTrainVersion}"
    }
}

dependencies { (2)
    compile "org.springframework.cloud:spring-cloud-starter-zipkin"
}
1 We recommend that you add the dependency management through the Spring BOM so that you need not manage versions yourself.
2 Add the dependency to spring-cloud-starter-zipkin.
Sleuth with Zipkin over RabbitMQ or Kafka

If you want to use RabbitMQ or Kafka instead of HTTP, add the spring-rabbit or spring-kafka dependency. The default destination name is zipkin.

If you use Kafka, you must set the spring.zipkin.sender.type property accordingly:

spring.zipkin.sender.type: kafka
spring-cloud-sleuth-stream is deprecated and incompatible with these destinations.

If you want Sleuth over RabbitMQ, add the spring-cloud-starter-zipkin and spring-rabbit dependencies.

The following examples show how to do so for Maven and Gradle:

Maven
<dependencyManagement> (1)
      <dependencies>
          <dependency>
              <groupId>org.springframework.cloud</groupId>
              <artifactId>spring-cloud-dependencies</artifactId>
              <version>${release.train.version}</version>
              <type>pom</type>
              <scope>import</scope>
          </dependency>
      </dependencies>
</dependencyManagement>

<dependency> (2)
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
<dependency> (3)
    <groupId>org.springframework.amqp</groupId>
    <artifactId>spring-rabbit</artifactId>
</dependency>
1 We recommend that you add the dependency management through the Spring BOM so that you need not manage versions yourself.
2 Add the dependency to spring-cloud-starter-zipkin. That way, all nested dependencies get downloaded.
3 To automatically configure RabbitMQ, add the spring-rabbit dependency.
Gradle
dependencyManagement { (1)
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${releaseTrainVersion}"
    }
}

dependencies {
    compile "org.springframework.cloud:spring-cloud-starter-zipkin" (2)
    compile "org.springframework.amqp:spring-rabbit" (3)
}
1 We recommend that you add the dependency management through the Spring BOM so that you need not manage versions yourself.
2 Add the dependency to spring-cloud-starter-zipkin. That way, all nested dependencies get downloaded.
3 To automatically configure RabbitMQ, add the spring-rabbit dependency.

8.1.4. Overriding the auto-configuration of Zipkin

Spring Cloud Sleuth supports sending traces to multiple tracing systems as of version 2.1.0. In order to get this to work, every tracing system needs to have a Reporter<Span> and a Sender. If you want to override the provided beans, you need to give them specific names: ZipkinAutoConfiguration.REPORTER_BEAN_NAME for the Reporter and ZipkinAutoConfiguration.SENDER_BEAN_NAME for the Sender.

@Configuration
protected static class MyConfig {

    @Bean(ZipkinAutoConfiguration.REPORTER_BEAN_NAME)
    Reporter<zipkin2.Span> myReporter() {
        return AsyncReporter.create(mySender());
    }

    @Bean(ZipkinAutoConfiguration.SENDER_BEAN_NAME)
    MySender mySender() {
        return new MySender();
    }

    static class MySender extends Sender {

        private boolean spanSent = false;

        boolean isSpanSent() {
            return this.spanSent;
        }

        @Override
        public Encoding encoding() {
            return Encoding.JSON;
        }

        @Override
        public int messageMaxBytes() {
            return Integer.MAX_VALUE;
        }

        @Override
        public int messageSizeInBytes(List<byte[]> encodedSpans) {
            return encoding().listSizeInBytes(encodedSpans);
        }

        @Override
        public Call<Void> sendSpans(List<byte[]> encodedSpans) {
            this.spanSent = true;
            return Call.create(null);
        }

    }

}

8.2. Additional Resources

You can watch a video of Reshmi Krishna and Marcin Grzejszczak talking about Spring Cloud Sleuth and Zipkin by clicking here.

You can check different setups of Sleuth and Brave in the openzipkin/sleuth-webmvc-example repository.

8.3. Features

  • Adds trace and span IDs to the Slf4J MDC, so you can extract all the logs from a given trace or span in a log aggregator, as shown in the following example logs:

    2016-02-02 15:30:57.902  INFO [bar,6bfd228dc00d216b,6bfd228dc00d216b,false] 23030 --- [nio-8081-exec-3] ...
    2016-02-02 15:30:58.372 ERROR [bar,6bfd228dc00d216b,6bfd228dc00d216b,false] 23030 --- [nio-8081-exec-3] ...
    2016-02-02 15:31:01.936  INFO [bar,46ab0d418373cbc9,46ab0d418373cbc9,false] 23030 --- [nio-8081-exec-4] ...

    Notice the [appname,traceId,spanId,exportable] entries from the MDC:

    • spanId: The ID of a specific operation that took place.

    • appname: The name of the application that logged the span.

    • traceId: The ID of the latency graph that contains the span.

    • exportable: Whether the log should be exported to Zipkin. When would you like the span not to be exportable? When you want to wrap some operation in a Span and have it written to the logs only.

  • Provides an abstraction over common distributed tracing data models: traces, spans (forming a DAG), annotations, and key-value annotations. Spring Cloud Sleuth is loosely based on HTrace but is compatible with Zipkin (Dapper).

  • Sleuth records timing information to aid in latency analysis. By using Sleuth, you can pinpoint causes of latency in your applications.

  • Sleuth is written to not log too much and to not cause your production application to crash. To that end, Sleuth:

    • Propagates structural data about your call graph in-band and the rest out-of-band.

    • Includes opinionated instrumentation of layers such as HTTP.

    • Includes a sampling policy to manage volume.

    • Can report to a Zipkin system for query and visualization.

  • Instruments common ingress and egress points from Spring applications (servlet filter, async endpoints, rest template, scheduled actions, message channels, Zuul filters, and Feign client).

  • Sleuth includes default logic to join a trace across HTTP or messaging boundaries. For example, HTTP propagation works over Zipkin-compatible request headers.

  • Sleuth can propagate context (also known as baggage) between processes. Consequently, if you set a baggage element on a Span, it is sent downstream to other processes over either HTTP or messaging.

  • Provides a way to create or continue spans and add tags and logs through annotations.

  • If spring-cloud-sleuth-zipkin is on the classpath, the app generates and collects Zipkin-compatible traces. By default, it sends them over HTTP to a Zipkin server on localhost (port 9411). You can configure the location of the service by setting spring.zipkin.baseUrl.

    • If you depend on spring-rabbit, your app sends traces to a RabbitMQ broker instead of HTTP.

    • If you depend on spring-kafka, and set spring.zipkin.sender.type: kafka, your app sends traces to a Kafka broker instead of HTTP.

spring-cloud-sleuth-stream is deprecated and should no longer be used.
The SLF4J MDC is always set, and Logback users immediately see the trace and span IDs in logs, per the example shown earlier. Other logging systems have to configure their own formatter to get the same result. The default is as follows: logging.pattern.level set to %5p [${spring.zipkin.service.name:${spring.application.name:-}},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}] (this is a Spring Boot feature for Logback users). If you do not use SLF4J, this pattern is NOT automatically applied.

8.3.1. Introduction to Brave

Starting with version 2.0.0, Spring Cloud Sleuth uses Brave as the tracing library. For your convenience, we embed part of Brave’s docs here.
In the vast majority of cases, you need only the Tracer or SpanCustomizer beans that Sleuth provides from Brave. The documentation below contains a high-level overview of what Brave is and how it works.

Brave is a library used to capture and report latency information about distributed operations to Zipkin. Most users do not use Brave directly. They use libraries or frameworks that employ Brave on their behalf.

This module includes a tracer that creates and joins spans that model the latency of potentially distributed work. It also includes libraries to propagate the trace context over network boundaries (for example, with HTTP headers).

Tracing

Most importantly, you need a brave.Tracer, configured to report to Zipkin.

The following example setup sends trace data (spans) to Zipkin over HTTP (as opposed to Kafka):

class MyClass {

    private final Tracer tracer;

    // Tracer will be autowired
    MyClass(Tracer tracer) {
        this.tracer = tracer;
    }

    void doSth() {
        Span span = tracer.newTrace().name("encode").start();
        // ...
    }
}
If your span contains a name longer than 50 chars, then that name is truncated to 50 chars. Your names have to be explicit and concrete. Big names lead to latency issues and sometimes even thrown exceptions.

The tracer creates and joins spans that model the latency of potentially distributed work. It can employ sampling to reduce overhead during the process, to reduce the amount of data sent to Zipkin, or both.

Spans returned by a tracer report data to Zipkin when finished or do nothing if unsampled. After starting a span, you can annotate events of interest or add tags containing details or lookup keys.
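
For example, the following sketch (the event and tag names are made up for illustration) annotates an event and adds a tag on a span before finishing it:

@Autowired Tracer tracer;

void encodePayload() {
  Span span = this.tracer.nextSpan().name("encode").start();
  try {
    span.annotate("compression.start"); // a timestamped event of interest
    span.tag("codec", "gzip");          // a detail that can double as a lookup key
    // ... do the actual work ...
  } finally {
    span.finish(); // reports to Zipkin if sampled; otherwise a no-op
  }
}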

Spans have a context that includes trace identifiers that place the span at the correct spot in the tree representing the distributed operation.

Local Tracing

When tracing code that never leaves your process, run it inside a scoped span.

@Autowired Tracer tracer;

// Start a new trace or a span within an existing trace representing an operation
ScopedSpan span = tracer.startScopedSpan("encode");
try {
  // The span is in "scope" meaning downstream code such as loggers can see trace IDs
  return encoder.encode();
} catch (RuntimeException | Error e) {
  span.error(e); // Unless you handle exceptions, you might not know the operation failed!
  throw e;
} finally {
  span.finish(); // always finish the span
}

When you need more features, or finer control, use the Span type:

@Autowired Tracer tracer;

// Start a new trace or a span within an existing trace representing an operation
Span span = tracer.nextSpan().name("encode").start();
// Put the span in "scope" so that downstream code such as loggers can see trace IDs
try (SpanInScope ws = tracer.withSpanInScope(span)) {
  return encoder.encode();
} catch (RuntimeException | Error e) {
  span.error(e); // Unless you handle exceptions, you might not know the operation failed!
  throw e;
} finally {
  span.finish(); // note the scope is independent of the span. Always finish a span.
}

Both of the above examples report the exact same span on finish!

In the above example, the span will be either a new root span or the next child in an existing trace.

Customizing Spans

Once you have a span, you can add tags to it. The tags can be used as lookup keys or details. For example, you might add a tag with your runtime version, as shown in the following example:

span.tag("clnt/finagle.version", "6.36.0");

When exposing the ability to customize spans to third parties, prefer brave.SpanCustomizer as opposed to brave.Span. The former is simpler to understand and test and does not tempt users with span lifecycle hooks.

interface MyTraceCallback {
  void request(Request request, SpanCustomizer customizer);
}

Since brave.Span implements brave.SpanCustomizer, you can pass it to users, as shown in the following example:

for (MyTraceCallback callback : userCallbacks) {
  callback.request(request, span);
}
Implicitly Looking up the Current Span

Sometimes, you do not know whether a trace is in progress, and you do not want users to do null checks. brave.CurrentSpanCustomizer handles this problem by adding data to any span that is in progress, or dropping the data otherwise, as shown in the following example:

// The user code can then inject this without a chance of it being null.
@Autowired SpanCustomizer span;

void userCode() {
  span.annotate("tx.started");
  ...
}
RPC tracing
Check for instrumentation written here and Zipkin’s list before rolling your own RPC instrumentation.

RPC tracing is often done automatically by interceptors. Behind the scenes, they add tags and events that relate to their role in an RPC operation.

The following example shows how to add a client span:

@Autowired Tracing tracing;
@Autowired Tracer tracer;

// before you send a request, add metadata that describes the operation
span = tracer.nextSpan().name(service + "/" + method).kind(CLIENT);
span.tag("myrpc.version", "1.0.0");
span.remoteServiceName("backend");
span.remoteIpAndPort("172.3.4.1", 8108);

// Add the trace context to the request, so it can be propagated in-band
tracing.propagation().injector(Request::addHeader)
                     .inject(span.context(), request);

// when the request is scheduled, start the span
span.start();

// if there is an error, tag the span
span.tag("error", error.getCode());
// or if there is an exception
span.error(exception);

// when the response is complete, finish the span
span.finish();
One-Way tracing

Sometimes, you need to model an asynchronous operation where there is a request but no response. In normal RPC tracing, you use span.finish() to indicate that the response was received. In one-way tracing, you use span.flush() instead, as you do not expect a response.

The following example shows how a client might model a one-way operation:

@Autowired Tracing tracing;
@Autowired Tracer tracer;

// start a new span representing a client request
oneWaySend = tracer.nextSpan().name(service + "/" + method).kind(CLIENT);

// Add the trace context to the request, so it can be propagated in-band
tracing.propagation().injector(Request::addHeader)
                     .inject(oneWaySend.context(), request);

// fire off the request asynchronously, totally dropping any response
request.execute();

// start the client side and flush instead of finish
oneWaySend.start().flush();

The following example shows how a server might handle a one-way operation:

@Autowired Tracing tracing;
@Autowired Tracer tracer;

// pull the context out of the incoming request
extractor = tracing.propagation().extractor(Request::getHeader);

// convert that context to a span which you can name and add tags to
oneWayReceive = nextSpan(tracer, extractor.extract(request))
    .name("process-request")
    .kind(SERVER)
    ... add tags etc.

// start the server side and flush instead of finish
oneWayReceive.start().flush();

// you should not modify this span anymore as it is complete. However,
// you can create children to represent follow-up work.
next = tracer.newSpan(oneWayReceive.context()).name("step2").start();

8.4. Sampling

Sampling may be employed to reduce the data collected and reported out of process. When a span is not sampled, it adds no overhead (a noop).

Sampling is an up-front decision, meaning that the decision to report data is made at the first operation in a trace and that decision is propagated downstream.

By default, a global sampler applies a single rate to all traced operations. Tracer.Builder.sampler controls this setting, and it defaults to tracing every request.
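
For example, when building a Tracing instance directly with Brave (in Sleuth, the builder elements are beans, so you would expose a Sampler bean instead), a probability-based global sampler might be set as follows; the service name and rate are illustrative:

Tracing tracing = Tracing.newBuilder()
    .localServiceName("my-service")  // illustrative name
    .sampler(Sampler.create(0.25f))  // keep roughly 25% of traces
    .build();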

8.4.1. Declarative sampling

Some applications need to sample based on the type or annotations of a Java method.

Most users use a framework interceptor to automate this sort of policy. The following example shows how that might work internally:

@Autowired Tracer tracer;

// derives a sample rate from an annotation on a java method
DeclarativeSampler<Traced> sampler = DeclarativeSampler.create(Traced::sampleRate);

@Around("@annotation(traced)")
public Object traceThing(ProceedingJoinPoint pjp, Traced traced) throws Throwable {
  // When there is no trace in progress, this decides using an annotation
  Sampler decideUsingAnnotation = sampler.toSampler(traced);
  Tracer tracer = this.tracer.withSampler(decideUsingAnnotation);

  // This code looks the same as if there was no declarative override
  ScopedSpan span = tracer.startScopedSpan(spanName(pjp));
  try {
    return pjp.proceed();
  } catch (RuntimeException | Error e) {
    span.error(e);
    throw e;
  } finally {
    span.finish();
  }
}

8.4.2. Custom sampling

Depending on what the operation is, you may want to apply different policies. For example, you might not want to trace requests to static resources such as images, or you might want to trace all requests to a new api.

Most users use a framework interceptor to automate this sort of policy. The following example shows how that might work internally:

@Autowired Tracer tracer;
@Autowired Sampler fallback;

Span nextSpan(final Request input) {
  Sampler requestBased = new Sampler() {
    @Override public boolean isSampled(long traceId) {
      if (input.url().startsWith("/experimental")) {
        return true;
      } else if (input.url().startsWith("/static")) {
        return false;
      }
      return fallback.isSampled(traceId);
    }
  };
  return tracer.withSampler(requestBased).nextSpan();
}

8.4.3. Sampling in Spring Cloud Sleuth

By default, Spring Cloud Sleuth sets all spans to non-exportable. That means that traces appear in logs but not in any remote store. For testing, the default is often enough, and it probably is all you need if you use only the logs (for example, with an ELK aggregator). If you export span data to Zipkin, there is also a Sampler.ALWAYS_SAMPLE setting that exports everything, a RateLimitingSampler setting that samples X transactions per second (defaults to 1000), and a ProbabilityBasedSampler setting that samples a fixed fraction of spans.

The RateLimitingSampler is the default if you use spring-cloud-sleuth-zipkin. You can configure the rate limit by setting spring.sleuth.sampler.rate.

A sampler can be installed by creating a bean definition, as shown in the following example:

@Bean
public Sampler defaultSampler() {
    return Sampler.ALWAYS_SAMPLE;
}
You can set the HTTP header X-B3-Flags to 1, or, when doing messaging, you can set the spanFlags header to 1. Doing so forces the current span to be exportable regardless of the sampling decision.

In order to use the rate-limited sampler, set the spring.sleuth.sampler.rate property to choose an amount of traces to accept on a per-second interval. The minimum number is 0, and the max is 2,147,483,647 (max int).
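
If you prefer Java configuration, a bean definition along the following lines has the same effect as setting the property (the rate of 50 is arbitrary):

@Bean
public Sampler rateLimitingSampler() {
    // accept at most 50 traces per second
    return RateLimitingSampler.create(50);
}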

8.5. Propagation

Propagation is needed to ensure that activities originating from the same root are collected together in the same trace. The most common propagation approach is to copy a trace context from a client sending an RPC request to a server receiving it.

For example, when a downstream HTTP call is made, its trace context is encoded as request headers and sent along with it, as shown in the following image:

   Client Span                                                Server Span
┌──────────────────┐                                       ┌──────────────────┐
│                  │                                       │                  │
│   TraceContext   │           Http Request Headers        │   TraceContext   │
│ ┌──────────────┐ │          ┌───────────────────┐        │ ┌──────────────┐ │
│ │ TraceId      │ │          │ X─B3─TraceId      │        │ │ TraceId      │ │
│ │              │ │          │                   │        │ │              │ │
│ │ ParentSpanId │ │ Extract  │ X─B3─ParentSpanId │ Inject │ │ ParentSpanId │ │
│ │              ├─┼─────────>│                   ├────────┼>│              │ │
│ │ SpanId       │ │          │ X─B3─SpanId       │        │ │ SpanId       │ │
│ │              │ │          │                   │        │ │              │ │
│ │ Sampled      │ │          │ X─B3─Sampled      │        │ │ Sampled      │ │
│ └──────────────┘ │          └───────────────────┘        │ └──────────────┘ │
│                  │                                       │                  │
└──────────────────┘                                       └──────────────────┘

The names above are from B3 Propagation, which is built-in to Brave and has implementations in many languages and frameworks.

Most users use a framework interceptor to automate propagation. The next two examples show how that might work for a client and a server.

The following example shows how client-side propagation might work:

@Autowired Tracing tracing;

// configure a function that injects a trace context into a request
injector = tracing.propagation().injector(Request.Builder::addHeader);

// before a request is sent, add the current span's context to it
injector.inject(span.context(), request);

The following example shows how server-side propagation might work:

@Autowired Tracing tracing;
@Autowired Tracer tracer;

// configure a function that extracts the trace context from a request
extractor = tracing.propagation().extractor(Request::getHeader);

// when a server receives a request, it joins or starts a new trace
span = tracer.nextSpan(extractor.extract(request));

8.5.1. Propagating extra fields

Sometimes you need to propagate extra fields, such as a request ID or an alternate trace context. For example, if you are in a Cloud Foundry environment, you might want to pass the request ID, as shown in the following example:

// when you initialize the builder, define the extra field you want to propagate
Tracing.newBuilder().propagationFactory(
  ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-vcap-request-id")
);

// later, you can tag that request ID or use it in log correlation
requestId = ExtraFieldPropagation.get("x-vcap-request-id");

You may also need to propagate a trace context that you are not using. For example, you may be in an Amazon Web Services environment but not be reporting data to X-Ray. To ensure X-Ray can co-exist correctly, pass-through its tracing header, as shown in the following example:

tracingBuilder.propagationFactory(
  ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-amzn-trace-id")
);
In Spring Cloud Sleuth all elements of the tracing builder Tracing.newBuilder() are defined as beans. So if you want to pass a custom PropagationFactory, it’s enough for you to create a bean of that type and we will set it in the Tracing bean.
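For example, the following minimal sketch reuses the Cloud Foundry request ID field from the earlier example:

@Bean
Propagation.Factory myPropagationFactory() {
    // Sleuth detects this bean and sets it on the Tracing bean
    return ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-vcap-request-id");
}
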
Prefixed fields

If they follow a common pattern, you can also prefix fields. The following example shows how to propagate the x-vcap-request-id field as-is but send the country-code and user-id fields on the wire as x-baggage-country-code and x-baggage-user-id, respectively:

Tracing.newBuilder().propagationFactory(
  ExtraFieldPropagation.newFactoryBuilder(B3Propagation.FACTORY)
                       .addField("x-vcap-request-id")
                       .addPrefixedFields("x-baggage-", Arrays.asList("country-code", "user-id"))
                       .build()
);

Later, you can call the following code to affect the country code of the current trace context:

ExtraFieldPropagation.set("x-country-code", "FO");
String countryCode = ExtraFieldPropagation.get("x-country-code");

Alternatively, if you have a reference to a trace context, you can use it explicitly, as shown in the following example:

ExtraFieldPropagation.set(span.context(), "country-code", "FO");
String countryCode = ExtraFieldPropagation.get(span.context(), "country-code");
A difference from previous versions of Sleuth is that, with Brave, you must pass the list of baggage keys. The following properties achieve this. With the spring.sleuth.baggage-keys property, you set keys that get prefixed with baggage- for HTTP calls and baggage_ for messaging. You can also use the spring.sleuth.propagation-keys property to pass a list of prefixed keys that are propagated to remote services without any prefix. Finally, you can use the spring.sleuth.local-keys property to pass a list of keys that are propagated locally but not over the wire. Notice that there’s no x- in front of the header keys.

In order to automatically set the baggage values to SLF4J’s MDC, you have to set the spring.sleuth.log.slf4j.whitelisted-mdc-keys property with a list of whitelisted baggage and propagation keys. For example, spring.sleuth.log.slf4j.whitelisted-mdc-keys=foo sets the value of the foo baggage entry into the MDC.

Remember that adding entries to MDC can drastically decrease the performance of your application!

If you want to add the baggage entries as tags, to make it possible to search for spans via the baggage entries, you can set the value of spring.sleuth.propagation.tag.whitelisted-keys with a list of whitelisted baggage keys. To disable the feature you have to pass the spring.sleuth.propagation.tag.enabled=false property.

Extracting a Propagated Context

The TraceContext.Extractor<C> reads trace identifiers and sampling status from an incoming request or message. The carrier is usually a request object or headers.

This utility is used in standard instrumentation (such as HttpServerHandler) but can also be used for custom RPC or messaging code.

TraceContextOrSamplingFlags is usually used only with Tracer.nextSpan(extracted), unless you are sharing span IDs between a client and a server.
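
The following sketch shows how custom messaging code might use the extractor; the Map carrier and the span name are illustrative:

@Autowired Tracing tracing;
@Autowired Tracer tracer;

void onMessage(Map<String, String> headers) {
  // Map<String, String> stands in for the carrier of a custom protocol
  TraceContext.Extractor<Map<String, String>> extractor =
      tracing.propagation().extractor(Map::get);
  TraceContextOrSamplingFlags extracted = extractor.extract(headers);
  // joins the trace if identifiers were present; otherwise starts a new one
  Span span = tracer.nextSpan(extracted).name("process-message").start();
  try {
    // ... handle the message ...
  } finally {
    span.finish();
  }
}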

Sharing span IDs between Client and Server

A normal instrumentation pattern is to create a span representing the server side of an RPC. Extractor.extract might return a complete trace context when applied to an incoming client request. Tracer.joinSpan attempts to continue this trace, using the same span ID if supported or creating a child span if not. When the span ID is shared, the reported data includes a flag saying so.

The following image shows an example of B3 propagation:

                              ┌───────────────────┐      ┌───────────────────┐
 Incoming Headers             │   TraceContext    │      │   TraceContext    │
┌───────────────────┐(extract)│ ┌───────────────┐ │(join)│ ┌───────────────┐ │
│ X─B3-TraceId      │─────────┼─┼> TraceId      │ │──────┼─┼> TraceId      │ │
│                   │         │ │               │ │      │ │               │ │
│ X─B3-ParentSpanId │─────────┼─┼> ParentSpanId │ │──────┼─┼> ParentSpanId │ │
│                   │         │ │               │ │      │ │               │ │
│ X─B3-SpanId       │─────────┼─┼> SpanId       │ │──────┼─┼> SpanId       │ │
└───────────────────┘         │ │               │ │      │ │               │ │
                              │ │               │ │      │ │  Shared: true │ │
                              │ └───────────────┘ │      │ └───────────────┘ │
                              └───────────────────┘      └───────────────────┘

Some propagation systems forward only the parent span ID, detected when Propagation.Factory.supportsJoin() == false. In this case, a new span ID is always provisioned, and the incoming context determines the parent ID.

The following image shows an example of AWS propagation:

                              ┌───────────────────┐      ┌───────────────────┐
 x-amzn-trace-id              │   TraceContext    │      │   TraceContext    │
┌───────────────────┐(extract)│ ┌───────────────┐ │(join)│ ┌───────────────┐ │
│ Root              │─────────┼─┼> TraceId      │ │──────┼─┼> TraceId      │ │
│                   │         │ │               │ │      │ │               │ │
│ Parent            │─────────┼─┼> SpanId       │ │──────┼─┼> ParentSpanId │ │
└───────────────────┘         │ └───────────────┘ │      │ │               │ │
                              └───────────────────┘      │ │  SpanId: New  │ │
                                                         │ └───────────────┘ │
                                                         └───────────────────┘

Note: Some span reporters do not support sharing span IDs. For example, if you set Tracing.Builder.spanReporter(amazonXrayOrGoogleStackdrive), you should disable join by setting Tracing.Builder.supportsJoin(false). Doing so forces a new child span on Tracer.joinSpan().

Implementing Propagation

TraceContext.Extractor<C> is implemented by a Propagation.Factory plugin. Internally, this code creates the union type, TraceContextOrSamplingFlags, with one of the following: * TraceContext if trace and span IDs were present. * TraceIdContext if a trace ID was present but span IDs were not present. * SamplingFlags if no identifiers were present.

Some Propagation implementations carry extra data from the point of extraction (for example, reading incoming headers) to injection (for example, writing outgoing headers). For example, it might carry a request ID. When implementations have extra data, they handle it as follows:

  • If a TraceContext were extracted, add the extra data as TraceContext.extra().

  • Otherwise, add it as TraceContextOrSamplingFlags.extra(), which Tracer.nextSpan handles.

8.6. Current Tracing Component

Brave supports a “current tracing component” concept, which should only be used when you have no other way to get a reference. This was made for JDBC connections, as they often initialize prior to the tracing component.

The most recent tracing component instantiated is available through Tracing.current(). You can also use Tracing.currentTracer() to get only the tracer. If you use either of these methods, do not cache the result. Instead, look them up each time you need them.
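
For example, in code that cannot use dependency injection (the span name and helper are hypothetical):

void connect() {
  Tracer tracer = Tracing.currentTracer(); // look it up each time; never cache it
  if (tracer == null) {
    doConnect(); // hypothetical helper; tracing is not initialized yet
    return;
  }
  ScopedSpan span = tracer.startScopedSpan("jdbc.connect");
  try {
    doConnect();
  } finally {
    span.finish();
  }
}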

8.7. Current Span

Brave supports a “current span” concept which represents the in-flight operation. You can use Tracer.currentSpan() to add custom tags to a span and Tracer.nextSpan() to create a child of whatever is in-flight.

In Sleuth, you can autowire the Tracer bean to retrieve the current span via the tracer.currentSpan() method. To retrieve the current context, call tracer.currentSpan().context(). To get the current trace ID as a String, use the traceIdString() method, like this: tracer.currentSpan().context().traceIdString().
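
Putting this together, the following sketch returns the current trace ID, guarding against the case where no span is in flight:

@Autowired Tracer tracer;

String currentTraceId() {
    Span currentSpan = this.tracer.currentSpan();
    if (currentSpan == null) { // no span is in flight
        return null;
    }
    return currentSpan.context().traceIdString();
}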

8.7.1. Setting a span in scope manually

When writing new instrumentation, it is important to place a span you created in scope as the current span. Not only does doing so let users access it with Tracer.currentSpan(), but it also allows customizations such as SLF4J MDC to see the current trace IDs.

Tracer.withSpanInScope(Span) facilitates this and is most conveniently employed by using the try-with-resources idiom. Whenever external code might be invoked (such as proceeding an interceptor or otherwise), place the span in scope, as shown in the following example:

@Autowired Tracer tracer;

try (SpanInScope ws = tracer.withSpanInScope(span)) {
  return inboundRequest.invoke();
} finally { // note the scope is independent of the span
  span.finish();
}

In edge cases, you may need to clear the current span temporarily (for example, launching a task that should not be associated with the current request). To do so, pass null to withSpanInScope, as shown in the following example:

@Autowired Tracer tracer;

try (SpanInScope cleared = tracer.withSpanInScope(null)) {
  startBackgroundThread();
}

8.8. Instrumentation

Spring Cloud Sleuth automatically instruments all your Spring applications, so you should not have to do anything to activate it. The instrumentation is added by using a variety of technologies according to the stack that is available. For example, for a servlet web application, we use a Filter, and, for Spring Integration, we use ChannelInterceptors.

You can customize the keys used in span tags. To limit the volume of span data, an HTTP request is, by default, tagged only with a handful of metadata, such as the status code, the host, and the URL. You can add request headers by configuring spring.sleuth.keys.http.headers (a list of header names).

Tags are collected and exported only if there is a Sampler that allows it. By default, there is no such Sampler, so there is no danger of accidentally collecting too much data without configuring something.

8.9. Span lifecycle

You can do the following operations on the Span by means of brave.Tracer:

  • start: When you start a span, its name is assigned and the start timestamp is recorded.

  • close: The span gets finished (the end time of the span is recorded) and, if the span is sampled, it is eligible for collection (for example, to Zipkin).

  • continue: A new instance of span is created. It is a copy of the one that it continues.

  • detach: The span does not get stopped or closed. It only gets removed from the current thread.

  • create with explicit parent: You can create a new span and set an explicit parent for it.

Spring Cloud Sleuth creates an instance of Tracer for you. In order to use it, you can autowire it.

8.9.1. Creating and finishing spans

You can manually create spans by using the Tracer, as shown in the following example:

// Start a span. If there was a span present in this thread it will become
// the `newSpan`'s parent.
Span newSpan = this.tracer.nextSpan().name("calculateTax");
try (Tracer.SpanInScope ws = this.tracer.withSpanInScope(newSpan.start())) {
    // ...
    // You can tag a span
    newSpan.tag("taxValue", taxValue);
    // ...
    // You can log an event on a span
    newSpan.annotate("taxCalculated");
}
finally {
    // Once done remember to finish the span. This will allow collecting
    // the span to send it to Zipkin
    newSpan.finish();
}

The preceding example shows how to create a new instance of a span. If there is already a span in this thread, it becomes the parent of the new span.

Always clean after you create a span. Also, always finish any span that you want to send to Zipkin.
If your span contains a name greater than 50 chars, that name is truncated to 50 chars. Your names have to be explicit and concrete. Big names lead to latency issues and sometimes even exceptions.

8.9.2. Continuing Spans

Sometimes, you do not want to create a new span but you want to continue one. An example of such a situation might be as follows:

  • AOP: If there was already a span created before an aspect was reached, you might not want to create a new span.

  • Hystrix: Executing a Hystrix command is most likely a logical part of the current processing. It is in fact merely a technical implementation detail that you would not necessarily want to reflect in tracing as a separate being.

To continue a span, you can use brave.Tracer, as shown in the following example:

// let's assume that we're in a thread Y and we've received
// the `initialSpan` from thread X
Span continuedSpan = this.tracer.toSpan(initialSpan.context());
try {
    // ...
    // You can tag a span
    continuedSpan.tag("taxValue", taxValue);
    // ...
    // You can log an event on a span
    continuedSpan.annotate("taxCalculated");
}
finally {
    // Once done remember to flush the span. That means that
    // it will get reported but the span itself is not yet finished
    continuedSpan.flush();
}

8.9.3. Creating a Span with an explicit Parent

You might want to start a new span and provide an explicit parent of that span. Assume that the parent of a span is in one thread and you want to start a new span in another thread. In Brave, whenever you call nextSpan(), it creates a span in reference to the span that is currently in scope. You can put the span in scope and then call nextSpan(), as shown in the following example:

// let's assume that we're in a thread Y and we've received
// the `initialSpan` from thread X. `initialSpan` will be the parent
// of the `newSpan`
Span newSpan = null;
try (Tracer.SpanInScope ws = this.tracer.withSpanInScope(initialSpan)) {
    newSpan = this.tracer.nextSpan().name("calculateCommission");
    // ...
    // You can tag a span
    newSpan.tag("commissionValue", commissionValue);
    // ...
    // You can log an event on a span
    newSpan.annotate("commissionCalculated");
}
finally {
    // Once done remember to finish the span. This will allow collecting
    // the span to send it to Zipkin. The tags and events set on the
    // newSpan will not be present on the parent
    if (newSpan != null) {
        newSpan.finish();
    }
}
After creating such a span, you must finish it. Otherwise it is not reported (for example, to Zipkin).

8.10. Naming spans

Picking a span name is not a trivial task. A span name should depict an operation name. The name should be low cardinality, so it should not include identifiers.

Since there is a lot of instrumentation going on, some span names are artificial:

  • controller-method-name when received by a Controller with a method name of controllerMethodName

  • async for asynchronous operations done with wrapped Callable and Runnable interfaces.

  • Methods annotated with @Scheduled return the simple name of the class.

Fortunately, for asynchronous processing, you can provide explicit naming.

8.10.1. @SpanName Annotation

You can name the span explicitly by using the @SpanName annotation, as shown in the following example:

@SpanName("calculateTax")
class TaxCountingRunnable implements Runnable {

    @Override
    public void run() {
        // perform logic
    }

}

In this case, when processed in the following manner, the span is named calculateTax:

Runnable runnable = new TraceRunnable(this.tracing, spanNamer,
        new TaxCountingRunnable());
Future<?> future = executorService.submit(runnable);
// ... some additional logic ...
future.get();

8.10.2. toString() method

It is pretty rare to create separate classes for Runnable or Callable. Typically, one creates an anonymous instance of those classes. You cannot annotate such classes. To overcome that limitation, if there is no @SpanName annotation present, we check whether the class has a custom implementation of the toString() method.

Running such code leads to creating a span named calculateTax, as shown in the following example:

Runnable runnable = new TraceRunnable(this.tracing, spanNamer, new Runnable() {
    @Override
    public void run() {
        // perform logic
    }

    @Override
    public String toString() {
        return "calculateTax";
    }
});
Future<?> future = executorService.submit(runnable);
// ... some additional logic ...
future.get();

8.11. Managing Spans with Annotations

You can manage spans with a variety of annotations.

8.11.1. Rationale

There are a number of good reasons to manage spans with annotations, including:

  • API-agnostic means to collaborate with a span. Use of annotations lets users add to a span with no library dependency on a span API. Doing so lets Sleuth change its core API with less impact on user code.

  • Reduced surface area for basic span operations. Without this feature, you must use the span api, which has lifecycle commands that could be used incorrectly. By only exposing scope, tag, and log functionality, you can collaborate without accidentally breaking span lifecycle.

  • Collaboration with runtime generated code. With libraries such as Spring Data and Feign, the implementations of interfaces are generated at runtime. Consequently, span wrapping of objects was tedious. Now you can provide annotations over interfaces and the arguments of those interfaces.

8.11.2. Creating New Spans

If you do not want to create local spans manually, you can use the @NewSpan annotation. Also, we provide the @SpanTag annotation to add tags in an automated fashion.

Now we can consider some examples of usage.

@NewSpan
void testMethod();

Annotating the method without any parameter leads to creating a new span whose name equals the annotated method name.

@NewSpan("customNameOnTestMethod4")
void testMethod4();

If you provide the value in the annotation (either directly or by setting the name parameter), the created span has the provided value as the name.

// method declaration
@NewSpan(name = "customNameOnTestMethod5")
void testMethod5(@SpanTag("testTag") String param);

// and method execution
this.testBean.testMethod5("test");

You can combine both the name and a tag. Let’s focus on the latter. In this case, the runtime value of the annotated method’s parameter becomes the value of the tag. In our sample, the tag key is testTag, and the tag value is test.

@NewSpan(name = "customNameOnTestMethod3")
@Override
public void testMethod3() {
}

You can place the @NewSpan annotation on both the class and an interface. If you override the interface’s method and provide a different value for the @NewSpan annotation, the most concrete one wins (in this case customNameOnTestMethod3 is set).

8.11.3. Continuing Spans

If you want to add tags and annotations to an existing span, you can use the @ContinueSpan annotation, as shown in the following example:

// method declaration
@ContinueSpan(log = "testMethod11")
void testMethod11(@SpanTag("testTag11") String param);

// method execution
this.testBean.testMethod11("test");
this.testBean.testMethod13();

(Note that, in contrast with the @NewSpan annotation, you can also add logs with the log parameter.)

That way, the span gets continued and:

  • Log entries named testMethod11.before and testMethod11.after are created.

  • If an exception is thrown, a log entry named testMethod11.afterFailure is also created.

  • A tag with a key of testTag11 and a value of test is created.

8.11.4. Advanced Tag Setting

There are 3 different ways to add tags to a span. All of them are controlled by the SpanTag annotation. The precedence is as follows:

  1. Try with a bean of TagValueResolver type and a provided name.

  2. If the bean name has not been provided, try to evaluate an expression. We search for a TagValueExpressionResolver bean. The default implementation uses SPEL expression resolution. IMPORTANT You can only reference properties from the SPEL expression. Method execution is not allowed due to security constraints.

  3. If we do not find any expression to evaluate, return the toString() value of the parameter.

Custom extractor

The value of the tag for the following method is computed by an implementation of TagValueResolver interface. Its class name has to be passed as the value of the resolver attribute.

Consider the following annotated method:

@NewSpan
public void getAnnotationForTagValueResolver(
        @SpanTag(key = "test", resolver = TagValueResolver.class) String test) {
}

Now further consider the following TagValueResolver bean implementation:

@Bean(name = "myCustomTagValueResolver")
public TagValueResolver tagValueResolver() {
    return parameter -> "Value from myCustomTagValueResolver";
}

The two preceding examples lead to setting a tag value equal to Value from myCustomTagValueResolver.

Resolving Expressions for a Value

Consider the following annotated method:

@NewSpan
public void getAnnotationForTagValueExpression(@SpanTag(key = "test",
        expression = "'hello' + ' characters'") String test) {
}

If there is no custom implementation of a TagValueExpressionResolver, the SPEL expression is evaluated, and a tag with a value of hello characters is set on the span. If you want to use some other expression resolution mechanism, you can create your own implementation of the bean.

Using the toString() method

Consider the following annotated method:

@NewSpan
public void getAnnotationForArgumentToString(@SpanTag("test") Long param) {
}

Running the preceding method with a value of 15 leads to setting a tag with a String value of "15".

8.12. Customizations

8.12.1. Customizers

With Brave 5.7, you have various options for providing customizers for your project. Brave ships with the following:

  • TracingCustomizer - allows configuration plugins to collaborate on building an instance of Tracing.

  • CurrentTraceContextCustomizer - allows configuration plugins to collaborate on building an instance of CurrentTraceContext.

  • ExtraFieldCustomizer - allows configuration plugins to collaborate on building an instance of ExtraFieldPropagation.Factory.

Sleuth will search for beans of those types and automatically apply customizations.
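
For example, the following minimal sketch registers a TracingCustomizer bean (the 128-bit trace ID setting is only an illustration):

@Bean
TracingCustomizer myTracingCustomizer() {
    // invoked while the Tracing instance is being built
    return builder -> builder.traceId128Bit(true);
}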

8.12.2. HTTP

Data Policy

The default span data policy for HTTP requests is described in Brave: github.com/openzipkin/brave/tree/master/instrumentation/http#span-data-policy

To add different data to the span, you need to register a bean of type brave.http.HttpRequestParser or brave.http.HttpResponseParser based on when the data is collected.

The bean names correspond to the request or response side, and whether it is a client or server. For example, sleuthHttpClientRequestParser changes what is collected before a client request is sent to the server.

For your convenience @HttpClientRequestParser, @HttpClientResponseParser and corresponding server annotations can be used to inject the proper beans or to reference the bean names via their static String NAME fields.

Here’s an example adding the HTTP url in addition to defaults:

@Configuration
class Config {
  @Bean(name = { HttpClientRequestParser.NAME, HttpServerRequestParser.NAME })
  HttpRequestParser sleuthHttpServerRequestParser() {
      return (req, context, span) -> {
          HttpRequestParser.DEFAULT.parse(req, context, span);
          String url = req.url();
          if (url != null) {
              span.tag("http.url", url);
          }
      };
  }
}
Sampling

If client or server sampling is required, register a bean of type brave.sampler.SamplerFunction<HttpRequest> and name the bean sleuthHttpClientSampler for the client sampler and sleuthHttpServerSampler for the server sampler.

For your convenience the @HttpClientSampler and @HttpServerSampler annotations can be used to inject the proper beans or to reference the bean names via their static String NAME fields.

Check out Brave’s code to see an example of how to make a path-based sampler github.com/openzipkin/brave/tree/master/instrumentation/http#sampling-policy

If you want to completely rewrite the HttpTracing bean, you can use the SkipPatternProvider interface to retrieve the URL pattern for spans that should not be sampled. The following example uses SkipPatternProvider inside a server-side SamplerFunction<HttpRequest>.

@Configuration
class Config {
  @Bean(name = HttpServerSampler.NAME)
  SamplerFunction<HttpRequest> myHttpSampler(SkipPatternProvider provider) {
      Pattern pattern = provider.skipPattern();
      return request -> {
          String url = request.path();
          boolean shouldSkip = pattern.matcher(url).matches();
          if (shouldSkip) {
              return false;
          }
          return null;
      };
  }
}

8.12.3. TracingFilter

You can also modify the behavior of the TracingFilter, which is the component responsible for processing the input HTTP request and adding tags based on the HTTP response. You can customize the tags or modify the response headers by registering your own instance of the TracingFilter bean.

In the following example, we register the TracingFilter bean, add the ZIPKIN-TRACE-ID response header containing the current Span’s trace id, and add a tag with key custom and a value tag to the span.

@Component
@Order(TraceWebServletAutoConfiguration.TRACING_FILTER_ORDER + 1)
class MyFilter extends GenericFilterBean {

    private final Tracer tracer;

    MyFilter(Tracer tracer) {
        this.tracer = tracer;
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        Span currentSpan = this.tracer.currentSpan();
        if (currentSpan == null) {
            chain.doFilter(request, response);
            return;
        }
        // for readability we're returning trace id in a hex form
        ((HttpServletResponse) response).addHeader("ZIPKIN-TRACE-ID",
                currentSpan.context().traceIdString());
        // we can also add some custom tags
        currentSpan.tag("custom", "tag");
        chain.doFilter(request, response);
    }

}

8.12.4. Messaging

Sleuth automatically configures the MessagingTracing bean which serves as a foundation for Messaging instrumentation such as Kafka or JMS.

If a customization of producer / consumer sampling of messaging traces is required, just register a bean of type brave.sampler.SamplerFunction<MessagingRequest> and name the bean sleuthProducerSampler for producer sampler and sleuthConsumerSampler for consumer sampler.

For your convenience the @ProducerSampler and @ConsumerSampler annotations can be used to inject the proper beans or to reference the bean names via their static String NAME fields.

Ex. Here’s a sampler that traces 100 consumer requests per second, except for the "alerts" channel. Other requests will use a global rate provided by the Tracing component. The following is a minimal sketch, assuming Brave’s MessagingRuleSampler and its channelNameEquals matcher (compare the RPC example in the next section):

@Configuration
class Config {
  @Bean(name = ConsumerSampler.NAME)
  SamplerFunction<MessagingRequest> myMessagingSampler() {
      // assumes static imports of MessagingRequestMatchers.channelNameEquals
      // and brave.sampler.Matchers.alwaysMatch
      return MessagingRuleSampler.newBuilder()
              .putRule(channelNameEquals("alerts"), Sampler.NEVER_SAMPLE)
              .putRule(Matchers.alwaysMatch(), RateLimitingSampler.create(100))
              .build();
  }
}

8.12.5. RPC

Sleuth automatically configures the RpcTracing bean which serves as a foundation for RPC instrumentation such as gRPC or Dubbo.

If a customization of client / server sampling of the RPC traces is required, just register a bean of type brave.sampler.SamplerFunction<RpcRequest> and name the bean sleuthRpcClientSampler for client sampler and sleuthRpcServerSampler for server sampler.

For your convenience the @RpcClientSampler and @RpcServerSampler annotations can be used to inject the proper beans or to reference the bean names via their static String NAME fields.

Ex. Here’s a sampler that traces 100 "GetUserToken" server requests per second. This doesn’t start new traces for requests to the health check service. Other requests will use the global sampling configuration.

@Configuration
class Config {
  @Bean(name = RpcServerSampler.NAME)
  SamplerFunction<RpcRequest> myRpcSampler() {
      Matcher<RpcRequest> userAuth = and(serviceEquals("users.UserService"),
              methodEquals("GetUserToken"));
      return RpcRuleSampler.newBuilder()
              .putRule(serviceEquals("grpc.health.v1.Health"), Sampler.NEVER_SAMPLE)
              .putRule(userAuth, RateLimitingSampler.create(100)).build();
  }
}

8.12.6. Custom service name

By default, Sleuth assumes that, when you send a span to Zipkin, you want the span’s service name to be equal to the value of the spring.application.name property. That is not always the case, though. There are situations in which you want to explicitly provide a different service name for all spans coming from your application. To achieve that, you can pass the following property to your application to override that value (the example is for a service named myService):

spring.zipkin.service.name: myService

8.12.7. Customization of Reported Spans

Before reporting spans (for example, to Zipkin) you may want to modify that span in some way. You can do so by using the FinishedSpanHandler interface.

In Sleuth, we generate spans with a fixed name. Some users want to modify the name depending on values of tags. You can implement the FinishedSpanHandler interface to alter that name.

The following example shows how to register two beans that implement FinishedSpanHandler:

@Bean
FinishedSpanHandler handlerOne() {
    return new FinishedSpanHandler() {
        @Override
        public boolean handle(TraceContext traceContext, MutableSpan span) {
            span.name("foo");
            return true; // keep this span
        }
    };
}

@Bean
FinishedSpanHandler handlerTwo() {
    return new FinishedSpanHandler() {
        @Override
        public boolean handle(TraceContext traceContext, MutableSpan span) {
            span.name(span.name() + " bar");
            return true; // keep this span
        }
    };
}

The preceding example results in changing the name of the reported span to foo bar, just before it gets reported (for example, to Zipkin).

8.12.8. Host Locator

This section is about defining host from service discovery. It is NOT about finding Zipkin through service discovery.

To define the host that corresponds to a particular span, we need to resolve the host name and port. The default approach is to take these values from server properties. If those are not set, we try to retrieve the host name from the network interfaces.

If you have the discovery client enabled and prefer to retrieve the host address from the registered instance in a service registry, you have to set the spring.zipkin.locator.discovery.enabled property (it is applicable for both HTTP-based and Stream-based span reporting), as follows:

spring.zipkin.locator.discovery.enabled: true

8.13. Sending Spans to Zipkin

By default, if you add spring-cloud-starter-zipkin as a dependency to your project, when the span is closed, it is sent to Zipkin over HTTP. The communication is asynchronous. You can configure the URL by setting the spring.zipkin.baseUrl property, as follows:

spring.zipkin.baseUrl: https://192.168.99.100:9411/

If you want to find Zipkin through service discovery, you can pass the Zipkin’s service ID inside the URL, as shown in the following example for zipkinserver service ID:

spring.zipkin.baseUrl: https://zipkinserver/

To disable this feature, set spring.zipkin.discoveryClientEnabled to false.

When the Discovery Client feature is enabled, Sleuth uses the LoadBalancerClient to find the URL of the Zipkin Server. This means that you can set up the load-balancing configuration (for example, via Ribbon), as follows:

zipkinserver:
  ribbon:
    ListOfServers: host1,host2

If you have web, rabbit, activemq, or kafka together on the classpath, you might need to pick the means by which you would like to send spans to Zipkin. To do so, set the spring.zipkin.sender.type property to web, rabbit, activemq, or kafka. The following example shows setting the sender type for web:

spring.zipkin.sender.type: web

To customize the RestTemplate that sends spans to Zipkin via HTTP, you can register the ZipkinRestTemplateCustomizer bean.

@Configuration
class MyConfig {
    @Bean ZipkinRestTemplateCustomizer myCustomizer() {
        return new ZipkinRestTemplateCustomizer() {
            @Override
            public void customize(RestTemplate restTemplate) {
                // customize the RestTemplate
            }
        };
    }
}

If, however, you would like to control the full process of creating the RestTemplate object, you will have to create a bean of zipkin2.reporter.Sender type.

    @Bean Sender myRestTemplateSender(ZipkinProperties zipkin,
            ZipkinRestTemplateCustomizer zipkinRestTemplateCustomizer) {
        RestTemplate restTemplate = mySuperCustomRestTemplate();
        zipkinRestTemplateCustomizer.customize(restTemplate);
        return myCustomSender(zipkin, restTemplate);
    }

8.14. Zipkin Stream Span Consumer

We recommend using Zipkin’s native support for message-based span sending. Starting from the Edgware release, the Zipkin Stream server is deprecated. In the Finchley release, it was removed.

If for some reason you need to create the deprecated Stream Zipkin server, see the Dalston Documentation.

8.15. Integrations

8.15.1. OpenTracing

Spring Cloud Sleuth is compatible with OpenTracing. If you have OpenTracing on the classpath, we automatically register the OpenTracing Tracer bean. If you wish to disable this, set spring.sleuth.opentracing.enabled to false.
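
The following sketch shows the registered bridge in use; the operation name is illustrative:

@Autowired io.opentracing.Tracer tracer;

void tracedOperation() {
    io.opentracing.Span span = this.tracer.buildSpan("my-operation").start();
    try {
        // ... business logic ...
    } finally {
        span.finish();
    }
}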

8.15.2. Runnable and Callable

If you wrap your logic in Runnable or Callable, you can wrap those classes in their Sleuth representative, as shown in the following example for Runnable:

Runnable runnable = new Runnable() {
    @Override
    public void run() {
        // do some work
    }

    @Override
    public String toString() {
        return "spanNameFromToStringMethod";
    }
};
// Manual `TraceRunnable` creation with explicit "calculateTax" Span name
Runnable traceRunnable = new TraceRunnable(this.tracing, spanNamer, runnable,
        "calculateTax");
// Wrapping `Runnable` with `Tracing`. That way the current span will be available
// in the thread of `Runnable`
Runnable traceRunnableFromTracer = this.tracing.currentTraceContext()
        .wrap(runnable);

The following example shows how to do so for Callable:

Callable<String> callable = new Callable<String>() {
    @Override
    public String call() throws Exception {
        return someLogic();
    }

    @Override
    public String toString() {
        return "spanNameFromToStringMethod";
    }
};
// Manual `TraceCallable` creation with explicit "calculateTax" Span name
Callable<String> traceCallable = new TraceCallable<>(this.tracing, spanNamer,
        callable, "calculateTax");
// Wrapping `Callable` with `Tracing`. That way the current span will be available
// in the thread of `Callable`
Callable<String> traceCallableFromTracer = this.tracing.currentTraceContext()
        .wrap(callable);

That way, you ensure that a new span is created and closed for each execution.

8.15.3. Spring Cloud CircuitBreaker

If you have Spring Cloud CircuitBreaker on the classpath, we will wrap the passed command Supplier and the fallback Function in their trace representations. To disable this instrumentation, set spring.sleuth.circuitbreaker.enabled to false.

8.15.4. Hystrix

Custom Concurrency Strategy

We register a custom HystrixConcurrencyStrategy called TraceCallable that wraps all Callable instances in their Sleuth representative. The strategy either starts or continues a span, depending on whether tracing was already going on before the Hystrix command was called. Optionally, you can set spring.sleuth.hystrix.strategy.passthrough to true to just propagate the trace context to the Hystrix execution thread if you don’t wish to start a new span. To disable the custom Hystrix Concurrency Strategy, set the spring.sleuth.hystrix.strategy.enabled property to false.

Manual Command setting

Assume that you have the following HystrixCommand:

HystrixCommand<String> hystrixCommand = new HystrixCommand<String>(setter) {
    @Override
    protected String run() throws Exception {
        return someLogic();
    }
};

To pass the tracing information, you have to wrap the same logic in the Sleuth version of the HystrixCommand, which is called TraceCommand, as shown in the following example:

TraceCommand<String> traceCommand = new TraceCommand<String>(tracer, setter) {
    @Override
    public String doRun() throws Exception {
        return someLogic();
    }
};

8.15.5. RxJava

We register a custom RxJavaSchedulersHook that wraps all Action0 instances in their Sleuth representative, which is called TraceAction. The hook either starts or continues a span, depending on whether tracing was already going on before the Action was scheduled. To disable the custom RxJavaSchedulersHook, set the spring.sleuth.rxjava.schedulers.hook.enabled property to false.

You can define a list of regular expressions for thread names for which you do not want spans to be created. To do so, provide a comma-separated list of regular expressions in the spring.sleuth.rxjava.schedulers.ignoredthreads property.

The suggested approach to reactive programming with Sleuth is to use the Reactor support.

8.15.6. HTTP integration

Features from this section can be disabled by setting the spring.sleuth.web.enabled property to false.

HTTP Filter

Through the TracingFilter, all sampled incoming requests result in creation of a Span. That Span’s name is http: + the path to which the request was sent. For example, if the request was sent to /this/that, the name will be http:/this/that. You can configure which URIs you would like to skip by setting the spring.sleuth.web.skipPattern property. If you have ManagementServerProperties on the classpath, its value of contextPath gets appended to the provided skip pattern. If you want to reuse Sleuth’s default skip patterns and just append your own, pass those patterns by using the spring.sleuth.web.additionalSkipPattern property.

By default, all the Spring Boot Actuator endpoints are automatically added to the skip pattern. If you want to disable this behaviour, set spring.sleuth.web.ignore-auto-configured-skip-patterns to true.
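The following sketch combines both properties (the pattern value is illustrative, not a default):

application.yml
spring:
  sleuth:
    web:
      additionalSkipPattern: /swagger.*|/internal/.*
      ignore-auto-configured-skip-patterns: true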

To change the order of tracing filter registration, please set the spring.sleuth.web.filter-order property.

To disable the filter that logs uncaught exceptions, set the spring.sleuth.web.exception-throwing-filter-enabled property to false.

HandlerInterceptor

Since we want the span names to be precise, we use a TraceHandlerInterceptor that either wraps an existing HandlerInterceptor or is added directly to the list of existing HandlerInterceptors. The TraceHandlerInterceptor adds a special request attribute to the given HttpServletRequest. If the TracingFilter does not see this attribute, it creates a “fallback” span, which is an additional span created on the server side so that the trace is presented properly in the UI. If that happens, there is probably missing instrumentation. In that case, please file an issue in Spring Cloud Sleuth.

Async Servlet support

If your controller returns a Callable or a WebAsyncTask, Spring Cloud Sleuth continues the existing span instead of creating a new one.

WebFlux support

Through TraceWebFilter, all sampled incoming requests result in creation of a Span. That Span’s name is http: + the path to which the request was sent. For example, if the request was sent to /this/that, the name is http:/this/that. You can configure which URIs you would like to skip by using the spring.sleuth.web.skipPattern property. If you have ManagementServerProperties on the classpath, its value of contextPath gets appended to the provided skip pattern. If you want to reuse Sleuth’s default skip patterns and append your own, pass those patterns by using the spring.sleuth.web.additionalSkipPattern property.

To change the order of tracing filter registration, please set the spring.sleuth.web.filter-order property.

Dubbo RPC support

Via the integration with Brave, Spring Cloud Sleuth supports Dubbo. It’s enough to add the brave-instrumentation-dubbo dependency:

<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave-instrumentation-dubbo</artifactId>
</dependency>

You need to also set a dubbo.properties file with the following contents:

dubbo.provider.filter=tracing
dubbo.consumer.filter=tracing

You can read more about Brave - Dubbo integration here. An example of Spring Cloud Sleuth and Dubbo can be found here.

8.15.7. HTTP Client Integration

Synchronous Rest Template

We inject a RestTemplate interceptor to ensure that all the tracing information is passed to the requests. Each time a call is made, a new Span is created. It gets closed upon receiving the response. To block the synchronous RestTemplate features, set spring.sleuth.web.client.enabled to false.
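For the interceptor to be injected, RestTemplate must be registered as a bean (see the note below); a minimal sketch (the configuration class name is arbitrary):

@Configuration
class RestTemplateConfig {

    // Exposing RestTemplate as a bean lets Sleuth add its tracing interceptor.
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

}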

You have to register RestTemplate as a bean so that the interceptors get injected. If you create a RestTemplate instance with a new keyword, the instrumentation does NOT work.

Asynchronous Rest Template

Starting with Sleuth 2.0.0, we no longer register a bean of AsyncRestTemplate type. It is up to you to create such a bean. Then we instrument it.

To block the AsyncRestTemplate features, set spring.sleuth.web.async.client.enabled to false. To disable creation of the default TraceAsyncClientHttpRequestFactoryWrapper, set spring.sleuth.web.async.client.factory.enabled to false. If you do not want to create an AsyncRestTemplate at all, set spring.sleuth.web.async.client.template.enabled to false.

Multiple Asynchronous Rest Templates

Sometimes you need to use multiple implementations of the Asynchronous Rest Template. In the following snippet, you can see an example of how to set up such a custom AsyncRestTemplate:

@Configuration
@EnableAutoConfiguration
static class Config {

    @Bean(name = "customAsyncRestTemplate")
    public AsyncRestTemplate traceAsyncRestTemplate() {
        return new AsyncRestTemplate(asyncClientFactory(),
                clientHttpRequestFactory());
    }

    private ClientHttpRequestFactory clientHttpRequestFactory() {
        ClientHttpRequestFactory clientHttpRequestFactory = new CustomClientHttpRequestFactory();
        // CUSTOMIZE HERE
        return clientHttpRequestFactory;
    }

    private AsyncClientHttpRequestFactory asyncClientFactory() {
        AsyncClientHttpRequestFactory factory = new CustomAsyncClientHttpRequestFactory();
        // CUSTOMIZE HERE
        return factory;
    }

}

WebClient

We inject an ExchangeFilterFunction implementation that creates a span and, through on-success and on-error callbacks, takes care of closing client-side spans.

To block this feature, set spring.sleuth.web.client.enabled to false.
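For the instrumentation to apply, WebClient must be registered as a bean (see the note below); a minimal sketch, assuming Spring Boot’s auto-configured WebClient.Builder is available (the class name is arbitrary):

@Configuration
class WebClientConfig {

    // Exposing WebClient as a bean lets Sleuth apply its ExchangeFilterFunction.
    @Bean
    WebClient tracedWebClient(WebClient.Builder builder) {
        return builder.build();
    }

}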

You have to register WebClient as a bean so that the tracing instrumentation gets applied. If you create a WebClient instance with a new keyword, the instrumentation does NOT work.

Traverson

If you use the Traverson library, you can inject a RestTemplate as a bean into your Traverson object. Since RestTemplate is already intercepted, you get full support for tracing in your client. The following pseudo code shows how to do that:

@Autowired RestTemplate restTemplate;

Traverson traverson = new Traverson(URI.create("https://some/address"),
    MediaType.APPLICATION_JSON, MediaType.APPLICATION_JSON_UTF8).setRestOperations(restTemplate);
// use Traverson

Apache HttpClientBuilder and HttpAsyncClientBuilder

We instrument the HttpClientBuilder and HttpAsyncClientBuilder so that tracing context gets injected to the sent requests.

To block these features, set spring.sleuth.web.client.enabled to false.

Netty HttpClient

We instrument Netty’s HttpClient.

To block this feature, set spring.sleuth.web.client.enabled to false.

You have to register HttpClient as a bean so that the instrumentation happens. If you create a HttpClient instance with a new keyword, the instrumentation does NOT work.

UserInfoRestTemplateCustomizer

We instrument Spring Security’s UserInfoRestTemplateCustomizer.

To block this feature, set spring.sleuth.web.client.enabled to false.

8.15.8. Feign

By default, Spring Cloud Sleuth provides integration with Feign through TraceFeignClientAutoConfiguration. You can disable it entirely by setting spring.sleuth.feign.enabled to false. If you do so, no Feign-related instrumentation takes place.

Part of Feign instrumentation is done through a FeignBeanPostProcessor. You can disable it by setting spring.sleuth.feign.processor.enabled to false. If you set it to false, Spring Cloud Sleuth does not instrument any of your custom Feign components. However, all the default instrumentation is still there.

8.15.9. gRPC

Spring Cloud Sleuth provides instrumentation for gRPC through TraceGrpcAutoConfiguration. You can disable it entirely by setting spring.sleuth.grpc.enabled to false.

Variant 1
Dependencies
The gRPC integration relies on two external libraries to instrument clients and servers and both of those libraries must be on the class path to enable the instrumentation.

Maven:

        <dependency>
            <groupId>io.github.lognet</groupId>
            <artifactId>grpc-spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>io.zipkin.brave</groupId>
            <artifactId>brave-instrumentation-grpc</artifactId>
        </dependency>

Gradle:

    compile("io.github.lognet:grpc-spring-boot-starter")
    compile("io.zipkin.brave:brave-instrumentation-grpc")

Server Instrumentation

Spring Cloud Sleuth leverages grpc-spring-boot-starter to register Brave’s gRPC server interceptor with all services annotated with @GRpcService.

Client Instrumentation

gRPC clients leverage a ManagedChannelBuilder to construct a ManagedChannel used to communicate with the gRPC server. The native ManagedChannelBuilder provides static methods as entry points for the construction of ManagedChannel instances; however, this mechanism is outside the influence of the Spring application context.

Spring Cloud Sleuth provides a SpringAwareManagedChannelBuilder that can be customized through the Spring application context and injected by gRPC clients. This builder must be used when creating ManagedChannel instances.

Sleuth creates a TracingManagedChannelBuilderCustomizer that injects Brave’s client interceptor into the SpringAwareManagedChannelBuilder.
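A minimal client-side sketch (the target host and port are placeholders):

@Autowired
SpringAwareManagedChannelBuilder managedChannelBuilder;

public ManagedChannel channel() {
    // The builder already carries Brave's client interceptor.
    return this.managedChannelBuilder
            .forAddress("localhost", 9090)
            .usePlaintext()
            .build();
}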

Variant 2

Grpc Spring Boot Starter automatically detects the presence of Spring Cloud Sleuth and Brave’s instrumentation for gRPC and registers the necessary client and/or server tooling.

8.15.10. Asynchronous Communication

@Async Annotated methods

In Spring Cloud Sleuth, we instrument async-related components so that the tracing information is passed between threads. You can disable this behavior by setting the value of spring.sleuth.async.enabled to false.

If you annotate your method with @Async, we automatically create a new Span with the following characteristics (a sketch follows the list):

  • If the method is annotated with @SpanName, the value of the annotation is the Span’s name.

  • If the method is not annotated with @SpanName, the Span name is the annotated method name.

  • The span is tagged with the method’s class name and method name.
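For example, the following sketch (the method and span names are illustrative):

@Async
@SpanName("calculate-tax")
public void calculateTax(String companyId) {
    // runs on a separate thread; Sleuth propagates the tracing context
    // and tags the span with the class and method name
}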

@Scheduled Annotated Methods

In Spring Cloud Sleuth, we instrument scheduled method execution so that the tracing information is passed between threads. You can disable this behavior by setting the value of spring.sleuth.scheduled.enabled to false.

If you annotate your method with @Scheduled, we automatically create a new span with the following characteristics:

  • The span name is the annotated method name.

  • The span is tagged with the method’s class name and method name.

If you want to skip span creation for some @Scheduled annotated classes, you can set the spring.sleuth.scheduled.skipPattern with a regular expression that matches the fully qualified name of the @Scheduled annotated class. If you use spring-cloud-sleuth-stream and spring-cloud-netflix-hystrix-stream together, a span is created for each Hystrix metrics collection and sent to Zipkin. This behavior may be annoying. That’s why, by default, spring.sleuth.scheduled.skipPattern=org.springframework.cloud.netflix.hystrix.stream.HystrixStreamTask.
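For example, the following sketch (the package pattern is illustrative):

application.yml
spring:
  sleuth:
    scheduled:
      skipPattern: com.example.jobs.*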

Executor, ExecutorService, and ScheduledExecutorService

We provide LazyTraceExecutor, TraceableExecutorService, and TraceableScheduledExecutorService. Those implementations create spans each time a new task is submitted, invoked, or scheduled.

The following example shows how to pass tracing information with TraceableExecutorService when working with CompletableFuture:

CompletableFuture<Long> completableFuture = CompletableFuture.supplyAsync(() -> {
    // perform some logic
    return 1_000_000L;
}, new TraceableExecutorService(beanFactory, executorService,
        // 'calculateTax' explicitly names the span - this param is optional
        "calculateTax"));

Sleuth does not work with parallelStream() out of the box. If you want to have the tracing information propagated through the stream, you have to use the approach with supplyAsync(…), as shown earlier.

If there are beans that implement the Executor interface that you would like to exclude from span creation, you can use the spring.sleuth.async.ignored-beans property where you can provide a list of bean names.
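For example, the following sketch (the bean names are hypothetical):

application.yml
spring:
  sleuth:
    async:
      ignored-beans: myExecutor,anotherExecutor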

Customization of Executors

Sometimes, you need to set up a custom instance of the AsyncExecutor. The following example shows how to set up such a custom Executor:

@Configuration
@EnableAutoConfiguration
@EnableAsync
// add the infrastructure role to ensure that the bean gets auto-proxied
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
static class CustomExecutorConfig extends AsyncConfigurerSupport {

    @Autowired
    BeanFactory beanFactory;

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // CUSTOMIZE HERE
        executor.setCorePoolSize(7);
        executor.setMaxPoolSize(42);
        executor.setQueueCapacity(11);
        executor.setThreadNamePrefix("MyExecutor-");
        // DON'T FORGET TO INITIALIZE
        executor.initialize();
        return new LazyTraceExecutor(this.beanFactory, executor);
    }

}

To ensure that your configuration gets post-processed, remember to add the @Role(BeanDefinition.ROLE_INFRASTRUCTURE) annotation on your @Configuration class.

8.15.11. Messaging

Features from this section can be disabled by setting the spring.sleuth.messaging.enabled property to false.

Spring Integration and Spring Cloud Stream

Spring Cloud Sleuth integrates with Spring Integration. It creates spans for publish and subscribe events. To disable Spring Integration instrumentation, set spring.sleuth.integration.enabled to false.

You can provide the spring.sleuth.integration.patterns pattern to explicitly provide the names of channels that you want to include for tracing. By default, all channels except the hystrixStreamOutput channel are included.

When using the Executor to build a Spring Integration IntegrationFlow, you must use the untraced version of the Executor. Decorating the Spring Integration Executor Channel with TraceableExecutorService causes the spans to be improperly closed.

If you want to customize the way tracing context is read from and written to message headers, it’s enough for you to register beans of the following types (a sketch follows the list):

  • Propagation.Setter<MessageHeaderAccessor, String> - for writing headers to the message

  • Propagation.Getter<MessageHeaderAccessor, String> - for reading headers from the message
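A minimal sketch of such beans (the configuration class and bean method names are arbitrary):

@Configuration
class CustomPropagationConfig {

    // Writes tracing headers into outgoing messages.
    @Bean
    Propagation.Setter<MessageHeaderAccessor, String> messagingSetter() {
        return (accessor, key, value) -> accessor.setHeader(key, value);
    }

    // Reads tracing headers from incoming messages.
    @Bean
    Propagation.Getter<MessageHeaderAccessor, String> messagingGetter() {
        return (accessor, key) -> {
            Object value = accessor.getHeader(key);
            return value != null ? value.toString() : null;
        };
    }

}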

Spring RabbitMq

We instrument the RabbitTemplate so that tracing headers get injected into the message.

To block this feature, set spring.sleuth.messaging.rabbit.enabled to false.

Spring Kafka

We instrument the Spring Kafka’s ProducerFactory and ConsumerFactory so that tracing headers get injected into the created Spring Kafka’s Producer and Consumer.

To block this feature, set spring.sleuth.messaging.kafka.enabled to false.

Spring Kafka Streams

We instrument the KafkaStreams KafkaClientSupplier so that tracing headers get injected into the Producer and Consumer. A KafkaStreamsTracing bean allows for further instrumentation through additional TransformerSupplier and ProcessorSupplier methods.

To block this feature, set spring.sleuth.messaging.kafka.streams.enabled to false.

Spring JMS

We instrument the JmsTemplate so that tracing headers get injected into the message. We also support @JmsListener annotated methods on the consumer side.

To block this feature, set spring.sleuth.messaging.jms.enabled to false.

We don’t support baggage propagation for JMS

Spring Cloud AWS Messaging SQS

We instrument @SqsListener which is provided by org.springframework.cloud:spring-cloud-aws-messaging so that tracing headers get extracted from the message and a trace gets put into the context.

To block this feature, set spring.sleuth.messaging.sqs.enabled to false.

8.15.12. Zuul

We instrument the Zuul Ribbon integration by enriching the Ribbon requests with tracing information. To disable Zuul support, set the spring.sleuth.zuul.enabled property to false.

8.15.13. Redis

We set the tracing property on the Lettuce ClientResources instance to enable the Brave tracing built into Lettuce. To disable Redis support, set the spring.sleuth.redis.enabled property to false.

8.15.14. Quartz

We instrument Quartz jobs by adding Job and Trigger listeners to the Quartz Scheduler.

To turn off this feature, set the spring.sleuth.quartz.enabled property to false.

8.15.15. Project Reactor

For projects depending on Project Reactor, such as Spring Cloud Gateway, we suggest setting the spring.sleuth.reactor.decorate-on-each option to false, which should yield a performance gain compared to the standard instrumentation mechanism. With this option, Sleuth decorates the onLast operator instead of onEach, which results in the creation of far fewer objects. The downside is that, when Project Reactor changes threads, trace propagation continues without issues, but anything relying on a ThreadLocal (such as MDC entries) can misbehave.
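For example:

application.yml
spring:
  sleuth:
    reactor:
      decorate-on-each: false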

8.16. Configuration properties

To see the list of all Sleuth related configuration properties please check the Appendix page.

8.17. Running examples

You can see the running examples deployed on Pivotal Web Services. Check them out at the following links:

9. Spring Cloud Consul

Hoxton.SR3

This project provides Consul integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations, you can quickly enable and configure the common patterns inside your application and build large distributed systems with Consul based components. The patterns provided include Service Discovery, Control Bus, and Configuration. Intelligent Routing (Zuul), Client Side Load Balancing (Ribbon), and Circuit Breaker (Hystrix) are provided by integration with Spring Cloud Netflix.

9.1. Install Consul

Please see the installation documentation for instructions on how to install Consul.

9.2. Consul Agent

A Consul Agent client must be available to all Spring Cloud Consul applications. By default, the Agent client is expected to be at localhost:8500. See the Agent documentation for specifics on how to start an Agent client and how to connect to a cluster of Consul Agent Servers. For development, after you have installed Consul, you may start a Consul Agent using the following command:

./src/main/bash/local_run_consul.sh

This will start an agent in server mode on port 8500, with the UI available at localhost:8500.

9.3. Service Discovery with Consul

Service Discovery is one of the key tenets of a microservice based architecture. Trying to hand-configure each client or rely on some form of convention can be difficult and brittle. Consul provides Service Discovery services via an HTTP API and DNS. Spring Cloud Consul leverages the HTTP API for service registration and discovery. This does not prevent non-Spring Cloud applications from leveraging the DNS interface. Consul Agent servers are run in a cluster that communicates via a gossip protocol and uses the Raft consensus protocol.

9.3.1. How to activate

To activate Consul Service Discovery use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-consul-discovery. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

9.3.2. Registering with Consul

When a client registers with Consul, it provides metadata about itself, such as host and port, ID, name, and tags. By default, an HTTP Check is created through which Consul hits the /health endpoint every 10 seconds. If the health check fails, the service instance is marked as critical.

Example Consul client:

@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }

}

The preceding example is a completely normal Spring Boot app. If the Consul client is located somewhere other than localhost:8500, configuration is required to locate the client. For example:

application.yml
spring:
  cloud:
    consul:
      host: localhost
      port: 8500

If you use Spring Cloud Consul Config, the above values will need to be placed in bootstrap.yml instead of application.yml.

The default service name, instance id and port, taken from the Environment, are ${spring.application.name}, the Spring Context ID and ${server.port} respectively.

To disable the Consul Discovery Client you can set spring.cloud.consul.discovery.enabled to false. Consul Discovery Client will also be disabled when spring.cloud.discovery.enabled is set to false.

To disable the service registration you can set spring.cloud.consul.discovery.register to false.

Registering Management as a Separate Service

When the management server port is set to something different from the application port (by setting the management.server.port property), the management service is registered as a separate service from the application service. For example:

application.yml
spring:
  application:
    name: myApp
management:
  server:
    port: 4452

The above configuration will register the following two services:

  • Application Service:

ID: myApp
Name: myApp
  • Management Service:

ID: myApp-management
Name: myApp-management

The management service inherits its instanceId and serviceName from the application service. For example:

application.yml
spring:
  application:
    name: myApp
  cloud:
    consul:
      discovery:
        instance-id: custom-service-id
        serviceName: myprefix-${spring.application.name}
management:
  server:
    port: 4452

The above configuration will register the following two services:

  • Application Service:

ID: custom-service-id
Name: myprefix-myApp
  • Management Service:

ID: custom-service-id-management
Name: myprefix-myApp-management

Further customization is possible via the following properties:

/** Port to register the management service under (defaults to management port) */
spring.cloud.consul.discovery.management-port

/** Suffix to use when registering management service (defaults to "management") */
spring.cloud.consul.discovery.management-suffix

/** Tags to use when registering management service (defaults to "management") */
spring.cloud.consul.discovery.management-tags

9.3.3. HTTP Health Check

The health check for a Consul instance defaults to "/health", which is the default location of a useful endpoint in a Spring Boot Actuator application. You need to change this, even for an Actuator application, if you use a non-default context path or servlet path (e.g. server.servletPath=/foo) or management endpoint path (e.g. management.server.servlet.context-path=/admin). The interval that Consul uses to check the health endpoint may also be configured. "10s" and "1m" represent 10 seconds and 1 minute, respectively. For example:

application.yml
spring:
  cloud:
    consul:
      discovery:
        healthCheckPath: ${management.server.servlet.context-path}/health
        healthCheckInterval: 15s

You can disable the health check by setting management.health.consul.enabled=false.

Metadata and Consul tags

Consul does not yet support metadata on services. Spring Cloud’s ServiceInstance has a Map<String, String> metadata field. Spring Cloud Consul uses Consul tags to approximate metadata until Consul officially supports metadata. Tags with the form key=value will be split and used as a Map key and value, respectively. Tags without the equal (=) sign will be used as both the key and the value.

application.yml
spring:
  cloud:
    consul:
      discovery:
        tags: foo=bar, baz

The above configuration will result in a map with foo→bar and baz→baz.

Making the Consul Instance ID Unique

By default, a Consul instance is registered with an ID that is equal to its Spring Application Context ID. By default, the Spring Application Context ID is ${spring.application.name}:comma,separated,profiles:${server.port}. For most cases, this allows multiple instances of one service to run on one machine. If further uniqueness is required, you can override this with Spring Cloud by providing a unique identifier in spring.cloud.consul.discovery.instanceId. For example:

application.yml
spring:
  cloud:
    consul:
      discovery:
        instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}

With this metadata and multiple service instances deployed on localhost, the random value kicks in there to make the instance unique. In Cloud Foundry, the vcap.application.instance_id is populated automatically in a Spring Boot application, so the random value is not needed.

Applying Headers to Health Check Requests

Headers can be applied to health check requests. For example, if you’re trying to register a Spring Cloud Config server that uses a Vault backend:

application.yml
spring:
  cloud:
    consul:
      discovery:
        health-check-headers:
          X-Config-Token: 6442e58b-d1ea-182e-cfa5-cf9cddef0722

According to the HTTP standard, each header can have more than one value, in which case an array can be supplied:

application.yml
spring:
  cloud:
    consul:
      discovery:
        health-check-headers:
          X-Config-Token:
            - "6442e58b-d1ea-182e-cfa5-cf9cddef0722"
            - "Some other value"

9.3.4. Looking up services

Using Load-balancer

Spring Cloud has support for Feign (a REST client builder) and also Spring RestTemplate for looking up services using the logical service names/ids instead of physical URLs. Both Feign and the discovery-aware RestTemplate utilize Ribbon for client-side load balancing.

If you want to access service STORES using the RestTemplate, simply declare:

@LoadBalanced
@Bean
public RestTemplate loadbalancedRestTemplate() {
     return new RestTemplate();
}

and use it like this (notice how we use the STORES service name/id from Consul instead of a fully qualified domain name):

@Autowired
RestTemplate restTemplate;

public String getFirstProduct() {
   return this.restTemplate.getForObject("https://STORES/products/1", String.class);
}

If you have Consul clusters in multiple datacenters and you want to access a service in another datacenter, a service name/id alone is not enough. In that case, use the property spring.cloud.consul.discovery.datacenters.STORES=dc-west, where STORES is the service name/id and dc-west is the datacenter where the STORES service lives.

Spring Cloud now also offers support for Spring Cloud LoadBalancer.

As Spring Cloud Ribbon is now under maintenance, we suggest you set spring.cloud.loadbalancer.ribbon.enabled to false, so that BlockingLoadBalancerClient is used instead of RibbonLoadBalancerClient.

Using the DiscoveryClient

You can also use the org.springframework.cloud.client.discovery.DiscoveryClient which provides a simple API for discovery clients that is not specific to Netflix, e.g.

@Autowired
private DiscoveryClient discoveryClient;

public String serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && list.size() > 0) {
        return list.get(0).getUri().toString();
    }
    return null;
}

9.3.5. Consul Catalog Watch

The Consul Catalog Watch takes advantage of the ability of Consul to watch services. The Catalog Watch makes a blocking Consul HTTP API call to determine if any services have changed. If there is new service data, a Heartbeat Event is published.

To change the frequency of when the Catalog Watch is called, change spring.cloud.consul.discovery.catalog-services-watch-delay. The default value is 1000, which is in milliseconds. The delay is the amount of time after the end of the previous invocation and the start of the next.

To disable the Catalog Watch set spring.cloud.consul.discovery.catalogServicesWatch.enabled=false.

The watch uses a Spring TaskScheduler to schedule the call to Consul. By default, it is a ThreadPoolTaskScheduler with a poolSize of 1. To change the TaskScheduler, create a bean of type TaskScheduler named with the ConsulDiscoveryClientConfiguration.CATALOG_WATCH_TASK_SCHEDULER_NAME constant.

9.4. Distributed Configuration with Consul

Consul provides a Key/Value Store for storing configuration and other metadata. Spring Cloud Consul Config is an alternative to the Config Server and Client. Configuration is loaded into the Spring Environment during the special "bootstrap" phase. Configuration is stored in the /config folder by default. Multiple PropertySource instances are created based on the application’s name and the active profiles, mimicking the Spring Cloud Config order of resolving properties. For example, an application with the name "testApp" and with the "dev" profile will have the following property sources created:

config/testApp,dev/
config/testApp/
config/application,dev/
config/application/

The most specific property source is at the top, with the least specific at the bottom. Properties in the config/application folder are applicable to all applications using Consul for configuration. Properties in the config/testApp folder are only available to the instances of the service named "testApp".

Configuration is currently read on startup of the application. Sending an HTTP POST to /refresh causes the configuration to be reloaded. Config Watch also automatically detects changes and reloads the application context.

9.4.1. How to activate

To get started with Consul Configuration use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-consul-config. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

This enables auto-configuration that sets up Spring Cloud Consul Config.

9.4.2. Customizing

Consul Config may be customized using the following properties:

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        enabled: true
        prefix: configuration
        defaultContext: apps
        profileSeparator: '::'

  • enabled: Setting this value to false disables Consul Config.

  • prefix: Sets the base folder for configuration values.

  • defaultContext: Sets the folder name used by all applications.

  • profileSeparator: Sets the value of the separator used to separate the profile name in property sources with profiles.

9.4.3. Config Watch

The Consul Config Watch takes advantage of the ability of Consul to watch a key prefix. The Config Watch makes a blocking Consul HTTP API call to determine if any relevant configuration data has changed for the current application. If there is new configuration data, a Refresh Event is published. This is equivalent to calling the /refresh actuator endpoint.

To change the frequency of when the Config Watch is called, change spring.cloud.consul.config.watch.delay. The default value is 1000, which is in milliseconds. The delay is the amount of time after the end of the previous invocation and the start of the next.

To disable the Config Watch set spring.cloud.consul.config.watch.enabled=false.

The watch uses a Spring TaskScheduler to schedule the call to Consul. By default, it is a ThreadPoolTaskScheduler with a poolSize of 1. To change the TaskScheduler, create a bean of type TaskScheduler named with the ConsulConfigAutoConfiguration.CONFIG_WATCH_TASK_SCHEDULER_NAME constant.

9.4.4. YAML or Properties with Config

It may be more convenient to store a blob of properties in YAML or Properties format as opposed to individual key/value pairs. Set the spring.cloud.consul.config.format property to YAML or PROPERTIES. For example, to use YAML:

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        format: YAML

YAML must be set in the appropriate data key in Consul. Using the defaults above, the keys would look like:

config/testApp,dev/data
config/testApp/data
config/application,dev/data
config/application/data

You could store a YAML document in any of the keys listed above.

You can change the data key using spring.cloud.consul.config.data-key.

9.4.5. git2consul with Config

git2consul is a Consul community project that loads files from a git repository into individual keys in Consul. By default, the names of the keys are the names of the files. YAML and Properties files are supported with the file extensions .yml and .properties, respectively. Set the spring.cloud.consul.config.format property to FILES. For example:

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        format: FILES

Given the following keys in /config, the development profile and an application name of foo:

.gitignore
application.yml
bar.properties
foo-development.properties
foo-production.yml
foo.properties
master.ref

the following property sources would be created:

config/foo-development.properties
config/foo.properties
config/application.yml

The value of each key needs to be a properly formatted YAML or Properties file.

9.4.6. Fail Fast

It may be convenient in certain circumstances (like local development or certain test scenarios) to not fail if Consul isn’t available for configuration. Setting spring.cloud.consul.config.failFast=false in bootstrap.yml causes the configuration module to log a warning rather than throw an exception. This allows the application to continue startup normally.
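For example:

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        failFast: false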

9.5. Consul Retry

If you expect that the Consul agent may occasionally be unavailable when your app starts, you can ask it to keep trying after a failure. You need to add spring-retry and spring-boot-starter-aop to your classpath. The default behaviour is to retry six times with an initial backoff interval of 1000ms and an exponential multiplier of 1.1 for subsequent backoffs. You can configure these properties (and others) by using the spring.cloud.consul.retry.* configuration properties. This works with both Spring Cloud Consul Config and Discovery registration.
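A sketch of the defaults expressed as properties (the property names follow the spring.cloud.consul.retry.* group described above):

bootstrap.yml
spring:
  cloud:
    consul:
      retry:
        initial-interval: 1000
        multiplier: 1.1
        max-attempts: 6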

To take full control of the retry, add a @Bean of type RetryOperationsInterceptor with id "consulRetryInterceptor". Spring Retry has a RetryInterceptorBuilder that makes it easy to create one.
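A minimal sketch of such a bean (the backoff values are illustrative):

@Bean(name = "consulRetryInterceptor")
public RetryOperationsInterceptor consulRetryInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .backOffOptions(1000, 1.1, 5000) // initial interval (ms), multiplier, max interval (ms)
            .maxAttempts(6)
            .build();
}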

9.6. Spring Cloud Bus with Consul

9.6.1. How to activate

To get started with the Consul Bus use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-consul-bus. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

See the Spring Cloud Bus documentation for the available actuator endpoints and for how to send custom messages.

9.7. Circuit Breaker with Hystrix

Applications can use the Hystrix Circuit Breaker provided by the Spring Cloud Netflix project by including this starter in the project’s pom.xml: spring-cloud-starter-hystrix. Hystrix doesn’t depend on the Netflix Discovery Client. The @EnableHystrix annotation should be placed on a configuration class (usually the main class). Then methods can be annotated with @HystrixCommand to be protected by a circuit breaker. See the documentation for more details.

9.8. Hystrix metrics aggregation with Turbine and Consul

Turbine (provided by the Spring Cloud Netflix project) aggregates the Hystrix metrics streams of multiple instances, so that the dashboard can display an aggregate view. Turbine uses the DiscoveryClient interface to look up relevant instances. To use Turbine with Spring Cloud Consul, configure the Turbine application in a manner similar to the following examples:

pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-netflix-turbine</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

Notice that the Turbine dependency is not a starter. The turbine starter includes support for Netflix Eureka.

application.yml
spring.application.name: turbine
applications: consulhystrixclient
turbine:
  aggregator:
    clusterConfig: ${applications}
  appConfig: ${applications}

The clusterConfig and appConfig sections must match, so it’s useful to put the comma-separated list of service IDs into a separate configuration property.

Turbine.java
@EnableTurbine
@SpringBootApplication
public class Turbine {
    public static void main(String[] args) {
        SpringApplication.run(Turbine.class, args);
    }
}

9.9. Configuration Properties

To see the list of all Consul related configuration properties please check the Appendix page.

10. Spring Cloud Zookeeper

This project provides Zookeeper integrations for Spring Boot applications through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few annotations, you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper based components. The provided patterns include Service Discovery and Configuration. Integration with Spring Cloud Netflix provides Intelligent Routing (Zuul), Client Side Load Balancing (Ribbon), and Circuit Breaker (Hystrix).

10.1. Install Zookeeper

See the installation documentation for instructions on how to install Zookeeper.

Spring Cloud Zookeeper uses Apache Curator behind the scenes. While Zookeeper 3.5.x is still considered "beta" by the Zookeeper development team, the reality is that it is used in production by many users. However, Zookeeper 3.4.x is also used in production. Prior to Apache Curator 4.0, both versions of Zookeeper were supported via two versions of Apache Curator. Starting with Curator 4.0 both versions of Zookeeper are supported via the same Curator libraries.

In case you are integrating with version 3.4, you need to change the Zookeeper dependency that comes shipped with Curator, and thus with spring-cloud-zookeeper. To do so, simply exclude that dependency and add the 3.4.x version, as shown below.

maven
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zookeeper-all</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.12</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>
gradle
compile('org.springframework.cloud:spring-cloud-starter-zookeeper-all') {
  exclude group: 'org.apache.zookeeper', module: 'zookeeper'
}
compile('org.apache.zookeeper:zookeeper:3.4.12') {
  exclude group: 'org.slf4j', module: 'slf4j-log4j12'
}

10.2. Service Discovery with Zookeeper

Service Discovery is one of the key tenets of a microservice based architecture. Trying to hand-configure each client or rely on some form of convention can be difficult and brittle. Curator (a Java library for Zookeeper) provides Service Discovery through a Service Discovery Extension. Spring Cloud Zookeeper uses this extension for service registration and discovery.

10.2.1. Activating

Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-discovery enables autoconfiguration that sets up Spring Cloud Zookeeper Discovery.

For web functionality, you still need to include org.springframework.boot:spring-boot-starter-web.

When working with version 3.4 of Zookeeper you need to change the way you include the dependency as described here.

10.2.2. Registering with Zookeeper

When a client registers with Zookeeper, it provides metadata (such as host and port, ID, and name) about itself.

The following example shows a Zookeeper client:

@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }

}

The preceding example is a normal Spring Boot application.

If Zookeeper is located somewhere other than localhost:2181, the configuration must provide the location of the server, as shown in the following example:

application.yml
spring:
  cloud:
    zookeeper:
      connect-string: localhost:2181

If you use Spring Cloud Zookeeper Config, the values shown in the preceding example need to be in bootstrap.yml instead of application.yml.

The default service name, instance ID, and port (taken from the Environment) are ${spring.application.name}, the Spring Context ID, and ${server.port}, respectively.

Having spring-cloud-starter-zookeeper-discovery on the classpath makes the app into both a Zookeeper “service” (that is, it registers itself) and a “client” (that is, it can query Zookeeper to locate other services).

If you would like to disable the Zookeeper Discovery Client, you can set spring.cloud.zookeeper.discovery.enabled to false.

10.2.3. Using the DiscoveryClient

Spring Cloud has support for Feign (a REST client builder), Spring RestTemplate and Spring WebFlux, using logical service names instead of physical URLs.

You can also use the org.springframework.cloud.client.discovery.DiscoveryClient, which provides a simple API for discovery clients that is not specific to Netflix, as shown in the following example:

@Autowired
private DiscoveryClient discoveryClient;

public String serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && list.size() > 0) {
        return list.get(0).getUri().toString();
    }
    return null;
}

10.3. Using Spring Cloud Zookeeper with Spring Cloud Netflix Components

Spring Cloud Netflix supplies useful tools that work regardless of which DiscoveryClient implementation you use. Feign, Turbine, Ribbon, and Zuul all work with Spring Cloud Zookeeper.

10.3.1. Ribbon with Zookeeper

Spring Cloud Zookeeper provides an implementation of Ribbon’s ServerList. When you use the spring-cloud-starter-zookeeper-discovery, Ribbon is autoconfigured to use the ZookeeperServerList by default.

10.4. Spring Cloud Zookeeper and Service Registry

Spring Cloud Zookeeper implements the ServiceRegistry interface, letting developers register arbitrary services in a programmatic way.

The ServiceInstanceRegistration class offers a builder() method to create a Registration object that can be used by the ServiceRegistry, as shown in the following example:

@Autowired
private ZookeeperServiceRegistry serviceRegistry;

public void registerThings() {
    ZookeeperRegistration registration = ServiceInstanceRegistration.builder()
            .defaultUriSpec()
            .address("anyUrl")
            .port(10)
            .name("/a/b/c/d/anotherservice")
            .build();
    this.serviceRegistry.register(registration);
}

10.4.1. Instance Status

Netflix Eureka supports having instances that are OUT_OF_SERVICE registered with the server. These instances are not returned as active service instances. This is useful for behaviors such as blue/green deployments. (Note that the Curator Service Discovery recipe does not support this behavior.) Taking advantage of the flexible payload has let Spring Cloud Zookeeper implement OUT_OF_SERVICE by updating some specific metadata and then filtering on that metadata in the Ribbon ZookeeperServerList. The ZookeeperServerList filters out all non-null instance statuses that do not equal UP. If the instance status field is empty, it is considered to be UP for backwards compatibility. To change the status of an instance, make a POST with OUT_OF_SERVICE to the ServiceRegistry instance status actuator endpoint, as shown in the following example:

$ http POST http://localhost:8081/service-registry status=OUT_OF_SERVICE

The preceding example uses the http command from httpie.org.

10.5. Zookeeper Dependencies

The following topics cover how to work with Spring Cloud Zookeeper dependencies:

10.5.1. Using the Zookeeper Dependencies

Spring Cloud Zookeeper lets you provide your application’s dependencies as properties. Dependencies are other applications that are registered in Zookeeper and that you would like to call through Feign (a REST client builder), Spring RestTemplate, or Spring WebFlux.

You can also use the Zookeeper Dependency Watchers functionality to control and monitor the state of your dependencies.

10.5.2. Activating Zookeeper Dependencies

Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-discovery enables autoconfiguration that sets up Spring Cloud Zookeeper Dependencies. Even if you provide the dependencies in your properties, you can turn off the dependencies. To do so, set the spring.cloud.zookeeper.dependency.enabled property to false (it defaults to true).

10.5.3. Setting up Zookeeper Dependencies

Consider the following example of dependency representation:

application.yml
spring.application.name: yourServiceName
spring.cloud.zookeeper:
  dependencies:
    newsletter:
      path: /path/where/newsletter/has/registered/in/zookeeper
      loadBalancerType: ROUND_ROBIN
      contentTypeTemplate: application/vnd.newsletter.$version+json
      version: v1
      headers:
        header1:
            - value1
        header2:
            - value2
      required: false
      stubs: org.springframework:foo:stubs
    mailing:
      path: /path/where/mailing/has/registered/in/zookeeper
      loadBalancerType: ROUND_ROBIN
      contentTypeTemplate: application/vnd.mailing.$version+json
      version: v1
      required: true

The next few sections go through each part of the dependency one by one. The root property name is spring.cloud.zookeeper.dependencies.

Aliases

Below the root property, you have to represent each dependency as an alias. This is due to the constraints of Ribbon, which requires that the application ID be placed in the URL. Consequently, you cannot pass any complex path, such as /myApp/myRoute/name. The alias is the name you use instead of the serviceId for DiscoveryClient, Feign, or RestTemplate.

In the previous examples, the aliases are newsletter and mailing. The following example shows Feign usage with a newsletter alias:

@FeignClient("newsletter")
public interface NewsletterService {
        @RequestMapping(method = RequestMethod.GET, value = "/newsletter")
        String getNewsletters();
}

Path

The path is represented by the path YAML property and is the path under which the dependency is registered in Zookeeper. As described in the previous section, Ribbon operates on URLs. As a result, this path does not comply with Ribbon’s requirement. That is why Spring Cloud Zookeeper maps the alias to the proper path.

Load Balancer Type

The load balancer type is represented by the loadBalancerType YAML property.

If you know what kind of load-balancing strategy has to be applied when calling this particular dependency, you can provide it in the YAML file, and it is automatically applied. You can choose one of the following load balancing strategies:

  • STICKY: Once chosen, the instance is always called.

  • RANDOM: Picks an instance randomly.

  • ROUND_ROBIN: Iterates over instances over and over again.

Content-Type Template and Version

The Content-Type template and version are represented by the contentTypeTemplate and version YAML properties.

If you version your API in the Content-Type header, you do not want to add this header to each of your requests. Also, if you want to call a new version of the API, you do not want to roam around your code to bump up the API version. That is why you can provide a contentTypeTemplate with a special $version placeholder. That placeholder will be filled by the value of the version YAML property. Consider the following example of a contentTypeTemplate:

application/vnd.newsletter.$version+json

Further consider the following version:

v1

The combination of contentTypeTemplate and version results in the creation of a Content-Type header for each request, as follows:

application/vnd.newsletter.v1+json

Default Headers

Default headers are represented by the headers map in YAML.

Sometimes, each call to a dependency requires setting up of some default headers. To not do that in code, you can set them up in the YAML file, as shown in the following example headers section:

headers:
    Accept:
        - text/html
        - application/xhtml+xml
    Cache-Control:
        - no-cache

That headers section results in adding the Accept and Cache-Control headers with the appropriate lists of values to your HTTP request.

Required Dependencies

Required dependencies are represented by the required property in YAML.

If one of your dependencies is required to be up when your application boots, you can set the required: true property in the YAML file.

If your application cannot locate the required dependency during boot time, it throws an exception, and the Spring Context fails to set up. In other words, your application cannot start if the required dependency is not registered in Zookeeper.

You can read more about Spring Cloud Zookeeper Presence Checker later in this document.

Stubs

You can provide a colon-separated path to the JAR containing stubs of the dependency, as shown in the following example:

stubs: org.springframework:myApp:stubs

where:

  • org.springframework is the groupId.

  • myApp is the artifactId.

  • stubs is the classifier. (Note that stubs is the default value.)

Because stubs is the default classifier, the preceding example is equal to the following example:

stubs: org.springframework:myApp

10.5.4. Configuring Spring Cloud Zookeeper Dependencies

You can set the following properties to enable or disable parts of Zookeeper Dependencies functionalities:

  • spring.cloud.zookeeper.dependencies: If you do not set this property, you cannot use Zookeeper Dependencies.

  • spring.cloud.zookeeper.dependency.ribbon.enabled (enabled by default): Ribbon requires either explicit global configuration or a particular one for a dependency. By turning on this property, runtime load balancing strategy resolution is possible, and you can use the loadBalancerType section of the Zookeeper Dependencies. The configuration that needs this property has an implementation of LoadBalancerClient that delegates to the ILoadBalancer presented in the next bullet.

  • spring.cloud.zookeeper.dependency.ribbon.loadbalancer (enabled by default): Thanks to this property, the custom ILoadBalancer knows that the part of the URI passed to Ribbon might actually be the alias that has to be resolved to a proper path in Zookeeper. Without this property, you cannot register applications under nested paths.

  • spring.cloud.zookeeper.dependency.headers.enabled (enabled by default): This property registers a RibbonClient that automatically appends appropriate headers and content types with their versions, as presented in the Dependency configuration. Without this setting, those two parameters do not work.

  • spring.cloud.zookeeper.dependency.resttemplate.enabled (enabled by default): When enabled, this property modifies the request headers of a @LoadBalanced-annotated RestTemplate such that it passes headers and content type with the version set in dependency configuration. Without this setting, those two parameters do not work.

10.6. Spring Cloud Zookeeper Dependency Watcher

The Dependency Watcher mechanism lets you register listeners to your dependencies. The functionality is, in fact, an implementation of the Observer pattern. When a dependency changes its state (to either UP or DOWN), some custom logic can be applied.

10.6.1. Activating

Spring Cloud Zookeeper Dependencies functionality needs to be enabled for you to use the Dependency Watcher mechanism.

10.6.2. Registering a Listener

To register a listener, you must implement an interface called org.springframework.cloud.zookeeper.discovery.watcher.DependencyWatcherListener and register it as a bean. The interface gives you one method:

void stateChanged(String dependencyName, DependencyState newState);

If you want to register a listener for a particular dependency, the dependencyName would be the discriminator for your concrete implementation. newState provides you with information about whether your dependency has changed to CONNECTED or DISCONNECTED.
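A minimal sketch of such a listener (the dependency alias and the reaction are illustrative):

@Component
public class NewsletterWatcherListener implements DependencyWatcherListener {

    @Override
    public void stateChanged(String dependencyName, DependencyState newState) {
        // "newsletter" is an alias from the dependency configuration shown earlier
        if ("newsletter".equals(dependencyName)
                && newState == DependencyState.DISCONNECTED) {
            // apply custom logic, such as logging an alert
        }
    }

}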

10.6.3. Using the Presence Checker

Bound with the Dependency Watcher is the functionality called Presence Checker. It lets you provide custom behavior when your application boots, to react according to the state of your dependencies.

The default implementation of the abstract org.springframework.cloud.zookeeper.discovery.watcher.presence.DependencyPresenceOnStartupVerifier class is the org.springframework.cloud.zookeeper.discovery.watcher.presence.DefaultDependencyPresenceOnStartupVerifier, which works in the following way.

  1. If the dependency is marked as required and is not in Zookeeper when your application boots, it throws an exception and shuts down.

  2. If the dependency is not required, the org.springframework.cloud.zookeeper.discovery.watcher.presence.LogMissingDependencyChecker logs that the dependency is missing at the WARN level.

Because the DefaultDependencyPresenceOnStartupVerifier is registered only when there is no bean of type DependencyPresenceOnStartupVerifier, this functionality can be overridden.

10.7. Distributed Configuration with Zookeeper

Zookeeper provides a hierarchical namespace that lets clients store arbitrary data, such as configuration data. Spring Cloud Zookeeper Config is an alternative to the Config Server and Client. Configuration is loaded into the Spring Environment during the special “bootstrap” phase. Configuration is stored in the /config namespace by default. Multiple PropertySource instances are created, based on the application’s name and the active profiles, to mimic the Spring Cloud Config order of resolving properties. For example, an application with a name of testApp and with the dev profile has the following property sources created for it:

  • config/testApp,dev

  • config/testApp

  • config/application,dev

  • config/application

The most specific property source is at the top, with the least specific at the bottom. Properties in the config/application namespace apply to all applications that use Zookeeper for configuration. Properties in the config/testApp namespace are available only to the instances of the service named testApp.

Configuration is currently read on startup of the application. Sending an HTTP POST request to /refresh causes the configuration to be reloaded. Watching the configuration namespace (which Zookeeper supports) is not currently implemented.
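
As a sketch of how an application consumes these properties, the following controller reads an illustrative message key (resolved through the property-source order above) and, because it is marked with @RefreshScope, picks up a changed value after a POST to /refresh:

MessageRestController.java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope
public class MessageRestController {

  // Resolved from config/testApp,dev first, then from the less specific
  // namespaces, and finally from the default given here.
  @Value("${message:Hello default}")
  private String message;

  @GetMapping("/message")
  public String message() {
    return this.message;
  }

}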

10.7.1. Activating

Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-config enables autoconfiguration that sets up Spring Cloud Zookeeper Config.

When working with version 3.4 of Zookeeper you need to change the way you include the dependency as described here.

10.7.2. Customizing

Zookeeper Config may be customized by setting the following properties:

bootstrap.yml
spring:
  cloud:
    zookeeper:
      config:
        enabled: true
        root: configuration
        defaultContext: apps
        profileSeparator: '::'
  • enabled: Setting this value to false disables Zookeeper Config.

  • root: Sets the base namespace for configuration values.

  • defaultContext: Sets the name used by all applications.

  • profileSeparator: Sets the value of the separator used to separate the profile name in property sources with profiles.

10.7.3. Access Control Lists (ACLs)

You can add authentication information for Zookeeper ACLs by supplying digest credentials to the Zookeeper connection. One way to accomplish this is to provide your own CuratorFramework bean that is built with an authorization entry, as shown in the following example (the connect string and retry policy are placeholders):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.springframework.cloud.bootstrap.BootstrapConfiguration;
import org.springframework.context.annotation.Bean;

@BootstrapConfiguration
public class CustomCuratorFrameworkConfig {

  @Bean
  public CuratorFramework curatorFramework() {
    // The connect string and retry policy here are placeholders.
    CuratorFramework curator = CuratorFrameworkFactory.builder()
        .connectString("localhost:2181")
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .authorization("digest", "user:password".getBytes())
        .build();
    curator.start();
    return curator;
  }

}

Consult the ZookeeperAutoConfiguration class to see how the CuratorFramework bean gets its default configuration.

Alternatively, you can add your credentials from a class that depends on the existing CuratorFramework bean, as shown in the following example:

import org.apache.curator.framework.CuratorFramework;
import org.springframework.cloud.bootstrap.BootstrapConfiguration;

@BootstrapConfiguration
public class DefaultCuratorFrameworkConfig {

  public DefaultCuratorFrameworkConfig(CuratorFramework curator) throws Exception {
    // Adds digest credentials on the underlying ZooKeeper connection.
    curator.getZookeeperClient().getZooKeeper()
        .addAuthInfo("digest", "user:password".getBytes());
  }

}

The creation of this bean must occur during the bootstrapping phase. You can register configuration classes to run during this phase by annotating them with @BootstrapConfiguration and including them in a comma-separated list that you set as the value of the org.springframework.cloud.bootstrap.BootstrapConfiguration property in the resources/META-INF/spring.factories file, as shown in the following example:

resources/META-INF/spring.factories
org.springframework.cloud.bootstrap.BootstrapConfiguration=\
my.project.CustomCuratorFrameworkConfig,\
my.project.DefaultCuratorFrameworkConfig

11. Spring Boot Cloud CLI

Spring Boot CLI provides Spring Boot command line features for Spring Cloud. You can write Groovy scripts to run Spring Cloud component applications (e.g. @EnableEurekaServer). You can also easily do things like encryption and decryption to support Spring Cloud Config clients with secret configuration values. With the Launcher CLI you can launch services like Eureka, Zipkin, Config Server conveniently all at once from the command line (very useful at development time).

Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, please find the source code and issue trackers in the project at github.

11.1. Installation

To install, make sure you have Spring Boot CLI (2.0.0 or better):

$ spring version
Spring CLI v2.2.0.BUILD-SNAPSHOT

E.g. for SDKMan users

$ sdk install springboot 2.2.0.BUILD-SNAPSHOT
$ sdk use springboot 2.2.0.BUILD-SNAPSHOT

and install the Spring Cloud plugin

$ mvn install
$ spring install org.springframework.cloud:spring-cloud-cli:2.2.0.BUILD-SNAPSHOT
Prerequisites: to use the encryption and decryption features you need the full-strength JCE installed in your JVM (it’s not there by default). You can download the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files" from Oracle, and follow instructions for installation (essentially replace the 2 policy files in the JRE lib/security directory with the ones that you downloaded).

11.2. Running Spring Cloud Services in Development

The Launcher CLI can be used to run common services like Eureka, Config Server etc. from the command line. To list the available services you can do spring cloud --list, and to launch a default set of services just spring cloud. To choose the services to deploy, just list them on the command line, e.g.

$ spring cloud eureka configserver h2 kafka stubrunner zipkin

Summary of supported deployables:

Service | Name | Address | Description
eureka | Eureka Server | localhost:8761 | Eureka server for service registration and discovery. All the other services show up in its catalog by default.
configserver | Config Server | localhost:8888 | Spring Cloud Config Server running in the "native" profile and serving configuration from the local directory ./launcher.
h2 | H2 Database | localhost:9095 (console), jdbc:h2:tcp://localhost:9096/{data} | Relational database service. Use a file path for {data} (e.g. ./target/test) when you connect. Remember that you can add ;MODE=MYSQL or ;MODE=POSTGRESQL to connect with compatibility to other server types.
kafka | Kafka Broker | localhost:9091 (actuator endpoints), localhost:9092 |
hystrixdashboard | Hystrix Dashboard | localhost:7979 | Any Spring Cloud app that declares Hystrix circuit breakers publishes metrics on /hystrix.stream. Type that address into the dashboard to visualize all the metrics.
dataflow | Dataflow Server | localhost:9393 | Spring Cloud Dataflow server with UI at /admin-ui. Connect the Dataflow shell to target at root path.
zipkin | Zipkin Server | localhost:9411 | Zipkin Server with UI for visualizing traces. Stores span data in memory and accepts it via HTTP POST of JSON data.
stubrunner | Stub Runner Boot | localhost:8750 | Downloads WireMock stubs, starts WireMock, and feeds the started servers with stored stubs. Pass stubrunner.ids to supply stub coordinates and then go to localhost:8750/stubs.
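
For instance, a plain JDBC connection to the launcher's H2 service might look like the following sketch (the ./target/test path and the sa user with an empty password are assumptions, matching H2's defaults; the H2 driver must be on the classpath):

H2LauncherSmokeTest.java
import java.sql.Connection;
import java.sql.DriverManager;

public class H2LauncherSmokeTest {

  public static void main(String[] args) throws Exception {
    // Connect to the TCP endpoint listed above, in MySQL compatibility mode.
    String url = "jdbc:h2:tcp://localhost:9096/./target/test;MODE=MYSQL";
    try (Connection connection = DriverManager.getConnection(url, "sa", "")) {
      System.out.println("Connected to " + connection.getMetaData().getURL());
    }
  }

}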

Each of these apps can be configured using a local YAML file with the same name (in the current working directory or a subdirectory called "config" or in ~/.spring-cloud). E.g. in configserver.yml you might want to do something like this to locate a local git repository for the backend:

configserver.yml
spring:
  profiles:
    active: git
  cloud:
    config:
      server:
        git:
          uri: file://${user.home}/dev/demo/config-repo

E.g. in the Stub Runner app you could fetch stubs from your local .m2 in the following way.

stubrunner.yml
stubrunner:
  workOffline: true
  ids:
    - com.example:beer-api-producer:+:9876

11.2.1. Adding Additional Applications

Additional applications can be added to ./config/cloud.yml (not ./config.yml because that would replace the defaults), e.g. with

config/cloud.yml
spring:
  cloud:
    launcher:
      deployables:
        source:
          coordinates: maven://com.example:source:0.0.1-SNAPSHOT
          port: 7000
        sink:
          coordinates: maven://com.example:sink:0.0.1-SNAPSHOT
          port: 7001

when you list the apps:

$ spring cloud --list
source sink configserver dataflow eureka h2 hystrixdashboard kafka stubrunner zipkin

(notice the additional apps at the start of the list).

11.3. Writing Groovy Scripts and Running Applications

Spring Cloud CLI has support for most of the Spring Cloud declarative features, such as the @Enable* class of annotations. For example, here is a fully functional Eureka server

app.groovy
@EnableEurekaServer
class Eureka {}

which you can run from the command line like this

$ spring run app.groovy

To include additional dependencies, often it suffices just to add the appropriate feature-enabling annotation, e.g. @EnableConfigServer, @EnableOAuth2Sso or @EnableEurekaClient. To manually include a dependency you can use a @Grab with the special "Spring Boot" short style artifact co-ordinates, i.e. with just the artifact ID (no need for group or version information), e.g. to set up a client app to listen on AMQP for management events from the Spring Cloud Bus:

app.groovy
@Grab('spring-cloud-starter-bus-amqp')
@RestController
class Service {
  @RequestMapping('/')
  def home() { [message: 'Hello'] }
}

11.4. Encryption and Decryption

The Spring Cloud CLI comes with an "encrypt" and a "decrypt" command. Both accept arguments in the same form with a key specified as a mandatory "--key", e.g.

$ spring encrypt mysecret --key foo
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
$ spring decrypt --key foo 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret

To use a key in a file (e.g. an RSA public key for encryption), prepend the key value with "@" and provide the file path, e.g.

$ spring encrypt mysecret --key @${HOME}/.ssh/id_rsa.pub
AQAjPgt3eFZQXwt8tsHAVv/QHiY5sI2dRcR+...

12. Spring Cloud Security

Spring Cloud Security offers a set of primitives for building secure applications and services with minimum fuss. A declarative model which can be heavily configured externally (or centrally) lends itself to the implementation of large systems of co-operating, remote components, usually with a central identity management service. It is also extremely easy to use in a service platform like Cloud Foundry. Building on Spring Boot and Spring Security OAuth2, we can quickly create systems that implement common patterns like single sign on, token relay and token exchange.

Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, please find the source code and issue trackers in the project at github.

12.1. Quickstart

12.1.1. OAuth2 Single Sign On

Here’s a Spring Cloud "Hello World" app with HTTP Basic authentication and a single user account:

app.groovy
@Grab('spring-boot-starter-security')
@Controller
class Application {

  @RequestMapping('/')
  String home() {
    'Hello World'
  }

}

You can run it with spring run app.groovy and watch the logs for the password (username is "user"). So far this is just the default for a Spring Boot app.

Here’s a Spring Cloud app with OAuth2 SSO:

app.groovy
@Controller
@EnableOAuth2Sso
class Application {

  @RequestMapping('/')
  String home() {
    'Hello World'
  }

}

Spot the difference? This app actually behaves exactly the same as the previous one, because it does not know its OAuth2 credentials yet.

You can register an app in github quite easily, so try that if you want a production app on your own domain. If you are happy to test on localhost:8080, then set up these properties in your application configuration:

application.yml
security:
  oauth2:
    client:
      clientId: bd1c0a783ccdd1c9b9e4
      clientSecret: 1a9030fbca47a5b2c28e92f19050bb77824b5ad1
      accessTokenUri: https://github.com/login/oauth/access_token
      userAuthorizationUri: https://github.com/login/oauth/authorize
      clientAuthenticationScheme: form
    resource:
      userInfoUri: https://api.github.com/user
      preferTokenInfo: false

Run the app above, and it will redirect to github for authorization. If you are already signed into github, you won't even notice that it has authenticated. These credentials will only work if your app is running on port 8080.

To limit the scope that the client asks for when it obtains an access token, you can set security.oauth2.client.scope (comma separated or an array in YAML). By default the scope is empty, and it is up to the Authorization Server to decide what the defaults should be, usually depending on the settings in the client registration that it holds.

The examples above are all Groovy scripts. If you want to write the same code in Java (or Groovy) you need to add Spring Security OAuth2 to the classpath (e.g. see the sample here).

12.1.2. OAuth2 Protected Resource

You want to protect an API resource with an OAuth2 token? Here’s a simple example (paired with the client above):

app.groovy
@Grab('spring-cloud-starter-security')
@RestController
@EnableResourceServer
class Application {

  @RequestMapping('/')
  def home() {
    [message: 'Hello World']
  }

}

and

application.yml
security:
  oauth2:
    resource:
      userInfoUri: https://api.github.com/user
      preferTokenInfo: false

12.2. More Detail

12.2.1. Single Sign On

All of the OAuth2 SSO and resource server features moved to Spring Boot in version 1.3. You can find documentation in the Spring Boot user guide.

12.2.2. Token Relay

A Token Relay is where an OAuth2 consumer acts as a Client and forwards the incoming token to outgoing resource requests. The consumer can be a pure Client (like an SSO application) or a Resource Server.

Client Token Relay in Spring Cloud Gateway

If your app also has a Spring Cloud Gateway embedded reverse proxy then you can ask it to forward OAuth2 access tokens downstream to the services it is proxying. Thus the SSO app above can be enhanced simply like this:

App.java
@Autowired
private TokenRelayGatewayFilterFactory filterFactory;

@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
    return builder.routes()
            .route("resource", r -> r.path("/resource")
                    .filters(f -> f.filter(filterFactory.apply()))
                    .uri("http://localhost:9000"))
            .build();
}

or this

application.yaml
spring:
  cloud:
    gateway:
      routes:
      - id: resource
        uri: http://localhost:9000
        predicates:
        - Path=/resource
        filters:
        - TokenRelay=

and it will (in addition to logging the user in and grabbing a token) pass the authentication token downstream to the services (in this case /resource).

To enable this for Spring Cloud Gateway add the following dependencies

  • org.springframework.boot:spring-boot-starter-oauth2-client

  • org.springframework.cloud:spring-cloud-starter-security

How does it work? The filter extracts an access token from the currently authenticated user, and puts it in a request header for the downstream requests.

For a full working sample see this project.

The default implementation of ReactiveOAuth2AuthorizedClientService used by TokenRelayGatewayFilterFactory uses an in-memory data store. You need to provide your own implementation of ReactiveOAuth2AuthorizedClientService if you need a more robust solution.
Client Token Relay

If your app is a user facing OAuth2 client (i.e. has declared @EnableOAuth2Sso or @EnableOAuth2Client) then it has an OAuth2ClientContext in request scope from Spring Boot. You can create your own OAuth2RestTemplate from this context and an autowired OAuth2ProtectedResourceDetails, and then the context will always forward the access token downstream, also refreshing the access token automatically if it expires. (These are features of Spring Security and Spring Boot.)

Spring Boot (1.4.1) does not create an OAuth2ProtectedResourceDetails automatically if you are using client_credentials tokens. In that case you need to create your own ClientCredentialsResourceDetails and configure it with @ConfigurationProperties("security.oauth2.client").
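
The following sketch shows what that can look like, assuming the client settings live under security.oauth2.client (the class and bean names are illustrative):

ClientCredentialsConfig.java
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.token.grant.client.ClientCredentialsResourceDetails;

@Configuration
public class ClientCredentialsConfig {

  // Binds security.oauth2.client.* (client-id, client-secret,
  // access-token-uri, and so on) onto the resource details.
  @Bean
  @ConfigurationProperties("security.oauth2.client")
  public ClientCredentialsResourceDetails clientCredentialsResourceDetails() {
    return new ClientCredentialsResourceDetails();
  }

  @Bean
  public OAuth2RestTemplate clientCredentialsRestTemplate() {
    return new OAuth2RestTemplate(clientCredentialsResourceDetails());
  }

}
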
Client Token Relay in Zuul Proxy

If your app also has a Spring Cloud Zuul embedded reverse proxy (using @EnableZuulProxy) then you can ask it to forward OAuth2 access tokens downstream to the services it is proxying. Thus the SSO app above can be enhanced simply like this:

app.groovy
@Controller
@EnableOAuth2Sso
@EnableZuulProxy
class Application {

}

and it will (in addition to logging the user in and grabbing a token) pass the authentication token downstream to the /proxy/* services. If those services are implemented with @EnableResourceServer then they will get a valid token in the correct header.

How does it work? The @EnableOAuth2Sso annotation pulls in spring-cloud-starter-security (which you could do manually in a traditional app), and that in turn triggers some autoconfiguration for a ZuulFilter, which itself is activated because Zuul is on the classpath (via @EnableZuulProxy). The filter just extracts an access token from the currently authenticated user, and puts it in a request header for the downstream requests.

Spring Boot does not automatically create the OAuth2RestOperations that is needed for refresh_token. In that case, you need to create your own OAuth2RestOperations so that OAuth2TokenRelayFilter can refresh the token if needed.
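
The following sketch shows such a bean, built from the request-scoped context described earlier (standard Spring Security OAuth2 types; the configuration class name is illustrative):

TokenRelayConfig.java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2ClientContext;
import org.springframework.security.oauth2.client.OAuth2RestOperations;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;

@Configuration
public class TokenRelayConfig {

  // An OAuth2RestTemplate built from the resource details and the current
  // client context can refresh the access token when it expires.
  @Bean
  public OAuth2RestOperations restOperations(OAuth2ProtectedResourceDetails resource,
      OAuth2ClientContext context) {
    return new OAuth2RestTemplate(resource, context);
  }

}
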
Resource Server Token Relay

If your app has @EnableResourceServer you might want to relay the incoming token downstream to other services. If you use a RestTemplate to contact the downstream services then this is just a matter of how to create the template with the right context.

If your service uses UserInfoTokenServices to authenticate incoming tokens (i.e. it is using the security.oauth2.user-info-uri configuration), then you can simply create an OAuth2RestTemplate using an autowired OAuth2ClientContext (it will be populated by the authentication process before it hits the backend code). Equivalently (with Spring Boot 1.4), you could inject a UserInfoRestTemplateFactory and grab its OAuth2RestTemplate in your configuration. For example:

MyConfiguration.java
@Bean
public OAuth2RestTemplate restTemplate(UserInfoRestTemplateFactory factory) {
    return factory.getUserInfoRestTemplate();
}

This rest template will then have the same OAuth2ClientContext (request-scoped) that is used by the authentication filter, so you can use it to send requests with the same access token.

If your app is not using UserInfoTokenServices but is still a client (i.e. it declares @EnableOAuth2Client or @EnableOAuth2Sso), then with Spring Cloud Security any OAuth2RestOperations that the user creates from an @Autowired OAuth2ClientContext will also forward tokens. This feature is implemented by default as an MVC handler interceptor, so it only works in Spring MVC. If you are not using MVC, you could use a custom filter or AOP interceptor wrapping an AccessTokenContextRelay to provide the same feature.

Here’s a basic example showing the use of an autowired rest template created elsewhere ("foo.com" is a Resource Server accepting the same tokens as the surrounding app):

MyController.java
@Autowired
private OAuth2RestOperations restTemplate;

@RequestMapping("/relay")
public String relay() {
    ResponseEntity<String> response =
      restTemplate.getForEntity("https://foo.com/bar", String.class);
    return "Success! (" + response.getBody() + ")";
}

If you don’t want to forward tokens (and that is a valid choice, since you might want to act as yourself, rather than the client that sent you the token), then you only need to create your own OAuth2ClientContext instead of autowiring the default one.

Feign clients also pick up an interceptor that uses the OAuth2ClientContext if it is available, so they should do a token relay anywhere a RestTemplate would.

12.3. Configuring Authentication Downstream of a Zuul Proxy

You can control the authorization behaviour downstream of an @EnableZuulProxy through the proxy.auth.* settings. Example:

application.yml
proxy:
  auth:
    routes:
      customers: oauth2
      stores: passthru
      recommendations: none

In this example the "customers" service gets an OAuth2 token relay, the "stores" service gets a passthrough (the authorization header is just passed downstream), and the "recommendations" service has its authorization header removed. The default behaviour is to do a token relay if there is a token available, and passthru otherwise.

See ProxyAuthenticationProperties for full details.

13. Spring Cloud for Cloud Foundry

Spring Cloud for Cloud Foundry makes it easy to run Spring Cloud apps in Cloud Foundry (the Platform as a Service). Cloud Foundry has the notion of a "service", which is middleware that you "bind" to an app, essentially providing it with an environment variable containing credentials (e.g. the location and username to use for the service).

The spring-cloud-cloudfoundry-commons module configures the Reactor-based Cloud Foundry Java client, v 3.0, and can be used standalone.

The spring-cloud-cloudfoundry-web project provides basic support for some enhanced features of webapps in Cloud Foundry: binding automatically to single-sign-on services and optionally enabling sticky routing for discovery.

The spring-cloud-cloudfoundry-discovery project provides an implementation of Spring Cloud Commons DiscoveryClient so you can @EnableDiscoveryClient and provide your credentials as spring.cloud.cloudfoundry.discovery.[username,password] (also *.url if you are not connecting to Pivotal Web Services) and then you can use the DiscoveryClient directly or via a LoadBalancerClient.

The first time you use it the discovery client might be slow owing to the fact that it has to get an access token from Cloud Foundry.

13.1. Discovery

Here’s a Spring Cloud app with Cloud Foundry discovery:

app.groovy
@Grab('org.springframework.cloud:spring-cloud-cloudfoundry')
@RestController
@EnableDiscoveryClient
class Application {

  @Autowired
  DiscoveryClient client

  @RequestMapping('/')
  String home() {
    'Hello from ' + client.getLocalServiceInstance()
  }

}

If you run it without any service bindings:

$ spring jar app.jar app.groovy
$ cf push -p app.jar

It shows its app name on the home page.

The DiscoveryClient can list all the apps in a space, according to the credentials it is authenticated with, where the space defaults to the one the client is running in (if any). If neither the org nor the space is configured, they default per the user's profile in Cloud Foundry.
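
As a sketch, the following controller lists the services visible to the client (standard Spring Cloud Commons DiscoveryClient API; the endpoint path is illustrative):

CatalogController.java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CatalogController {

  @Autowired
  private DiscoveryClient discoveryClient;

  // Lists the app names visible in the current (or configured) space.
  @GetMapping("/services")
  public List<String> services() {
    return this.discoveryClient.getServices();
  }

}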

13.2. Single Sign On

All of the OAuth2 SSO and resource server features moved to Spring Boot in version 1.3. You can find documentation in the Spring Boot user guide.

This project provides automatic binding from CloudFoundry service credentials to the Spring Boot features. If you have a CloudFoundry service called "sso", for instance, with credentials containing "client_id", "client_secret" and "auth_domain", it will bind automatically to the Spring OAuth2 client that you enable with @EnableOAuth2Sso (from Spring Boot). The name of the service can be parameterized using spring.oauth2.sso.serviceId.

13.3. Configuration

To see the list of all Spring Cloud for Cloud Foundry related configuration properties, please check the Appendix page.

14. Spring Cloud Contract Reference Documentation

Adam Dudczak, Mathias Düsterhöft, Marcin Grzejszczak, Dennis Kieselhorst, Jakub Kubryński, Karol Lassak, Olga Maciaszek-Sharma, Mariusz Smykuła, Dave Syer, Jay Bryant

2.2.1.RELEASE

Copyright © 2012-2019

Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.

14.1. Getting Started

If you are getting started with Spring Cloud Contract, or Spring in general, start by reading this section. It answers the basic “what?”, “how?” and “why?” questions. It includes an introduction to Spring Cloud Contract, along with installation instructions. We then walk you through building your first Spring Cloud Contract application, discussing some core principles as we go.

14.1.1. Introducing Spring Cloud Contract

Spring Cloud Contract moves TDD to the level of software architecture. It lets you perform consumer-driven and producer-driven contract testing.

History

Before becoming Spring Cloud Contract, this project was called Accurest. It was created by Marcin Grzejszczak and Jakub Kubrynski from Codearte.

The 0.1.0 release took place on 26 Jan 2015, and the project became stable with the 1.0.0 release on 29 Feb 2016.

Why Do You Need It?

Assume that we have a system that consists of multiple microservices, as the following image shows:

Microservices Architecture
Testing Issues

If we want to test the application in the top left corner of the image in the preceding section to determine whether it can communicate with other services, we could do one of two things:

  • Deploy all microservices and perform end-to-end tests.

  • Mock other microservices in unit and integration tests.

Both have their advantages but also a lot of disadvantages.

Deploy all microservices and perform end-to-end tests

Advantages:

  • Simulates production.

  • Tests real communication between services.

Disadvantages:

  • To test one microservice, we have to deploy six microservices, a couple of databases, and other items.

  • The environment where the tests run is locked for a single suite of tests (nobody else would be able to run the tests in the meantime).

  • They take a long time to run.

  • The feedback comes very late in the process.

  • They are extremely hard to debug.

Mock other microservices in unit and integration tests

Advantages:

  • They provide very fast feedback.

  • They have no infrastructure requirements.

Disadvantages:

  • The implementor of the service creates stubs that might have nothing to do with reality.

  • You can go to production with passing tests and failing production.

To solve the aforementioned issues, Spring Cloud Contract was created. The main idea is to give you very fast feedback, without the need to set up the whole world of microservices. If you work on stubs, then the only applications you need are those that your application directly uses. The following image shows the relationship of stubs to an application:

Stubbed Services

Spring Cloud Contract gives you the certainty that the stubs that you use were created by the service that you call. Also, if you can use them, it means that they were tested against the producer’s side. In short, you can trust those stubs.

Purposes

The main purposes of Spring Cloud Contract are:

  • To ensure that HTTP and Messaging stubs (used when developing the client) do exactly what the actual server-side implementation does.

  • To promote the ATDD (acceptance test-driven development) method and the microservices architectural style.

  • To provide a way to publish changes in contracts that are immediately visible on both sides.

  • To generate boilerplate test code to be used on the server side.

By default, Spring Cloud Contract integrates with WireMock as the HTTP server stub.

Spring Cloud Contract’s purpose is NOT to start writing business features in the contracts. Assume that we have a business use case of fraud check. If a user can be a fraud for 100 different reasons, we would assume that you would create two contracts, one for the positive case and one for the negative case. Contract tests are used to test contracts between applications and not to simulate full behavior.
What Is a Contract?

As consumers of services, we need to define what exactly we want to achieve. We need to formulate our expectations. That is why we write contracts. In other words, a contract is an agreement on how the API or message communication should look. Consider the following example:

Assume that you want to send a request that contains the ID of a client company and the amount it wants to borrow from us. You also want to send it to the /fraudcheck URL via the PUT method. The following listing shows a contract to check whether a client should be marked as a fraud in both Groovy and YAML:

groovy
/*
 * Copyright 2013-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package contracts

org.springframework.cloud.contract.spec.Contract.make {
    request { // (1)
        method 'PUT' // (2)
        url '/fraudcheck' // (3)
        body([ // (4)
               "client.id": $(regex('[0-9]{10}')),
               loanAmount : 99999
        ])
        headers { // (5)
            contentType('application/json')
        }
    }
    response { // (6)
        status OK() // (7)
        body([ // (8)
               fraudCheckStatus  : "FRAUD",
               "rejection.reason": "Amount too high"
        ])
        headers { // (9)
            contentType('application/json')
        }
    }
}

/*
From the Consumer perspective, when shooting a request in the integration test:

(1) - If the consumer sends a request
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `client.id` that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/json`
(6) - then the response will be sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` equal to `application/json`

From the Producer perspective, in the autogenerated producer-side test:

(1) - A request will be sent to the producer
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `client.id` that will have a generated value that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/json`
(6) - then the test will assert if the response has been sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` matching `application/json.*`
 */
yaml
request: # (1)
  method: PUT # (2)
  url: /yamlfraudcheck # (3)
  body: # (4)
    "client.id": 1234567890
    loanAmount: 99999
  headers: # (5)
    Content-Type: application/json
  matchers:
    body:
      - path: $.['client.id'] # (6)
        type: by_regex
        value: "[0-9]{10}"
response: # (7)
  status: 200 # (8)
  body:  # (9)
    fraudCheckStatus: "FRAUD"
    "rejection.reason": "Amount too high"
  headers: # (10)
    Content-Type: application/json


#From the Consumer perspective, when shooting a request in the integration test:
#
#(1) - If the consumer sends a request
#(2) - With the "PUT" method
#(3) - to the URL "/yamlfraudcheck"
#(4) - with the JSON body that
# * has a field `client.id`
# * has a field `loanAmount` that is equal to `99999`
#(5) - with header `Content-Type` equal to `application/json`
#(6) - and a `client.id` json entry matches the regular expression `[0-9]{10}`
#(7) - then the response will be sent with
#(8) - status equal `200`
#(9) - and JSON body equal to
# { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
#(10) - with header `Content-Type` equal to `application/json`
#
#From the Producer perspective, in the autogenerated producer-side test:
#
#(1) - A request will be sent to the producer
#(2) - With the "PUT" method
#(3) - to the URL "/yamlfraudcheck"
#(4) - with the JSON body that
# * has a field `client.id` `1234567890`
# * has a field `loanAmount` that is equal to `99999`
#(5) - with header `Content-Type` equal to `application/json`
#(7) - then the test will assert if the response has been sent with
#(8) - status equal `200`
#(9) - and JSON body equal to
# { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
#(10) - with header `Content-Type` equal to `application/json`

14.1.2. A Three-second Tour

This very brief tour walks through using Spring Cloud Contract. It consists of the following topics:

You can find a somewhat longer tour here.

The following UML diagram shows the relationship of the parts within Spring Cloud Contract:

getting started three second
On the Producer Side

To start working with Spring Cloud Contract, you can add files with REST or messaging contracts expressed in either Groovy DSL or YAML to the contracts directory, which is set by the contractsDslDir property. By default, it is $rootDir/src/test/resources/contracts.

Then you can add the Spring Cloud Contract Verifier dependency and plugin to your build file, as the following example shows:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

The following listing shows how to add the plugin, which should go in the build/plugins portion of the file:

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
</plugin>

Running ./mvnw clean install automatically generates tests that verify the application's compliance with the added contracts. By default, the tests get generated under org.springframework.cloud.contract.verifier.tests..

As the implementation of the functionalities described by the contracts is not yet present, the tests fail.

To make them pass, you must add the correct implementation of either handling HTTP requests or messages. Also, you must add a base test class for auto-generated tests to the project. This class is extended by all the auto-generated tests, and it should contain all the setup information necessary to run them (for example RestAssuredMockMvc controller setup or messaging test setup).
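
A minimal base class might look like the following sketch (the FraudController name is an assumption here; a complete example appears later in this chapter):

BaseTestClass.java
package com.example.contractTest;

import org.junit.Before;

import io.restassured.module.mockmvc.RestAssuredMockMvc;

public abstract class BaseTestClass {

  @Before
  public void setup() {
    // Register the controller under test with RestAssured's MockMvc support.
    RestAssuredMockMvc.standaloneSetup(new FraudController());
  }

}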

The following example, from pom.xml, shows how to specify the base test class:

<build>
        <plugins>
            <plugin>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-contract-maven-plugin</artifactId>
                <version>2.1.2.RELEASE</version>
                <extensions>true</extensions>
                <configuration>
                    <baseClassForTests>com.example.contractTest.BaseTestClass</baseClassForTests> (1)
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
1 The baseClassForTests element lets you specify your base test class. It must be a child of a configuration element within spring-cloud-contract-maven-plugin.

Once the implementation and the test base class are in place, the tests pass, and both the application and the stub artifacts are built and installed in the local Maven repository. You can now merge the changes, and you can publish both the application and the stub artifacts in an online repository.

On the Consumer Side

You can use Spring Cloud Contract Stub Runner in the integration tests to get a running WireMock instance or messaging route that simulates the actual service.

To do so, add the dependency to Spring Cloud Contract Stub Runner, as the following example shows:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>

You can get the Producer-side stubs installed in your Maven repository in either of two ways:

  • By checking out the Producer side repository, adding contracts, and generating the stubs by running the following commands:

    $ cd local-http-server-repo
    $ ./mvnw clean install -DskipTests
    The tests are being skipped because the producer-side contract implementation is not in place yet, so the automatically-generated contract tests fail.
  • By getting already-existing producer service stubs from a remote repository. To do so, pass the stub artifact IDs and artifact repository URL as Spring Cloud Contract Stub Runner properties, as the following example shows:

    stubrunner:
      ids: 'com.example:http-server-dsl:+:stubs:8080'
      repositoryRoot: https://repo.spring.io/libs-snapshot

Now you can annotate your test class with @AutoConfigureStubRunner. In the annotation, provide the group-id and artifact-id values for Spring Cloud Contract Stub Runner to run the collaborators' stubs for you, as the following example shows:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment=WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"com.example:http-server-dsl:+:stubs:6565"},
        stubsMode = StubRunnerProperties.StubsMode.LOCAL)
public class LoanApplicationServiceTests {
Use the REMOTE stubsMode when downloading stubs from an online repository and LOCAL for offline work.

Now, in your integration test, you can receive stubbed versions of HTTP responses or messages that are expected to be emitted by the collaborator service.
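
As a sketch, a test in such a class could then call the stub directly. The TestRestTemplate usage and the JSON payload are illustrative and assume the fraudcheck contract shown earlier, with the stub running on port 6565:

LoanApplicationServiceTestsSketch.java
import java.net.URI;

import org.junit.Test;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.MediaType;
import org.springframework.http.RequestEntity;
import org.springframework.http.ResponseEntity;

import static org.assertj.core.api.Assertions.assertThat;

public class LoanApplicationServiceTestsSketch {

  @Test
  public void shouldGetFraudResponseFromStub() {
    // The WireMock stub started by Stub Runner answers on port 6565.
    ResponseEntity<String> response = new TestRestTemplate().exchange(
        RequestEntity.put(URI.create("http://localhost:6565/fraudcheck"))
            .contentType(MediaType.APPLICATION_JSON)
            .body("{\"client.id\":\"1234567890\",\"loanAmount\":99999}"),
        String.class);

    assertThat(response.getStatusCode().value()).isEqualTo(200);
    assertThat(response.getBody()).contains("FRAUD");
  }

}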

14.1.3. Developing Your First Spring Cloud Contract-based Application

This brief tour walks through using Spring Cloud Contract. It consists of the following topics:

You can find an even briefer tour here.

For the sake of this example, the Stub Storage is Nexus/Artifactory.

The following UML diagram shows the relationship of the parts of Spring Cloud Contract:

Getting started first application
On the Producer Side

To start working with Spring Cloud Contract, you can add the Spring Cloud Contract Verifier dependency and plugin to your build file, as the following example shows:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

The following listing shows how to add the plugin, which should go in the build/plugins portion of the file:

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
</plugin>

The easiest way to get started is to go to the Spring Initializr and add “Web” and “Contract Verifier” as dependencies. Doing so pulls in the previously mentioned dependencies and everything else you need in the pom.xml file (except for setting the base test class, which we cover later in this section). The following image shows the settings to use in the Spring Initializr:

Spring Initializr with Web and Contract Verifier

Now you can add files with REST or messaging contracts expressed in either Groovy DSL or YAML to the contracts directory, which is set by the contractsDslDir property. By default, it is $rootDir/src/test/resources/contracts. Note that the file name does not matter. You can organize your contracts within this directory with whatever naming scheme you like.

For the HTTP stubs, a contract defines what kind of response should be returned for a given request (taking into account the HTTP methods, URLs, headers, status codes, and so on). The following example shows an HTTP stub contract in both Groovy and YAML:

groovy
package contracts

org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'PUT'
        url '/fraudcheck'
        body([
               "client.id": $(regex('[0-9]{10}')),
               loanAmount: 99999
        ])
        headers {
            contentType('application/json')
        }
    }
    response {
        status OK()
        body([
               fraudCheckStatus: "FRAUD",
               "rejection.reason": "Amount too high"
        ])
        headers {
            contentType('application/json')
        }
    }
}
yaml
request:
  method: PUT
  url: /fraudcheck
  body:
    "client.id": 1234567890
    loanAmount: 99999
  headers:
    Content-Type: application/json
  matchers:
    body:
      - path: $.['client.id']
        type: by_regex
        value: "[0-9]{10}"
response:
  status: 200
  body:
    fraudCheckStatus: "FRAUD"
    "rejection.reason": "Amount too high"
  headers:
    Content-Type: application/json;charset=UTF-8

If you need to use messaging, you can define:

  • The input and output messages (taking into account where the message is sent from and to, the message body, and the headers).

  • The methods that should be called after the message is received.

  • The methods that, when called, should trigger a message.

The following example shows a Camel messaging contract:

groovy
def contractDsl = Contract.make {
    name "foo"
    label 'some_label'
    input {
        messageFrom('jms:delete')
        messageBody([
                bookName: 'foo'
        ])
        messageHeaders {
            header('sample', 'header')
        }
        assertThat('bookWasDeleted()')
    }
}
yaml
label: some_label
input:
  messageFrom: jms:delete
  messageBody:
    bookName: 'foo'
  messageHeaders:
    sample: header
  assertThat: bookWasDeleted()

Running ./mvnw clean install automatically generates tests that verify the application's compliance with the added contracts. By default, the generated tests are under org.springframework.cloud.contract.verifier.tests..

The generated tests may differ, depending on which framework and test type you have set up in your plugin.

In the next listing, you can find:

  • A MockMvc-based test (the default test mode for HTTP contracts)

  • A JAX-RS client with the JAXRS test mode

  • A WebTestClient-based test (particularly recommended when working with reactive, WebFlux-based applications), set with the WEBTESTCLIENT test mode

  • A Spock-based test with the testFramework property set to SPOCK

You need only one of these test frameworks. MockMvc is the default. To use one of the other frameworks, add its library to your classpath.

The following listing shows samples for all frameworks:

mockmvc
@Test
public void validate_shouldMarkClientAsFraud() throws Exception {
    // given:
        MockMvcRequestSpecification request = given()
                .header("Content-Type", "application/vnd.fraud.v1+json")
                .body("{\"client.id\":\"1234567890\",\"loanAmount\":99999}");

    // when:
        ResponseOptions response = given().spec(request)
                .put("/fraudcheck");

    // then:
        assertThat(response.statusCode()).isEqualTo(200);
        assertThat(response.header("Content-Type")).matches("application/vnd.fraud.v1.json.*");
    // and:
        DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
        assertThatJson(parsedJson).field("['fraudCheckStatus']").matches("[A-Z]{5}");
        assertThatJson(parsedJson).field("['rejection.reason']").isEqualTo("Amount too high");
}
jaxrs
@SuppressWarnings("rawtypes")
public class FooTest {
  WebTarget webTarget;

  @Test
  public void validate_() throws Exception {

    // when:
      Response response = webTarget
              .path("/users")
              .queryParam("limit", "10")
              .queryParam("offset", "20")
              .queryParam("filter", "email")
              .queryParam("sort", "name")
              .queryParam("search", "55")
              .queryParam("age", "99")
              .queryParam("name", "Denis.Stepanov")
              .queryParam("email", "[email protected]")
              .request()
              .build("GET")
              .invoke();
      String responseAsString = response.readEntity(String.class);

    // then:
      assertThat(response.getStatus()).isEqualTo(200);

    // and:
      DocumentContext parsedJson = JsonPath.parse(responseAsString);
      assertThatJson(parsedJson).field("['property1']").isEqualTo("a");
  }

}
webtestclient
@Test
    public void validate_shouldRejectABeerIfTooYoung() throws Exception {
        // given:
            WebTestClientRequestSpecification request = given()
                    .header("Content-Type", "application/json")
                    .body("{\"age\":10}");

        // when:
            WebTestClientResponse response = given().spec(request)
                    .post("/check");

        // then:
            assertThat(response.statusCode()).isEqualTo(200);
            assertThat(response.header("Content-Type")).matches("application/json.*");
        // and:
            DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
            assertThatJson(parsedJson).field("['status']").isEqualTo("NOT_OK");
    }
spock
given:
     ContractVerifierMessage inputMessage = contractVerifierMessaging.create(
        \'\'\'{"bookName":"foo"}\'\'\',
        ['sample': 'header']
    )

when:
     contractVerifierMessaging.send(inputMessage, 'jms:delete')

then:
     noExceptionThrown()
     bookWasDeleted()

As the implementation of the functionalities described by the contracts is not yet present, the tests fail.

To make them pass, you must add the correct implementation of handling either HTTP requests or messages. Also, you must add a base test class for auto-generated tests to the project. This class is extended by all the auto-generated tests and should contain all the setup information necessary to run them (for example, RestAssuredMockMvc controller setup or messaging test setup).

The following example, from pom.xml, shows how to specify the base test class:

<build>
        <plugins>
            <plugin>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-contract-maven-plugin</artifactId>
                <version>2.1.2.RELEASE</version>
                <extensions>true</extensions>
                <configuration>
                    <baseClassForTests>com.example.contractTest.BaseTestClass</baseClassForTests> (1)
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
1 The baseClassForTests element lets you specify your base test class. It must be a child of a configuration element within spring-cloud-contract-maven-plugin.

The following example shows a minimal (but functional) base test class:

package com.example.contractTest;

import org.junit.Before;

import io.restassured.module.mockmvc.RestAssuredMockMvc;

public class BaseTestClass {

    @Before
    public void setup() {
        RestAssuredMockMvc.standaloneSetup(new FraudController());
    }
}

This minimal class really is all you need to get your tests to work. It serves as a starting place to which the automatically generated tests attach.

Now we can move on to the implementation. For that, we first need a data class, which we then use in our controller. The following listing shows the data class:

package com.example.Test;

import com.fasterxml.jackson.annotation.JsonProperty;

public class LoanRequest {

    @JsonProperty("client.id")
    private String clientId;

    private Long loanAmount;

    public String getClientId() {
        return clientId;
    }

    public void setClientId(String clientId) {
        this.clientId = clientId;
    }

    public Long getLoanAmount() {
        return loanAmount;
    }

    public void setLoanAmount(Long loanAmount) {
        this.loanAmount = loanAmount;
    }
}

The preceding class provides an object in which we can store the parameters. Because the client ID in the contract is called client.id, we need to use the @JsonProperty("client.id") parameter to map it to the clientId field.

Now we can move along to the controller, which the following listing shows:

package com.example.docTest;

import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FraudController {

    @PutMapping(value = "/fraudcheck", consumes="application/json", produces="application/json")
    public String check(@RequestBody LoanRequest loanRequest) { (1)

        if (loanRequest.getLoanAmount() > 10000) { (2)
            return "{\"fraudCheckStatus\": \"FRAUD\", \"rejection.reason\": \"Amount too high\"}"; (3)
        } else {
            return "{\"fraudCheckStatus\": \"OK\", \"acceptance.reason\": \"Amount OK\"}"; (4)
        }
    }
}
1 We map the incoming parameters to a LoanRequest object.
2 We check the requested loan amount to see if it is too much.
3 If it is too much, we return the JSON (created with a simple string here) that the test expects.
4 If we had a test to catch when the amount is allowable, we could match it to this output.

The FraudController is about as simple as things get. You can do much more, including logging, validating the client ID, and so on.

Once the implementation and the test base class are in place, the tests pass, and both the application and the stub artifacts are built and installed in the local Maven repository. Information about installing the stubs jar to the local repository appears in the logs, as the following example shows:

[INFO] --- spring-cloud-contract-maven-plugin:1.0.0.BUILD-SNAPSHOT:generateStubs (default-generateStubs) @ http-server ---
[INFO] Building jar: /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar
[INFO]
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ http-server ---
[INFO] Building jar: /some/path/http-server/target/http-server-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.5.5.BUILD-SNAPSHOT:repackage (default) @ http-server ---
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ http-server ---
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT.jar
[INFO] Installing /some/path/http-server/pom.xml to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT.pom
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar

You can now merge the changes and publish both the application and the stub artifacts in an online repository.

On the Consumer Side

You can use Spring Cloud Contract Stub Runner in the integration tests to get a running WireMock instance or messaging route that simulates the actual service.

To get started, add the dependency to Spring Cloud Contract Stub Runner, as follows:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>

You can get the Producer-side stubs installed in your Maven repository in either of two ways:

  • By checking out the Producer side repository, adding contracts, and generating the stubs by running the following commands:

    $ cd local-http-server-repo
    $ ./mvnw clean install -DskipTests
    The tests are skipped because the Producer-side contract implementation is not yet in place, so the automatically-generated contract tests fail.
  • Getting already existing producer service stubs from a remote repository. To do so, pass the stub artifact IDs and artifact repository URl as Spring Cloud Contract Stub Runner properties, as the following example shows:

    stubrunner:
      ids: 'com.example:http-server-dsl:+:stubs:8080'
      repositoryRoot: https://repo.spring.io/libs-snapshot

Now you can annotate your test class with @AutoConfigureStubRunner. In the annotation, provide the group-id and artifact-id for Spring Cloud Contract Stub Runner to run the collaborators' stubs for you, as the following example shows:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment=WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"com.example:http-server-dsl:+:stubs:6565"},
        stubsMode = StubRunnerProperties.StubsMode.LOCAL)
public class LoanApplicationServiceTests {
Use the REMOTE stubsMode when downloading stubs from an online repository and LOCAL for offline work.

In your integration test, you can receive stubbed versions of HTTP responses or messages that are expected to be emitted by the collaborator service. You can see entries similar to the following in the build logs:

2016-07-19 14:22:25.403  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Desired version is + - will try to resolve the latest version
2016-07-19 14:22:25.438  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved version is 0.0.1-SNAPSHOT
2016-07-19 14:22:25.439  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolving artifact com.example:http-server:jar:stubs:0.0.1-SNAPSHOT using remote repositories []
2016-07-19 14:22:25.451  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved artifact com.example:http-server:jar:stubs:0.0.1-SNAPSHOT to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar
2016-07-19 14:22:25.465  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Unpacking stub from JAR [URI: file:/path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar]
2016-07-19 14:22:25.475  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Unpacked file to [/var/folders/0p/xwq47sq106x1_g3dtv6qfm940000gq/T/contracts100276532569594265]
2016-07-19 14:22:27.737  INFO 41050 --- [           main] o.s.c.c.stubrunner.StubRunnerExecutor    : All stubs are now running RunningStubs [namesAndPorts={com.example:http-server:0.0.1-SNAPSHOT:stubs=8080}]

14.1.4. Step-by-step Guide to Consumer Driven Contracts (CDC) with Contracts on the Producer Side

Consider an example of fraud detection and the loan issuance process. The business scenario is such that we want to issue loans to people but do not want them to steal from us. The current implementation of our system grants loans to everybody.

Assume that Loan Issuance is a client to the Fraud Detection server. In the current sprint, we must develop a new feature: if a client wants to borrow too much money, we mark the client as a fraud.

Technical remarks

  • Fraud Detection has an artifact-id of http-server

  • Loan Issuance has an artifact-id of http-client

  • Both have a group-id of com.example

  • For the sake of this example the Stub Storage is Nexus/Artifactory

Social remarks

  • Both the client and the server development teams need to communicate directly and discuss changes while going through the process

  • CDC is all about communication

In this case, the producer owns the contracts. Physically, all of the contracts are in the producer’s repository.
Technical Note

If you use the SNAPSHOT, Milestone, or Release Candidate versions you need to add the following section to your build:

Maven
<repositories>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
    <repository>
        <id>spring-releases</id>
        <name>Spring Releases</name>
        <url>https://repo.spring.io/release</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>
<pluginRepositories>
    <pluginRepository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </pluginRepository>
    <pluginRepository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </pluginRepository>
    <pluginRepository>
        <id>spring-releases</id>
        <name>Spring Releases</name>
        <url>https://repo.spring.io/release</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </pluginRepository>
</pluginRepositories>
Gradle
repositories {
    mavenCentral()
    mavenLocal()
    maven { url "https://repo.spring.io/snapshot" }
    maven { url "https://repo.spring.io/milestone" }
    maven { url "https://repo.spring.io/release" }
}

For simplicity, we use the following acronyms:

  • Loan Issuance (LI): The HTTP client

  • Fraud Detection (FD): The HTTP server

  • Spring Cloud Contract (SCC)

The Consumer Side (Loan Issuance)

As a developer of the Loan Issuance service (a consumer of the Fraud Detection server), you might do the following steps:

  1. Start doing TDD by writing a test for your feature.

  2. Write the missing implementation.

  3. Clone the Fraud Detection service repository locally.

  4. Define the contract locally in the repo of the fraud detection service.

  5. Add the Spring Cloud Contract (SCC) plugin.

  6. Run the integration tests.

  7. File a pull request.

  8. Create an initial implementation.

  9. Take over the pull request.

  10. Write the missing implementation.

  11. Deploy your app.

  12. Work online.

We start with the loan issuance flow, which the following UML diagram shows:

[UML diagram: getting started cdc client]
Start Doing TDD by Writing a Test for Your Feature

The following listing shows a test that we might use to check whether a loan amount is too large:

@Test
public void shouldBeRejectedDueToAbnormalLoanAmount() {
    // given:
    LoanApplication application = new LoanApplication(new Client("1234567890"),
            99999);
    // when:
    LoanApplicationResult loanApplication = service.loanApplication(application);
    // then:
    assertThat(loanApplication.getLoanApplicationStatus())
            .isEqualTo(LoanApplicationStatus.LOAN_APPLICATION_REJECTED);
    assertThat(loanApplication.getRejectionReason()).isEqualTo("Amount too high");
}

Assume that you have written a test of your new feature. If a loan application for a large amount is received, the system should reject that loan application with some description.

Write the Missing Implementation

At some point in time, you need to send a request to the Fraud Detection service. Assume that you need to send the request containing the ID of the client and the amount the client wants to borrow. You want to send it to the /fraudcheck URL by using the PUT method. To do so, you might use code similar to the following:

ResponseEntity<FraudServiceResponse> response = restTemplate.exchange(
        "http://localhost:" + port + "/fraudcheck", HttpMethod.PUT,
        new HttpEntity<>(request, httpHeaders), FraudServiceResponse.class);

For simplicity, the port of the Fraud Detection service is set to 8080, and the application runs on 8090.

If you start the test at this point, it breaks, because no service currently runs on port 8080.
Clone the Fraud Detection Service Repository Locally

You can start by playing around with the server-side contract. To do so, you must first clone it by running the following command:

$ git clone https://your-git-server.com/server-side.git local-http-server-repo
Define the Contract Locally in the Repository of the Fraud Detection Service

As a consumer, you need to define what exactly you want to achieve. You need to formulate your expectations. To do so, write the following contract:

Place the contract in the src/test/resources/contracts/fraud folder. The fraud folder is important because the producer’s test base class name references that folder.

The following example shows our contract, in both Groovy and YAML:

groovy
/*
 * Copyright 2013-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package contracts

org.springframework.cloud.contract.spec.Contract.make {
    request { // (1)
        method 'PUT' // (2)
        url '/fraudcheck' // (3)
        body([ // (4)
               "client.id": $(regex('[0-9]{10}')),
               loanAmount : 99999
        ])
        headers { // (5)
            contentType('application/json')
        }
    }
    response { // (6)
        status OK() // (7)
        body([ // (8)
               fraudCheckStatus  : "FRAUD",
               "rejection.reason": "Amount too high"
        ])
        headers { // (9)
            contentType('application/json')
        }
    }
}

/*
From the Consumer perspective, when shooting a request in the integration test:

(1) - If the consumer sends a request
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `client.id` that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/json`
(6) - then the response will be sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` equal to `application/json`

From the Producer perspective, in the autogenerated producer-side test:

(1) - A request will be sent to the producer
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `client.id` that will have a generated value that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/json`
(6) - then the test will assert if the response has been sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` matching `application/json.*`
 */
yaml
request: # (1)
  method: PUT # (2)
  url: /yamlfraudcheck # (3)
  body: # (4)
    "client.id": 1234567890
    loanAmount: 99999
  headers: # (5)
    Content-Type: application/json
  matchers:
    body:
      - path: $.['client.id'] # (6)
        type: by_regex
        value: "[0-9]{10}"
response: # (7)
  status: 200 # (8)
  body:  # (9)
    fraudCheckStatus: "FRAUD"
    "rejection.reason": "Amount too high"
  headers: # (10)
    Content-Type: application/json


#From the Consumer perspective, when shooting a request in the integration test:
#
#(1) - If the consumer sends a request
#(2) - With the "PUT" method
#(3) - to the URL "/yamlfraudcheck"
#(4) - with the JSON body that
# * has a field `client.id`
# * has a field `loanAmount` that is equal to `99999`
#(5) - with header `Content-Type` equal to `application/json`
#(6) - and a `client.id` json entry matches the regular expression `[0-9]{10}`
#(7) - then the response will be sent with
#(8) - status equal `200`
#(9) - and JSON body equal to
# { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
#(10) - with header `Content-Type` equal to `application/json`
#
#From the Producer perspective, in the autogenerated producer-side test:
#
#(1) - A request will be sent to the producer
#(2) - With the "PUT" method
#(3) - to the URL "/yamlfraudcheck"
#(4) - with the JSON body that
# * has a field `client.id` `1234567890`
# * has a field `loanAmount` that is equal to `99999`
#(5) - with header `Content-Type` equal to `application/json`
#(7) - then the test will assert if the response has been sent with
#(8) - status equal `200`
#(9) - and JSON body equal to
# { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
#(10) - with header `Content-Type` equal to `application/json`

The YML contract is quite straightforward. However, when you take a look at the contract written with the statically typed Groovy DSL, you might wonder what the value(client(…), server(…)) parts are. By using this notation, Spring Cloud Contract lets you define parts of a JSON block, a URL, or any other structure that is dynamic. In the case of an identifier or a timestamp, you need not hardcode a value. You want to allow some different range of values. To enable ranges of values, you can set regular expressions that match those values for the consumer side. You can provide the body by means of either a map notation or a String with interpolations. We highly recommend using the map notation.

You must understand the map notation in order to set up contracts. See the Groovy docs regarding JSON.
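
For illustration, a minimal sketch of this dynamic notation (using the consumer and producer aliases for client and server) might look as follows; the concrete producer-side value shown here is illustrative:

org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'PUT'
        url '/fraudcheck'
        body([
                // the stub matches any ten-digit ID on the consumer side,
                // while the generated producer-side test sends the concrete value
                "client.id": $(consumer(regex('[0-9]{10}')), producer('1234567890'))
        ])
        headers {
            contentType('application/json')
        }
    }
    response {
        status OK()
    }
}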

The previously shown contract is an agreement between two sides that:

  • If an HTTP request is sent with all of

    • A PUT method on the /fraudcheck endpoint

    • A JSON body with a client.id that matches the regular expression [0-9]{10} and loanAmount equal to 99999,

    • A Content-Type header with a value of application/json

  • Then an HTTP response is sent to the consumer that

    • Has status 200

    • Contains a JSON body with the fraudCheckStatus field containing a value of FRAUD and the rejectionReason field having a value of Amount too high

    • Has a Content-Type header with a value of application/json

Once you are ready to check the API in practice in the integration tests, you need to install the stubs locally.

Add the Spring Cloud Contract Verifier Plugin

We can add either a Maven or a Gradle plugin. In this example, we show how to add the Maven plugin. First, we add the Spring Cloud Contract BOM, as the following example shows:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud-release.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Next, add the Spring Cloud Contract Verifier Maven plugin, as the following example shows:

            <plugin>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-contract-maven-plugin</artifactId>
                <version>${spring-cloud-contract.version}</version>
                <extensions>true</extensions>
                <configuration>
                    <packageWithBaseClasses>com.example.fraud</packageWithBaseClasses>
<!--                    <convertToYaml>true</convertToYaml>-->
                </configuration>
                <!-- if additional dependencies are needed e.g. for Pact -->
                <dependencies>
                    <dependency>
                        <groupId>org.springframework.cloud</groupId>
                        <artifactId>spring-cloud-contract-pact</artifactId>
                        <version>${spring-cloud-contract.version}</version>
                    </dependency>
                </dependencies>
            </plugin>

Once the plugin has been added, you get the Spring Cloud Contract Verifier features, which, from the provided contracts:

  • Generate and run tests

  • Produce and install stubs

You do not want to generate tests, since you, as the consumer, want only to play with the stubs. You need to skip the test generation and execution. To do so, run the following commands:

$ cd local-http-server-repo
$ ./mvnw clean install -DskipTests

Once you run those commands, you should see something like the following content in the logs:

[INFO] --- spring-cloud-contract-maven-plugin:1.0.0.BUILD-SNAPSHOT:generateStubs (default-generateStubs) @ http-server ---
[INFO] Building jar: /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar
[INFO]
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ http-server ---
[INFO] Building jar: /some/path/http-server/target/http-server-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.5.5.BUILD-SNAPSHOT:repackage (default) @ http-server ---
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ http-server ---
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT.jar
[INFO] Installing /some/path/http-server/pom.xml to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT.pom
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar

The following line is extremely important:

[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar

It confirms that the stubs of the http-server have been installed in the local repository.

Running the Integration Tests

In order to profit from the Spring Cloud Contract Stub Runner functionality of automatic stub downloading, you must do the following in your consumer-side project (the Loan Issuance service):

  1. Add the Spring Cloud Contract BOM, as follows:

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud-release-train.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
  2. Add the dependency to Spring Cloud Contract Stub Runner, as follows:

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
        <scope>test</scope>
    </dependency>
  3. Annotate your test class with @AutoConfigureStubRunner. In the annotation, provide the group-id and artifact-id for the Stub Runner to download the stubs of your collaborators. (Optional step) Because you are playing with the collaborators offline, you can also provide the offline work switch (StubRunnerProperties.StubsMode.LOCAL).

    @RunWith(SpringRunner.class)
    @SpringBootTest(webEnvironment = WebEnvironment.NONE)
    @AutoConfigureStubRunner(ids = {
            "com.example:http-server-dsl:0.0.1:stubs" }, stubsMode = StubRunnerProperties.StubsMode.LOCAL)
    public class LoanApplicationServiceTests {

Now, when you run your tests, you see something like the following output in the logs:

2016-07-19 14:22:25.403  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Desired version is + - will try to resolve the latest version
2016-07-19 14:22:25.438  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved version is 0.0.1-SNAPSHOT
2016-07-19 14:22:25.439  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolving artifact com.example:http-server:jar:stubs:0.0.1-SNAPSHOT using remote repositories []
2016-07-19 14:22:25.451  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved artifact com.example:http-server:jar:stubs:0.0.1-SNAPSHOT to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar
2016-07-19 14:22:25.465  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Unpacking stub from JAR [URI: file:/path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar]
2016-07-19 14:22:25.475  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Unpacked file to [/var/folders/0p/xwq47sq106x1_g3dtv6qfm940000gq/T/contracts100276532569594265]
2016-07-19 14:22:27.737  INFO 41050 --- [           main] o.s.c.c.stubrunner.StubRunnerExecutor    : All stubs are now running RunningStubs [namesAndPorts={com.example:http-server:0.0.1-SNAPSHOT:stubs=8080}]

This output means that Stub Runner has found your stubs (group ID com.example, artifact ID http-server, version 0.0.1-SNAPSHOT, with the stubs classifier) and started a stub server for them on port 8080.

Filing a Pull Request

What you have done until now is an iterative process. You can play around with the contract, install it locally, and work on the consumer side until the contract works as you wish.

Once you are satisfied with the results and the test passes, you can publish a pull request to the server side. Currently, the consumer side work is done.

The Producer Side (Fraud Detection server)

As a developer of the Fraud Detection server (a server to the Loan Issuance service), you might want to do the following:

  • Take over the pull request

  • Write the missing implementation

  • Deploy the application

The following UML diagram shows the fraud detection flow:

[UML diagram: getting started cdc server]
Taking over the Pull Request

As a reminder, the following listing shows the initial implementation:

@RequestMapping(value = "/fraudcheck", method = PUT)
public FraudCheckResult fraudCheck(@RequestBody FraudCheck fraudCheck) {
    return new FraudCheckResult(FraudCheckStatus.OK, NO_REASON);
}

Then you can run the following commands:

$ git checkout -b contract-change-pr master
$ git pull https://your-git-server.com/server-side-fork.git contract-change-pr

You must add the dependencies needed by the autogenerated tests, as follows:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

In the configuration of the Maven plugin, you must pass the packageWithBaseClasses property, as follows:

            <plugin>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-contract-maven-plugin</artifactId>
                <version>${spring-cloud-contract.version}</version>
                <extensions>true</extensions>
                <configuration>
                    <packageWithBaseClasses>com.example.fraud</packageWithBaseClasses>
<!--                    <convertToYaml>true</convertToYaml>-->
                </configuration>
                <!-- if additional dependencies are needed e.g. for Pact -->
                <dependencies>
                    <dependency>
                        <groupId>org.springframework.cloud</groupId>
                        <artifactId>spring-cloud-contract-pact</artifactId>
                        <version>${spring-cloud-contract.version}</version>
                    </dependency>
                </dependencies>
            </plugin>

This example uses “convention-based” naming by setting the packageWithBaseClasses property. Doing so means that the last two packages combine to make the name of the base test class. In our case, the contracts were placed under src/test/resources/contracts/fraud. Since there is only one package level starting from the contracts folder, only fraud is used. Capitalize it and add the Base suffix. That gives you the FraudBase test class name.

All the generated tests extend that class. In that class, you can set up your Spring context or whatever else is necessary. In this case, you should use Rest Assured MVC to start the server-side FraudDetectionController. The following listing shows the FraudBase class:

/*
 * Copyright 2013-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.example.fraud;

import io.restassured.module.mockmvc.RestAssuredMockMvc;
import org.junit.Before;

public class FraudBase {

    @Before
    public void setup() {
        RestAssuredMockMvc.standaloneSetup(new FraudDetectionController(),
                new FraudStatsController(stubbedStatsProvider()));
    }

    private StatsProvider stubbedStatsProvider() {
        return fraudType -> {
            switch (fraudType) {
            case DRUNKS:
                return 100;
            case ALL:
                return 200;
            }
            return 0;
        };
    }

    public void assertThatRejectionReasonIsNull(Object rejectionReason) {
        assert rejectionReason == null;
    }

}

Now, if you run ./mvnw clean install, you get something like the following output:

Results :

Tests in error:
  ContractVerifierTest.validate_shouldMarkClientAsFraud:32 » IllegalState Parsed...

This error occurs because a test was generated from your new contract, and it fails because you have not yet implemented the feature. The auto-generated test looks like the following test method:

@Test
public void validate_shouldMarkClientAsFraud() throws Exception {
    // given:
        MockMvcRequestSpecification request = given()
                .header("Content-Type", "application/vnd.fraud.v1+json")
                .body("{\"client.id\":\"1234567890\",\"loanAmount\":99999}");

    // when:
        ResponseOptions response = given().spec(request)
                .put("/fraudcheck");

    // then:
        assertThat(response.statusCode()).isEqualTo(200);
        assertThat(response.header("Content-Type")).matches("application/vnd.fraud.v1.json.*");
    // and:
        DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
        assertThatJson(parsedJson).field("['fraudCheckStatus']").matches("[A-Z]{5}");
        assertThatJson(parsedJson).field("['rejection.reason']").isEqualTo("Amount too high");
}

If you used the Groovy DSL, you can see that all of the producer() parts of the Contract that were present in the value(consumer(…), producer(…)) blocks got injected into the test. If you used YAML, the same applies to the matchers sections of the response.

Note that, on the producer side, you are also doing TDD. The expectations are expressed in the form of a test. This test sends a request to our own application with the URL, headers, and body defined in the contract. It is also expecting precisely defined values in the response. In other words, you have the red part of red, green, and refactor. It is time to convert the red into the green.

Write the Missing Implementation

Because you know the expected input and expected output, you can write the missing implementation as follows:

@RequestMapping(value = "/fraudcheck", method = PUT)
public FraudCheckResult fraudCheck(@RequestBody FraudCheck fraudCheck) {
    if (amountGreaterThanThreshold(fraudCheck)) {
        return new FraudCheckResult(FraudCheckStatus.FRAUD, AMOUNT_TOO_HIGH);
    }
    return new FraudCheckResult(FraudCheckStatus.OK, NO_REASON);
}

When you run ./mvnw clean install again, the tests pass. Since the Spring Cloud Contract Verifier plugin adds the tests to the generated-test-sources, you can actually run those tests from your IDE.

Deploying Your Application

Once you finish your work, you can deploy your changes. To do so, you must first merge the branch by running the following commands:

$ git checkout master
$ git merge --no-ff contract-change-pr
$ git push origin master

Your CI might run a command such as ./mvnw clean deploy, which would publish both the application and the stub artifacts.

Consumer Side (Loan Issuance), Final Step

As a developer of the Loan Issuance service (a consumer of the Fraud Detection server), you want to:

  • Merge your feature branch to master

  • Switch to the online mode of working

The following UML diagram shows the final state of the process:

[UML diagram: getting started cdc client final]
Merging a Branch to Master

The following commands show one way to merge a branch into master with Git:

$ git checkout master
$ git merge --no-ff contract-change-pr
Working Online

Now you can disable the offline work for Spring Cloud Contract Stub Runner and indicate where the repository with your stubs is located. At this moment, the stubs of the server side are automatically downloaded from Nexus/Artifactory. To do so, set the value of stubsMode to REMOTE. The following code shows an example of achieving the same thing by changing the properties:

stubrunner:
  ids: 'com.example:http-server-dsl:+:stubs:8080'
  repositoryRoot: https://repo.spring.io/libs-snapshot
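
Equivalently, a minimal sketch of the annotation-based setup for working online might look as follows (the coordinates and the repository URL mirror the properties shown above):

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(
        ids = "com.example:http-server-dsl:+:stubs:8080",
        repositoryRoot = "https://repo.spring.io/libs-snapshot",
        stubsMode = StubRunnerProperties.StubsMode.REMOTE)
public class LoanApplicationServiceTests {
}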

That’s it. You have finished the tutorial.

14.1.5. Next Steps

Hopefully, this section provided some of the Spring Cloud Contract basics and got you on your way to writing your own applications. If you are a task-oriented type of developer, you might want to jump over to spring.io and check out some of the getting started guides that solve specific “How do I do that with Spring?” problems. We also have Spring Cloud Contract-specific “how-to” reference documentation.

Otherwise, the next logical step is to read Using Spring Cloud Contract. If you are really impatient, you could also jump ahead and read about Spring Cloud Contract features.

In addition, you can check out the following videos:

  • "Consumer Driven Contracts and Your Microservice Architecture" by Olga Maciaszek-Sharma and Marcin Grzejszczak

  • "Contract Tests in the Enterprise" by Marcin Grzejszczak

  • "Why Contract Tests Matter?" by Marcin Grzejszczak

You can find the default project samples at samples.

You can find the Spring Cloud Contract workshops here.

14.2. Using Spring Cloud Contract

This section goes into more detail about how you should use Spring Cloud Contract. It covers topics such as the flows of working with Spring Cloud Contract. We also cover some Spring Cloud Contract best practices.

If you are starting out with Spring Cloud Contract, you should probably read the Getting Started guide before diving into this section.

14.2.1. Provider Contract Testing with Stubs in Nexus or Artifactory

You can check the Developing Your First Spring Cloud Contract based application link to see the flow for provider contract testing with stubs in Nexus or Artifactory.

You can also check the workshop page for step-by-step instructions on how to do this flow.

14.2.2. Provider Contract Testing with Stubs in Git

In this flow, we perform provider contract testing (the producer has no knowledge of how consumers use its API). The stubs are uploaded to a separate repository (they are not uploaded to Artifactory or Nexus).

Prerequisites

Before testing provider contracts with stubs in git, you must provide a git repository that contains all the stubs for each producer. For an example of such a project, see this sample or this sample. As a result of pushing stubs there, the repository has the following structure:

$ tree .
└── META-INF
   └── folder.with.group.id.as.its.name
       └── folder-with-artifact-id
           └── folder-with-version
               ├── contractA.groovy
               ├── contractB.yml
               └── contractC.groovy

You must also provide consumer code that has Spring Cloud Contract Stub Runner set up. For an example of such a project, see this sample and search for a BeerControllerGitTest test. You must also provide producer code that has Spring Cloud Contract set up, together with a plugin. For an example of such a project, see this sample.

The Flow

The flow looks exactly like the one presented in Developing Your First Spring Cloud Contract based application, but the Stub Storage implementation is a git repository.

You can read more about setting up a git repository and setting up the consumer and producer sides in the How To page of the documentation.

Consumer setup

In order to fetch the stubs from a git repository instead of Nexus or Artifactory, you need to use the git protocol in the URL of the repositoryRoot property in Stub Runner. The following example shows how to set it up:

Annotation
@AutoConfigureStubRunner(
        stubsMode = StubRunnerProperties.StubsMode.REMOTE,
        repositoryRoot = "git://git@github.com:spring-cloud-samples/spring-cloud-contract-nodejs-contracts-git.git",
        ids = "com.example:artifact-id:0.0.1")
JUnit 4 Rule
@Rule
public StubRunnerRule rule = new StubRunnerRule()
        .downloadStub("com.example", "artifact-id", "0.0.1")
        .repoRoot("git://git@github.com:spring-cloud-samples/spring-cloud-contract-nodejs-contracts-git.git")
        .stubsMode(StubRunnerProperties.StubsMode.REMOTE);
JUnit 5 Extension
@RegisterExtension
public StubRunnerExtension stubRunnerExtension = new StubRunnerExtension()
        .downloadStub("com.example", "artifact-id", "0.0.1")
        .repoRoot("git://git@github.com:spring-cloud-samples/spring-cloud-contract-nodejs-contracts-git.git")
        .stubsMode(StubRunnerProperties.StubsMode.REMOTE);
Setting up the Producer

In order to push the stubs to a git repository instead of Nexus or Artifactory, you need to use the git protocol in the URL of the plugin setup. Also, you need to explicitly tell the plugin to push the stubs at the end of the build process. The following examples show how to do so:

maven
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
    <configuration>
        <!-- Base class mappings etc. -->

        <!-- We want to pick contracts from a Git repository -->
        <contractsRepositoryUrl>git://git@github.com:spring-cloud-samples/spring-cloud-contract-nodejs-contracts-git.git</contractsRepositoryUrl>

        <!-- We reuse the contract dependency section to set up the path
        to the folder that contains the contract definitions. In our case the
        path will be /groupId/artifactId/version/contracts -->
        <contractDependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>${project.artifactId}</artifactId>
            <version>${project.version}</version>
        </contractDependency>

        <!-- The contracts mode can't be classpath -->
        <contractsMode>REMOTE</contractsMode>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <!-- By default we will not push the stubs back to SCM,
                you have to explicitly add it as a goal -->
                <goal>pushStubsToScm</goal>
            </goals>
        </execution>
    </executions>
</plugin>
gradle
contracts {
    // We want to pick contracts from a Git repository
    contractDependency {
        stringNotation = "${project.group}:${project.name}:${project.version}"
    }
    /*
    We reuse the contract dependency section to set up the path
    to the folder that contains the contract definitions. In our case the
    path will be /groupId/artifactId/version/contracts
     */
    contractRepository {
        repositoryUrl = "git://git://[email protected]:spring-cloud-samples/spring-cloud-contract-nodejs-contracts-git.git"
    }
    // The mode can't be classpath
    contractsMode = "REMOTE"
    // Base class mappings etc.
}

/*
In this scenario we want to publish stubs to SCM whenever
the `publish` task is executed
*/
publish.dependsOn("publishStubsToScm")

You can read more about setting up a git repository in the How To page of the documentation.

14.2.4. Consumer Driven Contracts with Contracts in an External Repository

In this flow, we perform Consumer Driven Contract testing. The contract definitions are stored in a separate repository.

See the workshop page for step-by-step instructions on how to do this flow.

Prerequisites

To use consumer-driven contracts with the contracts held in an external repository, you need to set up a git repository that:

  • Contains all the contract definitions for each producer.

  • Can package the contract definitions in a JAR.

  • For each contract producer, contains a way (for example, pom.xml) to install stubs locally through the Spring Cloud Contract Plugin (SCC Plugin).

For more information, see the How To section, where we describe how to set up such a repository. For an example of such a project, see this sample.

You also need consumer code that has Spring Cloud Contract Stub Runner set up. For an example of such a project, see this sample. You also need producer code that has Spring Cloud Contract set up, together with a plugin. For an example of such a project, see this sample. The stub storage is Nexus or Artifactory.

At a high level, the flow looks as follows:

  1. The consumer works with the contract definitions from the separate repository.

  2. Once the consumer’s work is done, a branch with the working code is created on the consumer side, and a pull request is made to the separate repository that holds the contract definitions.

  3. The producer takes over the pull request to the separate repository with contract definitions and installs the JAR with all contracts locally.

  4. The producer generates tests from the locally stored JAR and writes the missing implementation to make the tests pass.

  5. Once the producer’s work is done, the pull request to the repository that holds the contract definitions is merged.

  6. After the CI tool builds the repository with the contract definitions and the JAR with contract definitions gets uploaded to Nexus or Artifactory, the producer can merge its branch.

  7. Finally, the consumer can switch to working online to fetch stubs of the producer from a remote location, and the branch can be merged to master.

Consumer Flow

The consumer:

  1. Writes a test that would send a request to the producer.

    The test fails due to no server being present.

  2. Clones the repository that holds the contract definitions.

  3. Sets up the requirements as contracts under the folder with the consumer name as a subfolder of the producer.

    For example, for a producer named producer and a consumer named consumer, the contracts would be stored under src/main/resources/contracts/producer/consumer/.

  4. Once the contracts are defined, installs the producer stubs to local storage, as the following example shows:

    $ cd src/main/resources/contracts/producer
    $ ./mvnw clean install
  5. Sets up Spring Cloud Contract (SCC) Stub Runner in the consumer tests, to:

    • Fetch the producer stubs from local storage.

    • Work in the stubs-per-consumer mode (this enables the consumer driven contracts mode; see the sketch after this list).

      The SCC Stub Runner:

    • Fetches the producer stubs.

    • Runs an in-memory HTTP server stub with the producer stubs.

    • Now your test communicates with the HTTP server stub, and your tests pass.

  6. Creates a pull request to the repository with contract definitions, with the new contracts for the producer.

  7. Branches the consumer code until the producer team merges their code.
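
For step 5, a minimal sketch of such a test setup might look as follows (the producer coordinates and the port are illustrative):

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(
        ids = "com.example:producer:+:stubs:8095",
        stubsMode = StubRunnerProperties.StubsMode.LOCAL,
        // load only the stubs that were defined for this particular consumer
        stubsPerConsumer = true)
public class ConsumerApplicationTests {
}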

The following UML diagram shows the consumer flow:

[UML diagram: flow overview consumer cdc external consumer]
Producer Flow

The producer:

  1. Takes over the pull request to the repository with contract definitions. You can do it from the command line, as follows:

    $ git checkout -b the_branch_with_pull_request master
    $ git pull https://github.com/user_id/project_name.git the_branch_with_pull_request
  2. Installs the contract definitions, as follows:

    $ ./mvnw clean install
  3. Sets up the plugin to fetch the contract definitions from a JAR instead of from src/test/resources/contracts, as follows:

    Maven
    <plugin>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-contract-maven-plugin</artifactId>
        <version>${spring-cloud-contract.version}</version>
        <extensions>true</extensions>
        <configuration>
            <!-- We want to use the JAR with contracts with the following coordinates -->
            <contractDependency>
                <groupId>com.example</groupId>
                <artifactId>beer-contracts</artifactId>
            </contractDependency>
            <!-- The JAR with contracts should be taken from Maven local -->
            <contractsMode>LOCAL</contractsMode>
            <!-- ... additional configuration -->
        </configuration>
    </plugin>
    Gradle
    contracts {
        // We want to use the JAR with contracts with the following coordinates
        // group id `com.example`, artifact id `beer-contracts`, LATEST version and NO classifier
        contractDependency {
            stringNotation = 'com.example:beer-contracts:+:'
        }
        // The JAR with contracts should be taken from Maven local
        contractsMode = "LOCAL"
        // Additional configuration
    }
  4. Runs the build to generate tests and stubs, as follows:

    Maven
    ./mvnw clean install
    Gradle
    ./gradlew clean build
  5. Writes the missing implementation, to make the tests pass.

  6. Merges the pull request to the repository with contract definitions, as follows:

    $ git commit -am "Finished the implementation to make the contract tests pass"
    $ git checkout master
    $ git merge --no-ff the_branch_with_pull_request
    $ git push origin master
  7. The CI system builds the project with the contract definitions and uploads the JAR with the contract definitions to Nexus or Artifactory.

  8. Switches to working remotely.

  9. Sets up the plugin so that the contract definitions are no longer taken from the local storage but from a remote location, as follows:

    Maven
    <plugin>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-contract-maven-plugin</artifactId>
        <version>${spring-cloud-contract.version}</version>
        <extensions>true</extensions>
        <configuration>
            <!-- We want to use the JAR with contracts with the following coordinates -->
            <contractDependency>
                <groupId>com.example</groupId>
                <artifactId>beer-contracts</artifactId>
            </contractDependency>
            <!-- The JAR with contracts should be taken from a remote location -->
            <contractsMode>REMOTE</contractsMode>
            <!-- ... additional configuration -->
        </configuration>
    </plugin>
    Gradle
    contracts {
        // We want to use the JAR with contracts with the following coordinates
        // group id `com.example`, artifact id `beer-contracts`, LATEST version and NO classifier
        contractDependency {
            stringNotation = 'com.example:beer-contracts:+:'
        }
        // The JAR with contracts should be taken from a remote location
        contractsMode = "REMOTE"
        // Additional configuration
    }
  10. Merges the producer code with the new implementation.

  11. The CI system:

    • Builds the project

    • Generates tests, stubs, and the stub JAR

    • Uploads the artifact with the application and the stubs to Nexus or Artifactory.

The following UML diagram shows the producer process:

[UML diagram: flow overview consumer cdc external producer]

14.2.5. Consumer Driven Contracts with Contracts on the Producer Side, Pushed to Git

You can check Step-by-step Guide to Consumer Driven Contracts (CDC) with Contracts on the Producer Side to see the flow for consumer driven contracts with contracts on the producer side.

The stub storage implementation is a git repository. We describe its setup in the Provider Contract Testing with Stubs in Git section.

You can read more about setting up a git repository for the consumer and producer sides in the How To page of the documentation.

14.2.6. Provider Contract Testing with Stubs in Artifactory for a non-Spring Application

The Flow

You can check Developing Your First Spring Cloud Contract based application to see the flow for provider contract testing with stubs in Nexus or Artifactory.

Setting up the Consumer

For the consumer side, you can use a JUnit rule. That way, you need not start a Spring context. The following listing shows such a rule (in JUnit 4) and extension (in JUnit 5):

JUnit 4 Rule
@Rule
public StubRunnerRule rule = new StubRunnerRule()
        .downloadStub("com.example", "artifact-id", "0.0.1")
        .repoRoot("git://git@github.com:spring-cloud-samples/spring-cloud-contract-nodejs-contracts-git.git")
        .stubsMode(StubRunnerProperties.StubsMode.REMOTE);
JUnit 5 Extension
@RegisterExtension
public StubRunnerExtension stubRunnerExtension = new StubRunnerExtension()
        .downloadStub("com.example", "artifact-id", "0.0.1")
        .repoRoot("git://git@github.com:spring-cloud-samples/spring-cloud-contract-nodejs-contracts-git.git")
        .stubsMode(StubRunnerProperties.StubsMode.REMOTE);
Setting up the Producer

By default, the Spring Cloud Contract Plugin uses Rest Assured’s MockMvc setup for the generated tests. Since non-Spring applications do not use MockMvc, you can change the testMode to EXPLICIT to send a real request to an application bound at a specific port.

In this example, we use a framework called Javalin to start a non-Spring HTTP server.

Assume that we have the following application:

package com.example.demo;

import io.javalin.Javalin;

public class DemoApplication {

    public static void main(String[] args) {
        new DemoApplication().run(7000);
    }

    public Javalin start(int port) {
        return Javalin.create().start(port);
    }

    public Javalin registerGet(Javalin app) {
        return app.get("/", ctx -> ctx.result("Hello World"));
    }

    public Javalin run(int port) {
        return registerGet(start(port));
    }

}

Given that application, we can set up the plugin to use the EXPLICIT mode (that is, to send out requests to a real port), as follows:

maven
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
    <configuration>
        <baseClassForTests>com.example.demo.BaseClass</baseClassForTests>
        <!-- This will setup the EXPLICIT mode for the tests -->
        <testMode>EXPLICIT</testMode>
    </configuration>
</plugin>
gradle
contracts {
    // This will setup the EXPLICIT mode for the tests
    testMode = "EXPLICIT"
    baseClassForTests = "com.example.demo.BaseClass"
}

The base class might resemble the following:

import io.javalin.Javalin;
import io.restassured.RestAssured;
import org.junit.After;
import org.junit.Before;
import org.springframework.util.SocketUtils;

public class BaseClass {

    Javalin app;

    @Before
    public void setup() {
        // pick a random port
        int port = SocketUtils.findAvailableTcpPort();
        // start the application at a random port
        this.app = start(port);
        // tell Rest Assured where the started application is
        RestAssured.baseURI = "http://localhost:" + port;
    }

    @After
    public void close() {
        // stop the server after each test
        this.app.stop();
    }

    private Javalin start(int port) {
        // reuse the production logic to start a server
        return new DemoApplication().run(port);
    }
}

With such a setup:

  • We have set up the Spring Cloud Contract plugin to use the EXPLICIT mode to send real requests instead of mocked ones.

  • We have defined a base class that:

    • Starts the HTTP server on a random port for each test.

    • Sets Rest Assured to send requests to that port.

    • Closes the HTTP server after each test.

14.2.7. Provider Contract Testing with Stubs in Artifactory in a non-JVM World

In this flow, we assume that:

  • The API Producer and API Consumer are non-JVM applications.

  • The contract definitions are written in YAML.

  • The Stub Storage is Artifactory or Nexus.

  • Spring Cloud Contract Docker (SCC Docker) and Spring Cloud Contract Stub Runner Docker (SCC Stub Runner Docker) images are used.

You can read more about how to use Spring Cloud Contract with Docker on this page.

Here, you can read a blog post about how to use Spring Cloud Contract in a polyglot world.

Here, you can find a sample of a NodeJS application that uses Spring Cloud Contract both as a producer and a consumer.

Producer Flow

At a high level, the producer:

  1. Writes contract definitions (for example, in YAML).

  2. Sets up the build tool to:

    1. Start the application with mocked services on a given port.

      If mocking is not possible, you can set up the infrastructure and define tests in a stateful way.

    2. Run the Spring Cloud Contract Docker image and pass the port of a running application as an environment variable.

The SCC Docker image:

  • Generates the tests from the attached volume.

  • Runs the tests against the running application.

Upon test completion, stubs get uploaded to a stub storage site (such as Artifactory or Git).

The following UML diagram shows the producer flow:

[UML diagram: flows provider non jvm producer]
Consumer Flow

At a high level, the consumer:

  1. Sets up the build tool to:

    • Start the Spring Cloud Contract Stub Runner Docker image and start the stubs.

      The environment variables configure:

    • The stubs to fetch.

    • The location of the repositories.

      Note that:

    • To use the local storage, you can also attach it as a volume.

    • The ports at which the stubs are running need to be exposed.

  2. Runs the application tests against the running stubs.

The following UML diagram shows the consumer flow:

[UML diagram: flows provider non jvm consumer]

14.2.8. Provider Contract Testing with REST Docs and Stubs in Nexus or Artifactory

In this flow, we do not use a Spring Cloud Contract plugin to generate tests and stubs. We write Spring RESTDocs tests and, from them, automatically generate stubs. Finally, we set up our builds to package the stubs and upload them to the stub storage site (in our case, Nexus or Artifactory).

See the workshop page for step-by-step instructions on how to use this flow.

Producer Flow

As a producer:

  1. We write RESTDocs tests of our API (see the sketch after this list).

  2. We add the Spring Cloud Contract Stub Runner starter to our build (spring-cloud-starter-contract-stub-runner), as follows:

    maven
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    gradle
    dependencies {
        testImplementation 'org.springframework.cloud:spring-cloud-starter-contract-stub-runner'
    }
    
    dependencyManagement {
        imports {
            mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
        }
    }
  3. We set up the build tool to package our stubs, as follows:

    maven
    <!-- pom.xml -->
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <executions>
                <execution>
                    <id>stub</id>
                    <phase>prepare-package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                    <inherited>false</inherited>
                    <configuration>
                        <attach>true</attach>
                        <descriptors>
                            <descriptor>${basedir}/src/assembly/stub.xml</descriptor>
                        </descriptors>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
    
    <!-- src/assembly/stub.xml -->
    <assembly
        xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">
        <id>stubs</id>
        <formats>
            <format>jar</format>
        </formats>
        <includeBaseDirectory>false</includeBaseDirectory>
        <fileSets>
            <fileSet>
                <directory>${project.build.directory}/generated-snippets/stubs</directory>
                <outputDirectory>META-INF/${project.groupId}/${project.artifactId}/${project.version}/mappings</outputDirectory>
                <includes>
                    <include>**/*</include>
                </includes>
            </fileSet>
        </fileSets>
    </assembly>
    gradle
    task stubsJar(type: Jar) {
        classifier = "stubs"
        into("META-INF/${project.group}/${project.name}/${project.version}/mappings") {
            include('**/*.*')
            from("${project.buildDir}/generated-snippets/stubs")
        }
    }
    // we need the tests to pass to build the stub jar
    stubsJar.dependsOn(test)
    bootJar.dependsOn(stubsJar)
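
For step 1, a minimal sketch of such a RESTDocs test might look as follows. The endpoint, the JSON path, and the surrounding test class (a MockMvc instance with REST Docs auto-configured) are illustrative assumptions; the document(…) identifier becomes the name of the generated stub:

import org.junit.Test;
import org.springframework.http.MediaType;

import static org.springframework.cloud.contract.wiremock.restdocs.WireMockRestDocs.verify;
import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

// inside a test class annotated with, e.g., @AutoConfigureRestDocs and @AutoConfigureMockMvc
@Test
public void should_document_and_stub_fraud_stats() throws Exception {
    mockMvc.perform(get("/frauds").accept(MediaType.APPLICATION_JSON))
            .andExpect(status().isOk())
            // records this request/response pair as a WireMock stub
            .andDo(verify().jsonPath("$.count"))
            // the identifier becomes the stub (and snippet) name under generated-snippets/stubs
            .andDo(document("should-count-frauds"));
}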

Now, when we run the tests, stubs are automatically published and packaged.

The following UML diagram shows the producer flow:

[UML diagram: flows provider rest docs producer]
Consumer Flow

Since the consumer flow is not affected by the tool used to generate the stubs, you can check Developing Your First Spring Cloud Contract based application to see the flow for consumer side of the provider contract testing with stubs in Nexus or Artifactory.

14.2.9. What to Read Next

You should now understand how you can use Spring Cloud Contract and some best practices that you should follow. You can now go on to learn about specific Spring Cloud Contract features, or you could skip ahead and read about the advanced features of Spring Cloud Contract.

14.3. Spring Cloud Contract Features

This section dives into the details of Spring Cloud Contract. Here you can learn about the key features that you may want to use and customize. If you have not already done so, you might want to read the "getting-started.html" and "using.html" sections, so that you have a good grounding in the basics.

14.3.1. Contract DSL

Spring Cloud Contract supports DSLs written in the following languages:

  • Groovy

  • YAML

  • Java

  • Kotlin

Spring Cloud Contract supports defining multiple contracts in a single file.
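
For example, in Groovy, you can return a collection of contracts from one file. The following minimal sketch (the /a and /b endpoints are illustrative) shows the idea:

import org.springframework.cloud.contract.spec.Contract

[
    Contract.make {
        name "should return ok for a"
        request {
            method 'GET'
            url '/a'
        }
        response {
            status OK()
        }
    },
    Contract.make {
        name "should return ok for b"
        request {
            method 'GET'
            url '/b'
        }
        response {
            status OK()
        }
    }
]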

The following example shows a contract definition:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'PUT'
        url '/api/12'
        headers {
            header 'Content-Type': 'application/vnd.org.springframework.cloud.contract.verifier.twitter-places-analyzer.v1+json'
        }
        body '''\
    [{
        "created_at": "Sat Jul 26 09:38:57 +0000 2014",
        "id": 492967299297845248,
        "id_str": "492967299297845248",
        "text": "Gonna see you at Warsaw",
        "place":
        {
            "attributes":{},
            "bounding_box":
            {
                "coordinates":
                    [[
                        [-77.119759,38.791645],
                        [-76.909393,38.791645],
                        [-76.909393,38.995548],
                        [-77.119759,38.995548]
                    ]],
                "type":"Polygon"
            },
            "country":"United States",
            "country_code":"US",
            "full_name":"Washington, DC",
            "id":"01fbe706f872cb32",
            "name":"Washington",
            "place_type":"city",
            "url": "https://api.twitter.com/1/geo/id/01fbe706f872cb32.json"
        }
    }]
'''
    }
    response {
        status OK()
    }
}
yml
description: Some description
name: some name
priority: 8
ignored: true
request:
  url: /foo
  queryParameters:
    a: b
    b: c
  method: PUT
  headers:
    foo: bar
    fooReq: baz
  body:
    foo: bar
  matchers:
    body:
      - path: $.foo
        type: by_regex
        value: bar
    headers:
      - key: foo
        regex: bar
response:
  status: 200
  headers:
    foo2: bar
    foo3: foo33
    fooRes: baz
  body:
    foo2: bar
    foo3: baz
    nullValue: null
  matchers:
    body:
      - path: $.foo2
        type: by_regex
        value: bar
      - path: $.foo3
        type: by_command
        value: executeMe($it)
      - path: $.nullValue
        type: by_null
        value: null
    headers:
      - key: foo2
        regex: bar
      - key: foo3
        command: andMeToo($it)
java
import java.util.Collection;
import java.util.Collections;
import java.util.function.Supplier;

import org.springframework.cloud.contract.spec.Contract;
import org.springframework.cloud.contract.verifier.util.ContractVerifierUtil;

class contract_rest implements Supplier<Collection<Contract>> {

    @Override
    public Collection<Contract> get() {
        return Collections.singletonList(Contract.make(c -> {
            c.description("Some description");
            c.name("some name");
            c.priority(8);
            c.ignored();
            c.request(r -> {
                r.url("/foo", u -> {
                    u.queryParameters(q -> {
                        q.parameter("a", "b");
                        q.parameter("b", "c");
                    });
                });
                r.method(r.PUT());
                r.headers(h -> {
                    h.header("foo", r.value(r.client(r.regex("bar")), r.server("bar")));
                    h.header("fooReq", "baz");
                });
                r.body(ContractVerifierUtil.map().entry("foo", "bar"));
                r.bodyMatchers(m -> {
                    m.jsonPath("$.foo", m.byRegex("bar"));
                });
            });
            c.response(r -> {
                r.fixedDelayMilliseconds(1000);
                r.status(r.OK());
                r.headers(h -> {
                    h.header("foo2", r.value(r.server(r.regex("bar")), r.client("bar")));
                    h.header("foo3", r.value(r.server(r.execute("andMeToo($it)")),
                            r.client("foo33")));
                    h.header("fooRes", "baz");
                });
                r.body(ContractVerifierUtil.map().entry("foo2", "bar")
                        .entry("foo3", "baz").entry("nullValue", null));
                r.bodyMatchers(m -> {
                    m.jsonPath("$.foo2", m.byRegex("bar"));
                    m.jsonPath("$.foo3", m.byCommand("executeMe($it)"));
                    m.jsonPath("$.nullValue", m.byNull());
                });
            });
        }));
    }

}
kotlin
import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract
import org.springframework.cloud.contract.spec.withQueryParameters

contract {
    name = "some name"
    description = "Some description"
    priority = 8
    ignored = true
    request {
        url = url("/foo") withQueryParameters  {
            parameter("a", "b")
            parameter("b", "c")
        }
        method = PUT
        headers {
            header("foo", value(client(regex("bar")), server("bar")))
            header("fooReq", "baz")
        }
        body = body(mapOf("foo" to "bar"))
        bodyMatchers {
            jsonPath("$.foo", byRegex("bar"))
        }
    }
    response {
        delay = fixedMilliseconds(1000)
        status = OK
        headers {
            header("foo2", value(server(regex("bar")), client("bar")))
            header("foo3", value(server(execute("andMeToo(\$it)")), client("foo33")))
            header("fooRes", "baz")
        }
        body = body(mapOf(
                "foo" to "bar",
                "foo3" to "baz",
                "nullValue" to null
        ))
        bodyMatchers {
            jsonPath("$.foo2", byRegex("bar"))
            jsonPath("$.foo3", byCommand("executeMe(\$it)"))
            jsonPath("$.nullValue", byNull)
        }
    }
}

You can compile contracts to stub mappings by using the following standalone Maven command:

mvn org.springframework.cloud:spring-cloud-contract-maven-plugin:convert
Contract DSL in Groovy

If you are not familiar with Groovy, do not worry: you can use Java syntax in the Groovy DSL files as well.

If you decide to write the contract in Groovy, do not be alarmed if you have not used Groovy before. Knowledge of the language is not really needed, as the Contract DSL uses only a tiny subset of it (only literals, method calls, and closures). Also, the DSL is statically typed, to make it programmer-readable without any knowledge of the DSL itself.

Remember that, inside the Groovy contract file, you have to provide the fully qualified name to the Contract class and make static imports, such as org.springframework.cloud.contract.spec.Contract.make { … }. You can also provide an import to the Contract class (import org.springframework.cloud.contract.spec.Contract) and then call Contract.make { … }.
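
For example, a complete minimal contract file using the import form might look as follows (the /ping endpoint and the response body are illustrative):

import org.springframework.cloud.contract.spec.Contract

Contract.make {
    request {
        method 'GET'
        url '/ping'
    }
    response {
        status OK()
        body("pong")
    }
}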
Contract DSL in Java

To write a contract definition in Java, you need to create a class that implements either the Supplier<Contract> interface for a single contract or Supplier<Collection<Contract>> for multiple contracts.

You can also write the contract definitions under src/test/java (e.g. src/test/java/contracts) so that you do not have to modify the classpath of your project. In this case, you have to provide the new location of the contract definitions to the Spring Cloud Contract plugin.

Maven
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
    <configuration>
        <contractsDirectory>src/test/java/contracts</contractsDirectory>
    </configuration>
</plugin>
Gradle
contracts {
    contractsDslDir = new File(project.rootDir, "src/test/java/contracts")
}
Contract DSL in Kotlin

To get started with writing contracts in Kotlin, you need to start with a newly created Kotlin Script file (.kts). Just like with the Java DSL, you can put your contracts in any directory of your choice. The Maven and Gradle plugins look at the src/test/resources/contracts directory by default.

You need to explicitly pass the spring-cloud-contract-spec-kotlin dependency to your project plugin setup.

Maven
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
    <configuration>
        <!-- some config -->
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-contract-spec-kotlin</artifactId>
            <version>${spring-cloud-contract.version}</version>
        </dependency>
    </dependencies>
</plugin>

<dependencies>
        <!-- Remember to add this for the DSL support in the IDE and on the consumer side -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-contract-spec-kotlin</artifactId>
            <scope>test</scope>
        </dependency>
</dependencies>
Gradle
buildscript {
    repositories {
        // ...
    }
    dependencies {
        classpath "org.springframework.cloud:spring-cloud-contract-gradle-plugin:${scContractVersion}"
        // remember to add this:
        classpath "org.springframework.cloud:spring-cloud-contract-spec-kotlin:${scContractVersion}"
    }
}

dependencies {
    // ...

    // Remember to add this for the DSL support in the IDE and on the consumer side
    testImplementation "org.springframework.cloud:spring-cloud-contract-spec-kotlin"
}
Remember that, inside the Kotlin Script file, you have to provide the fully qualified name to the ContractDSL class. Generally you would use its contract function like this: org.springframework.cloud.contract.spec.ContractDsl.contract { …​ }. You can also provide an import to the contract function (import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract) and then call contract { …​ }.
Contract DSL in YML

In order to see a schema of a YAML contract, you can check out the YML Schema page.

Limitations
The support for verifying the size of JSON arrays is experimental. If you want to turn it on, set the value of the following system property to true: spring.cloud.contract.verifier.assert.size. By default, this feature is set to false. You can also set the assertJsonSize property in the plugin configuration.
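For example, turning the size assertion on at the plugin level could look like the following Gradle sketch (this assumes the assertJsonSize flag mentioned above is exposed on the contracts extension):

Gradle
contracts {
    // assumption: enables the experimental JSON array size assertions
    assertJsonSize = true
}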
Because JSON structure can have any form, it can be impossible to parse it properly when using the Groovy DSL and the value(consumer(…​), producer(…​)) notation in GString. That is why you should use the Groovy Map notation.
Common Top-Level Elements

The following sections describe the most common top-level elements:

Description

You can add a description to your contract. The description is arbitrary text. The following code shows an example:

groovy
            org.springframework.cloud.contract.spec.Contract.make {
                description('''
given:
    An input
when:
    Sth happens
then:
    Output
''')
            }
yml
description: Some description
name: some name
priority: 8
ignored: true
request:
  url: /foo
  queryParameters:
    a: b
    b: c
  method: PUT
  headers:
    foo: bar
    fooReq: baz
  body:
    foo: bar
  matchers:
    body:
      - path: $.foo
        type: by_regex
        value: bar
    headers:
      - key: foo
        regex: bar
response:
  status: 200
  headers:
    foo2: bar
    foo3: foo33
    fooRes: baz
  body:
    foo2: bar
    foo3: baz
    nullValue: null
  matchers:
    body:
      - path: $.foo2
        type: by_regex
        value: bar
      - path: $.foo3
        type: by_command
        value: executeMe($it)
      - path: $.nullValue
        type: by_null
        value: null
    headers:
      - key: foo2
        regex: bar
      - key: foo3
        command: andMeToo($it)
java
Contract.make(c -> {
    c.description("Some description");
});
kotlin
contract {
    description = """
given:
    An input
when:
    Sth happens
then:
    Output
"""
}
Name

You can provide a name for your contract. Assume that you provided the following name: should register a user. If you do so, the name of the autogenerated test is validate_should_register_a_user. Also, the name of the generated WireMock stub is should_register_a_user.json.

You must ensure that the name does not contain any characters that would prevent the generated test from compiling. Also, remember that, if you provide the same name for multiple contracts, your autogenerated tests fail to compile and your generated stubs override each other.

The following example shows how to add a name to a contract:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    name("some_special_name")
}
yml
name: some name
java
Contract.make(c -> {
    c.name("some name");
});
kotlin
contract {
    name = "some_special_name"
}
Ignoring Contracts

If you want to ignore a contract, you can either set a value for ignored contracts in the plugin configuration or set the ignored property on the contract itself. The following example shows how to do so:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    ignored()
}
yml
ignored: true
java
Contract.make(c -> {
    c.ignored();
});
kotlin
contract {
    ignored = true
}
Contracts in Progress

A contract in progress will not generate tests on the producer side, but will allow generation of stubs.

Use this feature with caution as it may lead to false positives. You generate stubs for your consumers to use without actually having the implementation in place!

If you want to set a contract to be in progress, the following example shows how to do so:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    inProgress()
}
yml
inProgress: true
java
Contract.make(c -> {
    c.inProgress();
});
kotlin
contract {
    inProgress = true
}

You can set the value of the failOnInProgress Spring Cloud Contract plugin property to ensure that your build will break when at least one contract in progress remains in your sources.
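A minimal sketch of that setting in Gradle follows (this assumes the failOnInProgress flag described above is exposed on the contracts extension):

Gradle
contracts {
    // assumption: fails the build if any in-progress contracts remain
    failOnInProgress = true
}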

Passing Values from Files

Starting with version 1.2.0, you can pass values from files. Assume that you have the following resources in your project:

└── src
    └── test
        └── resources
            └── contracts
                ├── readFromFile.groovy
                ├── request.json
                └── response.json

Further assume that your contract is as follows:

groovy
/*
 * Copyright 2013-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.springframework.cloud.contract.spec.Contract

Contract.make {
    request {
        method('PUT')
        headers {
            contentType(applicationJson())
        }
        body(file("request.json"))
        url("/1")
    }
    response {
        status OK()
        body(file("response.json"))
        headers {
            contentType(applicationJson())
        }
    }
}
yml
request:
  method: GET
  url: /foo
  bodyFromFile: request.json
response:
  status: 200
  bodyFromFile: response.json
java
import java.util.Collection;
import java.util.Collections;
import java.util.function.Supplier;

import org.springframework.cloud.contract.spec.Contract;

class contract_rest_from_file implements Supplier<Collection<Contract>> {

    @Override
    public Collection<Contract> get() {
        return Collections.singletonList(Contract.make(c -> {
            c.request(r -> {
                r.url("/foo");
                r.method(r.GET());
                r.body(r.file("request.json"));
            });
            c.response(r -> {
                r.status(r.OK());
                r.body(r.file("response.json"));
            });
        }));
    }

}
kotlin
import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract

contract {
    request {
        url = url("/1")
        method = PUT
        headers {
            contentType = APPLICATION_JSON
        }
        body = bodyFromFile("request.json")
    }
    response {
        status = OK
        body = bodyFromFile("response.json")
        headers {
            contentType = APPLICATION_JSON
        }
    }
}

Further assume that the JSON files are as follows:

request.json
{
  "status": "REQUEST"
}
response.json
{
  "status": "RESPONSE"
}

When test or stub generation takes place, the contents of the request.json and response.json files are passed to the body of a request or a response. The name of the file needs to be a location relative to the folder in which the contract lies.

If you need to pass the contents of a file in binary form, you can use the fileAsBytes method in the coded DSL or a bodyFromFileAsBytes field in YAML.

The following example shows how to pass the contents of binary files:

groovy
import org.springframework.cloud.contract.spec.Contract

Contract.make {
    request {
        url("/1")
        method(PUT())
        headers {
            contentType(applicationOctetStream())
        }
        body(fileAsBytes("request.pdf"))
    }
    response {
        status 200
        body(fileAsBytes("response.pdf"))
        headers {
            contentType(applicationOctetStream())
        }
    }
}
yml
request:
  url: /1
  method: PUT
  headers:
    Content-Type: application/octet-stream
  bodyFromFileAsBytes: request.pdf
response:
  status: 200
  bodyFromFileAsBytes: response.pdf
  headers:
    Content-Type: application/octet-stream
java
import java.util.Collection;
import java.util.Collections;
import java.util.function.Supplier;

import org.springframework.cloud.contract.spec.Contract;

class contract_rest_from_pdf implements Supplier<Collection<Contract>> {

    @Override
    public Collection<Contract> get() {
        return Collections.singletonList(Contract.make(c -> {
            c.request(r -> {
                r.url("/1");
                r.method(r.PUT());
                r.body(r.fileAsBytes("request.pdf"));
                r.headers(h -> {
                    h.contentType(h.applicationOctetStream());
                });
            });
            c.response(r -> {
                r.status(r.OK());
                r.body(r.fileAsBytes("response.pdf"));
                r.headers(h -> {
                    h.contentType(h.applicationOctetStream());
                });
            });
        }));
    }

}
kotlin
import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract

contract {
    request {
        url = url("/1")
        method = PUT
        headers {
            contentType = APPLICATION_OCTET_STREAM
        }
        body = bodyFromFileAsBytes("contracts/request.pdf")
    }
    response {
        status = OK
        body = bodyFromFileAsBytes("contracts/response.pdf")
        headers {
            contentType = APPLICATION_OCTET_STREAM
        }
    }
}
You should use this approach whenever you want to work with binary payloads, both for HTTP and messaging.

14.3.2. Contracts for HTTP

Spring Cloud Contract lets you verify applications that use REST or HTTP as a means of communication. Spring Cloud Contract verifies that, for a request that matches the criteria from the request part of the contract, the server provides a response that is in keeping with the response part of the contract. Subsequently, the contracts are used to generate WireMock stubs that, for any request matching the provided criteria, provide a suitable response.

HTTP Top-Level Elements

You can call the following methods in the top-level closure of a contract definition:

  • request: Mandatory

  • response: Mandatory

  • priority: Optional

The following example shows how to define an HTTP request contract:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    // Definition of HTTP request part of the contract
    // (this can be a valid request or invalid depending
    // on type of contract being specified).
    request {
        method GET()
        url "/foo"
        //...
    }

    // Definition of HTTP response part of the contract
    // (a service implementing this contract should respond
    // with following response after receiving request
    // specified in "request" part above).
    response {
        status 200
        //...
    }

    // Contract priority, which can be used for overriding
    // contracts (1 is highest). Priority is optional.
    priority 1
}
yml
priority: 8
request:
...
response:
...
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    // Definition of HTTP request part of the contract
    // (this can be a valid request or invalid depending
    // on type of contract being specified).
    c.request(r -> {
        r.method(r.GET());
        r.url("/foo");
        // ...
    });

    // Definition of HTTP response part of the contract
    // (a service implementing this contract should respond
    // with following response after receiving request
    // specified in "request" part above).
    c.response(r -> {
        r.status(200);
        // ...
    });

    // Contract priority, which can be used for overriding
    // contracts (1 is highest). Priority is optional.
    c.priority(1);
});
kotlin
contract {
    // Definition of HTTP request part of the contract
    // (this can be a valid request or invalid depending
    // on type of contract being specified).
    request {
        method = GET
        url = url("/foo")
        // ...
    }

    // Definition of HTTP response part of the contract
    // (a service implementing this contract should respond
    // with following response after receiving request
    // specified in "request" part above).
    response {
        status = OK
        // ...
    }

    // Contract priority, which can be used for overriding
    // contracts (1 is highest). Priority is optional.
    priority = 1
}
If you want to make your contract have a higher priority, you need to pass a lower number to the priority tag or method. For example, a priority with a value of 5 has higher priority than a priority with a value of 10.
HTTP Request

The HTTP protocol requires only the method and the URL to be specified in a request. The same information is mandatory in the request definition of the contract.

The following example shows a contract for a request:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        // HTTP request method (GET/POST/PUT/DELETE).
        method 'GET'

        // Path component of request URL is specified as follows.
        urlPath('/users')
    }

    response {
        //...
        status 200
    }
}
yml
method: PUT
url: /foo
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.request(r -> {
        // HTTP request method (GET/POST/PUT/DELETE).
        r.method("GET");

        // Path component of request URL is specified as follows.
        r.urlPath("/users");
    });

    c.response(r -> {
        // ...
        r.status(200);
    });
});
kotlin
contract {
    request {
        // HTTP request method (GET/POST/PUT/DELETE).
        method = method("GET")

        // Path component of request URL is specified as follows.
        urlPath = path("/users")
    }
    response {
        // ...
        status = code(200)
    }
}

You can specify an absolute rather than a relative url, but using urlPath is the recommended way, as doing so makes the tests host-independent.

The following example uses url:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'GET'

        // Specifying `url` and `urlPath` in one contract is illegal.
        url('http://localhost:8888/users')
    }

    response {
        //...
        status 200
    }
}
yml
request:
  method: PUT
  urlPath: /foo
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.request(r -> {
        r.method("GET");

        // Specifying `url` and `urlPath` in one contract is illegal.
        r.url("http://localhost:8888/users");
    });

    c.response(r -> {
        // ...
        r.status(200);
    });
});
kotlin
contract {
    request {
        method = GET

        // Specifying `url` and `urlPath` in one contract is illegal.
        url("http://localhost:8888/users")
    }
    response {
        // ...
        status = OK
    }
}

request may contain query parameters, as the following example (which uses urlPath) shows:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        //...
        method GET()

        urlPath('/users') {

            // Each parameter is specified in the form
            // `'paramName' : paramValue`, where the parameter value
            // may be a simple literal or one of the matcher functions,
            // all of which are used in this example.
            queryParameters {

                // If a simple literal is used as the value,
                // the default matcher function (equalTo) is used.
                parameter 'limit': 100

                // The `equalTo` function simply compares the passed value
                // by using the identity operator (==).
                parameter 'filter': equalTo("email")

                // The `containing` function matches strings
                // that contain the passed substring.
                parameter 'gender': value(consumer(containing("[mf]")), producer('mf'))

                // The `matching` function tests the parameter
                // against the passed regular expression.
                parameter 'offset': value(consumer(matching("[0-9]+")), producer(123))

                // The `notMatching` function tests whether the parameter
                // does not match the passed regular expression.
                parameter 'loginStartsWith': value(consumer(notMatching(".{0,2}")), producer(3))
            }
        }

        //...
    }

    response {
        //...
        status 200
    }
}
yml
request:
...
queryParameters:
  a: b
  b: c
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.request(r -> {
        // ...
        r.method(r.GET());

        r.urlPath("/users", u -> {

            // Each parameter is specified in form
            // `'paramName' : paramValue` where parameter value
            // may be a simple literal or one of matcher functions,
            // all of which are used in this example.
            u.queryParameters(q -> {

                // If a simple literal is used as value
                // default matcher function is used (equalTo)
                q.parameter("limit", 100);

                // `equalTo` function simply compares passed value
                // using identity operator (==).
                q.parameter("filter", r.equalTo("email"));

                // `containing` function matches strings
                // that contains passed substring.
                q.parameter("gender",
                        r.value(r.consumer(r.containing("[mf]")),
                                r.producer("mf")));

                // `matching` function tests parameter
                // against passed regular expression.
                q.parameter("offset",
                        r.value(r.consumer(r.matching("[0-9]+")),
                                r.producer(123)));

                // `notMatching` functions tests if parameter
                // does not match passed regular expression.
                q.parameter("loginStartsWith",
                        r.value(r.consumer(r.notMatching(".{0,2}")),
                                r.producer(3)));
            });
        });

        // ...
    });

    c.response(r -> {
        // ...
        r.status(200);
    });
});
kotlin
contract {
    request {
        // ...
        method = GET

        // Each parameter is specified in the form
        // `'paramName' : paramValue`, where the parameter value
        // may be a simple literal or one of the matcher functions,
        // all of which are used in this example.
        urlPath = path("/users") withQueryParameters {
            // If a simple literal is used as the value,
            // the default matcher function (equalTo) is used.
            parameter("limit", 100)

            // The `equalTo` function simply compares the passed value
            // by using the identity operator (==).
            parameter("filter", equalTo("email"))

            // The `containing` function matches strings
            // that contain the passed substring.
            parameter("gender", value(consumer(containing("[mf]")), producer("mf")))

            // The `matching` function tests the parameter
            // against the passed regular expression.
            parameter("offset", value(consumer(matching("[0-9]+")), producer(123)))

            // The `notMatching` function tests whether the parameter
            // does not match the passed regular expression.
            parameter("loginStartsWith", value(consumer(notMatching(".{0,2}")), producer(3)))
        }

        // ...
    }
    response {
        // ...
        status = code(200)
    }
}

request can contain additional request headers, as the following example shows:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        //...
        method GET()
        url "/foo"

        // Each header is added in the form `'Header-Name' : 'Header-Value'`.
        // There are also some helper methods.
        headers {
            header 'key': 'value'
            contentType(applicationJson())
        }

        //...
    }

    response {
        //...
        status 200
    }
}
yml
request:
...
headers:
  foo: bar
  fooReq: baz
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.request(r -> {
        // ...
        r.method(r.GET());
        r.url("/foo");

        // Each header is added in the form `'Header-Name' : 'Header-Value'`.
        // There are also some helper methods.
        r.headers(h -> {
            h.header("key", "value");
            h.contentType(h.applicationJson());
        });

        // ...
    });

    c.response(r -> {
        // ...
        r.status(200);
    });
});
kotlin
contract {
    request {
        // ...
        method = GET
        url = url("/foo")

        // Each header is added in the form `'Header-Name' : 'Header-Value'`.
        // There are also some helper variables.
        headers {
            header("key", "value")
            contentType = APPLICATION_JSON
        }

        // ...
    }
    response {
        // ...
        status = OK
    }
}

request may contain additional request cookies, as the following example shows:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        //...
        method GET()
        url "/foo"

        // Each cookie is added in the form `'Cookie-Key' : 'Cookie-Value'`.
        // There are also some helper methods.
        cookies {
            cookie 'key': 'value'
            cookie('another_key', 'another_value')
        }

        //...
    }

    response {
        //...
        status 200
    }
}
yml
request:
...
cookies:
  foo: bar
  fooReq: baz
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.request(r -> {
        // ...
        r.method(r.GET());
        r.url("/foo");

        // Each cookie is added in the form `'Cookie-Key' : 'Cookie-Value'`.
        // There are also some helper methods.
        r.cookies(ck -> {
            ck.cookie("key", "value");
            ck.cookie("another_key", "another_value");
        });

        // ...
    });

    c.response(r -> {
        // ...
        r.status(200);
    });
});
kotlin
contract {
    request {
        // ...
        method = GET
        url = url("/foo")

        // Each cookie is added in the form `'Cookie-Key' : 'Cookie-Value'`.
        // There are also some helper methods.
        cookies {
            cookie("key", "value")
            cookie("another_key", "another_value")
        }

        // ...
    }

    response {
        // ...
        status = code(200)
    }
}

request may contain a request body, as the following example shows:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        //...
        method GET()
        url "/foo"

        // Currently, only the JSON format of the request body is supported.
        // The format is determined from a header or from the body's content.
        body '''{ "login" : "john", "name": "John The Contract" }'''
    }

    response {
        //...
        status 200
    }
}
yml
request:
...
body:
  foo: bar
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.request(r -> {
        // ...
        r.method(r.GET());
        r.url("/foo");

        // Currently, only the JSON format of the request body is supported.
        // The format is determined from a header or from the body's content.
        r.body("{ \"login\" : \"john\", \"name\": \"John The Contract\" }");
    });

    c.response(r -> {
        // ...
        r.status(200);
    });
});
kotlin
contract {
    request {
        // ...
        method = GET
        url = url("/foo")

        // Currently, only the JSON format of the request body is supported.
        // The format is determined from a header or from the body's content.
        body = body("{ \"login\" : \"john\", \"name\": \"John The Contract\" }")
    }
    response {
        // ...
        status = OK
    }
}

request can contain multipart elements. To include multipart elements, use the multipart method/section, as the following examples show:

groovy
org.springframework.cloud.contract.spec.Contract contractDsl = org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'PUT'
        url '/multipart'
        headers {
            contentType('multipart/form-data;boundary=AaB03x')
        }
        multipart(
                // key (parameter name), value (parameter value) pair
                formParameter: $(c(regex('".+"')), p('"formParameterValue"')),
                someBooleanParameter: $(c(regex(anyBoolean())), p('true')),
                // a named parameter (e.g. with `file` name) that represents file with
                // `name` and `content`. You can also call `named("fileName", "fileContent")`
                file: named(
                        // name of the file
                        name: $(c(regex(nonEmpty())), p('filename.csv')),
                        // content of the file
                        content: $(c(regex(nonEmpty())), p('file content')),
                        // content type for the part
                        contentType: $(c(regex(nonEmpty())), p('application/json')))
        )
    }
    response {
        status OK()
    }
}
org.springframework.cloud.contract.spec.Contract contractDsl = org.springframework.cloud.contract.spec.Contract.make {
    request {
        method "PUT"
        url "/multipart"
        headers {
            contentType('multipart/form-data;boundary=AaB03x')
        }
        multipart(
                file: named(
                        name: value(stub(regex('.+')), test('file')),
                        content: value(stub(regex('.+')), test([100, 117, 100, 97] as byte[]))
                )
        )
    }
    response {
        status 200
    }
}
yml
request:
  method: PUT
  url: /multipart
  headers:
    Content-Type: multipart/form-data;boundary=AaB03x
  multipart:
    params:
      # key (parameter name), value (parameter value) pair
      formParameter: '"formParameterValue"'
      someBooleanParameter: true
    named:
      - paramName: file
        fileName: filename.csv
        fileContent: file content
  matchers:
    multipart:
      params:
        - key: formParameter
          regex: ".+"
        - key: someBooleanParameter
          predefined: any_boolean
      named:
        - paramName: file
          fileName:
            predefined: non_empty
          fileContent:
            predefined: non_empty
response:
  status: 200
java
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

import org.springframework.cloud.contract.spec.Contract;
import org.springframework.cloud.contract.spec.internal.DslProperty;
import org.springframework.cloud.contract.spec.internal.Request;
import org.springframework.cloud.contract.verifier.util.ContractVerifierUtil;

class contract_multipart implements Supplier<Collection<Contract>> {

    private static Map<String, DslProperty> namedProps(Request r) {
        Map<String, DslProperty> map = new HashMap<>();
        // name of the file
        map.put("name", r.$(r.c(r.regex(r.nonEmpty())), r.p("filename.csv")));
        // content of the file
        map.put("content", r.$(r.c(r.regex(r.nonEmpty())), r.p("file content")));
        // content type for the part
        map.put("contentType", r.$(r.c(r.regex(r.nonEmpty())), r.p("application/json")));
        return map;
    }

    @Override
    public Collection<Contract> get() {
        return Collections.singletonList(Contract.make(c -> {
            c.request(r -> {
                r.method("PUT");
                r.url("/multipart");
                r.headers(h -> {
                    h.contentType("multipart/form-data;boundary=AaB03x");
                });
                r.multipart(ContractVerifierUtil.map()
                        // key (parameter name), value (parameter value) pair
                        .entry("formParameter",
                                r.$(r.c(r.regex("\".+\"")),
                                        r.p("\"formParameterValue\"")))
                        .entry("someBooleanParameter",
                                r.$(r.c(r.regex(r.anyBoolean())), r.p("true")))
                        // a named parameter (e.g. with `file` name) that represents file
                        // with
                        // `name` and `content`. You can also call `named("fileName",
                        // "fileContent")`
                        .entry("file", r.named(namedProps(r))));
            });
            c.response(r -> {
                r.status(r.OK());
            });
        }));
    }

}
kotlin
import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract

contract {
    request {
        method = PUT
        url = url("/multipart")
        multipart {
            field("formParameter", value(consumer(regex("\".+\"")), producer("\"formParameterValue\"")))
            field("someBooleanParameter", value(consumer(anyBoolean), producer("true")))
            field("file",
                named(
                    // name of the file
                    value(consumer(regex(nonEmpty)), producer("filename.csv")),
                    // content of the file
                    value(consumer(regex(nonEmpty)), producer("file content")),
                    // content type for the part
                    value(consumer(regex(nonEmpty)), producer("application/json"))
                )
            )
        }
        headers {
            contentType = "multipart/form-data;boundary=AaB03x"
        }
    }
    response {
        status = OK
    }
}

In the preceding example, we define parameters in either of two ways:

Coded DSL
  • Directly, by using the map notation, where the value can be a dynamic property (such as formParameter: $(consumer(…​), producer(…​))).

  • By using the named(…​) method that lets you set a named parameter. A named parameter can set a name and content. You can call it either by using a method with two arguments, such as named("fileName", "fileContent"), or by using a map notation, such as named(name: "fileName", content: "fileContent").

YAML
  • The multipart parameters are set in the multipart.params section.

  • The named parameters (the fileName and fileContent for a given parameter name) can be set in the multipart.named section. That section contains the paramName (the name of the parameter), fileName (the name of the file), and fileContent (the content of the file) fields.

  • The dynamic bits can be set via the matchers.multipart section.

    • For parameters, use the params section, which can accept regex or a predefined regular expression.

    • For named parameters, use the named section, where you first define the parameter name with paramName. Then you can pass the parametrization of either fileName or fileContent as a regex or as a predefined regular expression.

From the contract in the preceding example, the generated test and stubs look as follows:

Test
// given:
  MockMvcRequestSpecification request = given()
    .header("Content-Type", "multipart/form-data;boundary=AaB03x")
    .param("formParameter", "\"formParameterValue\"")
    .param("someBooleanParameter", "true")
    .multiPart("file", "filename.csv", "file content".getBytes());

 // when:
  ResponseOptions response = given().spec(request)
    .put("/multipart");

 // then:
  assertThat(response.statusCode()).isEqualTo(200);
Stub
{
  "request" : {
    "url" : "/multipart",
    "method" : "PUT",
    "headers" : {
      "Content-Type" : {
        "matches" : "multipart/form-data;boundary=AaB03x.*"
      }
    },
    "bodyPatterns" : [ {
        "matches" : ".*--(.*)\\r\\nContent-Disposition: form-data; name=\\"formParameter\\"\\r\\n(Content-Type: .*\\r\\n)?(Content-Transfer-Encoding: .*\\r\\n)?(Content-Length: \\\\d+\\r\\n)?\\r\\n\\".+\\"\\r\\n--\\\\1.*"
    }, {
        "matches" : ".*--(.*)\\r\\nContent-Disposition: form-data; name=\\"someBooleanParameter\\"\\r\\n(Content-Type: .*\\r\\n)?(Content-Transfer-Encoding: .*\\r\\n)?(Content-Length: \\\\d+\\r\\n)?\\r\\n(true|false)\\r\\n--\\\\1.*"
    }, {
      "matches" : ".*--(.*)\\r\\nContent-Disposition: form-data; name=\\"file\\"; filename=\\"[\\\\S\\\\s]+\\"\\r\\n(Content-Type: .*\\r\\n)?(Content-Transfer-Encoding: .*\\r\\n)?(Content-Length: \\\\d+\\r\\n)?\\r\\n[\\\\S\\\\s]+\\r\\n--\\\\1.*"
    } ]
  },
  "response" : {
    "status" : 200,
    "transformers" : [ "response-template", "foo-transformer" ]
  }
}
HTTP Response

The response must contain an HTTP status code and may contain other information. The following code shows an example:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        //...
        method GET()
        url "/foo"
    }
    response {
        // Status code sent by the server
        // in response to request specified above.
        status OK()
    }
}
yml
response:
...
status: 200
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.request(r -> {
        // ...
        r.method(r.GET());
        r.url("/foo");
    });
    c.response(r -> {
        // Status code sent by the server
        // in response to request specified above.
        r.status(r.OK());
    });
});
kotlin
contract {
    request {
        // ...
        method = GET
        url =url("/foo")
    }
    response {
        // Status code sent by the server
        // in response to request specified above.
        status = OK
    }
}

Besides status, the response may contain headers, cookies, and a body, which are specified the same way as in the request (see HTTP Request).

In the Groovy DSL, you can reference the org.springframework.cloud.contract.spec.internal.HttpStatus methods to provide a meaningful status instead of a digit. For example, you can call OK() for a status 200 or BAD_REQUEST() for 400.
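To illustrate both points, the following fragment is a minimal sketch (with illustrative values) of a response that combines a named status with headers, cookies, and a body:

groovy
response {
    // OK() resolves to status 200; BAD_REQUEST() would resolve to 400
    status OK()
    headers {
        contentType(applicationJson())
    }
    cookies {
        cookie('sessionId', 'abc123')
    }
    body(result: 'DONE')
}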
Dynamic properties

The contract can contain some dynamic properties: timestamps, IDs, and so on. You do not want to force the consumers to stub their clocks to always return the same value of time so that it gets matched by the stub.

For the Groovy DSL, you can provide the dynamic parts in your contracts in two ways: pass them directly in the body or set them in a separate section called bodyMatchers.

Before 2.0.0, these were set by using testMatchers and stubMatchers. See the migration guide for more information.

For YAML, you can use only the matchers section.

Entries inside the matchers must reference existing elements of the payload. For more information check this issue.
Dynamic Properties inside the Body
This section is valid only for the Coded DSL (Groovy, Java etc.). Check out the Dynamic Properties in the Matchers Sections section for YAML examples of a similar feature.

You can set the properties inside the body either with the value method or, if you use the Groovy map notation, with $(). The following example shows how to set dynamic properties with the value method:

value
value(consumer(...), producer(...))
value(c(...), p(...))
value(stub(...), test(...))
value(client(...), server(...))
$
$(consumer(...), producer(...))
$(c(...), p(...))
$(stub(...), test(...))
$(client(...), server(...))

Both approaches work equally well. The stub and client methods are aliases over the consumer method. Subsequent sections take a closer look at what you can do with those values.
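For example, the following body fragment shows two of those equivalent notations side by side (a sketch with illustrative values; by the same pairing, the test and server methods correspond to the producer side):

groovy
body(
        // `value` with the `consumer`/`producer` pair...
        id: value(consumer(regex('[0-9]+')), producer('1234')),
        // ...is equivalent to `$` with the `client`/`server` aliases
        name: $(client(regex('[a-zA-Z]+')), server('Jan'))
)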

Regular Expressions
This section is valid only for Groovy DSL. Check out the Dynamic Properties in the Matchers Sections section for YAML examples of a similar feature.

You can use regular expressions to write your requests in the contract DSL. Doing so is particularly useful when you want to indicate that a given response should be provided for requests that follow a given pattern. Also, you can use regular expressions when you need to use patterns rather than exact values, both for your stubs and for your server-side tests.

Make sure that your regular expression matches the whole region of a sequence, as, internally, Pattern.matches() is called. For instance, abc does not match aabc, but .abc does. There are several additional known limitations as well.
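You can see this whole-region behavior in plain Groovy, where the ==~ operator also delegates to Pattern.matches():

groovy
// whole-region (Pattern.matches) semantics:
assert !('aabc' ==~ /abc/) // `abc` does not match `aabc`
assert 'aabc' ==~ /.abc/   // but `.abc` does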

The following example shows how to use regular expressions to write a request:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        method('GET')
        url $(consumer(~/\/[0-9]{2}/), producer('/12'))
    }
    response {
        status OK()
        body(
                id: $(anyNumber()),
                surname: $(
                        consumer('Kowalsky'),
                        producer(regex('[a-zA-Z]+'))
                ),
                name: 'Jan',
                created: $(consumer('2014-02-02 12:23:43'), producer(execute('currentDate(it)'))),
                correlationId: value(consumer('5d1f9fef-e0dc-4f3d-a7e4-72d2220dd827'),
                        producer(regex('[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}'))
                )
        )
        headers {
            header 'Content-Type': 'text/plain'
        }
    }
}
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.request(r -> {
        r.method("GET");
        r.url(r.$(r.consumer(r.regex("\\/[0-9]{2}")), r.producer("/12")));
    });
    c.response(r -> {
        r.status(r.OK());
        r.body(ContractVerifierUtil.map().entry("id", r.$(r.anyNumber()))
                .entry("surname", r.$(r.consumer("Kowalsky"),
                        r.producer(r.regex("[a-zA-Z]+")))));
        r.headers(h -> {
            h.header("Content-Type", "text/plain");
        });
    });
});
kotlin
contract {
    request {
        method = method("GET")
        url = url(v(consumer(regex("\\/[0-9]{2}")), producer("/12")))
    }
    response {
        status = OK
        body(mapOf(
                "id" to v(anyNumber),
                "surname" to v(consumer("Kowalsky"), producer(regex("[a-zA-Z]+")))
        ))
        headers {
            header("Content-Type", "text/plain")
        }
    }
}

You can also provide only one side of the communication with a regular expression. If you do so, then the contract engine automatically provides the generated string that matches the provided regular expression. The following code shows an example for Groovy:

org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'PUT'
        url value(consumer(regex('/foo/[0-9]{5}')))
        body([
                requestElement: $(consumer(regex('[0-9]{5}')))
        ])
        headers {
            header('header', $(consumer(regex('application\\/vnd\\.fraud\\.v1\\+json;.*'))))
        }
    }
    response {
        status OK()
        body([
                responseElement: $(producer(regex('[0-9]{7}')))
        ])
        headers {
            contentType("application/vnd.fraud.v1+json")
        }
    }
}

In the preceding example, the opposite side of the communication has the respective data generated for request and response.

Spring Cloud Contract comes with a series of predefined regular expressions that you can use in your contracts, as the following example shows:

public static RegexProperty onlyAlphaUnicode() {
    return new RegexProperty(ONLY_ALPHA_UNICODE).asString();
}

public static RegexProperty alphaNumeric() {
    return new RegexProperty(ALPHA_NUMERIC).asString();
}

public static RegexProperty number() {
    return new RegexProperty(NUMBER).asDouble();
}

public static RegexProperty positiveInt() {
    return new RegexProperty(POSITIVE_INT).asInteger();
}

public static RegexProperty anyBoolean() {
    return new RegexProperty(TRUE_OR_FALSE).asBooleanType();
}

public static RegexProperty anInteger() {
    return new RegexProperty(INTEGER).asInteger();
}

public static RegexProperty aDouble() {
    return new RegexProperty(DOUBLE).asDouble();
}

public static RegexProperty ipAddress() {
    return new RegexProperty(IP_ADDRESS).asString();
}

public static RegexProperty hostname() {
    return new RegexProperty(HOSTNAME_PATTERN).asString();
}

public static RegexProperty email() {
    return new RegexProperty(EMAIL).asString();
}

public static RegexProperty url() {
    return new RegexProperty(URL).asString();
}

public static RegexProperty httpsUrl() {
    return new RegexProperty(HTTPS_URL).asString();
}

public static RegexProperty uuid() {
    return new RegexProperty(UUID).asString();
}

public static RegexProperty isoDate() {
    return new RegexProperty(ANY_DATE).asString();
}

public static RegexProperty isoDateTime() {
    return new RegexProperty(ANY_DATE_TIME).asString();
}

public static RegexProperty isoTime() {
    return new RegexProperty(ANY_TIME).asString();
}

public static RegexProperty iso8601WithOffset() {
    return new RegexProperty(ISO8601_WITH_OFFSET).asString();
}

public static RegexProperty nonEmpty() {
    return new RegexProperty(NON_EMPTY).asString();
}

public static RegexProperty nonBlank() {
    return new RegexProperty(NON_BLANK).asString();
}

In your contract, you can use these regular expressions as follows (the example uses the Groovy DSL):

Contract dslWithOptionalsInString = Contract.make {
    priority 1
    request {
        method POST()
        url '/users/password'
        headers {
            contentType(applicationJson())
        }
        body(
                email: $(consumer(optional(regex(email()))), producer('abc@abc.com')),
                callback_url: $(consumer(regex(hostname())), producer('http://partners.com'))
        )
    }
    response {
        status 404
        headers {
            contentType(applicationJson())
        }
        body(
                code: value(consumer("123123"), producer(optional("123123"))),
                message: "User not found by email = [${value(producer(regex(email())), consumer('[email protected]'))}]"
        )
    }
}

To make matters even simpler, you can use a set of predefined objects that automatically assume that you want a regular expression to be passed. All of those methods start with the any prefix, as follows:

T anyAlphaUnicode();

T anyAlphaNumeric();

T anyNumber();

T anyInteger();

T anyPositiveInt();

T anyDouble();

T anyHex();

T aBoolean();

T anyIpAddress();

T anyHostname();

T anyEmail();

T anyUrl();

T anyHttpsUrl();

T anyUuid();

T anyDate();

T anyDateTime();

T anyTime();

T anyIso8601WithOffset();

T anyNonBlankString();

T anyNonEmptyString();

T anyOf(String... values);

The following example shows how you can reference those methods:

groovy
Contract contractDsl = Contract.make {
    name "foo"
    label 'trigger_event'
    input {
        triggeredBy('toString()')
    }
    outputMessage {
        sentTo 'topic.rateablequote'
        body([
                alpha            : $(anyAlphaUnicode()),
                number           : $(anyNumber()),
                anInteger        : $(anyInteger()),
                positiveInt      : $(anyPositiveInt()),
                aDouble          : $(anyDouble()),
                aBoolean         : $(aBoolean()),
                ip               : $(anyIpAddress()),
                hostname         : $(anyHostname()),
                email            : $(anyEmail()),
                url              : $(anyUrl()),
                httpsUrl         : $(anyHttpsUrl()),
                uuid             : $(anyUuid()),
                date             : $(anyDate()),
                dateTime         : $(anyDateTime()),
                time             : $(anyTime()),
                iso8601WithOffset: $(anyIso8601WithOffset()),
                nonBlankString   : $(anyNonBlankString()),
                nonEmptyString   : $(anyNonEmptyString()),
                anyOf            : $(anyOf('foo', 'bar'))
        ])
    }
}
kotlin
contract {
    name = "foo"
    label = "trigger_event"
    input {
        triggeredBy = "toString()"
    }
    outputMessage {
        sentTo = sentTo("topic.rateablequote")
        body(mapOf(
                "alpha" to v(anyAlphaUnicode),
                "number" to v(anyNumber),
                "anInteger" to v(anyInteger),
                "positiveInt" to v(anyPositiveInt),
                "aDouble" to v(anyDouble),
                "aBoolean" to v(aBoolean),
                "ip" to v(anyIpAddress),
                "hostname" to v(anyAlphaUnicode),
                "email" to v(anyEmail),
                "url" to v(anyUrl),
                "httpsUrl" to v(anyHttpsUrl),
                "uuid" to v(anyUuid),
                "date" to v(anyDate),
                "dateTime" to v(anyDateTime),
                "time" to v(anyTime),
                "iso8601WithOffset" to v(anyIso8601WithOffset),
                "nonBlankString" to v(anyNonBlankString),
                "nonEmptyString" to v(anyNonEmptyString),
                "anyOf" to v(anyOf('foo', 'bar'))
        ))
        headers {
            header("Content-Type", "text/plain")
        }
    }
}
Limitations
Due to certain limitations of the Xeger library that generates a string out of a regex, do not use the $ and ^ signs in your regex if you rely on automatic generation. See Issue 899.
Do not use a LocalDate instance as a value for $ (for example, $(consumer(LocalDate.now()))). It causes a java.lang.StackOverflowError. Use $(consumer(LocalDate.now().toString())) instead. See Issue 900.
Passing Optional Parameters
This section is valid only for Groovy DSL. Check out the Dynamic Properties in the Matchers Sections section for YAML examples of a similar feature.

You can provide optional parameters in your contract. However, you can provide optional parameters only for the following:

  • The STUB side of the Request

  • The TEST side of the Response

The following example shows how to provide optional parameters:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    priority 1
    name "optionals"
    request {
        method 'POST'
        url '/users/password'
        headers {
            contentType(applicationJson())
        }
        body(
                email: $(consumer(optional(regex(email()))), producer('abc@abc.com')),
                callback_url: $(consumer(regex(hostname())), producer('https://partners.com'))
        )
    }
    response {
        status 404
        headers {
            header 'Content-Type': 'application/json'
        }
        body(
                code: value(consumer("123123"), producer(optional("123123")))
        )
    }
}
java
org.springframework.cloud.contract.spec.Contract.make(c -> {
    c.priority(1);
    c.name("optionals");
    c.request(r -> {
        r.method("POST");
        r.url("/users/password");
        r.headers(h -> {
            h.contentType(h.applicationJson());
        });
        r.body(ContractVerifierUtil.map()
                .entry("email",
                        r.$(r.consumer(r.optional(r.regex(r.email()))),
                                r.producer("[email protected]")))
                .entry("callback_url", r.$(r.consumer(r.regex(r.hostname())),
                        r.producer("https://partners.com"))));
    });
    c.response(r -> {
        r.status(404);
        r.headers(h -> {
            h.header("Content-Type", "application/json");
        });
        r.body(ContractVerifierUtil.map().entry("code", r.value(
                r.consumer("123123"), r.producer(r.optional("123123")))));
    });
});
kotlin
contract { c ->
    priority = 1
    name = "optionals"
    request {
        method = POST
        url = url("/users/password")
        headers {
            contentType = APPLICATION_JSON
        }
        body = body(mapOf(
                "email" to v(consumer(optional(regex(email))), producer("[email protected]")),
                "callback_url" to v(consumer(regex(hostname)), producer("https://partners.com"))
        ))
    }
    response {
        status = NOT_FOUND
        headers {
            header("Content-Type", "application/json")
        }
        body(mapOf(
                "code" to value(consumer("123123"), producer(optional("123123")))
        ))
    }
}

By wrapping a part of the body with the optional() method, you create a regular expression that may be present zero or one time (the generated pattern is wrapped in (…)?, as the test below shows).

If you use Spock, the following test would be generated from the previous example:

groovy
                    """\
package com.example

import com.jayway.jsonpath.DocumentContext
import com.jayway.jsonpath.JsonPath
import spock.lang.Specification
import io.restassured.module.mockmvc.specification.MockMvcRequestSpecification
import io.restassured.response.ResponseOptions

import static org.springframework.cloud.contract.verifier.assertion.SpringCloudContractAssertions.assertThat
import static org.springframework.cloud.contract.verifier.util.ContractVerifierUtil.*
import static com.toomuchcoding.jsonassert.JsonAssertion.assertThatJson
import static io.restassured.module.mockmvc.RestAssuredMockMvc.*

@SuppressWarnings("rawtypes")
class FooSpec extends Specification {

\tdef validate_optionals() throws Exception {
\t\tgiven:
\t\t\tMockMvcRequestSpecification request = given()
\t\t\t\t\t.header("Content-Type", "application/json")
\t\t\t\t\t.body('''{"email":"[email protected]","callback_url":"https://partners.com"}''')

\t\twhen:
\t\t\tResponseOptions response = given().spec(request)
\t\t\t\t\t.post("/users/password")

\t\tthen:
\t\t\tresponse.statusCode() == 404
\t\t\tresponse.header("Content-Type") == 'application/json'

\t\tand:
\t\t\tDocumentContext parsedJson = JsonPath.parse(response.body.asString())
\t\t\tassertThatJson(parsedJson).field("['code']").matches("(123123)?")
\t}

}
"""

The following stub would also be generated:

{
  "request" : {
    "url" : "/users/password",
    "method" : "POST",
    "bodyPatterns" : [ {
      "matchesJsonPath" : "$[?(@.['email'] =~ /([a-zA-Z0-9._%+-][email protected][a-zA-Z0-9.-]+\\\\.[a-zA-Z]{2,6})?/)]"
    }, {
      "matchesJsonPath" : "$[?(@.['callback_url'] =~ /((http[s]?|ftp):\\\\/)\\\\/?([^:\\\\/\\\\s]+)(:[0-9]{1,5})?/)]"
    } ],
    "headers" : {
      "Content-Type" : {
        "equalTo" : "application/json"
      }
    }
  },
  "response" : {
    "status" : 404,
    "body" : "{\\"code\\":\\"123123\\",\\"message\\":\\"User not found by email == [not.existing[email protected]]\\"}",
    "headers" : {
      "Content-Type" : "application/json"
    }
  },
  "priority" : 1
}
Executing Custom Methods on the Server Side
This section is valid only for Groovy DSL. Check out the Dynamic Properties in the Matchers Sections section for YAML examples of a similar feature.

You can define a method call that runs on the server side during the test. Such a method can be added to the class defined as baseClassForTests in the configuration. The following code shows an example of the contract portion of the test case:

groovy
method GET()
java
r.method(r.GET());
kotlin
method = GET

The following code shows the base class portion of the test case:

abstract class BaseMockMvcSpec extends Specification {

    def setup() {
        RestAssuredMockMvc.standaloneSetup(new PairIdController())
    }

    void isProperCorrelationId(Integer correlationId) {
        assert correlationId == 123456
    }

    void isEmpty(String value) {
        assert value == null
    }

}
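Putting the two together, a contract that delegates response-side verification to one of the base-class methods above could look like the following sketch (the $it placeholder is substituted with the matched JSON element; the endpoint and values are illustrative):

groovy
Contract.make {
    request {
        method GET()
        url '/pairId'
    }
    response {
        status OK()
        body(
                // in the generated test, isProperCorrelationId(...) from the
                // base class above is called against the response element
                correlationId: $(consumer(123456), producer(execute('isProperCorrelationId($it)')))
        )
    }
}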
You cannot use both a String and execute to perform concatenation. For example, calling header('Authorization', 'Bearer ' + execute('authToken()')) leads to improper results. Instead, call header('Authorization', execute('authToken()')) and ensure that the authToken() method returns everything you need.

The type of the object read from the JSON can be one of the following, depending on the JSON path:

  • String: If you point to a String value in the JSON.

  • JSONArray: If you point to a List in the JSON.

  • Map: If you point to a Map in the JSON.

  • Number: If you point to an Integer, a Double, or another numeric type in the JSON.

  • Boolean: If you point to a Boolean in the JSON.

In the request part of the contract, you can specify that the body should be taken from a method.

You must provide both the consumer and the producer side. The execute part is applied for the whole body, not for parts of it.

The following example shows how to read an object from JSON:

Contract contractDsl = Contract.make {
    request {
        method 'GET'
        url '/something'
        body(
                $(c('foo'), p(execute('hashCode()')))
        )
    }
    response {
        status OK()
    }
}

The preceding example results in calling the hashCode() method in the request body. It should resemble the following code:

// given:
 MockMvcRequestSpecification request = given()
   .body(hashCode());

// when:
 ResponseOptions response = given().spec(request)
   .get("/something");

// then:
 assertThat(response.statusCode()).isEqualTo(200);
Referencing the Request from the Response

The best situation is to provide fixed values, but sometimes you need to reference a request in your response.

If you write contracts in the Groovy DSL, you can use the fromRequest() method, which lets you reference a bunch of elements from the HTTP request. You can use the following options:

  • fromRequest().url(): Returns the request URL and query parameters.

  • fromRequest().query(String key): Returns the first query parameter with a given name.

  • fromRequest().query(String key, int index): Returns the nth query parameter with a given name.

  • fromRequest().path(): Returns the full path.

  • fromRequest().path(int index): Returns the nth path element.

  • fromRequest().header(String key): Returns the first header with a given name.

  • fromRequest().header(String key, int index): Returns the nth header with a given name.

  • fromRequest().body(): Returns the full request body.

  • fromRequest().body(String jsonPath): Returns the element from the request that matches the JSON Path.

If you use the YAML contract definition or the Java one, you have to use the Handlebars {{{ }}} notation with custom Spring Cloud Contract functions to achieve this. In that case, you can use the following options:

  • {{{ request.url }}}: Returns the request URL and query parameters.

  • {{{ request.query.key.[index] }}}: Returns the nth query parameter with a given name. For example, for a key of thing, the first entry is {{{ request.query.thing.[0] }}}.

  • {{{ request.path }}}: Returns the full path.

  • {{{ request.path.[index] }}}: Returns the nth path element. For example, the first entry is {{{ request.path.[0] }}}.

  • {{{ request.headers.key }}}: Returns the first header with a given name.

  • {{{ request.headers.key.[index] }}}: Returns the nth header with a given name.

  • {{{ request.body }}}: Returns the full request body.

  • {{{ jsonpath this 'your.json.path' }}}: Returns the element from the request that matches the JSON Path. For example, for a JSON path of $.here, use {{{ jsonpath this '$.here' }}}.

Consider the following contract:

groovy
Contract contractDsl = Contract.make {
    request {
        method 'GET'
        url('/api/v1/xxxx') {
            queryParameters {
                parameter('foo', 'bar')
                parameter('foo', 'bar2')
            }
        }
        headers {
            header(authorization(), 'secret')
            header(authorization(), 'secret2')
        }
        body(foo: 'bar', baz: 5)
    }
    response {
        status OK()
        headers {
            header(authorization(), "foo ${fromRequest().header(authorization())} bar")
        }
        body(
                url: fromRequest().url(),
                path: fromRequest().path(),
                pathIndex: fromRequest().path(1),
                param: fromRequest().query('foo'),
                paramIndex: fromRequest().query('foo', 1),
                authorization: fromRequest().header('Authorization'),
                authorization2: fromRequest().header('Authorization', 1),
                fullBody: fromRequest().body(),
                responseFoo: fromRequest().body('$.foo'),
                responseBaz: fromRequest().body('$.baz'),
                responseBaz2: "Bla bla ${fromRequest().body('$.foo')} bla bla",
                rawUrl: fromRequest().rawUrl(),
                rawPath: fromRequest().rawPath(),
                rawPathIndex: fromRequest().rawPath(1),
                rawParam: fromRequest().rawQuery('foo'),
                rawParamIndex: fromRequest().rawQuery('foo', 1),
                rawAuthorization: fromRequest().rawHeader('Authorization'),
                rawAuthorization2: fromRequest().rawHeader('Authorization', 1),
                rawResponseFoo: fromRequest().rawBody('$.foo'),
                rawResponseBaz: fromRequest().rawBody('$.baz'),
                rawResponseBaz2: "Bla bla ${fromRequest().rawBody('$.foo')} bla bla"
        )
    }
}
Contract contractDsl = Contract.make {
    request {
        method 'GET'
        url('/api/v1/xxxx') {
            queryParameters {
                parameter('foo', 'bar')
                parameter('foo', 'bar2')
            }
        }
        headers {
            header(authorization(), 'secret')
            header(authorization(), 'secret2')
        }
        body(foo: "bar", baz: 5)
    }
    response {
        status OK()
        headers {
            contentType(applicationJson())
        }
        body('''
                {
                    "responseFoo": "{{{ jsonPath request.body '$.foo' }}}",
                    "responseBaz": {{{ jsonPath request.body '$.baz' }}},
                    "responseBaz2": "Bla bla {{{ jsonPath request.body '$.foo' }}} bla bla"
                }
        '''.toString())
    }
}
yml
request:
  method: GET
  url: /api/v1/xxxx
  queryParameters:
    foo:
      - bar
      - bar2
  headers:
    Authorization:
      - secret
      - secret2
  body:
    foo: bar
    baz: 5
response:
  status: 200
  headers:
    Authorization: "foo {{{ request.headers.Authorization.0 }}} bar"
  body:
    url: "{{{ request.url }}}"
    path: "{{{ request.path }}}"
    pathIndex: "{{{ request.path.1 }}}"
    param: "{{{ request.query.foo }}}"
    paramIndex: "{{{ request.query.foo.1 }}}"
    authorization: "{{{ request.headers.Authorization.0 }}}"
    authorization2: "{{{ request.headers.Authorization.1 }}"
    fullBody: "{{{ request.body }}}"
    responseFoo: "{{{ jsonpath this '$.foo' }}}"
    responseBaz: "{{{ jsonpath this '$.baz' }}}"
    responseBaz2: "Bla bla {{{ jsonpath this '$.foo' }}} bla bla"
java
package contracts.beer.rest;

import java.util.function.Supplier;

import org.springframework.cloud.contract.spec.Contract;

import static org.springframework.cloud.contract.verifier.util.ContractVerifierUtil.map;

class shouldReturnStatsForAUser implements Supplier<Contract> {

    @Override
    public Contract get() {
        return Contract.make(c -> {
            c.request(r -> {
                r.method("POST");
                r.url("/stats");
                r.body(map().entry("name", r.anyAlphaUnicode()));
                r.headers(h -> {
                    h.contentType(h.applicationJson());
                });
            });
            c.response(r -> {
                r.status(r.OK());
                r.body(map()
                        .entry("text",
                                "Dear {{{jsonPath request.body '$.name'}}} thanks for your interested in drinking beer")
                        .entry("quantity", r.$(r.c(5), r.p(r.anyNumber()))));
                r.headers(h -> {
                    h.contentType(h.applicationJson());
                });
            });
        });
    }

}
kotlin
package contracts.beer.rest

import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract

contract {
    request {
        method = method("POST")
        url = url("/stats")
        body(mapOf(
            "name" to anyAlphaUnicode
        ))
        headers {
            contentType = APPLICATION_JSON
        }
    }
    response {
        status = OK
        body(mapOf(
            "text" to "Don't worry ${fromRequest().body("$.name")} thanks for your interested in drinking beer",
            "quantity" to v(c(5), p(anyNumber))
        ))
        headers {
            contentType = fromRequest().header(CONTENT_TYPE)
        }
    }
}

Running JUnit test generation leads to a test that resembles the following example:

// given:
 MockMvcRequestSpecification request = given()
   .header("Authorization", "secret")
   .header("Authorization", "secret2")
   .body("{\"foo\":\"bar\",\"baz\":5}");

// when:
 ResponseOptions response = given().spec(request)
   .queryParam("foo","bar")
   .queryParam("foo","bar2")
   .get("/api/v1/xxxx");

// then:
 assertThat(response.statusCode()).isEqualTo(200);
 assertThat(response.header("Authorization")).isEqualTo("foo secret bar");
// and:
 DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
 assertThatJson(parsedJson).field("['fullBody']").isEqualTo("{\"foo\":\"bar\",\"baz\":5}");
 assertThatJson(parsedJson).field("['authorization']").isEqualTo("secret");
 assertThatJson(parsedJson).field("['authorization2']").isEqualTo("secret2");
 assertThatJson(parsedJson).field("['path']").isEqualTo("/api/v1/xxxx");
 assertThatJson(parsedJson).field("['param']").isEqualTo("bar");
 assertThatJson(parsedJson).field("['paramIndex']").isEqualTo("bar2");
 assertThatJson(parsedJson).field("['pathIndex']").isEqualTo("v1");
 assertThatJson(parsedJson).field("['responseBaz']").isEqualTo(5);
 assertThatJson(parsedJson).field("['responseFoo']").isEqualTo("bar");
 assertThatJson(parsedJson).field("['url']").isEqualTo("/api/v1/xxxx?foo=bar&foo=bar2");
 assertThatJson(parsedJson).field("['responseBaz2']").isEqualTo("Bla bla bar bla bla");

As you can see, elements from the request have been properly referenced in the response.

The generated WireMock stub should resemble the following example:

{
  "request" : {
    "urlPath" : "/api/v1/xxxx",
    "method" : "POST",
    "headers" : {
      "Authorization" : {
        "equalTo" : "secret2"
      }
    },
    "queryParameters" : {
      "foo" : {
        "equalTo" : "bar2"
      }
    },
    "bodyPatterns" : [ {
      "matchesJsonPath" : "$[?(@.['baz'] == 5)]"
    }, {
      "matchesJsonPath" : "$[?(@.['foo'] == 'bar')]"
    } ]
  },
  "response" : {
    "status" : 200,
    "body" : "{\"authorization\":\"{{{request.headers.Authorization.[0]}}}\",\"path\":\"{{{request.path}}}\",\"responseBaz\":{{{jsonpath this '$.baz'}}} ,\"param\":\"{{{request.query.foo.[0]}}}\",\"pathIndex\":\"{{{request.path.[1]}}}\",\"responseBaz2\":\"Bla bla {{{jsonpath this '$.foo'}}} bla bla\",\"responseFoo\":\"{{{jsonpath this '$.foo'}}}\",\"authorization2\":\"{{{request.headers.Authorization.[1]}}}\",\"fullBody\":\"{{{escapejsonbody}}}\",\"url\":\"{{{request.url}}}\",\"paramIndex\":\"{{{request.query.foo.[1]}}}\"}",
    "headers" : {
      "Authorization" : "{{{request.headers.Authorization.[0]}}};foo"
    },
    "transformers" : [ "response-template" ]
  }
}

Sending a request such as the one presented in the request part of the contract results in sending the following response body:

{
  "url" : "/api/v1/xxxx?foo=bar&foo=bar2",
  "path" : "/api/v1/xxxx",
  "pathIndex" : "v1",
  "param" : "bar",
  "paramIndex" : "bar2",
  "authorization" : "secret",
  "authorization2" : "secret2",
  "fullBody" : "{\"foo\":\"bar\",\"baz\":5}",
  "responseFoo" : "bar",
  "responseBaz" : 5,
  "responseBaz2" : "Bla bla bar bla bla"
}
This feature works only with WireMock versions greater than or equal to 2.5.1. The Spring Cloud Contract Verifier uses WireMock’s response-template response transformer. It uses Handlebars to convert the Mustache {{{ }}} templates into proper values. Additionally, it registers two helper functions:
  • escapejsonbody: Escapes the request body in a format that can be embedded in a JSON.

  • jsonpath: For a given JSON path parameter, finds the matching object in the request body.

Dynamic Properties in the Matchers Sections

If you work with Pact, the following discussion may seem familiar. Quite a few users are used to separating the definition of the body from the definition of the dynamic parts of a contract.

You can use the bodyMatchers section for two reasons:

  • Define the dynamic values that should end up in a stub. You can set it in the request or inputMessage part of your contract.

  • Verify the result of your test. This section is present in the response or outputMessage side of the contract.

Currently, Spring Cloud Contract Verifier supports only JSON path-based matchers with the following matching possibilities:

Coded DSL
  • For the stubs (in tests on the consumer’s side):

    • byEquality(): The value taken from the consumer’s request in the provided JSON path must be equal to the value provided in the contract.

    • byRegex(…​): The value taken from the consumer’s request in the provided JSON path must match the regex. You can also pass the type of the expected matched value (for example, asString(), asLong(), and so on).

    • byDate(): The value taken from the consumer’s request in the provided JSON path must match the regex for an ISO Date value.

    • byTimestamp(): The value taken from the consumer’s request in the provided JSON path must match the regex for an ISO DateTime value.

    • byTime(): The value taken from the consumer’s request in the provided JSON path must match the regex for an ISO Time value.

  • For the verification (in generated tests on the Producer’s side):

    • byEquality(): The value taken from the producer’s response in the provided JSON path must be equal to the provided value in the contract.

    • byRegex(…​): The value taken from the producer’s response in the provided JSON path must match the regex.

    • byDate(): The value taken from the producer’s response in the provided JSON path must match the regex for an ISO Date value.

    • byTimestamp(): The value taken from the producer’s response in the provided JSON path must match the regex for an ISO DateTime value.

    • byTime(): The value taken from the producer’s response in the provided JSON path must match the regex for an ISO Time value.

    • byType(): The value taken from the producer’s response in the provided JSON path needs to be of the same type as the type defined in the body of the response in the contract. byType can take a closure, in which you can set minOccurrence and maxOccurrence. For the request side, you should use the closure to assert the size of the collection. That way, you can assert the size of the flattened collection. To check the size of an unflattened collection, use a custom method with the byCommand(…​) testMatcher.

    • byCommand(…​): The value taken from the producer’s response in the provided JSON path is passed as an input to the custom method that you provide. For example, byCommand('thing($it)') results in calling a thing method to which the value matching the JSON Path gets passed. The type of the object read from the JSON can be one of the following, depending on the JSON path:

      • String: If you point to a String value.

      • JSONArray: If you point to a List.

      • Map: If you point to a Map.

      • Number: If you point to Integer, Double, or another kind of number.

      • Boolean: If you point to a Boolean.

    • byNull(): The value taken from the response in the provided JSON path must be null.

YAML
See the Coded DSL section for a detailed explanation of what the types mean.

For YAML, the structure of a matcher resembles the following example:

- path: $.thing1
  type: by_regex
  value: thing2
  regexType: as_string

Alternatively, if you want to use one of the predefined regular expressions [only_alpha_unicode, number, any_boolean, ip_address, hostname, email, url, uuid, iso_date, iso_date_time, iso_time, iso_8601_with_offset, non_empty, non_blank], you can use something similar to the following example:

- path: $.thing1
  type: by_regex
  predefined: only_alpha_unicode

The following list shows the allowed type values:

  • For stubMatchers:

    • by_equality

    • by_regex

    • by_date

    • by_timestamp

    • by_time

    • by_type

      • Two additional fields (minOccurrence and maxOccurrence) are accepted.

  • For testMatchers:

    • by_equality

    • by_regex

    • by_date

    • by_timestamp

    • by_time

    • by_type

      • Two additional fields (minOccurrence and maxOccurrence) are accepted.

    • by_command

    • by_null

You can also define which type the regular expression corresponds to in the regexType field. The following list shows the allowed regular expression types:

  • as_integer

  • as_double

  • as_float

  • as_long

  • as_short

  • as_boolean

  • as_string

Consider the following example:

groovy
Contract contractDsl = Contract.make {
    request {
        method 'GET'
        urlPath '/get'
        body([
                duck                : 123,
                alpha               : 'abc',
                number              : 123,
                aBoolean            : true,
                date                : '2017-01-01',
                dateTime            : '2017-01-01T01:23:45',
                time                : '01:02:34',
                valueWithoutAMatcher: 'foo',
                valueWithTypeMatch  : 'string',
                key                 : [
                        'complex.key': 'foo'
                ]
        ])
        bodyMatchers {
            jsonPath('$.duck', byRegex("[0-9]{3}").asInteger())
            jsonPath('$.duck', byEquality())
            jsonPath('$.alpha', byRegex(onlyAlphaUnicode()).asString())
            jsonPath('$.alpha', byEquality())
            jsonPath('$.number', byRegex(number()).asInteger())
            jsonPath('$.aBoolean', byRegex(anyBoolean()).asBooleanType())
            jsonPath('$.date', byDate())
            jsonPath('$.dateTime', byTimestamp())
            jsonPath('$.time', byTime())
            jsonPath("\$.['key'].['complex.key']", byEquality())
        }
        headers {
            contentType(applicationJson())
        }
    }
    response {
        status OK()
        body([
                duck                 : 123,
                alpha                : 'abc',
                number               : 123,
                positiveInteger      : 1234567890,
                negativeInteger      : -1234567890,
                positiveDecimalNumber: 123.4567890,
                negativeDecimalNumber: -123.4567890,
                aBoolean             : true,
                date                 : '2017-01-01',
                dateTime             : '2017-01-01T01:23:45',
                time                 : "01:02:34",
                valueWithoutAMatcher : 'foo',
                valueWithTypeMatch   : 'string',
                valueWithMin         : [
                        1, 2, 3
                ],
                valueWithMax         : [
                        1, 2, 3
                ],
                valueWithMinMax      : [
                        1, 2, 3
                ],
                valueWithMinEmpty    : [],
                valueWithMaxEmpty    : [],
                key                  : [
                        'complex.key': 'foo'
                ],
                nullValue            : null
        ])
        bodyMatchers {
            // asserts the jsonpath value against manual regex
            jsonPath('$.duck', byRegex("[0-9]{3}").asInteger())
            // asserts the jsonpath value against the provided value
            jsonPath('$.duck', byEquality())
            // asserts the jsonpath value against some default regex
            jsonPath('$.alpha', byRegex(onlyAlphaUnicode()).asString())
            jsonPath('$.alpha', byEquality())
            jsonPath('$.number', byRegex(number()).asInteger())
            jsonPath('$.positiveInteger', byRegex(anInteger()).asInteger())
            jsonPath('$.negativeInteger', byRegex(anInteger()).asInteger())
            jsonPath('$.positiveDecimalNumber', byRegex(aDouble()).asDouble())
            jsonPath('$.negativeDecimalNumber', byRegex(aDouble()).asDouble())
            jsonPath('$.aBoolean', byRegex(anyBoolean()).asBooleanType())
            // asserts vs inbuilt time related regex
            jsonPath('$.date', byDate())
            jsonPath('$.dateTime', byTimestamp())
            jsonPath('$.time', byTime())
            // asserts that the resulting type is the same as in response body
            jsonPath('$.valueWithTypeMatch', byType())
            jsonPath('$.valueWithMin', byType {
                // results in verification of size of array (min 1)
                minOccurrence(1)
            })
            jsonPath('$.valueWithMax', byType {
                // results in verification of size of array (max 3)
                maxOccurrence(3)
            })
            jsonPath('$.valueWithMinMax', byType {
                // results in verification of size of array (min 1 & max 3)
                minOccurrence(1)
                maxOccurrence(3)
            })
            jsonPath('$.valueWithMinEmpty', byType {
                // results in verification of size of array (min 0)
                minOccurrence(0)
            })
            jsonPath('$.valueWithMaxEmpty', byType {
                // results in verification of size of array (max 0)
                maxOccurrence(0)
            })
            // will execute a method `assertThatValueIsANumber`
            jsonPath('$.duck', byCommand('assertThatValueIsANumber($it)'))
            jsonPath("\$.['key'].['complex.key']", byEquality())
            jsonPath('$.nullValue', byNull())
        }
        headers {
            contentType(applicationJson())
            header('Some-Header', $(c('someValue'), p(regex('[a-zA-Z]{9}'))))
        }
    }
}
yml
request:
  method: GET
  urlPath: /get/1
  headers:
    Content-Type: application/json
  cookies:
    foo: 2
    bar: 3
  queryParameters:
    limit: 10
    offset: 20
    filter: 'email'
    sort: name
    search: 55
    age: 99
    name: John.Doe
    email: '[email protected]'
  body:
    duck: 123
    alpha: "abc"
    number: 123
    aBoolean: true
    date: "2017-01-01"
    dateTime: "2017-01-01T01:23:45"
    time: "01:02:34"
    valueWithoutAMatcher: "foo"
    valueWithTypeMatch: "string"
    key:
      "complex.key": 'foo'
    nullValue: null
    valueWithMin:
      - 1
      - 2
      - 3
    valueWithMax:
      - 1
      - 2
      - 3
    valueWithMinMax:
      - 1
      - 2
      - 3
    valueWithMinEmpty: []
    valueWithMaxEmpty: []
  matchers:
    url:
      regex: /get/[0-9]
      # predefined:
      # execute a method
      #command: 'equals($it)'
    queryParameters:
      - key: limit
        type: equal_to
        value: 20
      - key: offset
        type: containing
        value: 20
      - key: sort
        type: equal_to
        value: name
      - key: search
        type: not_matching
        value: '^[0-9]{2}$'
      - key: age
        type: not_matching
        value: '^\\w*$'
      - key: name
        type: matching
        value: 'John.*'
      - key: hello
        type: absent
    cookies:
      - key: foo
        regex: '[0-9]'
      - key: bar
        command: 'equals($it)'
    headers:
      - key: Content-Type
        regex: "application/json.*"
    body:
      - path: $.duck
        type: by_regex
        value: "[0-9]{3}"
      - path: $.duck
        type: by_equality
      - path: $.alpha
        type: by_regex
        predefined: only_alpha_unicode
      - path: $.alpha
        type: by_equality
      - path: $.number
        type: by_regex
        predefined: number
      - path: $.aBoolean
        type: by_regex
        predefined: any_boolean
      - path: $.date
        type: by_date
      - path: $.dateTime
        type: by_timestamp
      - path: $.time
        type: by_time
      - path: "$.['key'].['complex.key']"
        type: by_equality
      - path: $.nullValue
        type: by_null
      - path: $.valueWithMin
        type: by_type
        minOccurrence: 1
      - path: $.valueWithMax
        type: by_type
        maxOccurrence: 3
      - path: $.valueWithMinMax
        type: by_type
        minOccurrence: 1
        maxOccurrence: 3
response:
  status: 200
  cookies:
    foo: 1
    bar: 2
  body:
    duck: 123
    alpha: "abc"
    number: 123
    aBoolean: true
    date: "2017-01-01"
    dateTime: "2017-01-01T01:23:45"
    time: "01:02:34"
    valueWithoutAMatcher: "foo"
    valueWithTypeMatch: "string"
    valueWithMin:
      - 1
      - 2
      - 3
    valueWithMax:
      - 1
      - 2
      - 3
    valueWithMinMax:
      - 1
      - 2
      - 3
    valueWithMinEmpty: []
    valueWithMaxEmpty: []
    key:
      'complex.key': 'foo'
    nullValue: null
  matchers:
    headers:
      - key: Content-Type
        regex: "application/json.*"
    cookies:
      - key: foo
        regex: '[0-9]'
      - key: bar
        command: 'equals($it)'
    body:
      - path: $.duck
        type: by_regex
        value: "[0-9]{3}"
      - path: $.duck
        type: by_equality
      - path: $.alpha
        type: by_regex
        predefined: only_alpha_unicode
      - path: $.alpha
        type: by_equality
      - path: $.number
        type: by_regex
        predefined: number
      - path: $.aBoolean
        type: by_regex
        predefined: any_boolean
      - path: $.date
        type: by_date
      - path: $.dateTime
        type: by_timestamp
      - path: $.time
        type: by_time
      - path: $.valueWithTypeMatch
        type: by_type
      - path: $.valueWithMin
        type: by_type
        minOccurrence: 1
      - path: $.valueWithMax
        type: by_type
        maxOccurrence: 3
      - path: $.valueWithMinMax
        type: by_type
        minOccurrence: 1
        maxOccurrence: 3
      - path: $.valueWithMinEmpty
        type: by_type
        minOccurrence: 0
      - path: $.valueWithMaxEmpty
        type: by_type
        maxOccurrence: 0
      - path: $.duck
        type: by_command
        value: assertThatValueIsANumber($it)
      - path: $.nullValue
        type: by_null
        value: null
  headers:
    Content-Type: application/json

In the preceding example, you can see the dynamic portions of the contract in the matchers sections. For the request part, you can see that, for all fields but valueWithoutAMatcher, the values of the regular expressions that the stub should contain are explicitly set. For the valueWithoutAMatcher, the verification takes place in the same way as without the use of matchers. In that case, the test performs an equality check.

For the response side in the bodyMatchers section, we define the dynamic parts in a similar manner. The only difference is that the byType matchers are also present. The verifier engine checks four fields to verify whether the response from the test has a value for which the JSON path matches the given field, is of the same type as the one defined in the response body, and passes the following check (based on the method being called):

  • For $.valueWithTypeMatch, the engine checks whether the type is the same.

  • For $.valueWithMin, the engine checks the type and asserts whether the size is greater than or equal to the minimum occurrence.

  • For $.valueWithMax, the engine checks the type and asserts whether the size is smaller than or equal to the maximum occurrence.

  • For $.valueWithMinMax, the engine checks the type and asserts whether the size is between the minimum and maximum occurrence.

The resulting test resembles the following example (note that an and section separates the autogenerated assertions from the assertions defined by the matchers):

// given:
 MockMvcRequestSpecification request = given()
   .header("Content-Type", "application/json")
   .body("{\"duck\":123,\"alpha\":\"abc\",\"number\":123,\"aBoolean\":true,\"date\":\"2017-01-01\",\"dateTime\":\"2017-01-01T01:23:45\",\"time\":\"01:02:34\",\"valueWithoutAMatcher\":\"foo\",\"valueWithTypeMatch\":\"string\",\"key\":{\"complex.key\":\"foo\"}}");

// when:
 ResponseOptions response = given().spec(request)
   .get("/get");

// then:
 assertThat(response.statusCode()).isEqualTo(200);
 assertThat(response.header("Content-Type")).matches("application/json.*");
// and:
 DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
 assertThatJson(parsedJson).field("['valueWithoutAMatcher']").isEqualTo("foo");
// and:
 assertThat(parsedJson.read("$.duck", String.class)).matches("[0-9]{3}");
 assertThat(parsedJson.read("$.duck", Integer.class)).isEqualTo(123);
 assertThat(parsedJson.read("$.alpha", String.class)).matches("[\\p{L}]*");
 assertThat(parsedJson.read("$.alpha", String.class)).isEqualTo("abc");
 assertThat(parsedJson.read("$.number", String.class)).matches("-?(\\d*\\.\\d+|\\d+)");
 assertThat(parsedJson.read("$.aBoolean", String.class)).matches("(true|false)");
 assertThat(parsedJson.read("$.date", String.class)).matches("(\\d\\d\\d\\d)-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01])");
 assertThat(parsedJson.read("$.dateTime", String.class)).matches("([0-9]{4})-(1[0-2]|0[1-9])-(3[01]|0[1-9]|[12][0-9])T(2[0-3]|[01][0-9]):([0-5][0-9]):([0-5][0-9])");
 assertThat(parsedJson.read("$.time", String.class)).matches("(2[0-3]|[01][0-9]):([0-5][0-9]):([0-5][0-9])");
 assertThat((Object) parsedJson.read("$.valueWithTypeMatch")).isInstanceOf(java.lang.String.class);
 assertThat((Object) parsedJson.read("$.valueWithMin")).isInstanceOf(java.util.List.class);
 assertThat((java.lang.Iterable) parsedJson.read("$.valueWithMin", java.util.Collection.class)).as("$.valueWithMin").hasSizeGreaterThanOrEqualTo(1);
 assertThat((Object) parsedJson.read("$.valueWithMax")).isInstanceOf(java.util.List.class);
 assertThat((java.lang.Iterable) parsedJson.read("$.valueWithMax", java.util.Collection.class)).as("$.valueWithMax").hasSizeLessThanOrEqualTo(3);
 assertThat((Object) parsedJson.read("$.valueWithMinMax")).isInstanceOf(java.util.List.class);
 assertThat((java.lang.Iterable) parsedJson.read("$.valueWithMinMax", java.util.Collection.class)).as("$.valueWithMinMax").hasSizeBetween(1, 3);
 assertThat((Object) parsedJson.read("$.valueWithMinEmpty")).isInstanceOf(java.util.List.class);
 assertThat((java.lang.Iterable) parsedJson.read("$.valueWithMinEmpty", java.util.Collection.class)).as("$.valueWithMinEmpty").hasSizeGreaterThanOrEqualTo(0);
 assertThat((Object) parsedJson.read("$.valueWithMaxEmpty")).isInstanceOf(java.util.List.class);
 assertThat((java.lang.Iterable) parsedJson.read("$.valueWithMaxEmpty", java.util.Collection.class)).as("$.valueWithMaxEmpty").hasSizeLessThanOrEqualTo(0);
 assertThatValueIsANumber(parsedJson.read("$.duck"));
 assertThat(parsedJson.read("$.['key'].['complex.key']", String.class)).isEqualTo("foo");
Notice that, for the byCommand method, the example calls the assertThatValueIsANumber method, which must be defined in the test base class or statically imported into your tests. Notice also that the byCommand call was converted to assertThatValueIsANumber(parsedJson.read("$.duck"));, which means that the engine took the method name and passed the proper JSON path to it as a parameter.
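A minimal sketch of such a helper in a Groovy base class follows (the instanceof check is an assumption; implement whatever verification you need):

import spock.lang.Specification

abstract class NumberAssertingBaseSpec extends Specification {

    // Called by the generated test as assertThatValueIsANumber(parsedJson.read("$.duck"))
    void assertThatValueIsANumber(Object value) {
        assert value instanceof Number
    }

}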

The resulting WireMock stub is in the following example:

{
  "request" : {
    "urlPath" : "/get",
    "method" : "GET",
    "headers" : {
      "Content-Type" : {
        "matches" : "application/json.*"
      }
    },
    "bodyPatterns" : [ {
      "matchesJsonPath" : "$.['list'].['some'].['nested'][?(@.['anothervalue'] == 4)]"
    }, {
      "matchesJsonPath" : "$[?(@.['valueWithoutAMatcher'] == 'foo')]"
    }, {
      "matchesJsonPath" : "$[?(@.['valueWithTypeMatch'] == 'string')]"
    }, {
      "matchesJsonPath" : "$.['list'].['someother'].['nested'][?(@.['json'] == 'with value')]"
    }, {
      "matchesJsonPath" : "$.['list'].['someother'].['nested'][?(@.['anothervalue'] == 4)]"
    }, {
      "matchesJsonPath" : "$[?(@.duck =~ /([0-9]{3})/)]"
    }, {
      "matchesJsonPath" : "$[?(@.duck == 123)]"
    }, {
      "matchesJsonPath" : "$[?(@.alpha =~ /([\\p{L}]*)/)]"
    }, {
      "matchesJsonPath" : "$[?(@.alpha == 'abc')]"
    }, {
      "matchesJsonPath" : "$[?(@.number =~ /(-?(\\d*\\.\\d+|\\d+))/)]"
    }, {
      "matchesJsonPath" : "$[?(@.aBoolean =~ /((true|false))/)]"
    }, {
      "matchesJsonPath" : "$[?(@.date =~ /((\\d\\d\\d\\d)-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]))/)]"
    }, {
      "matchesJsonPath" : "$[?(@.dateTime =~ /(([0-9]{4})-(1[0-2]|0[1-9])-(3[01]|0[1-9]|[12][0-9])T(2[0-3]|[01][0-9]):([0-5][0-9]):([0-5][0-9]))/)]"
    }, {
      "matchesJsonPath" : "$[?(@.time =~ /((2[0-3]|[01][0-9]):([0-5][0-9]):([0-5][0-9]))/)]"
    }, {
      "matchesJsonPath" : "$.list.some.nested[?(@.json =~ /(.*)/)]"
    }, {
      "matchesJsonPath" : "$[?(@.valueWithMin.size() >= 1)]"
    }, {
      "matchesJsonPath" : "$[?(@.valueWithMax.size() <= 3)]"
    }, {
      "matchesJsonPath" : "$[?(@.valueWithMinMax.size() >= 1 && @.valueWithMinMax.size() <= 3)]"
    }, {
      "matchesJsonPath" : "$[?(@.valueWithOccurrence.size() >= 4 && @.valueWithOccurrence.size() <= 4)]"
    } ]
  },
  "response" : {
    "status" : 200,
    "body" : "{\"duck\":123,\"alpha\":\"abc\",\"number\":123,\"aBoolean\":true,\"date\":\"2017-01-01\",\"dateTime\":\"2017-01-01T01:23:45\",\"time\":\"01:02:34\",\"valueWithoutAMatcher\":\"foo\",\"valueWithTypeMatch\":\"string\",\"valueWithMin\":[1,2,3],\"valueWithMax\":[1,2,3],\"valueWithMinMax\":[1,2,3],\"valueWithOccurrence\":[1,2,3,4]}",
    "headers" : {
      "Content-Type" : "application/json"
    },
    "transformers" : [ "response-template" ]
  }
}
If you use a matcher, the part of the request and response that the matcher addresses with the JSON Path gets removed from the assertion. In the case of verifying a collection, you must create matchers for all the elements of the collection.

Consider the following example:

Contract.make {
    request {
        method 'GET'
        url("/foo")
    }
    response {
        status OK()
        body(events: [[
                                 operation          : 'EXPORT',
                                 eventId            : '16f1ed75-0bcc-4f0d-a04d-3121798faf99',
                                 status             : 'OK'
                         ], [
                                 operation          : 'INPUT_PROCESSING',
                                 eventId            : '3bb4ac82-6652-462f-b6d1-75e424a0024a',
                                 status             : 'OK'
                         ]
                ]
        )
        bodyMatchers {
            jsonPath('$.events[0].operation', byRegex('.+'))
            jsonPath('$.events[0].eventId', byRegex('^([a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$'))
            jsonPath('$.events[0].status', byRegex('.+'))
        }
    }
}

The preceding code leads to creating the following test (the code block shows only the assertion section):

and:
    DocumentContext parsedJson = JsonPath.parse(response.body.asString())
    assertThatJson(parsedJson).array("['events']").contains("['eventId']").isEqualTo("16f1ed75-0bcc-4f0d-a04d-3121798faf99")
    assertThatJson(parsedJson).array("['events']").contains("['operation']").isEqualTo("EXPORT")
    assertThatJson(parsedJson).array("['events']").contains("['operation']").isEqualTo("INPUT_PROCESSING")
    assertThatJson(parsedJson).array("['events']").contains("['eventId']").isEqualTo("3bb4ac82-6652-462f-b6d1-75e424a0024a")
    assertThatJson(parsedJson).array("['events']").contains("['status']").isEqualTo("OK")
and:
    assertThat(parsedJson.read("\$.events[0].operation", String.class)).matches(".+")
    assertThat(parsedJson.read("\$.events[0].eventId", String.class)).matches("^([a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})\$")
    assertThat(parsedJson.read("\$.events[0].status", String.class)).matches(".+")

As you can see, the assertion is malformed: only the first element of the array got asserted by the matchers. To fix this, you should apply the assertion to the whole $.events collection and assert it with the byCommand(…​) method, as the following sketch shows.
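The following sketch shows that fix (the assertEvents(…​) method name and its assertions are assumptions; you would define the method in your test base class):

Contract.make {
    request {
        method 'GET'
        url("/foo")
    }
    response {
        status OK()
        body(events: [ /* the same two events as in the preceding example */ ])
        bodyMatchers {
            // delegate verification of the whole collection to a custom method
            jsonPath('$.events', byCommand('assertEvents($it)'))
        }
    }
}

// defined in the test base class:
void assertEvents(Object events) {
    assert events instanceof List
    events.each {
        assert it.operation ==~ /.+/
        assert it.status ==~ /.+/
    }
}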

Asynchronous Support

If you use asynchronous communication on the server side (your controllers are returning Callable, DeferredResult, and so on), then, inside your contract, you must provide an async() method in the response section. The following code shows an example:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        method GET()
        url '/get'
    }
    response {
        status OK()
        body 'Passed'
        async()
    }
}
yml
response:
    async: true
java
class contract implements Supplier<Collection<Contract>> {

    @Override
    public Collection<Contract> get() {
        return Collections.singletonList(Contract.make(c -> {
            c.request(r -> {
                // ...
            });
            c.response(r -> {
                r.async();
                // ...
            });
        }));
    }

}
kotlin
import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract

contract {
    request {
        // ...
    }
    response {
        async = true
        // ...
    }
}

You can also use the fixedDelayMilliseconds method or property to add delay to your stubs. The following example shows how to do so:

groovy
org.springframework.cloud.contract.spec.Contract.make {
    request {
        method GET()
        url '/get'
    }
    response {
        status 200
        body 'Passed'
        fixedDelayMilliseconds 1000
    }
}
yml
response:
    fixedDelayMilliseconds: 1000
java
class contract implements Supplier<Collection<Contract>> {

    @Override
    public Collection<Contract> get() {
        return Collections.singletonList(Contract.make(c -> {
            c.request(r -> {
                // ...
            });
            c.response(r -> {
                r.fixedDelayMilliseconds(1000);
                // ...
            });
        }));
    }

}
kotlin
import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract

contract {
    request {
        // ...
    }
    response {
        delay = fixedMilliseconds(1000)
        // ...
    }
}
XML Support for HTTP

For HTTP contracts, we also support using XML in the request and response body. The XML body has to be passed within the body element as a String or GString. Also, body matchers can be provided for both the request and the response. In place of the jsonPath(…​) method, you should use the org.springframework.cloud.contract.spec.internal.BodyMatchers.xPath method, with the desired xPath provided as the first argument and the appropriate MatchingType as the second. All the body matchers apart from byType() are supported.

The following example shows a Groovy DSL contract with XML in the response body:

groovy
Contract.make {
    request {
        method GET()
        urlPath '/get'
        headers {
            contentType(applicationXml())
        }
    }
    response {
        status(OK())
        headers {
            contentType(applicationXml())
        }
        body """
<test>
<duck type='xtype'>123</duck>
<alpha>abc</alpha>
<list>
<elem>abc</elem>
<elem>def</elem>
<elem>ghi</elem>
</list>
<number>123</number>
<aBoolean>true</aBoolean>
<date>2017-01-01</date>
<dateTime>2017-01-01T01:23:45</dateTime>
<time>01:02:34</time>
<valueWithoutAMatcher>foo</valueWithoutAMatcher>
<key><complex>foo</complex></key>
</test>"""
        bodyMatchers {
            xPath('/test/duck/text()', byRegex("[0-9]{3}"))
            xPath('/test/duck/text()', byCommand('equals($it)'))
            xPath('/test/duck/xxx', byNull())
            xPath('/test/duck/text()', byEquality())
            xPath('/test/alpha/text()', byRegex(onlyAlphaUnicode()))
            xPath('/test/alpha/text()', byEquality())
            xPath('/test/number/text()', byRegex(number()))
            xPath('/test/date/text()', byDate())
            xPath('/test/dateTime/text()', byTimestamp())
            xPath('/test/time/text()', byTime())
            xPath('/test/*/complex/text()', byEquality())
            xPath('/test/duck/@type', byEquality())
        }
    }
}
java
import java.util.function.Supplier;

import org.springframework.cloud.contract.spec.Contract;

class contract_xml implements Supplier<Contract> {

    @Override
    public Contract get() {
        return Contract.make(c -> {
            c.request(r -> {
                r.method(r.GET());
                r.urlPath("/get");
                r.headers(h -> {
                    h.contentType(h.applicationXml());
                });
            });
            c.response(r -> {
                r.status(r.OK());
                r.headers(h -> {
                    h.contentType(h.applicationXml());
                });
                r.body("<test>\n" + "<duck type='xtype'>123</duck>\n"
                        + "<alpha>abc</alpha>\n" + "<list>\n" + "<elem>abc</elem>\n"
                        + "<elem>def</elem>\n" + "<elem>ghi</elem>\n" + "</list>\n"
                        + "<number>123</number>\n" + "<aBoolean>true</aBoolean>\n"
                        + "<date>2017-01-01</date>\n"
                        + "<dateTime>2017-01-01T01:23:45</dateTime>\n"
                        + "<time>01:02:34</time>\n"
                        + "<valueWithoutAMatcher>foo</valueWithoutAMatcher>\n"
                        + "<key><complex>foo</complex></key>\n" + "</test>");
                r.bodyMatchers(m -> {
                    m.xPath("/test/duck/text()", m.byRegex("[0-9]{3}"));
                    m.xPath("/test/duck/text()", m.byCommand("equals($it)"));
                    m.xPath("/test/duck/xxx", m.byNull());
                    m.xPath("/test/duck/text()", m.byEquality());
                    m.xPath("/test/alpha/text()", m.byRegex(r.onlyAlphaUnicode()));
                    m.xPath("/test/alpha/text()", m.byEquality());
                    m.xPath("/test/number/text()", m.byRegex(r.number()));
                    m.xPath("/test/date/text()", m.byDate());
                    m.xPath("/test/dateTime/text()", m.byTimestamp());
                    m.xPath("/test/time/text()", m.byTime());
                    m.xPath("/test/*/complex/text()", m.byEquality());
                    m.xPath("/test/duck/@type", m.byEquality());
                });
            });
        });
    }

}
kotlin
import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract

contract {
    request {
        method = GET
        urlPath = path("/get")
        headers {
            contentType = APPLICATION_XML
        }
    }
    response {
        status = OK
        headers {
            contentType = APPLICATION_XML
        }
        body = body("<test>\n" + "<duck type='xtype'>123</duck>\n"
                + "<alpha>abc</alpha>\n" + "<list>\n" + "<elem>abc</elem>\n"
                + "<elem>def</elem>\n" + "<elem>ghi</elem>\n" + "</list>\n"
                + "<number>123</number>\n" + "<aBoolean>true</aBoolean>\n"
                + "<date>2017-01-01</date>\n"
                + "<dateTime>2017-01-01T01:23:45</dateTime>\n"
                + "<time>01:02:34</time>\n"
                + "<valueWithoutAMatcher>foo</valueWithoutAMatcher>\n"
                + "<key><complex>foo</complex></key>\n" + "</test>")
        bodyMatchers {
            xPath("/test/duck/text()", byRegex("[0-9]{3}"))
            xPath("/test/duck/text()", byCommand("equals(\$it)"))
            xPath("/test/duck/xxx", byNull)
            xPath("/test/duck/text()", byEquality)
            xPath("/test/alpha/text()", byRegex(onlyAlphaUnicode))
            xPath("/test/alpha/text()", byEquality)
            xPath("/test/number/text()", byRegex(number))
            xPath("/test/date/text()", byDate)
            xPath("/test/dateTime/text()", byTimestamp)
            xPath("/test/time/text()", byTime)
            xPath("/test/*/complex/text()", byEquality)
            xPath("/test/duck/@type", byEquality)
        }
    }
}

The following example shows an automatically generated test for XML in the response body:

@Test
public void validate_xmlMatches() throws Exception {
    // given:
    MockMvcRequestSpecification request = given()
                .header("Content-Type", "application/xml");

    // when:
    ResponseOptions response = given().spec(request).get("/get");

    // then:
    assertThat(response.statusCode()).isEqualTo(200);
    // and:
    DocumentBuilder documentBuilder = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder();
    Document parsedXml = documentBuilder.parse(new InputSource(
                new StringReader(response.getBody().asString())));
    // and:
    assertThat(valueFromXPath(parsedXml, "/test/list/elem/text()")).isEqualTo("abc");
    assertThat(valueFromXPath(parsedXml,"/test/list/elem[2]/text()")).isEqualTo("def");
    assertThat(valueFromXPath(parsedXml, "/test/duck/text()")).matches("[0-9]{3}");
    assertThat(nodeFromXPath(parsedXml, "/test/duck/xxx")).isNull();
    assertThat(valueFromXPath(parsedXml, "/test/alpha/text()")).matches("[\\p{L}]*");
    assertThat(valueFromXPath(parsedXml, "/test/*/complex/text()")).isEqualTo("foo");
    assertThat(valueFromXPath(parsedXml, "/test/duck/@type")).isEqualTo("xtype");
}
Multiple Contracts in One File

You can define multiple contracts in one file. Such a contract might resemble the following example:

groovy
import org.springframework.cloud.contract.spec.Contract

[
    Contract.make {
        name("should post a user")
        request {
            method 'POST'
            url('/users/1')
        }
        response {
            status OK()
        }
    },
    Contract.make {
        request {
            method 'POST'
            url('/users/2')
        }
        response {
            status OK()
        }
    }
]
yml
---
name: should post a user
request:
  method: POST
  url: /users/1
response:
  status: 200
---
request:
  method: POST
  url: /users/2
response:
  status: 200
---
request:
  method: POST
  url: /users/3
response:
  status: 200
java
class contract implements Supplier<Collection<Contract>> {

    @Override
    public Collection<Contract> get() {
        return Arrays.asList(
            Contract.make(c -> {
                c.name("should post a user");
                // ...
            }), Contract.make(c -> {
                // ...
            }), Contract.make(c -> {
                // ...
            })
        );
    }

}
kotlin
import org.springframework.cloud.contract.spec.ContractDsl.Companion.contract

arrayOf(
    contract {
        name("should post a user")
        // ...
    },
    contract {
        // ...
    },
    contract {
        // ...
    }
)

In the preceding example, one contract has the name field and the other does not. This leads to generation of two tests that look more or less like the following:

package org.springframework.cloud.contract.verifier.tests.com.hello;

import com.example.TestBase;
import com.jayway.jsonpath.DocumentContext;
import com.jayway.jsonpath.JsonPath;
import com.jayway.restassured.module.mockmvc.specification.MockMvcRequestSpecification;
import com.jayway.restassured.response.ResponseOptions;
import org.junit.Test;

import static com.jayway.restassured.module.mockmvc.RestAssuredMockMvc.*;
import static com.toomuchcoding.jsonassert.JsonAssertion.assertThatJson;
import static org.assertj.core.api.Assertions.assertThat;

public class V1Test extends TestBase {

    @Test
    public void validate_should_post_a_user() throws Exception {
        // given:
            MockMvcRequestSpecification request = given();

        // when:
            ResponseOptions response = given().spec(request)
                    .post("/users/1");

        // then:
            assertThat(response.statusCode()).isEqualTo(200);
    }

    @Test
    public void validate_withList_1() throws Exception {
        // given:
            MockMvcRequestSpecification request = given();

        // when:
            ResponseOptions response = given().spec(request)
                    .post("/users/2");

        // then:
            assertThat(response.statusCode()).isEqualTo(200);
    }

}

Notice that, for the contract that has the name field, the generated test method is named validate_should_post_a_user. The one that does not have the name field is called validate_withList_1. It corresponds to the name of the file WithList.groovy and the index of the contract in the list.

The generated stubs are shown in the following example:

should post a user.json
1_WithList.json

The first file got the name parameter from the contract. The second got the name of the contract file (WithList.groovy) prefixed with the index (in this case, the contract had an index of 1 in the list of contracts in the file).

It is much better to name your contracts, because doing so makes your tests far more meaningful.
Stateful Contracts

Stateful contracts (known also as scenarios) are contract definitions that should be read in order. This might be useful in the following situations:

  • You want to execute the contract in a precisely defined order, since you use Spring Cloud Contract to test your stateful application

We really discourage you from doing that, since contract tests should be stateless.
  • You want the same endpoint to return different results for the same request.

To create stateful contracts (or scenarios), you need to use the proper naming convention while creating your contracts. The convention requires including an order number followed by an underscore. This works regardless of whether you work with YAML or Groovy. The following listing shows an example:

my_contracts_dir\
  scenario1\
    1_login.groovy
    2_showCart.groovy
    3_logout.groovy

Such a tree causes Spring Cloud Contract Verifier to generate a WireMock scenario named scenario1 with the following three steps (a stub-level sketch follows the list):

  1. login, marked as Started pointing to…​

  2. showCart, marked as Step1 pointing to…​

  3. logout, marked as Step2 (which closes the scenario).
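As an illustration, the stub generated for the first step could carry scenario state resembling the following sketch (the /login request and response details are assumptions; the scenario fields are standard WireMock stub keys):

{
  "scenarioName" : "scenario1",
  "requiredScenarioState" : "Started",
  "newScenarioState" : "Step1",
  "request" : {
    "method" : "GET",
    "urlPath" : "/login"
  },
  "response" : {
    "status" : 200
  }
}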

You can find more details about WireMock scenarios at https://wiremock.org/docs/stateful-behaviour/.

14.3.3. Integrations

JAX-RS

Spring Cloud Contract supports the JAX-RS 2 Client API. The base class needs to define a protected WebTarget webTarget field and handle server initialization. The only option for testing a JAX-RS API is to start a web server. Also, a request with a body needs to have a content type set. Otherwise, the default of application/octet-stream is used.

In order to use JAX-RS mode, use the following settings:

testMode = 'JAXRSCLIENT'
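A minimal sketch of such a base class follows (how the server gets started is an assumption; any mechanism that leaves a server running before the tests execute works):

import javax.ws.rs.client.ClientBuilder
import javax.ws.rs.client.WebTarget

import spock.lang.Specification

abstract class JaxRsBaseSpec extends Specification {

    // The generated tests send their requests through this target,
    // so it must point at a running server.
    protected WebTarget webTarget

    def setup() {
        // assumption: the application under test listens on port 8080
        webTarget = ClientBuilder.newClient().target('http://localhost:8080')
    }

}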

The following example shows a generated test API:

                    """\
package com.example;

import com.jayway.jsonpath.DocumentContext;
import com.jayway.jsonpath.JsonPath;
import org.junit.Test;
import org.junit.Rule;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Response;

import static org.springframework.cloud.contract.verifier.assertion.SpringCloudContractAssertions.assertThat;
import static org.springframework.cloud.contract.verifier.util.ContractVerifierUtil.*;
import static com.toomuchcoding.jsonassert.JsonAssertion.assertThatJson;
import static javax.ws.rs.client.Entity.*;

@SuppressWarnings("rawtypes")
public class FooTest {
\tWebTarget webTarget;

\[email protected]
\tpublic void validate_() throws Exception {

\t\t// when:
\t\t\tResponse response = webTarget
\t\t\t\t\t\t\t.path("/users")
\t\t\t\t\t\t\t.queryParam("limit", "10")
\t\t\t\t\t\t\t.queryParam("offset", "20")
\t\t\t\t\t\t\t.queryParam("filter", "email")
\t\t\t\t\t\t\t.queryParam("sort", "name")
\t\t\t\t\t\t\t.queryParam("search", "55")
\t\t\t\t\t\t\t.queryParam("age", "99")
\t\t\t\t\t\t\t.queryParam("name", "Denis.Stepanov")
\t\t\t\t\t\t\t.queryParam("email", "[email protected]")
\t\t\t\t\t\t\t.request()
\t\t\t\t\t\t\t.build("GET")
\t\t\t\t\t\t\t.invoke();
\t\t\tString responseAsString = response.readEntity(String.class);

\t\t// then:
\t\t\tassertThat(response.getStatus()).isEqualTo(200);

\t\t// and:
\t\t\tDocumentContext parsedJson = JsonPath.parse(responseAsString);
\t\t\tassertThatJson(parsedJson).field("['property1']").isEqualTo("a");
\t}

}

"""
WebFlux with WebTestClient

You can work with WebFlux by using WebTestClient. The following listing shows how to configure WebTestClient as the test mode:

Maven
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
    <configuration>
        <testMode>WEBTESTCLIENT</testMode>
    </configuration>
</plugin>
Gradle
contracts {
        testMode = 'WEBTESTCLIENT'
}

The following example shows how to set up a WebTestClient base class and RestAssured for WebFlux:

import io.restassured.module.webtestclient.RestAssuredWebTestClient;
import org.junit.Before;

public abstract class BeerRestBase {

    @Before
    public void setup() {
        RestAssuredWebTestClient.standaloneSetup(
                new ProducerController(personToCheck -> personToCheck.age >= 20));
    }

}
The WebTestClient mode is faster than the EXPLICIT mode.
WebFlux with Explicit Mode

You can also use WebFlux with the explicit mode in your generated tests. The following example shows how to configure the explicit mode:

Maven
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
    <configuration>
        <testMode>EXPLICIT</testMode>
    </configuration>
</plugin>
Gradle
contracts {
        testMode = 'EXPLICIT'
}

The following example shows how to set up a base class and RestAssured for WebFlux:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = BeerRestBase.Config.class,
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = "server.port=0")
public abstract class BeerRestBase {

    // your tests go here

    // in this config class you define all controllers and mocked services
    @Configuration
    @EnableAutoConfiguration
    static class Config {

        @Bean
        PersonCheckingService personCheckingService() {
            return personToCheck -> personToCheck.age >= 20;
        }

        @Bean
        ProducerController producerController() {
            return new ProducerController(personCheckingService());
        }
    }

}
Working with Context Paths

Spring Cloud Contract supports context paths.

The only change needed to fully support context paths is a switch on the producer side: the autogenerated tests must use the EXPLICIT test mode (the consumer side remains untouched). The following example shows how to set the test mode to EXPLICIT:

Maven
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${spring-cloud-contract.version}</version>
    <extensions>true</extensions>
    <configuration>
        <testMode>EXPLICIT</testMode>
    </configuration>
</plugin>
Gradle
contracts {
        testMode = 'EXPLICIT'
}

That way, you generate a test that does not use MockMvc. It means that you generate real requests and you need to set up your generated test’s base class to work on a real socket.

Consider the following contract:

org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'GET'
        url '/my-context-path/url'
    }
    response {
        status OK()
    }
}

The following example shows how to set up a base class and RestAssured:

import io.restassured.RestAssured;
import org.junit.Before;
import org.springframework.boot.web.server.LocalServerPort;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest(classes = ContextPathTestingBaseClass.class, webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ContextPathTestingBaseClass {

    @LocalServerPort int port;

    @Before
    public void setup() {
        RestAssured.baseURI = "http://localhost";
        RestAssured.port = this.port;
    }
}

If you do it this way:

  • All of your requests in the autogenerated tests are sent to the real endpoint with your context path included (for example, /my-context-path/url).

  • Your contracts reflect that you have a context path. Your generated stubs also have that information (for example, in the stubs, you have to call /my-context-path/url).
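For completeness, the following sketch shows one way the producer side can switch the context path on in the base class (server.servlet.context-path is a standard Spring Boot property; the class names follow the earlier example):

@SpringBootTest(classes = ContextPathTestingBaseClass.class,
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = "server.servlet.context-path=/my-context-path")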

Working with REST Docs

You can use Spring REST Docs to generate documentation (for example, in Asciidoc format) for an HTTP API with Spring MockMvc, WebTestClient, or RestAssured. At the same time that you generate documentation for your API, you can also generate WireMock stubs by using Spring Cloud Contract WireMock. To do so, write your normal REST Docs test cases and use @AutoConfigureRestDocs to have stubs automatically generated in the REST Docs output directory.

[Figure: rest docs]

The following example uses MockMvc:

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureRestDocs(outputDir = "target/snippets")
@AutoConfigureMockMvc
public class ApplicationTests {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void contextLoads() throws Exception {
        mockMvc.perform(get("/resource"))
                .andExpect(content().string("Hello World"))
                .andDo(document("resource"));
    }
}

This test generates a WireMock stub at target/snippets/stubs/resource.json. It matches all GET requests to the /resource path. The same example with WebTestClient (used for testing Spring WebFlux applications) would be as follows:

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureRestDocs(outputDir = "target/snippets")
@AutoConfigureWebTestClient
public class ApplicationTests {

    @Autowired
    private WebTestClient client;

    @Test
    public void contextLoads() throws Exception {
        client.get().uri("/resource").exchange()
                .expectBody(String.class).isEqualTo("Hello World")
                .consumeWith(document("resource"));
    }
}

Without any additional configuration, these tests create a stub with a request matcher for the HTTP method and all headers except host and content-length. To match the request more precisely (for example, to match the body of a POST or PUT), we need to explicitly create