Spring Cloud provides tools for developers to quickly build some of the common patterns in distributed systems (e.g. configuration management, service discovery, circuit breakers, intelligent routing, micro-proxy, control bus, one-time tokens, global locks, leadership election, distributed sessions, cluster state). Coordination of distributed systems leads to boilerplate patterns, and using Spring Cloud developers can quickly stand up services and applications that implement those patterns. They will work well in any distributed environment, including the developer’s own laptop, bare metal data centres, and managed platforms such as Cloud Foundry.

Version: Camden.SR3

Features

Spring Cloud focuses on providing a good out-of-the-box experience for typical use cases, and an extensibility mechanism to cover others.

  • Distributed/versioned configuration

  • Service registration and discovery

  • Routing

  • Service-to-service calls

  • Load balancing

  • Circuit Breakers

  • Global locks

  • Leadership election and cluster state

  • Distributed messaging

Cloud Native Applications

Cloud Native is a style of application development that encourages easy adoption of best practices in the areas of continuous delivery and value-driven development. A related discipline is that of building 12-factor Apps in which development practices are aligned with delivery and operations goals, for instance by using declarative programming and management and monitoring. Spring Cloud facilitates these styles of development in a number of specific ways and the starting point is a set of features that all components in a distributed system either need or need easy access to when required.

Many of those features are covered by Spring Boot, which we build on in Spring Cloud. Some more are delivered by Spring Cloud as two libraries: Spring Cloud Context and Spring Cloud Commons. Spring Cloud Context provides utilities and special services for the ApplicationContext of a Spring Cloud application (bootstrap context, encryption, refresh scope and environment endpoints). Spring Cloud Commons is a set of abstractions and common classes used in different Spring Cloud implementations (e.g. Spring Cloud Netflix vs. Spring Cloud Consul).

If you are getting an exception due to "Illegal key size" and you are using Sun’s JDK, you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files and extract them into the JDK/jre/lib/security folder (for whichever version of JRE/JDK x64/x86 you are using).

Note
Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation, or if you find an error, please find the source code and issue trackers for the project on GitHub.

Spring Cloud Context: Application Context Services

Spring Boot has an opinionated view of how to build an application with Spring: for instance it has conventional locations for common configuration files, and endpoints for common management and monitoring tasks. Spring Cloud builds on top of that and adds a few features that all components in a system will probably use or occasionally need.

The Bootstrap Application Context

A Spring Cloud application operates by creating a "bootstrap" context, which is a parent context for the main application. Out of the box it is responsible for loading configuration properties from the external sources, and also decrypting properties in the local external configuration files. The two contexts share an Environment which is the source of external properties for any Spring application. Bootstrap properties are added with high precedence, so they cannot be overridden by local configuration, by default.

The bootstrap context uses a different convention for locating external configuration than the main application context, so instead of application.yml (or .properties) you use bootstrap.yml, keeping the external configuration for bootstrap and main context nicely separate. Example:

bootstrap.yml
spring:
  application:
    name: foo
  cloud:
    config:
      uri: ${SPRING_CONFIG_URI:http://localhost:8888}

It is a good idea to set the spring.application.name (in bootstrap.yml or application.yml) if your application needs any application-specific configuration from the server.

You can disable the bootstrap process completely by setting spring.cloud.bootstrap.enabled=false (e.g. in System properties).

Application Context Hierarchies

If you build an application context from SpringApplication or SpringApplicationBuilder, then the Bootstrap context is added as a parent to that context. It is a feature of Spring that child contexts inherit property sources and profiles from their parent, so the "main" application context will contain additional property sources, compared to building the same context without Spring Cloud Config. The additional property sources are:

  • "bootstrap": an optional CompositePropertySource appears with high priority if any PropertySourceLocators are found in the Bootstrap context, and they have non-empty properties. An example would be properties from the Spring Cloud Config Server. See below for instructions on how to customize the contents of this property source.

  • "applicationConfig: [classpath:bootstrap.yml]" (and friends if Spring profiles are active). If you have a bootstrap.yml (or properties) then those properties are used to configure the Bootstrap context, and then they get added to the child context when its parent is set. They have lower precedence than the application.yml (or properties) and any other property sources that are added to the child as a normal part of the process of creating a Spring Boot application. See below for instructions on how to customize the contents of these property sources.

Because of the ordering rules of property sources the "bootstrap" entries take precedence, but note that these do not contain any data from bootstrap.yml, which has very low precedence, but can be used to set defaults.

You can extend the context hierarchy by simply setting the parent context of any ApplicationContext you create, e.g. using its own interface, or with the SpringApplicationBuilder convenience methods (parent(), child() and sibling()). The bootstrap context will be the parent of the most senior ancestor that you create yourself. Every context in the hierarchy will have its own "bootstrap" property source (possibly empty) to avoid promoting values inadvertently from parents down to their descendants. Every context in the hierarchy can also (in principle) have a different spring.application.name and hence a different remote property source if there is a Config Server. Normal Spring application context behaviour rules apply to property resolution: properties from a child context override those in the parent, by name and also by property source name (if the child has a property source with the same name as the parent, the one from the parent is not included in the child).

Note that the SpringApplicationBuilder allows you to share an Environment amongst the whole hierarchy, but that is not the default. Thus, sibling contexts in particular do not need to have the same profiles or property sources, even though they will share common things with their parent.
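
For example, a simple hierarchy can be built with the SpringApplicationBuilder convenience methods. This is only a sketch (ParentConfiguration and ChildConfiguration are hypothetical @Configuration classes):

new SpringApplicationBuilder(ParentConfiguration.class)
        // the bootstrap context becomes the parent of this (most senior) context
        .child(ChildConfiguration.class)
        .run(args);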

Changing the Location of Bootstrap Properties

The bootstrap.yml (or .properties) location can be specified using spring.cloud.bootstrap.name (default "bootstrap") or spring.cloud.bootstrap.location (default empty), e.g. in System properties. Those properties behave like the spring.config.* variants with the same name, in fact they are used to set up the bootstrap ApplicationContext by setting those properties in its Environment. If there is an active profile (from spring.profiles.active or through the Environment API in the context you are building) then properties in that profile will be loaded as well, just like in a regular Spring Boot app, e.g. from bootstrap-development.properties for a "development" profile.
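
For example, to load the bootstrap configuration from a file named cloud.properties in a custom location, you could launch the application like this (a sketch; the name and directory are hypothetical):

$ java -Dspring.cloud.bootstrap.name=cloud \
       -Dspring.cloud.bootstrap.location=file:///opt/config/ \
       -jar app.jar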

Overriding the Values of Remote Properties

The property sources that are added to your application by the bootstrap context are often "remote" (e.g. from a Config Server), and by default they cannot be overridden locally, except on the command line. If you want to allow your applications to override the remote properties with their own System properties or config files, the remote property source has to grant it permission by setting spring.cloud.config.allowOverride=true (it doesn’t work to set this locally). Once that flag is set there are some finer grained settings to control the location of the remote properties in relation to System properties and the application’s local configuration: spring.cloud.config.overrideNone=true to override with any local property source, and spring.cloud.config.overrideSystemProperties=false if only System properties and env vars should override the remote settings, but not the local config files.
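
For example, a remote application.yml that allows clients' System properties and environment variables (but not their local config files) to take precedence could contain this (a sketch of the flags described above):

spring:
  cloud:
    config:
      allowOverride: true
      overrideSystemProperties: false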

Customizing the Bootstrap Configuration

The bootstrap context can be trained to do anything you like by adding entries to /META-INF/spring.factories under the key org.springframework.cloud.bootstrap.BootstrapConfiguration. This is a comma-separated list of Spring @Configuration classes which will be used to create the context. Any beans that you want to be available to the main application context for autowiring can be created here, and also there is a special contract for @Beans of type ApplicationContextInitializer. Classes can be marked with an @Order if you want to control the startup sequence (the default order is "last").

Warning
Be careful when adding custom BootstrapConfiguration that the classes you add are not @ComponentScanned by mistake into your "main" application context, where they might not be needed. Use a separate package name for boot configuration classes that is not already covered by your @ComponentScan or @SpringBootApplication annotated configuration classes.

The bootstrap process ends by injecting initializers into the main SpringApplication instance (i.e. the normal Spring Boot startup sequence, whether it is running as a standalone app or deployed in an application server). First a bootstrap context is created from the classes found in spring.factories and then all @Beans of type ApplicationContextInitializer are added to the main SpringApplication before it is started.
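
For example, a custom bootstrap configuration that contributes an ApplicationContextInitializer could look like this (a sketch; the class and property names are hypothetical, and the class would be listed in spring.factories as shown in the next section):

@Configuration
public class CustomBootstrapConfiguration {

    @Bean
    public ApplicationContextInitializer<ConfigurableApplicationContext> customInitializer() {
        return new ApplicationContextInitializer<ConfigurableApplicationContext>() {
            @Override
            public void initialize(ConfigurableApplicationContext context) {
                // runs against the main application context before it is refreshed
                context.getEnvironment().getPropertySources().addFirst(
                        new MapPropertySource("custom", Collections
                                .<String, Object>singletonMap("custom.marker", "present")));
            }
        };
    }

}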

Customizing the Bootstrap Property Sources

The default property source for external configuration added by the bootstrap process is the Config Server, but you can add additional sources by adding beans of type PropertySourceLocator to the bootstrap context (via spring.factories). You could use this to insert additional properties from a different server, or from a database, for instance.

As an example, consider the following trivial custom locator:

@Configuration
public class CustomPropertySourceLocator implements PropertySourceLocator {

    @Override
    public PropertySource<?> locate(Environment environment) {
        return new MapPropertySource("customProperty",
                Collections.<String, Object>singletonMap("property.from.sample.custom.source", "worked as intended"));
    }

}

The Environment that is passed in is the one for the ApplicationContext about to be created, i.e. the one that we are supplying additional property sources for. It will already have its normal Spring Boot-provided property sources, so you can use those to locate a property source specific to this Environment (e.g. by keying it on the spring.application.name, as is done in the default Config Server property source locator).

If you create a jar with this class in it and then add a META-INF/spring.factories containing:

org.springframework.cloud.bootstrap.BootstrapConfiguration=sample.custom.CustomPropertySourceLocator

then the "customProperty" PropertySource will show up in any application that includes that jar on its classpath.

Environment Changes

The application will listen for an EnvironmentChangeEvent and react to the change in a couple of standard ways (additional ApplicationListeners can be added as @Beans by the user in the normal way). When an EnvironmentChangeEvent is observed it will have a list of the keys that have changed, and the application will use those to:

  • Re-bind any @ConfigurationProperties beans in the context

  • Set the logger levels for any properties in logging.level.*

Note that the Config Client does not by default poll for changes in the Environment, and generally we would not recommend that approach for detecting changes (although you could set it up with a @Scheduled annotation). If you have a scaled-out client application it is better to broadcast the EnvironmentChangeEvent to all the instances instead of having them poll for changes (e.g. using the Spring Cloud Bus).

The EnvironmentChangeEvent covers a large class of refresh use cases, as long as you can actually make a change to the Environment and publish the event (those APIs are public and part of core Spring). You can verify that the changes are bound to @ConfigurationProperties beans by visiting the /configprops endpoint (a normal Spring Boot Actuator feature). For instance a DataSource can have its maxPoolSize changed at runtime (the default DataSource created by Spring Boot is a @ConfigurationProperties bean) and grow capacity dynamically. Re-binding @ConfigurationProperties does not cover another large class of use cases, where you need more control over the refresh, and where you need a change to be atomic over the whole ApplicationContext. To address those concerns we have @RefreshScope.
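
As a minimal sketch (assuming you hold a reference to the ConfigurableApplicationContext, and Spring Cloud Context is on the classpath), a change can be applied and broadcast like this:

Map<String, Object> changes = Collections.<String, Object>singletonMap(
        "logging.level.com.example", "DEBUG");
// add a property source carrying the new value, then publish the event
// (org.springframework.cloud.context.environment.EnvironmentChangeEvent)
// so that @ConfigurationProperties beans are re-bound and logger levels updated
context.getEnvironment().getPropertySources()
        .addFirst(new MapPropertySource("dynamic", changes));
context.publishEvent(new EnvironmentChangeEvent(
        Collections.singleton("logging.level.com.example")));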

Refresh Scope

A Spring @Bean that is marked as @RefreshScope will get special treatment when there is a configuration change. This addresses the problem of stateful beans that only get their configuration injected when they are initialized. For instance, if a DataSource has open connections when the database URL is changed via the Environment, we probably want the holders of those connections to be able to complete what they are doing. Then the next time a connection is borrowed from the pool, it will be one with the new URL.

Refresh scope beans are lazy proxies that initialize when they are used (i.e. when a method is called), and the scope acts as a cache of initialized values. To force a bean to re-initialize on the next method call you just need to invalidate its cache entry.

The RefreshScope is a bean in the context and it has a public method refreshAll() to refresh all beans in the scope by clearing the target cache. There is also a refresh(String) method to refresh an individual bean by name. This functionality is exposed in the /refresh endpoint (over HTTP or JMX).
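
For example, a bean whose configuration should be re-read on refresh can be declared like this (a sketch; MyService and the my.service.url property are hypothetical):

@Configuration
public class ServiceConfiguration {

    @Bean
    @RefreshScope
    public MyService myService(@Value("${my.service.url}") String url) {
        // re-initialized lazily with the current value after a refresh
        return new MyService(url);
    }

}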

Note
@RefreshScope works (technically) on a @Configuration class, but it might lead to surprising behaviour: e.g. it does not mean that all the @Beans defined in that class are themselves in @RefreshScope. Specifically, anything that depends on those beans cannot rely on them being updated when a refresh is initiated, unless it is itself in @RefreshScope (in which case it will be rebuilt on a refresh and its dependencies re-injected, at which point they will be re-initialized from the refreshed @Configuration).

Encryption and Decryption

Spring Cloud has an Environment pre-processor for decrypting property values locally. It follows the same rules as the Config Server, and has the same external configuration via encrypt.*. Thus you can use encrypted values in the form {cipher}* and as long as there is a valid key then they will be decrypted before the main application context gets the Environment. To use the encryption features in an application you need to include Spring Security RSA in your classpath (Maven co-ordinates "org.springframework.security:spring-security-rsa") and you also need the full strength JCE extensions in your JVM.

If you are getting an exception due to "Illegal key size" and you are using Sun’s JDK, you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files and extract them into the JDK/jre/lib/security folder (for whichever version of JRE/JDK x64/x86 you are using).

Endpoints

For a Spring Boot Actuator application there are some additional management endpoints:

  • POST to /env to update the Environment and rebind @ConfigurationProperties and log levels

  • /refresh for reloading the bootstrap context and refreshing the @RefreshScope beans

  • /restart for closing the ApplicationContext and restarting it (disabled by default)

  • /pause and /resume for calling the Lifecycle methods (stop() and start()) on the ApplicationContext
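
For example, assuming the application listens on port 8080 with the default endpoint paths, an update and a refresh could be triggered like this (a sketch; my.property is hypothetical):

$ curl -X POST localhost:8080/env -d my.property=newvalue
$ curl -X POST localhost:8080/refresh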

Spring Cloud Commons: Common Abstractions

Patterns such as service discovery, load balancing and circuit breakers lend themselves to a common abstraction layer that can be consumed by all Spring Cloud clients, independent of the implementation (e.g. discovery via Eureka or Consul).

Spring RestTemplate as a Load Balancer Client

RestTemplate can be automatically configured to use Ribbon. To create a load balanced RestTemplate, create a RestTemplate @Bean and use the @LoadBalanced qualifier.

Warning
A RestTemplate bean is no longer created via auto configuration. It must be created by individual applications.
@Configuration
public class MyConfiguration {

    @LoadBalanced
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

public class MyClass {
    @Autowired
    private RestTemplate restTemplate;

    public String doOtherStuff() {
        String results = restTemplate.getForObject("http://stores/stores", String.class);
        return results;
    }
}

The URI needs to use a virtual host name (i.e. the service name, not a host name). The Ribbon client is used to create a full physical address. See RibbonAutoConfiguration for details of how the RestTemplate is set up.

Retrying Failed Requests

A load balanced RestTemplate can be configured to retry failed requests. By default this logic is disabled; you can enable it by setting spring.cloud.loadbalancer.retry=true. The load balanced RestTemplate will honor some of the Ribbon configuration values related to retrying failed requests. The properties you can use are client.ribbon.MaxAutoRetries, client.ribbon.MaxAutoRetriesNextServer, and client.ribbon.OkToRetryOnAllOperations. See the Ribbon documentation for a description of what these properties do.

Note
client in the above examples should be replaced with your Ribbon client’s name.
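
For example, for a Ribbon client named "stores" (as used in the RestTemplate example above), the retry configuration could look like this (a sketch of the properties listed above):

spring.cloud.loadbalancer.retry=true
stores.ribbon.MaxAutoRetries=1
stores.ribbon.MaxAutoRetriesNextServer=2
stores.ribbon.OkToRetryOnAllOperations=true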

Multiple RestTemplate objects

If you want a RestTemplate that is not load balanced, create a RestTemplate bean and inject it as normal. To access the load balanced RestTemplate use the @LoadBalanced qualifier when you create your @Bean.

Important
Notice the @Primary annotation on the plain RestTemplate declaration in the example below, to disambiguate the unqualified @Autowired injection.
@Configuration
public class MyConfiguration {

    @LoadBalanced
    @Bean
    RestTemplate loadBalanced() {
        return new RestTemplate();
    }

    @Primary
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

public class MyClass {
    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    @LoadBalanced
    private RestTemplate loadBalanced;

    public String doOtherStuff() {
        return loadBalanced.getForObject("http://stores/stores", String.class);
    }

    public String doStuff() {
        return restTemplate.getForObject("http://example.com", String.class);
    }
}
Tip
If you see errors like java.lang.IllegalArgumentException: Can not set org.springframework.web.client.RestTemplate field com.my.app.Foo.restTemplate to com.sun.proxy.$Proxy89 try injecting RestOperations instead or setting spring.aop.proxyTargetClass=true.

Ignore Network Interfaces

Sometimes it is useful to ignore certain named network interfaces so that they can be excluded from Service Discovery registration (e.g. when running in a Docker container). A list of regular expressions can be set to cause the desired network interfaces to be ignored. The following configuration will ignore the "docker0" interface and all interfaces that start with "veth":

application.yml
spring:
  cloud:
    inetutils:
      ignoredInterfaces:
        - docker0
        - veth.*

You can also force the use of only specified network addresses by supplying a list of regular expressions:

application.yml
spring:
  cloud:
    inetutils:
      preferredNetworks:
        - 192.168
        - 10.0

You can also force the use of only site-local addresses. See Inet4Address.isSiteLocalAddress() for details of what constitutes a site-local address.

application.yml
spring:
  cloud:
    inetutils:
      useOnlySiteLocalInterfaces: true

Spring Cloud Config

Camden.SR3

Spring Cloud Config provides server and client-side support for externalized configuration in a distributed system. With the Config Server you have a central place to manage external properties for applications across all environments. The concepts on both client and server map identically to the Spring Environment and PropertySource abstractions, so they fit very well with Spring applications, but can be used with any application running in any language. As an application moves through the deployment pipeline from dev to test and into production you can manage the configuration between those environments and be certain that applications have everything they need to run when they migrate. The default implementation of the server storage backend uses git so it easily supports labelled versions of configuration environments, as well as being accessible to a wide range of tooling for managing the content. It is easy to add alternative implementations and plug them in with Spring configuration.

Quick Start

Start the server:

$ cd spring-cloud-config-server
$ ../mvnw spring-boot:run

The server is a Spring Boot application so you can run it from your IDE instead if you prefer (the main class is ConfigServerApplication). Then try out a client:

$ curl localhost:8888/foo/development
{"name":"development","label":"master","propertySources":[
  {"name":"https://github.com/scratches/config-repo/foo-development.properties","source":{"bar":"spam"}},
  {"name":"https://github.com/scratches/config-repo/foo.properties","source":{"foo":"bar"}}
]}

The default strategy for locating property sources is to clone a git repository (at spring.cloud.config.server.git.uri) and use it to initialize a mini SpringApplication. The mini-application’s Environment is used to enumerate property sources and publish them via a JSON endpoint.

The HTTP service has resources in the form:

/{application}/{profile}[/{label}]
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
/{application}-{profile}.properties
/{label}/{application}-{profile}.properties

where the "application" is injected as the spring.config.name in the SpringApplication (i.e. what is normally "application" in a regular Spring Boot app), "profile" is an active profile (or comma-separated list of properties), and "label" is an optional git label (defaults to "master".)

Spring Cloud Config Server pulls configuration for remote clients from a git repository (which must be provided):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo

Client Side Usage

To use these features in an application, just build it as a Spring Boot application that depends on spring-cloud-config-client (e.g. see the test cases for the config-client, or the sample app). The most convenient way to add the dependency is via a Spring Boot starter org.springframework.cloud:spring-cloud-starter-config. There is also a parent pom and BOM (spring-cloud-starter-parent) for Maven users and a Spring IO version management properties file for Gradle and Spring CLI users. Example Maven configuration:

pom.xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.3.5.RELEASE</version>
    <relativePath /> <!-- lookup parent from repository -->
</parent>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Brixton.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

<!-- repositories also needed for snapshots and milestones -->

Then you can create a standard Spring Boot application, like this simple HTTP server:

@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

When it runs it will pick up the external configuration from the default local config server on port 8888 if it is running. To modify the startup behaviour you can change the location of the config server using bootstrap.properties (like application.properties but for the bootstrap phase of an application context), e.g.

spring.cloud.config.uri: http://myconfigserver.com

The bootstrap properties will show up in the /env endpoint as a high-priority property source, e.g.

$ curl localhost:8080/env
{
  "profiles":[],
  "configService:https://github.com/spring-cloud-samples/config-repo/bar.properties":{"foo":"bar"},
  "servletContextInitParams":{},
  "systemProperties":{...},
  ...
}

(a property source called "configService:<URL of remote repository>/<file name>" contains the property "foo" with value "bar" and has the highest priority).

Note
the URL in the property source name is the git repository, not the config server URL.

Spring Cloud Config Server

The Server provides an HTTP, resource-based API for external configuration (name-value pairs, or equivalent YAML content). The server is easily embeddable in a Spring Boot application using the @EnableConfigServer annotation. So this app is a config server:

ConfigServer.java
@SpringBootApplication
@EnableConfigServer
public class ConfigServer {
  public static void main(String[] args) {
    SpringApplication.run(ConfigServer.class, args);
  }
}

Like all Spring Boot apps it runs on port 8080 by default, but you can switch it to the conventional port 8888 in various ways. The easiest, which also sets a default configuration repository, is by launching it with spring.config.name=configserver (there is a configserver.yml in the Config Server jar). Another is to use your own application.properties, e.g.

application.properties
server.port: 8888
spring.cloud.config.server.git.uri: file://${user.home}/config-repo

where ${user.home}/config-repo is a git repository containing YAML and properties files.

Note
on Windows you need an extra "/" in the file URL if it is absolute with a drive prefix, e.g. file:///${user.home}/config-repo.
Tip

Here’s a recipe for creating the git repository in the example above:

$ cd $HOME
$ mkdir config-repo
$ cd config-repo
$ git init .
$ echo info.foo: bar > application.properties
$ git add -A .
$ git commit -m "Add application.properties"
Warning
using the local filesystem for your git repository is intended for testing only. Use a server to host your configuration repositories in production.
Warning
the initial clone of your configuration repository will be quick and efficient if you only keep text files in it. If you start to store binary files, especially large ones, you may experience delays on the first request for configuration and/or out of memory errors in the server.

Environment Repository

Where do you want to store the configuration data for the Config Server? The strategy that governs this behaviour is the EnvironmentRepository, serving Environment objects. This Environment is a shallow copy of the domain from the Spring Environment (including propertySources as the main feature). The Environment resources are parametrized by three variables:

  • {application} maps to "spring.application.name" on the client side;

  • {profile} maps to "spring.profiles.active" on the client (comma separated list); and

  • {label} which is a server side feature labelling a "versioned" set of config files.

Repository implementations generally behave just like a Spring Boot application loading configuration files from a "spring.config.name" equal to the {application} parameter, and "spring.profiles.active" equal to the {profiles} parameter. Precedence rules for profiles are also the same as in a regular Boot application: active profiles take precedence over defaults, and if there are multiple profiles the last one wins (like adding entries to a Map).

Example: a client application has this bootstrap configuration:

bootstrap.yml
spring:
  application:
    name: foo
  profiles:
    active: dev,mysql

(as usual with a Spring Boot application, these properties could also be set as environment variables or command line arguments).

If the repository is file-based, the server will create an Environment from application.yml (shared between all clients), and foo.yml (with foo.yml taking precedence). If the YAML files have documents inside them that point to Spring profiles, those are applied with higher precedence (in order of the profiles listed), and if there are profile-specific YAML (or properties) files these are also applied with higher precedence than the defaults. Higher precedence translates to a PropertySource listed earlier in the Environment. (These are the same rules as apply in a standalone Spring Boot application.)

Git Backend

The default implementation of EnvironmentRepository uses a Git backend, which is very convenient for managing upgrades and physical environments, and also for auditing changes. To change the location of the repository you can set the "spring.cloud.config.server.git.uri" configuration property in the Config Server (e.g. in application.yml). If you set it with a file: prefix it should work from a local repository so you can get started quickly and easily without a server, but in that case the server operates directly on the local repository without cloning it (it doesn’t matter if it’s not bare because the Config Server never makes changes to the "remote" repository). To scale the Config Server up and make it highly available, you would need to have all instances of the server pointing to the same repository, so only a shared file system would work. Even in that case it is better to use the ssh: protocol for a shared filesystem repository, so that the server can clone it and use a local working copy as a cache.

This repository implementation maps the {label} parameter of the HTTP resource to a git label (commit id, branch name or tag). If the git branch or tag name contains a slash ("/") then the label in the HTTP URL should be specified with the special string "(_)" instead (to avoid ambiguity with other URL paths). Be careful with the brackets in the URL if you are using a command line client like curl (e.g. escape them from the shell with quotes '').

Placeholders in Git URI

Spring Cloud Config Server supports a git repository URL with placeholders for the {application} and {profile} (and {label} if you need it, but remember that the label is applied as a git label anyway). So you can easily support a "one repo per application" policy using (for example):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/myorg/{application}

or a "one repo per profile" policy using a similar pattern but with {profile}.

Pattern Matching and Multiple Repositories

There is also support for more complex requirements with pattern matching on the application and profile name. The pattern format is a comma-separated list of {application}/{profile} names with wildcards (where a pattern beginning with a wildcard may need to be quoted). Example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          repos:
            simple: https://github.com/simple/config-repo
            special:
              pattern: special*/dev*,*special*/dev*
              uri: https://github.com/special/config-repo
            local:
              pattern: local*
              uri: file:/home/configsvc/config-repo

If {application}/{profile} does not match any of the patterns, it will use the default uri defined under "spring.cloud.config.server.git.uri". In the above example, for the "simple" repository, the pattern is simple/* (i.e. it only matches one application named "simple" in all profiles). The "local" repository matches all application names beginning with "local" in all profiles (the /* suffix is added automatically to any pattern that doesn’t have a profile matcher).

Note
the "one-liner" short cut used in the "simple" example above can only be used if the only property to be set is the URI. If you need to set anything else (credentials, pattern, etc.) you need to use the full form.

The pattern property in the repo is actually an array, so you can use a YAML array (or [0], [1], etc. suffixes in properties files) to bind to multiple patterns. You may need to do this if you are going to run apps with multiple profiles. Example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          repos:
            development:
              pattern:
                - */development
                - */staging
              uri: https://github.com/development/config-repo
            staging:
              pattern:
                - */qa
                - */production
              uri: https://github.com/staging/config-repo
Note
Spring Cloud will guess that a pattern containing a profile that doesn’t end in * implies that you actually want to match a list of profiles starting with this pattern (so */staging is a shortcut for ["*/staging", "*/staging,*"]). This is common where you need to run apps in the "development" profile locally but also the "cloud" profile remotely, for instance.

Every repository can also optionally store config files in sub-directories, and patterns to search for those directories can be specified as searchPaths. For example at the top level:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          searchPaths: foo,bar*

In this example the server searches for config files in the top level and in the "foo/" sub-directory and also any sub-directory whose name begins with "bar".

By default the server clones remote repositories when configuration is first requested. The server can be configured to clone the repositories at startup. For example at the top level:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://git/common/config-repo.git
          repos:
            team-a:
                pattern: team-a-*
                cloneOnStart: true
                uri: http://git/team-a/config-repo.git
            team-b:
                pattern: team-b-*
                cloneOnStart: false
                uri: http://git/team-b/config-repo.git
            team-c:
                pattern: team-c-*
                uri: http://git/team-a/config-repo.git

In this example the server clones team-a’s config-repo on startup before it accepts any requests. All other repositories will not be cloned until configuration from the repository is requested.

Note
Setting a repository to be cloned when the Config Server starts up can help to identify a misconfigured configuration source (e.g., an invalid repository URI) quickly, while the Config Server is starting up. With cloneOnStart not enabled for a configuration source, the Config Server may start successfully with a misconfigured or invalid configuration source and not detect an error until an application requests configuration from that configuration source.

To use HTTP basic authentication on the remote repository add the "username" and "password" properties separately (not in the URL), e.g.

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          username: trolley
          password: strongpassword

If you don’t use HTTPS and user credentials, SSH should also work out of the box when you store keys in the default directories (~/.ssh) and the uri points to an SSH location, e.g. "git@github.com:configuration/cloud-configuration". It is important that all keys in ~/.ssh/known_hosts are in "ssh-rsa" format. The new "ecdsa-sha2-nistp256" format is NOT supported. The repository is accessed using JGit, so any documentation you find on that should be applicable. HTTPS proxy settings can be set in ~/.git/config or in the same way as for any other JVM process via system properties (-Dhttps.proxyHost and -Dhttps.proxyPort).

Tip
If you don’t know where your ~/.git directory is, use git config --global to manipulate the settings (e.g. git config --global http.sslVerify false).

Placeholders in Git Search Paths

Spring Cloud Config Server also supports a search path with placeholders for the {application} and {profile} (and {label} if you need it). Example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          searchPaths: '{application}'

searches the repository for files in a directory with the same name as the application (as well as in the top level). Wildcards are also valid in a search path with placeholders (any matching directory is included in the search).

Version Control Backend Filesystem Use

Warning
With VCS-based backends (git, svn) files are checked out or cloned to the local filesystem. By default they are put in the system temporary directory with a prefix of config-repo-. On Linux, for example, it could be /tmp/config-repo-<randomid>. Some operating systems routinely clean out temporary directories. This can lead to unexpected behaviour such as missing properties. To avoid this problem, change the directory Config Server uses by setting spring.cloud.config.server.git.basedir or spring.cloud.config.server.svn.basedir to a directory that does not reside in the system temp structure.

File System Backend

There is also a "native" profile in the Config Server that doesn’t use Git, but just loads the config files from the local classpath or file system (any static URL you want to point to with "spring.cloud.config.server.native.searchLocations"). To use the native profile just launch the Config Server with "spring.profiles.active=native".

Note
Remember to use the file: prefix for file resources (the default without a prefix is usually the classpath). Just as with any Spring Boot configuration you can embed ${}-style environment placeholders, but remember that absolute paths in Windows require an extra "/", e.g. file:///${user.home}/config-repo
Warning
The default value of the searchLocations is identical to a local Spring Boot application (so [classpath:/, classpath:/config, file:./, file:./config]). This does not expose the application.properties from the server to all clients because any property sources present in the server are removed before being sent to the client.
Tip
A filesystem backend is great for getting started quickly and for testing. To use it in production you need to be sure that the file system is reliable, and shared across all instances of the Config Server.

The search locations can contain placeholders for {application}, {profile} and {label}. In this way you can segregate the directories in the path, and choose a strategy that makes sense for you (e.g. sub-directory per application, or sub-directory per profile).

If you don’t use placeholders in the search locations, this repository also appends the {label} parameter of the HTTP resource as a suffix on the search path, so properties files are loaded from each search location and from a subdirectory with the same name as the label (the labelled properties take precedence in the Spring Environment). Thus the default behaviour with no placeholders is the same as adding a search location ending with /{label}/. For example file:/tmp/config is the same as file:/tmp/config,file:/tmp/config/{label}.
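
For example, a native Config Server that keeps one sub-directory per application could be configured like this (a sketch; the /opt/config path is hypothetical):

application.yml
spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          searchLocations: file:///opt/config/{application}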

Vault Backend

Spring Cloud Config Server also supports Vault as a backend.

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

For more information on Vault see the Vault quickstart guide.

To enable the config server to use a Vault backend you must run your config server with the vault profile. For example in your config server’s application.properties you can add spring.profiles.active=vault.

By default the config server will assume your Vault server is running at http://127.0.0.1:8200. It will also assume that the name of the backend is secret and the key is application. All of these defaults can be configured in your config server’s application.properties. Below is a table of configurable Vault properties. All properties are prefixed with spring.cloud.config.server.vault.

Name              Default Value

host              127.0.0.1
port              8200
scheme            http
backend           secret
defaultKey        application
profileSeparator  ,

All configurable properties can be found in org.springframework.cloud.config.server.environment.VaultEnvironmentRepository.
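
For example, to point the server at a Vault instance on another host, you could add something like this to the config server’s application.properties (a sketch; the host name is hypothetical):

spring.cloud.config.server.vault.host=vault.example.com
spring.cloud.config.server.vault.port=8200
spring.cloud.config.server.vault.scheme=https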

With your config server running you can make HTTP requests to the server to retrieve values from the Vault backend. To do this you will need a token for your Vault server.

First, place some data in your Vault. For example:

$ vault write secret/application foo=bar baz=bam
$ vault write secret/myapp foo=myappsbar

Now make the HTTP request to your config server to retrieve the values.

$ curl -X "GET" "http://localhost:8888/myapp/default" -H "X-Config-Token: yourtoken"

You should see a response similar to this after making the above request.

{
   "name":"myapp",
   "profiles":[
      "default"
   ],
   "label":null,
   "version":null,
   "state":null,
   "propertySources":[
      {
         "name":"vault:myapp",
         "source":{
            "foo":"myappsbar"
         }
      },
      {
         "name":"vault:application",
         "source":{
            "baz":"bam",
            "foo":"bar"
         }
      }
   ]
}
Multiple Properties Sources

When using Vault you can provide your applications with multiple properties sources. For example, assume you have written data to the following paths in Vault.

secret/myApp,dev
secret/myApp
secret/application,dev
secret/application

Properties written to secret/application are available to all applications using the Config Server. An application with the name myApp would have any properties written to secret/myApp and secret/application available to it. When myApp has the dev profile enabled, properties written to all of the above paths would be available to it, with properties in the first path in the list taking priority over the others.

Sharing Configuration With All Applications

File Based Repositories

With file-based (i.e. git, svn and native) repositories, resources with file names in application* are shared between all client applications (so application.properties, application.yml, application-*.properties etc.). You can use resources with these file names to configure global defaults and have them overridden by application-specific files as necessary.

The property overrides feature can also be used for setting global defaults, and with placeholders applications are allowed to override them locally.

Tip
With the "native" profile (local file system backend) it is recommended that you use an explicit search location that isn’t part of the server’s own configuration. Otherwise the application* resources in the default search locations are removed because they are part of the server.
Vault Server

When using Vault as a backend you can share configuration with all applications by placing configuration in secret/application. For example, if you run this Vault command:

$ vault write secret/application foo=bar baz=bam

All applications using the config server will have the properties foo and baz available to them.

Property Overrides

The Config Server has an "overrides" feature that allows the operator to provide configuration properties to all applications that cannot be accidentally changed by the application using the normal Spring Boot hooks. To declare overrides just add a map of name-value pairs to spring.cloud.config.server.overrides. For example

spring:
  cloud:
    config:
      server:
        overrides:
          foo: bar

will cause all applications that are config clients to read foo=bar independent of their own configuration. (Of course an application can use the data in the Config Server in any way it likes, so overrides are not enforceable, but they do provide useful default behaviour if they are Spring Cloud Config clients.)

Tip
Normal, Spring environment placeholders with "${}" can be escaped (and resolved on the client) by using backslash ("\") to escape the "$" or the "{", e.g. \${app.foo:bar} resolves to "bar" unless the app provides its own "app.foo". Note that in YAML you don’t need to escape the backslash itself, but in properties files you do, when you configure the overrides on the server.
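
For example, an override that is resolved on the client could be declared on the server like this (a sketch of the escaping rules described above; app.foo is hypothetical):

spring:
  cloud:
    config:
      server:
        overrides:
          foo: \${app.foo:bar}

In a properties file the backslash itself needs escaping, i.e. spring.cloud.config.server.overrides.foo=\\${app.foo:bar}.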

You can change the priority of all overrides in the client to be more like default values, allowing applications to supply their own values in environment variables or System properties, by setting the flag spring.cloud.config.overrideNone=true (default is false) in the remote repository.

Health Indicator

Config Server comes with a Health Indicator that checks if the configured EnvironmentRepository is working. By default it asks the EnvironmentRepository for an application named app, the default profile and the default label provided by the EnvironmentRepository implementation.

You can configure the Health Indicator to check more applications along with custom profiles and custom labels, e.g.

spring:
  cloud:
    config:
      server:
        health:
          repositories:
            myservice:
              label: mylabel
            myservice-dev:
              name: myservice
              profiles: development

You can disable the Health Indicator by setting spring.cloud.config.server.health.enabled=false.

Security

You are free to secure your Config Server in any way that makes sense to you (from physical network security to OAuth2 bearer tokens), and Spring Security and Spring Boot make it easy to do pretty much anything.

To use the default Spring Boot configured HTTP Basic security, just include Spring Security on the classpath (e.g. through spring-boot-starter-security). The default is a username of "user" and a randomly generated password, which isn’t going to be very useful in practice, so we recommend you configure the password (via security.user.password) and encrypt it (see below for instructions on how to do that).
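
For example, the password can be fixed like this (a sketch, reusing the placeholder cipher text from the examples below):

security:
  user:
    password: '{cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ'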

Encryption and Decryption

Important
Prerequisites: to use the encryption and decryption features you need the full-strength JCE installed in your JVM (it’s not there by default). You can download the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files" from Oracle, and follow instructions for installation (essentially replace the 2 policy files in the JRE lib/security directory with the ones that you downloaded).

If the remote property sources contain encrypted content (values starting with {cipher}) they will be decrypted before sending to clients over HTTP. The main advantage of this set up is that the property values don’t have to be in plain text when they are "at rest" (e.g. in a git repository). If a value cannot be decrypted it is removed from the property source and an additional property is added with the same key, but prefixed with "invalid." and a value that means "not applicable" (usually "<n/a>"). This is largely to prevent cipher text being used as a password and accidentally leaking.

If you are setting up a remote config repository for config client applications it might contain an application.yml like this, for instance:

application.yml
spring:
  datasource:
    username: dbuser
    password: '{cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ'

Encrypted values in a .properties file must not be wrapped in quotes, otherwise the value will not be decrypted:

application.properties
spring.datasource.username: dbuser
spring.datasource.password: {cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ

You can safely push this plain text to a shared git repository and the secret password is protected.

The server also exposes /encrypt and /decrypt endpoints (on the assumption that these will be secured and only accessed by authorized agents). If you are editing a remote config file you can use the Config Server to encrypt values by POSTing to the /encrypt endpoint, e.g.

$ curl localhost:8888/encrypt -d mysecret
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda

The inverse operation is also available via /decrypt (provided the server is configured with a symmetric key or a full key pair):

$ curl localhost:8888/decrypt -d 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret
Tip
If you are testing like this with curl, then use --data-urlencode (instead of -d) or set an explicit Content-Type: text/plain to make sure curl encodes the data correctly when there are special characters ('+' is particularly tricky).

Take the encrypted value and add the {cipher} prefix before you put it in the YAML or properties file, and before you commit and push it to a remote, potentially insecure store.

The /encrypt and /decrypt endpoints also both accept paths of the form /*/{name}/{profiles} which can be used to control cryptography per application (name) and profile when clients call into the main Environment resource.

Note
to control the cryptography in this granular way you must also provide a @Bean of type TextEncryptorLocator that creates a different encryptor per name and profiles. The one that is provided by default does not do this (so all encryptions use the same key).

The spring command line client (with Spring Cloud CLI extensions installed) can also be used to encrypt and decrypt, e.g.

$ spring encrypt mysecret --key foo
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
$ spring decrypt --key foo 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret

To use a key in a file (e.g. an RSA public key for encryption) prepend the key value with "@" and provide the file path, e.g.

$ spring encrypt mysecret --key @${HOME}/.ssh/id_rsa.pub
AQAjPgt3eFZQXwt8tsHAVv/QHiY5sI2dRcR+...

The key argument is mandatory (despite having a -- prefix).

Key Management

The Config Server can use a symmetric (shared) key or an asymmetric one (RSA key pair). The asymmetric choice is superior in terms of security, but it is often more convenient to use a symmetric key since it is just a single property value to configure.

To configure a symmetric key you just need to set encrypt.key to a secret String (or use an environment variable ENCRYPT_KEY to keep it out of plain text configuration files).

To configure an asymmetric key you can either set the key as a PEM-encoded text value (in encrypt.key), or via a keystore (e.g. as created by the keytool utility that comes with the JDK). The keystore properties are encrypt.keyStore.* with * equal to

  • location (a Resource location),

  • password (to unlock the keystore) and

  • alias (to identify which key in the store is to be used).

The encryption is done with the public key, and a private key is needed for decryption. Thus in principle you can configure only the public key in the server if you only want to do encryption (and are prepared to decrypt the values yourself locally with the private key). In practice you might not want to do that because it spreads the key management process around all the clients, instead of concentrating it in the server. On the other hand it’s a useful option if your config server really is relatively insecure and only a handful of clients need the encrypted properties.

Creating a Key Store for Testing

To create a keystore for testing you can do something like this:

$ keytool -genkeypair -alias mytestkey -keyalg RSA \
  -dname "CN=Web Server,OU=Unit,O=Organization,L=City,S=State,C=US" \
  -keypass changeme -keystore server.jks -storepass letmein

Put the server.jks file in the classpath (for instance) and then in your application.yml for the Config Server:

encrypt:
  keyStore:
    location: classpath:/server.jks
    password: letmein
    alias: mytestkey
    secret: changeme

Using Multiple Keys and Key Rotation

In addition to the {cipher} prefix in encrypted property values, the Config Server looks for {name:value} prefixes (zero or many) before the start of the (Base64 encoded) cipher text. The keys are passed to a TextEncryptorLocator which can do whatever logic it needs to locate a TextEncryptor for the cipher. If you have configured a keystore (encrypt.keyStore.location) the default locator will look for keys in the store with aliases as supplied by the "key" prefix, i.e. with a cipher text like this:

foo:
  bar: '{cipher}{key:testkey}...'

the locator will look for a key named "testkey". A secret can also be supplied via a {secret:…​} value in the prefix, but if it is not supplied the default is to use the keystore password (which is what you get when you build a keystore and don’t specify a secret). If you do supply a secret it is recommended that you also encrypt the secrets using a custom SecretLocator.

Key rotation is hardly ever necessary on cryptographic grounds if the keys are only being used to encrypt a few bytes of configuration data (i.e. they are not being used elsewhere), but occasionally you might need to change the keys if there is a security breach for instance. In that case all the clients would need to change their source config files (e.g. in git) and use a new {key:…​} prefix in all the ciphers, checking beforehand of course that the key alias is available in the Config Server keystore.

Tip
the {name:value} prefixes can also be added to plaintext posted to the /encrypt endpoint, if you want to let the Config Server handle all encryption as well as decryption.

Serving Encrypted Properties

Sometimes you want the clients to decrypt the configuration locally, instead of doing it in the server. In that case you can still have /encrypt and /decrypt endpoints (if you provide the encrypt.* configuration to locate a key), but you need to explicitly switch off the decryption of outgoing properties using spring.cloud.config.server.encrypt.enabled=false. If you don’t care about the endpoints, then it should work if you configure neither the key nor the enabled flag.
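
For example, to keep the endpoints but serve cipher text as-is (a sketch of the flag described above):

spring:
  cloud:
    config:
      server:
        encrypt:
          enabled: false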

Serving Alternative Formats

The default JSON format from the environment endpoints is perfect for consumption by Spring applications because it maps directly onto the Environment abstraction. If you prefer you can consume the same data as YAML or Java properties by adding a suffix to the resource path (".yml", ".yaml" or ".properties"). This can be useful for consumption by applications that do not care about the structure of the JSON endpoints, or the extra metadata they provide, for example an application that is not using Spring might benefit from the simplicity of this approach.

The YAML and properties representations have an additional flag (provided as a boolean query parameter resolvePlaceholders) to signal that placeholders in the source documents, in the standard Spring ${…​} form, should be resolved in the output where possible before rendering. This is a useful feature for consumers that don’t know about the Spring placeholder conventions.
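
For example, reusing the quick start application (a sketch):

$ curl 'localhost:8888/foo-development.yml?resolvePlaceholders=true'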

Note
there are limitations in using the YAML or properties formats, mainly in relation to the loss of metadata. The JSON is structured as an ordered list of property sources, for example, with names that correlate with the source. The YAML and properties forms are coalesced into a single map, even if the origin of the values has multiple sources, and the names of the original source files are lost. The YAML representation is not necessarily a faithful representation of the YAML source in a backing repository either: it is constructed from a list of flat property sources, and assumptions have to be made about the form of the keys.

Serving Plain Text

Instead of using the Environment abstraction (or one of the alternative representations of it in YAML or properties format) your applications might need generic plain text configuration files, tailored to their environment. The Config Server provides these through an additional endpoint at /{name}/{profile}/{label}/{path} where "name", "profile" and "label" have the same meaning as the regular environment endpoint, but "path" is a file name (e.g. log.xml). The source files for this endpoint are located in the same way as for the environment endpoints: the same search path is used as for properties or YAML files, but instead of aggregating all matching resources, only the first one to match is returned.

After a resource is located, placeholders in the normal format (${…}) are resolved using the effective Environment for the application name, profile and label supplied. In this way the resource endpoint is tightly integrated with the environment endpoints. For example, if you have this layout for a git (or SVN) repository:

application.yml
nginx.conf

where nginx.conf looks like this:

server {
    listen              80;
    server_name         ${nginx.server.name};
}

and application.yml like this:

nginx:
  server:
    name: example.com
---
spring:
  profiles: development
nginx:
  server:
    name: develop.com

then the /foo/default/master/nginx.conf resource looks like this:

server {
    listen              80;
    server_name         example.com;
}

and /foo/development/master/nginx.conf like this:

server {
    listen              80;
    server_name         develop.com;
}
Note
just like the source files for environment configuration, the "profile" is used to resolve the file name, so if you want a profile-specific file then /*/development/*/logback.xml will be resolved by a file called logback-development.xml (in preference to logback.xml).

Embedding the Config Server

The Config Server runs best as a standalone application, but if you need to you can embed it in another application. Just use the @EnableConfigServer annotation. An optional property that can be useful in this case is spring.cloud.config.server.bootstrap which is a flag to indicate that the server should configure itself from its own remote repository. The flag is off by default because it can delay startup, but when embedded in another application it makes sense to initialize the same way as any other application.

Note
It should be obvious, but remember that if you use the bootstrap flag the config server will need to have its name and repository URI configured in bootstrap.yml.

To change the location of the server endpoints you can (optionally) set spring.cloud.config.server.prefix, e.g. "/config", to serve the resources under a prefix. The prefix should start but not end with a "/". It is applied to the @RequestMappings in the Config Server (i.e. underneath the Spring Boot prefixes server.servletPath and server.contextPath).

If you want to read the configuration for an application directly from the backend repository (instead of from the config server) that’s basically an embedded config server with no endpoints. You can switch off the endpoints entirely if you don’t use the @EnableConfigServer annotation (just set spring.cloud.config.server.bootstrap=true).
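As a sketch, a bootstrap.yml for a server that bootstraps from its own remote repository might look like this (the application name and repository URI are illustrative):

bootstrap.yml
spring:
  application:
    name: configserver
  cloud:
    config:
      server:
        bootstrap: true
        git:
          uri: https://github.com/myorg/config-repo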

Push Notifications and Spring Cloud Bus

Many source code repository providers (such as Github, Gitlab or Bitbucket) will notify you of changes in a repository through a webhook. You can configure the webhook via the provider’s user interface as a URL and a set of events in which you are interested. For instance Github will POST to the webhook with a JSON body containing a list of commits, and a header "X-Github-Event" equal to "push". If you add a dependency on the spring-cloud-config-monitor library and activate the Spring Cloud Bus in your Config Server, then a "/monitor" endpoint is enabled.

When the webhook is activated the Config Server will send a RefreshRemoteApplicationEvent targeted at the applications it thinks might have changed. The change detection can be strategized, but by default it just looks for changes in files that match the application name (e.g. "foo.properties" is targeted at the "foo" application, and "application.properties" is targeted at all applications). The strategy to use if you want to override the behaviour is PropertyPathNotificationExtractor, which accepts the request headers and body as parameters and returns a list of file paths that changed.

The default configuration works out of the box with Github, Gitlab or Bitbucket. In addition to the JSON notifications from Github, Gitlab or Bitbucket you can trigger a change notification by POSTing to "/monitor" with a form-encoded body parameter path={name}. This will broadcast to applications matching the "{name}" pattern (which can contain wildcards).

Note
the RefreshRemoteApplicationEvent will only be transmitted if the spring-cloud-bus is activated in the Config Server and in the client application.
Note
the default configuration also detects filesystem changes in local git repositories (the webhook is not used in that case but as soon as you edit a config file a refresh will be broadcast).

Spring Cloud Config Client

A Spring Boot application can take immediate advantage of the Spring Config Server (or other external property sources provided by the application developer), and it will also pick up some additional useful features related to Environment change events.

Config First Bootstrap

This is the default behaviour for any application which has the Spring Cloud Config Client on the classpath. When a config client starts up it binds to the Config Server (via the bootstrap configuration property spring.cloud.config.uri) and initializes Spring Environment with remote property sources.

The net result of this is that all client apps that want to consume the Config Server need a bootstrap.yml (or an environment variable) with the server address in spring.cloud.config.uri (defaults to "http://localhost:8888").
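For example (the host name is illustrative):

bootstrap.yml
spring:
  application:
    name: myapp
  cloud:
    config:
      uri: http://configserver.example.com:8888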

Discovery First Bootstrap

If you are using a DiscoveryClient implementation, such as Spring Cloud Netflix and Eureka Service Discovery or Spring Cloud Consul (Spring Cloud Zookeeper does not support this yet), then you can have the Config Server register with the Discovery Service if you want to, but in the default "Config First" mode, clients won’t be able to take advantage of the registration.

If you prefer to use DiscoveryClient to locate the Config Server, you can do that by setting spring.cloud.config.discovery.enabled=true (default "false"). The net result of that is that client apps all need a bootstrap.yml (or an environment variable) with the appropriate discovery configuration. For example, with Spring Cloud Netflix, you need to define the Eureka server address, e.g. in eureka.client.serviceUrl.defaultZone. The price for using this option is an extra network round trip on start up to locate the service registration. The benefit is that the Config Server can change its co-ordinates, as long as the Discovery Service is a fixed point. The default service id is "configserver" but you can change that on the client with spring.cloud.config.discovery.serviceId (and on the server in the usual way for a service, e.g. by setting spring.application.name).
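A minimal sketch of a client bootstrap.yml for this mode (the Eureka address is illustrative, and the serviceId line is only needed if the server is registered under a non-default name):

bootstrap.yml
spring:
  cloud:
    config:
      discovery:
        enabled: true
        serviceId: configserver
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/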

The discovery client implementations all support some kind of metadata map (e.g. for Eureka we have eureka.instance.metadataMap). Some additional properties of the Config Server may need to be configured in its service registration metadata so that clients can connect correctly. If the Config Server is secured with HTTP Basic you can configure the credentials as "username" and "password". And if the Config Server has a context path you can set "configPath". For example, for a Config Server that is a Eureka client:

bootstrap.yml
eureka:
  instance:
    ...
    metadataMap:
      user: osufhalskjrtl
      password: lviuhlszvaorhvlo5847
      configPath: /config

Config Client Fail Fast

In some cases, it may be desirable to fail startup of a service if it cannot connect to the Config Server. If this is the desired behavior, set the bootstrap configuration property spring.cloud.config.failFast=true and the client will halt with an Exception.

Config Client Retry

If you expect that the config server may occasionally be unavailable when your app starts, you can ask it to keep trying after a failure. First you need to set spring.cloud.config.failFast=true, and then you need to add spring-retry and spring-boot-starter-aop to your classpath. The default behaviour is to retry 6 times with an initial backoff interval of 1000ms and an exponential multiplier of 1.1 for subsequent backoffs. You can configure these properties (and others) using spring.cloud.config.retry.* configuration properties.
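As a sketch, with illustrative backoff values:

bootstrap.yml
spring:
  cloud:
    config:
      failFast: true
      retry:
        initialInterval: 2000
        multiplier: 1.5
        maxInterval: 10000
        maxAttempts: 10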

Tip
To take full control of the retry add a @Bean of type RetryOperationsInterceptor with id "configServerRetryInterceptor". Spring Retry has a RetryInterceptorBuilder that makes it easy to create one.
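A minimal sketch of such a bean (the backoff values are arbitrary; the builder is org.springframework.retry.interceptor.RetryInterceptorBuilder):

@Configuration
public class RetryConfiguration {

    @Bean(name = "configServerRetryInterceptor")
    public RetryOperationsInterceptor configServerRetryInterceptor() {
        // retry up to 10 times, backing off from 1s towards a 10s cap
        return RetryInterceptorBuilder.stateless()
                .backOffOptions(1000, 1.5, 10000)
                .maxAttempts(10)
                .build();
    }
}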

Locating Remote Configuration Resources

The Config Service serves property sources from /{name}/{profile}/{label}, where the default bindings in the client app are

  • "name" = ${spring.application.name}

  • "profile" = ${spring.profiles.active} (actually Environment.getActiveProfiles())

  • "label" = "master"

All of them can be overridden by setting spring.cloud.config.* (where * is "name", "profile" or "label"). The "label" is useful for rolling back to previous versions of configuration; with the default Config Server implementation it can be a git label, branch name or commit id. Label can also be provided as a comma-separated list, in which case the items in the list are tried one by one until one succeeds. This can be useful when working on a feature branch, for instance, when you might want to align the config label with your branch, but make it optional (e.g. spring.cloud.config.label=myfeature,develop).
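For example, to pin a client to a specific name, profile and label (the values are illustrative):

bootstrap.yml
spring:
  cloud:
    config:
      name: myapp
      profile: dev
      label: myfeature,develop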

Security

If you use HTTP Basic security on the server then clients just need to know the password (and username if it isn’t the default). You can do that via the config server URI, or via separate username and password properties, e.g.

bootstrap.yml
spring:
  cloud:
    config:
      uri: https://user:secret@myconfig.mycompany.com

or

bootstrap.yml
spring:
  cloud:
    config:
      uri: https://myconfig.mycompany.com
      username: user
      password: secret

The spring.cloud.config.password and spring.cloud.config.username values override anything that is provided in the URI.

If you deploy your apps on Cloud Foundry then the best way to provide the password is through service credentials, e.g. in the URI, since then it doesn’t even need to be in a config file. An example which works locally and for a user-provided service on Cloud Foundry named "configserver":

bootstrap.yml
spring:
  cloud:
    config:
      uri: ${vcap.services.configserver.credentials.uri:http://user:password@localhost:8888}

If you use another form of security you might need to provide a RestTemplate to the ConfigServicePropertySourceLocator (e.g. by grabbing it in the bootstrap context and injecting one).

Health Indicator

The Config Client supplies a Spring Boot Health Indicator that attempts to load configuration from Config Server. The health indicator can be disabled by setting health.config.enabled=false. The response is also cached for performance reasons. The default cache time to live is 5 minutes. To change that value set the health.config.time-to-live property (in milliseconds).
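For example, to cache the health check response for one minute:

application.yml
health:
  config:
    time-to-live: 60000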

Providing A Custom RestTemplate

In some cases you might need to customize the requests made to the config server from the client. Typically this involves passing special Authorization headers to authenticate requests to the server. To provide a custom RestTemplate follow the steps below.

  1. Set spring.cloud.config.enabled=false to disable the existing config server property source.

  2. Create a new configuration bean with an implementation of PropertySourceLocator.

CustomConfigServiceBootstrapConfiguration.java
@Configuration
public class CustomConfigServiceBootstrapConfiguration {

    @Autowired
    private Environment environment;

    @Bean
    public ConfigClientProperties configClientProperties() {
        ConfigClientProperties client = new ConfigClientProperties(this.environment);
        client.setEnabled(false);
        return client;
    }

    @Bean
    public ConfigServicePropertySourceLocator configServicePropertySourceLocator() {
        ConfigClientProperties clientProperties = configClientProperties();
        ConfigServicePropertySourceLocator configServicePropertySourceLocator = new ConfigServicePropertySourceLocator(clientProperties);
        configServicePropertySourceLocator.setRestTemplate(customRestTemplate(clientProperties));
        return configServicePropertySourceLocator;
    }

    // Build the customized RestTemplate here, e.g. add an interceptor that sets
    // the Authorization header your server expects (a sketch: the details depend
    // on your security scheme)
    private RestTemplate customRestTemplate(ConfigClientProperties clientProperties) {
        return new RestTemplate();
    }
}
  3. In resources/META-INF create a file called spring.factories and specify your custom configuration.

spring.factories
org.springframework.cloud.bootstrap.BootstrapConfiguration=com.my.config.client.CustomConfigServiceBootstrapConfiguration

Vault

When using Vault as a backend to your config server the client will need to supply a token for the server to retrieve values from Vault. This token can be provided within the client by setting spring.cloud.config.token in bootstrap.yml.

bootstrap.yml
spring:
  cloud:
    config:
      token: YourVaultToken

Nested Keys In Vault

Vault supports the ability to nest keys in a value stored in Vault. For example

echo -n '{"appA": {"secret": "appAsecret"}, "bar": "baz"}' | vault write secret/myapp -

This command will write a JSON object to your Vault. To access these values in Spring you would use the traditional dot (.) notation. For example

@Value("${appA.secret}")
String name = "World";

The above code would set the name variable to appAsecret.

Spring Cloud Netflix

Camden.SR3

This project provides Netflix OSS integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with battle-tested Netflix components. The patterns provided include Service Discovery (Eureka), Circuit Breaker (Hystrix), Intelligent Routing (Zuul) and Client Side Load Balancing (Ribbon).

Service Discovery: Eureka Clients

Service Discovery is one of the key tenets of a microservice based architecture. Trying to hand-configure each client, or relying on some form of convention, can be very difficult to do and can be very brittle. Eureka is the Netflix Service Discovery Server and Client. The server can be configured and deployed to be highly available, with each server replicating state about the registered services to the others.

How to Include Eureka Client

To include Eureka Client in your project use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-eureka. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

Registering with Eureka

When a client registers with Eureka, it provides meta-data about itself such as host and port, health indicator URL, home page etc. Eureka receives heartbeat messages from each instance belonging to a service. If the heartbeat fails over a configurable timeout, the instance is normally removed from the registry.

Example eureka client:

@Configuration
@ComponentScan
@EnableAutoConfiguration
@EnableEurekaClient
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }

}

(i.e. utterly normal Spring Boot app). In this example we use @EnableEurekaClient explicitly, but with only Eureka available you could also use @EnableDiscoveryClient. Configuration is required to locate the Eureka server. Example:

application.yml
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

where "defaultZone" is a magic string fallback value that provides the service URL for any client that doesn’t express a preference (i.e. it’s a useful default).

The default application name (service ID), virtual host and non-secure port, taken from the Environment, are ${spring.application.name}, ${spring.application.name} and ${server.port} respectively.

@EnableEurekaClient makes the app into both a Eureka "instance" (i.e. it registers itself) and a "client" (i.e. it can query the registry to locate other services). The instance behaviour is driven by eureka.instance.* configuration keys, but the defaults will be fine if you ensure that your application has a spring.application.name (this is the default for the Eureka service ID, or VIP).

See EurekaInstanceConfigBean and EurekaClientConfigBean for more details of the configurable options.

Authenticating with the Eureka Server

HTTP basic authentication will be automatically added to your eureka client if one of the eureka.client.serviceUrl.defaultZone URLs has credentials embedded in it (curl style, like http://user:password@localhost:8761/eureka). For more complex needs you can create a @Bean of type DiscoveryClientOptionalArgs and inject ClientFilter instances into it, all of which will be applied to the calls from the client to the server.

Note
Because of a limitation in Eureka it isn’t possible to support per-server basic auth credentials, so only the first set that is found will be used.
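As a sketch of the DiscoveryClientOptionalArgs approach, using the Jersey 1 HTTPBasicAuthFilter as an example ClientFilter (the credentials are illustrative, and the setAdditionalFilters mutator on the Netflix class is an assumption of this sketch):

@Bean
public DiscoveryClient.DiscoveryClientOptionalArgs discoveryClientOptionalArgs() {
    DiscoveryClient.DiscoveryClientOptionalArgs args = new DiscoveryClient.DiscoveryClientOptionalArgs();
    // every filter in this collection is applied to calls from the client to the server
    args.setAdditionalFilters(Collections.<ClientFilter>singletonList(
            new HTTPBasicAuthFilter("user", "secret")));
    return args;
}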

Status Page and Health Indicator

The status page and health indicators for a Eureka instance default to "/info" and "/health" respectively, which are the default locations of useful endpoints in a Spring Boot Actuator application. You need to change these, even for an Actuator application, if you use a non-default context path or servlet path (e.g. server.servletPath=/foo) or management endpoint path (e.g. management.contextPath=/admin). Example:

application.yml
eureka:
  instance:
    statusPageUrlPath: ${management.context-path}/info
    healthCheckUrlPath: ${management.context-path}/health

These links show up in the metadata that is consumed by clients, and used in some scenarios to decide whether to send requests to your application, so it’s helpful if they are accurate.

Registering a Secure Application

If your app wants to be contacted over HTTPS you can set two flags in the EurekaInstanceConfig, viz eureka.instance.[nonSecurePortEnabled,securePortEnabled]=[false,true] respectively. This will make Eureka publish instance information showing an explicit preference for secure communication. The Spring Cloud DiscoveryClient will always return an https://… URI for a service configured this way, and the Eureka (native) instance information will have a secure health check URL.

Because of the way Eureka works internally, it will still publish a non-secure URL for status and home page unless you also override those explicitly. You can use placeholders to configure the eureka instance urls, e.g.

application.yml
eureka:
  instance:
    statusPageUrl: https://${eureka.hostname}/info
    healthCheckUrl: https://${eureka.hostname}/health
    homePageUrl: https://${eureka.hostname}/

(Note that ${eureka.hostname} is a native placeholder only available in later versions of Eureka. You could achieve the same thing with Spring placeholders as well, e.g. using ${eureka.instance.hostName}.)

Note
If your app is running behind a proxy, and the SSL termination is in the proxy (e.g. if you run in Cloud Foundry or other platforms as a service), then you will need to ensure that the proxy "forwarded" headers are intercepted and handled by the application. An embedded Tomcat container in a Spring Boot app does this automatically if it has explicit configuration for the "X-Forwarded-*" headers. A sign that you got this wrong will be that the links rendered by your app to itself will be wrong (the wrong host, port or protocol).

Eureka’s Health Checks

By default, Eureka uses the client heartbeat to determine if a client is up. Unless specified otherwise, the Discovery Client will not propagate the current health check status of the application, per the Spring Boot Actuator. This means that after successful registration Eureka will always announce that the application is in 'UP' state. This behaviour can be altered by enabling Eureka health checks, which results in propagating application status to Eureka. As a consequence, every other application won’t send traffic to an application in a state other than 'UP'.

application.yml
eureka:
  client:
    healthcheck:
      enabled: true
Warning
eureka.client.healthcheck.enabled=true should only be set in application.yml. Setting the value in bootstrap.yml will cause undesirable side effects like registering in eureka with an UNKNOWN status.

If you require more control over the health checks, you may consider implementing your own com.netflix.appinfo.HealthCheckHandler.
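A minimal sketch of such a handler, registered as a bean (the status logic here is just a placeholder):

@Bean
public HealthCheckHandler healthCheckHandler() {
    return new HealthCheckHandler() {
        @Override
        public InstanceInfo.InstanceStatus getStatus(InstanceInfo.InstanceStatus currentStatus) {
            // derive the status from your own checks; UP is a placeholder
            return InstanceInfo.InstanceStatus.UP;
        }
    };
}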

Eureka Metadata for Instances and Clients

It’s worth spending a bit of time understanding how the Eureka metadata works, so you can use it in a way that makes sense in your platform. There is standard metadata for things like hostname, IP address, port numbers, status page and health check. These are published in the service registry and used by clients to contact the services in a straightforward way. Additional metadata can be added to the instance registration in the eureka.instance.metadataMap, and this will be accessible in the remote clients, but in general will not change the behaviour of the client, unless it is made aware of the meaning of the metadata. There are a couple of special cases described below where Spring Cloud already assigns meaning to the metadata map.

Using Eureka on Cloudfoundry

Cloudfoundry has a global router so that all instances of the same app have the same hostname (it’s the same in other PaaS solutions with a similar architecture). This isn’t necessarily a barrier to using Eureka, but if you use the router (recommended, or even mandatory depending on the way your platform was set up), you need to explicitly set the hostname and port numbers (secure or non-secure) so that they use the router. You might also want to use instance metadata so you can distinguish between the instances on the client (e.g. in a custom load balancer). By default, the eureka.instance.instanceId is vcap.application.instance_id. For example:

application.yml
eureka:
  instance:
    hostname: ${vcap.application.uris[0]}
    nonSecurePort: 80

Depending on the way the security rules are set up in your Cloudfoundry instance, you might be able to register and use the IP address of the host VM for direct service-to-service calls. This feature is not (yet) available on Pivotal Web Services (PWS).

Using Eureka on AWS

If the application is planned to be deployed to an AWS cloud, then the Eureka instance must be configured to be Amazon-aware. This can be done by customizing the EurekaInstanceConfigBean as follows:

@Bean
@Profile("!default")
public EurekaInstanceConfigBean eurekaInstanceConfig() {
  EurekaInstanceConfigBean b = new EurekaInstanceConfigBean();
  AmazonInfo info = AmazonInfo.Builder.newBuilder().autoBuild("eureka");
  b.setDataCenterInfo(info);
  return b;
}

Changing the Eureka Instance ID

A vanilla Netflix Eureka instance is registered with an ID that is equal to its host name (i.e. only one service per host). Spring Cloud Eureka provides a sensible default that looks like this: ${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instance_id:${server.port}}. For example myhost:myappname:8080.

Using Spring Cloud you can override this by providing a unique identifier in eureka.instance.instanceId. For example:

application.yml
eureka:
  instance:
    instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}

With this metadata, and multiple service instances deployed on localhost, the random value will kick in there to make the instance unique. In Cloudfoundry the vcap.application.instance_id will be populated automatically in a Spring Boot application, so the random value will not be needed.

Using the EurekaClient

Once you have an app that is @EnableDiscoveryClient (or @EnableEurekaClient) you can use it to discover service instances from the Eureka Server. One way to do that is to use the native com.netflix.discovery.EurekaClient (as opposed to the Spring Cloud DiscoveryClient), e.g.

@Autowired
private EurekaClient discoveryClient;

public String serviceUrl() {
    InstanceInfo instance = discoveryClient.getNextServerFromEureka("STORES", false);
    return instance.getHomePageUrl();
}
Tip

Don’t use the EurekaClient in @PostConstruct method or in a @Scheduled method (or anywhere where the ApplicationContext might not be started yet). It is initialized in a SmartLifecycle (with phase=0) so the earliest you can rely on it being available is in another SmartLifecycle with higher phase.

Alternatives to the native Netflix EurekaClient

You don’t have to use the raw Netflix EurekaClient and usually it is more convenient to use it behind a wrapper of some sort. Spring Cloud has support for Feign (a REST client builder) and also Spring RestTemplate using the logical Eureka service identifiers (VIPs) instead of physical URLs. To configure Ribbon with a fixed list of physical servers you can simply set <client>.ribbon.listOfServers to a comma-separated list of physical addresses (or hostnames), where <client> is the ID of the client.

You can also use the org.springframework.cloud.client.discovery.DiscoveryClient which provides a simple API for discovery clients that is not specific to Netflix, e.g.

@Autowired
private DiscoveryClient discoveryClient;

public String serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && !list.isEmpty()) {
        // ServiceInstance.getUri() returns a java.net.URI
        return list.get(0).getUri().toString();
    }
    return null;
}

Why is it so Slow to Register a Service?

Being an instance also involves a periodic heartbeat to the registry (via the client’s serviceUrl) with default duration 30 seconds. A service is not available for discovery by clients until the instance, the server and the client all have the same metadata in their local cache (so it could take 3 heartbeats). You can change the period using eureka.instance.leaseRenewalIntervalInSeconds and this will speed up the process of getting clients connected to other services. In production it’s probably better to stick with the default because there are some computations internally in the server that make assumptions about the lease renewal period.
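For example, to speed things up in a development environment (as noted, not recommended in production):

application.yml
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 10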

Service Discovery: Eureka Server

How to Include Eureka Server

To include Eureka Server in your project use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-eureka-server. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

How to Run a Eureka Server

Example eureka server:

@SpringBootApplication
@EnableEurekaServer
public class Application {

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }

}

The server has a home page with a UI, and HTTP API endpoints per the normal Eureka functionality under /eureka/*.

Eureka background reading: see flux capacitor and google group discussion.

Tip

Due to Gradle’s dependency resolution rules and the lack of a parent bom feature, simply depending on spring-cloud-starter-eureka-server can cause failures on application startup. To remedy this the Spring Boot Gradle plugin must be added and the Spring Cloud starter parent bom must be imported like so:

build.gradle
buildscript {
  dependencies {
    classpath("org.springframework.boot:spring-boot-gradle-plugin:1.3.5.RELEASE")
  }
}

apply plugin: "spring-boot"

dependencyManagement {
  imports {
    mavenBom "org.springframework.cloud:spring-cloud-dependencies:Brixton.RELEASE"
  }
}

High Availability, Zones and Regions

The Eureka server does not have a backend store, but the service instances in the registry all have to send heartbeats to keep their registrations up to date (so this can be done in memory). Clients also have an in-memory cache of eureka registrations (so they don’t have to go to the registry for every single request to a service).

By default every Eureka server is also a Eureka client and requires (at least one) service URL to locate a peer. If you don’t provide it the service will run and work, but it will shower your logs with a lot of noise about not being able to register with the peer.

See also below for details of Ribbon support on the client side for Zones and Regions.

Standalone Mode

The combination of the two caches (client and server) and the heartbeats make a standalone Eureka server fairly resilient to failure, as long as there is some sort of monitor or elastic runtime keeping it alive (e.g. Cloud Foundry). In standalone mode, you might prefer to switch off the client side behaviour, so it doesn’t keep trying and failing to reach its peers. Example:

application.yml (Standalone Eureka Server)
server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

Notice that the serviceUrl is pointing to the same host as the local instance.

Peer Awareness

Eureka can be made even more resilient and available by running multiple instances and asking them to register with each other. In fact, this is the default behaviour, so all you need to do to make it work is add a valid serviceUrl to a peer, e.g.

application.yml (Two Peer Aware Eureka Servers)
---
spring:
  profiles: peer1
eureka:
  instance:
    hostname: peer1
  client:
    serviceUrl:
      defaultZone: http://peer2/eureka/

---
spring:
  profiles: peer2
eureka:
  instance:
    hostname: peer2
  client:
    serviceUrl:
      defaultZone: http://peer1/eureka/

In this example we have a YAML file that can be used to run the same server on 2 hosts (peer1 and peer2), by running it in different Spring profiles. You could use this configuration to test the peer awareness on a single host (there’s not much value in doing that in production) by manipulating /etc/hosts to resolve the host names. In fact, the eureka.instance.hostname is not needed if you are running on a machine that knows its own hostname (it is looked up using java.net.InetAddress by default).

You can add multiple peers to a system, and as long as they are all connected to each other by at least one edge, they will synchronize the registrations amongst themselves. If the peers are physically separated (inside a data centre or between multiple data centres) then the system can in principle survive split-brain type failures.

Prefer IP Address

In some cases, it is preferable for Eureka to advertise the IP addresses of services rather than the hostname. Set eureka.instance.preferIpAddress to true and when the application registers with eureka, it will use its IP address rather than its hostname.
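For example:

application.yml
eureka:
  instance:
    preferIpAddress: true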

Circuit Breaker: Hystrix Clients

Netflix has created a library called Hystrix that implements the circuit breaker pattern. In a microservice architecture it is common to have multiple layers of service calls.

Figure 1. Microservice Graph

A service failure in the lower level of services can cause cascading failure all the way up to the user. When calls to a particular service reach a certain threshold (20 failures in 5 seconds is the default in Hystrix), the circuit opens and the call is not made. In cases of error and an open circuit a fallback can be provided by the developer.

Figure 2. Hystrix fallback prevents cascading failures

Having an open circuit stops cascading failures and allows overwhelmed or failing services time to heal. The fallback can be another Hystrix protected call, static data or a sane empty value. Fallbacks may be chained so the first fallback makes some other business call which in turn falls back to static data.

How to Include Hystrix

To include Hystrix in your project use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-hystrix. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

Example boot app:

@SpringBootApplication
@EnableCircuitBreaker
public class Application {

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }

}

@Component
public class StoreIntegration {

    @HystrixCommand(fallbackMethod = "defaultStores")
    public Object getStores(Map<String, Object> parameters) {
        //do stuff that might fail
    }

    public Object defaultStores(Map<String, Object> parameters) {
        return /* something useful */;
    }
}

The @HystrixCommand is provided by a Netflix contrib library called "javanica". Spring Cloud automatically wraps Spring beans with that annotation in a proxy that is connected to the Hystrix circuit breaker. The circuit breaker calculates when to open and close the circuit, and what to do in case of a failure.

To configure the @HystrixCommand you can use the commandProperties attribute with a list of @HystrixProperty annotations. See the Javanica documentation for more details, and the Hystrix wiki for details on the properties available.

Propagating the Security Context or using Spring Scopes

If you want some thread local context to propagate into a @HystrixCommand the default declaration will not work because it executes the command in a thread pool (in case of timeouts). You can switch Hystrix to use the same thread as the caller using some configuration, or directly in the annotation, by asking it to use a different "Isolation Strategy". For example:

@HystrixCommand(fallbackMethod = "stubMyService",
    commandProperties = {
      @HystrixProperty(name="execution.isolation.strategy", value="SEMAPHORE")
    }
)
...

The same thing applies if you are using @SessionScope or @RequestScope. You will know when you need to do this because of a runtime exception that says it can’t find the scoped context.

You also have the option to set the hystrix.shareSecurityContext property to true. Doing so will auto-configure a Hystrix concurrency strategy plugin hook that will transfer the SecurityContext from your main thread to the one used by the Hystrix command. Hystrix does not allow multiple Hystrix concurrency strategies to be registered, so an extension mechanism is available: declare your own HystrixConcurrencyStrategy as a Spring bean, and Spring Cloud will look up your implementation within the Spring context and wrap it inside its own plugin.

Health Indicator

The state of the connected circuit breakers is also exposed in the /health endpoint of the calling application.

{
    "hystrix": {
        "openCircuitBreakers": [
            "StoreIntegration::getStoresByLocationLink"
        ],
        "status": "CIRCUIT_OPEN"
    },
    "status": "UP"
}

Hystrix Metrics Stream

To enable the Hystrix metrics stream include a dependency on spring-boot-starter-actuator. This will expose the /hystrix.stream as a management endpoint.

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>

Circuit Breaker: Hystrix Dashboard

One of the main benefits of Hystrix is the set of metrics it gathers about each HystrixCommand. The Hystrix Dashboard displays the health of each circuit breaker in an efficient manner.

Figure 3. Hystrix Dashboard

How to Include Hystrix Dashboard

To include the Hystrix Dashboard in your project use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-hystrix-dashboard. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

To run the Hystrix Dashboard annotate your Spring Boot main class with @EnableHystrixDashboard. You then visit /hystrix and point the dashboard to an individual instance’s /hystrix.stream endpoint in a Hystrix client application.

Turbine

Looking at an individual instance’s Hystrix data is not very useful in terms of the overall health of the system. Turbine is an application that aggregates all of the relevant /hystrix.stream endpoints into a combined /turbine.stream for use in the Hystrix Dashboard. Individual instances are located via Eureka. Running Turbine is as simple as annotating your main class with the @EnableTurbine annotation (e.g. using spring-cloud-starter-turbine to set up the classpath). All of the documented configuration properties from the Turbine 1 wiki apply. The only difference is that the turbine.instanceUrlSuffix does not need the port prepended as this is handled automatically unless turbine.instanceInsertPort=false.

Note
By default, Turbine looks for the /hystrix.stream endpoint on a registered instance by looking up its homePageUrl entry in Eureka, then appending /hystrix.stream to it. This means that if spring-boot-actuator is running on its own port (which is the default), the call to /hystrix.stream will fail. To make turbine find the Hystrix stream at the correct port, you need to add management.port to the instances' metadata:
eureka:
  instance:
    metadata-map:
      management.port: ${management.port:8081}

The configuration key turbine.appConfig is a list of eureka serviceIds that turbine will use to look up instances. The turbine stream is then used in the Hystrix dashboard using a url that looks like: http://my.turbine.server:8080/turbine.stream?cluster=<CLUSTERNAME> (the cluster parameter can be omitted if the name is "default"). The cluster parameter must match an entry in turbine.aggregator.clusterConfig. Values returned from eureka are uppercase, thus we expect this example to work if there is an app registered with Eureka called "customers":

turbine:
  aggregator:
    clusterConfig: CUSTOMERS
  appConfig: customers

The clusterName can be customized by a SpEL expression in turbine.clusterNameExpression, with root an instance of InstanceInfo. The default value is appName, which means that the Eureka serviceId ends up as the cluster key (i.e. the InstanceInfo for customers has an appName of "CUSTOMERS"). A different example would be turbine.clusterNameExpression=aSGName, which would get the cluster name from the AWS ASG name. Another example:

turbine:
  aggregator:
    clusterConfig: SYSTEM,USER
  appConfig: customers,stores,ui,admin
  clusterNameExpression: metadata['cluster']

In this case, the cluster name from 4 services is pulled from their metadata map, and is expected to have values that include "SYSTEM" and "USER".

To use the "default" cluster for all apps you need a string literal expression (with single quotes, and escaped with double quotes if it is in YAML as well):

turbine:
  appConfig: customers,stores
  clusterNameExpression: "'default'"

Spring Cloud provides a spring-cloud-starter-turbine that has all the dependencies you need to get a Turbine server running. Just create a Spring Boot application and annotate it with @EnableTurbine.

Note
by default Spring Cloud allows Turbine to use the host and port to allow multiple processes per host, per cluster. If you want the native Netflix behaviour built into Turbine that does not allow multiple processes per host, per cluster (the key to the instance id is the hostname), then set the property turbine.combineHostPort=false.

Turbine Stream

In some environments (e.g. in a PaaS setting), the classic Turbine model of pulling metrics from all the distributed Hystrix commands doesn’t work. In that case you might want to have your Hystrix commands push metrics to Turbine, and Spring Cloud enables that with messaging. All you need to do on the client is add a dependency to spring-cloud-netflix-hystrix-stream and the spring-cloud-starter-stream-* of your choice (see Spring Cloud Stream documentation for details on the brokers, and how to configure the client credentials, but it should work out of the box for a local broker).

On the server side, just create a Spring Boot application and annotate it with @EnableTurbineStream; by default it will come up on port 8989 (point your Hystrix dashboard to that port, any path). You can customize the port using either server.port or turbine.stream.port. If you have spring-boot-starter-web and spring-boot-starter-actuator on the classpath as well, then you can open up the Actuator endpoints on a separate port (with Tomcat by default) by providing a management.port which is different.

You can then point the Hystrix Dashboard to the Turbine Stream Server instead of individual Hystrix streams. If Turbine Stream is running on port 8989 on myhost, then put http://myhost:8989 in the stream input field in the Hystrix Dashboard. Circuits will be prefixed by their respective serviceId, followed by a dot, then the circuit name.

Spring Cloud provides a spring-cloud-starter-turbine-stream that has all the dependencies you need to get a Turbine Stream server running - just add the Stream binder of your choice, e.g. spring-cloud-starter-stream-rabbit. You need Java 8 to run the app because it is Netty-based.

Client Side Load Balancer: Ribbon

Ribbon is a client side load balancer which gives you a lot of control over the behaviour of HTTP and TCP clients. Feign already uses Ribbon, so if you are using @FeignClient then this section also applies.

A central concept in Ribbon is that of the named client. Each load balancer is part of an ensemble of components that work together to contact a remote server on demand, and the ensemble has a name that you give it as an application developer (e.g. using the @FeignClient annotation). Spring Cloud creates a new ensemble as an ApplicationContext on demand for each named client using RibbonClientConfiguration. This contains (amongst other things) an ILoadBalancer, a RestClient, and a ServerListFilter.

How to Include Ribbon

To include Ribbon in your project use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-ribbon. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

Customizing the Ribbon Client

You can configure some bits of a Ribbon client using external properties in <client>.ribbon.*, which is no different than using the Netflix APIs natively, except that you can use Spring Boot configuration files. The native options can be inspected as static fields in CommonClientConfigKey (part of ribbon-core).

Spring Cloud also lets you take full control of the client by declaring additional configuration (on top of the RibbonClientConfiguration) using @RibbonClient. Example:

@Configuration
@RibbonClient(name = "foo", configuration = FooConfiguration.class)
public class TestConfiguration {
}

In this case the client is composed from the components already in RibbonClientConfiguration together with any in FooConfiguration (where the latter generally will override the former).

Warning
The FooConfiguration has to be @Configuration but take care that it is not in a @ComponentScan for the main application context, otherwise it will be shared by all the @RibbonClients. If you use @ComponentScan (or @SpringBootApplication) you need to take steps to avoid it being included (for instance put it in a separate, non-overlapping package, or specify the packages to scan explicitly in the @ComponentScan).

Spring Cloud Netflix provides the following beans by default for ribbon (BeanType beanName: ClassName):

  • IClientConfig ribbonClientConfig: DefaultClientConfigImpl

  • IRule ribbonRule: ZoneAvoidanceRule

  • IPing ribbonPing: NoOpPing

  • ServerList<Server> ribbonServerList: ConfigurationBasedServerList

  • ServerListFilter<Server> ribbonServerListFilter: ZonePreferenceServerListFilter

  • ILoadBalancer ribbonLoadBalancer: ZoneAwareLoadBalancer

Creating a bean of one of those types and placing it in a @RibbonClient configuration (such as FooConfiguration above) allows you to override each one of the beans described. Example:

@Configuration
public class FooConfiguration {
    @Bean
    public IPing ribbonPing(IClientConfig config) {
        return new PingUrl();
    }
}

This replaces the NoOpPing with PingUrl.

Customizing the Ribbon Client using properties

Starting with version 1.2.0, Spring Cloud Netflix now supports customizing Ribbon clients using properties to be compatible with the Ribbon documentation.

This allows you to change behavior at start up time in different environments.

The supported properties are listed below and should be prefixed by <clientName>.ribbon.:

  • NFLoadBalancerClassName: should implement ILoadBalancer

  • NFLoadBalancerRuleClassName: should implement IRule

  • NFLoadBalancerPingClassName: should implement IPing

  • NIWSServerListClassName: should implement ServerList

  • NIWSServerListFilterClassName: should implement ServerListFilter

Note
Classes defined in these properties have precedence over beans defined using @RibbonClient(configuration=MyRibbonConfig.class) and the defaults provided by Spring Cloud Netflix.

To set the IRule for a service named "users" you could set the following:

application.yml
users:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.WeightedResponseTimeRule

See the Ribbon documentation for implementations provided by Ribbon.

Using Ribbon with Eureka

When Eureka is used in conjunction with Ribbon (i.e., both are on the classpath) the ribbonServerList is overridden with an extension of DiscoveryEnabledNIWSServerList which populates the list of servers from Eureka. It also replaces the IPing interface with NIWSDiscoveryPing which delegates to Eureka to determine if a server is up. The ServerList that is installed by default is a DomainExtractingServerList and the purpose of this is to make physical metadata available to the load balancer without using AWS AMI metadata (which is what Netflix relies on). By default the server list will be constructed with "zone" information as provided in the instance metadata (so on the remote clients set eureka.instance.metadataMap.zone), and if that is missing it can use the domain name from the server hostname as a proxy for zone (if the flag approximateZoneFromHostname is set). Once the zone information is available it can be used in a ServerListFilter. By default it will be used to locate a server in the same zone as the client because the default is a ZonePreferenceServerListFilter. The zone of the client is determined the same way as the remote instances by default, i.e. via eureka.instance.metadataMap.zone.
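For example, to place an instance in a zone called "zone1" (the zone name is illustrative):

application.yml
eureka:
  instance:
    metadataMap:
      zone: zone1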

Note
The orthodox "archaius" way to set the client zone is via a configuration property called "@zone", and Spring Cloud will use that in preference to all other settings if it is available (note that the key will have to be quoted in YAML configuration).
Note
If there is no other source of zone data then a guess is made based on the client configuration (as opposed to the instance configuration). We take eureka.client.availabilityZones, which is a map from region name to a list of zones, and pull out the first zone for the instance’s own region (i.e. the eureka.client.region, which defaults to "us-east-1" for compatibility with native Netflix).

Example: How to Use Ribbon Without Eureka

Eureka is a convenient way to abstract the discovery of remote servers so you don’t have to hard code their URLs in clients, but if you prefer not to use it, Ribbon and Feign are still quite amenable. Suppose you have declared a @RibbonClient for "stores", and Eureka is not in use (and not even on the classpath). The Ribbon client defaults to a configured server list, and you can supply the configuration like this:

application.yml
stores:
  ribbon:
    listOfServers: example.com,google.com

Example: Disable Eureka use in Ribbon

Setting the property ribbon.eureka.enabled = false will explicitly disable the use of Eureka in Ribbon.

application.yml
ribbon:
  eureka:
    enabled: false

Using the Ribbon API Directly

You can also use the LoadBalancerClient directly. Example:

public class MyClass {
    @Autowired
    private LoadBalancerClient loadBalancer;

    public void doStuff() {
        ServiceInstance instance = loadBalancer.choose("stores");
        URI storesUri = URI.create(String.format("http://%s:%s", instance.getHost(), instance.getPort()));
        // ... do something with the URI
    }
}

Declarative REST Client: Feign

Feign is a declarative web service client. It makes writing web service clients easier. To use Feign create an interface and annotate it. It has pluggable annotation support including Feign annotations and JAX-RS annotations. Feign also supports pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Spring Cloud integrates Ribbon and Eureka to provide a load balanced http client when using Feign.

How to Include Feign

To include Feign in your project use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-feign. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

Example spring boot app

@Configuration
@ComponentScan
@EnableAutoConfiguration
@EnableEurekaClient
@EnableFeignClients
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}
StoreClient.java
@FeignClient("stores")
public interface StoreClient {
    @RequestMapping(method = RequestMethod.GET, value = "/stores")
    List<Store> getStores();

    @RequestMapping(method = RequestMethod.POST, value = "/stores/{storeId}", consumes = "application/json")
    Store update(@PathVariable("storeId") Long storeId, Store store);
}

In the @FeignClient annotation the String value ("stores" above) is an arbitrary client name, which is used to create a Ribbon load balancer (see below for details of Ribbon support). You can also specify a URL using the url attribute (absolute value or just a hostname). The name of the bean in the application context is the fully qualified name of the interface. An alias is also created which is the 'name' attribute plus 'FeignClient'. For the example above, @Qualifier("storesFeignClient") could be used to reference the bean. If you want to change the default @Qualifier value, this can be done with the qualifier value in @FeignClient.

The Ribbon client above will want to discover the physical addresses for the "stores" service. If your application is a Eureka client then it will resolve the service in the Eureka service registry. If you don’t want to use Eureka, you can simply configure a list of servers in your external configuration (see above for example).

Overriding Feign Defaults

A central concept in Spring Cloud’s Feign support is that of the named client. Each feign client is part of an ensemble of components that work together to contact a remote server on demand, and the ensemble has a name that you give it as an application developer using the @FeignClient annotation. Spring Cloud creates a new ensemble as an ApplicationContext on demand for each named client using FeignClientsConfiguration. This contains (amongst other things) a feign.Decoder, a feign.Encoder, and a feign.Contract.

Spring Cloud lets you take full control of the feign client by declaring additional configuration (on top of the FeignClientsConfiguration) using @FeignClient. Example:

@FeignClient(name = "stores", configuration = FooConfiguration.class)
public interface StoreClient {
    //..
}

In this case the client is composed from the components already in FeignClientsConfiguration together with any in FooConfiguration (where the latter will override the former).

Warning
The FooConfiguration has to be @Configuration but take care that it is not in a @ComponentScan for the main application context, otherwise it will be used for every @FeignClient. If you use @ComponentScan (or @SpringBootApplication) you need to take steps to avoid it being included (for instance put it in a separate, non-overlapping package, or specify the packages to scan explicitly in the @ComponentScan).
Note
The serviceId attribute is now deprecated in favor of the name attribute.
Warning
Previously, using the url attribute did not require the name attribute. Using name is now required.

Placeholders are supported in the name and url attributes.

@FeignClient(name = "${feign.name}", url = "${feign.url}")
public interface StoreClient {
    //..
}

Spring Cloud Netflix provides the following beans by default for feign (BeanType beanName: ClassName):

  • Decoder feignDecoder: ResponseEntityDecoder (which wraps a SpringDecoder)

  • Encoder feignEncoder: SpringEncoder

  • Logger feignLogger: Slf4jLogger

  • Contract feignContract: SpringMvcContract

  • Feign.Builder feignBuilder: HystrixFeign.Builder

  • Client feignClient: if Ribbon is enabled it is a LoadBalancerFeignClient, otherwise the default feign client is used.

The OkHttpClient and ApacheHttpClient feign clients can be used by setting feign.okhttp.enabled or feign.httpclient.enabled to true, respectively, and having them on the classpath.
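For example, to switch to the Apache HTTP client (assuming the corresponding client library is on the classpath):

application.yml
feign:
  httpclient:
    enabled: true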

Spring Cloud Netflix does not provide the following beans by default for feign, but still looks up beans of these types from the application context to create the feign client:

  • Logger.Level

  • Retryer

  • ErrorDecoder

  • Request.Options

  • Collection<RequestInterceptor>

Creating a bean of one of those types and placing it in a @FeignClient configuration (such as FooConfiguration above) allows you to override each one of the beans described. Example:

@Configuration
public class FooConfiguration {
    @Bean
    public Contract feignContract() {
        return new feign.Contract.Default();
    }

    @Bean
    public BasicAuthRequestInterceptor basicAuthRequestInterceptor() {
        return new BasicAuthRequestInterceptor("user", "password");
    }
}

This replaces the SpringMvcContract with feign.Contract.Default and adds a RequestInterceptor to the collection of RequestInterceptors.

Default configurations can be specified in the @EnableFeignClients attribute defaultConfiguration in a similar manner as described above. The difference is that this configuration will apply to all feign clients.
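For example, a sketch reusing the FooConfiguration from above:

@Configuration
@EnableFeignClients(defaultConfiguration = FooConfiguration.class)
public class FeignClientsConfig {
}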

Creating Feign Clients Manually

In some cases it might be necessary to customize your Feign Clients in a way that is not possible using the methods above. In this case you can create Clients using the Feign Builder API. Below is an example which creates two Feign Clients with the same interface but configures each one with a separate request interceptor.

@Import(FeignClientsConfiguration.class)
class FooController {

    private FooClient fooClient;

    private FooClient adminClient;

    @Autowired
    public FooController(
            ResponseEntityDecoder decoder, SpringEncoder encoder, Client client) {
        this.fooClient = Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .requestInterceptor(new BasicAuthRequestInterceptor("user", "user"))
                .target(FooClient.class, "http://PROD-SVC");
        this.adminClient = Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .requestInterceptor(new BasicAuthRequestInterceptor("admin", "admin"))
                .target(FooClient.class, "http://PROD-SVC");
    }
}
Note
In the above example FeignClientsConfiguration.class is the default configuration provided by Spring Cloud Netflix.
Note
PROD-SVC is the name of the service the Clients will be making requests to.

Feign Hystrix Support

If Hystrix is on the classpath, by default Feign will wrap all methods with a circuit breaker. Returning a com.netflix.hystrix.HystrixCommand is also available. This lets you use reactive patterns (with a call to .toObservable() or .observe()) or asynchronous use (with a call to .queue()).

To disable Hystrix support for Feign, set feign.hystrix.enabled=false.

To disable Hystrix support on a per-client basis create a vanilla Feign.Builder with the "prototype" scope, e.g.:

@Configuration
public class FooConfiguration {
    @Bean
	@Scope("prototype")
	public Feign.Builder feignBuilder() {
		return Feign.builder();
	}
}

Feign Hystrix Fallbacks

Hystrix supports the notion of a fallback: a default code path that is executed when the circuit is open or there is an error. To enable fallbacks for a given @FeignClient set the fallback attribute to the class name that implements the fallback.

@FeignClient(name = "hello", fallback = HystrixClientFallback.class)
protected interface HystrixClient {
    @RequestMapping(method = RequestMethod.GET, value = "/hello")
    Hello iFailSometimes();
}

static class HystrixClientFallback implements HystrixClient {
    @Override
    public Hello iFailSometimes() {
        return new Hello("fallback");
    }
}

If one needs access to the cause that made the fallback trigger, one can use the fallbackFactory attribute inside @FeignClient.

@FeignClient(name = "hello", fallbackFactory = HystrixClientFallbackFactory.class)
protected interface HystrixClient {
    @RequestMapping(method = RequestMethod.GET, value = "/hello")
    Hello iFailSometimes();
}

@Component
static class HystrixClientFallbackFactory implements FallbackFactory<HystrixClient> {
    @Override
    public HystrixClient create(Throwable cause) {
        return new HystrixClient() {
            @Override
            public Hello iFailSometimes() {
                return new Hello("fallback; reason was: " + cause.getMessage());
            }
        };
    }
}
Warning
There is a limitation with the implementation of fallbacks in Feign and how Hystrix fallbacks work. Fallbacks are currently not supported for methods that return com.netflix.hystrix.HystrixCommand and rx.Observable.

Feign Inheritance Support

Feign supports boilerplate apis via single-inheritance interfaces. This allows grouping common operations into convenient base interfaces.

UserService.java
public interface UserService {

    @RequestMapping(method = RequestMethod.GET, value ="/users/{id}")
    User getUser(@PathVariable("id") long id);
}
UserResource.java
@RestController
public class UserResource implements UserService {

}
UserClient.java
package project.user;

@FeignClient("users")
public interface UserClient extends UserService {

}
Note
It is generally not advisable to share an interface between a server and a client. It introduces tight coupling, and also actually doesn’t work with Spring MVC in its current form (method parameter mapping is not inherited).

Feign request/response compression

You may consider enabling the request or response GZIP compression for your Feign requests. You can do this by enabling one of the properties:

feign.compression.request.enabled=true
feign.compression.response.enabled=true

Feign request compression gives you settings similar to what you may set for your web server:

feign.compression.request.enabled=true
feign.compression.request.mime-types=text/xml,application/xml,application/json
feign.compression.request.min-request-size=2048

These properties allow you to be selective about the compressed media types and minimum request threshold length.

Feign logging

A logger is created for each Feign client created. By default the name of the logger is the full class name of the interface used to create the Feign client. Feign logging only responds to the DEBUG level.

application.yml
logging.level.project.user.UserClient: DEBUG

The Logger.Level object that you may configure per client tells Feign how much to log. Choices are:

  • NONE, No logging (DEFAULT).

  • BASIC, Log only the request method and URL and the response status code and execution time.

  • HEADERS, Log the basic information along with request and response headers.

  • FULL, Log the headers, body, and metadata for both requests and responses.

For example, the following would set the Logger.Level to FULL:

@Configuration
public class FooConfiguration {
    @Bean
    Logger.Level feignLoggerLevel() {
        return Logger.Level.FULL;
    }
}

External Configuration: Archaius

Archaius is the Netflix client side configuration library. It is the library used by all of the Netflix OSS components for configuration. Archaius is an extension of the Apache Commons Configuration project. It allows updates to configuration by either polling a source for changes or for a source to push changes to the client. Archaius uses Dynamic<Type>Property classes as handles to properties.

Archaius Example
class ArchaiusTest {
    DynamicStringProperty myprop = DynamicPropertyFactory
            .getInstance()
            .getStringProperty("my.prop", "default value");

    void doSomething() {
        OtherClass.someMethod(myprop.get());
    }
}

Archaius has its own set of configuration files and loading priorities. Spring applications should generally not use Archaius directly, but the need to configure the Netflix tools natively remains. Spring Cloud has a Spring Environment Bridge so Archaius can read properties from the Spring Environment. This allows Spring Boot projects to use the normal configuration toolchain, while allowing them to configure the Netflix tools, for the most part, as documented.
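
For instance, with the bridge in place, the my.prop value used in the example above can be supplied through the normal Spring Boot configuration; a minimal sketch:

application.yml
my:
  prop: hello-from-spring

The DynamicStringProperty in ArchaiusTest would then resolve to "hello-from-spring" via the Spring Environment rather than fall back to its default value.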

Router and Filter: Zuul

Routing is an integral part of a microservice architecture. For example, / may be mapped to your web application, /api/users is mapped to the user service and /api/shop is mapped to the shop service. Zuul is a JVM based router and server side load balancer by Netflix.

Netflix uses Zuul for the following:

  • Authentication

  • Insights

  • Stress Testing

  • Canary Testing

  • Dynamic Routing

  • Service Migration

  • Load Shedding

  • Security

  • Static Response handling

  • Active/Active traffic management

Zuul’s rule engine allows rules and filters to be written in essentially any JVM language, with built in support for Java and Groovy.

Note
The configuration property zuul.max.host.connections has been replaced by two new properties, zuul.host.maxTotalConnections and zuul.host.maxPerRouteConnections which default to 200 and 20 respectively.
Note
Default Hystrix isolation pattern (ExecutionIsolationStrategy) for all routes is SEMAPHORE. zuul.ribbonIsolationStrategy can be changed to THREAD if this isolation pattern is preferred.

How to Include Zuul

To include Zuul in your project use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-zuul. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.
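
For a Maven build that would look like the following (the version is managed by the release train):

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zuul</artifactId>
    </dependency>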

Embedded Zuul Reverse Proxy

Spring Cloud has created an embedded Zuul proxy to ease the development of a very common use case where a UI application wants to proxy calls to one or more back end services. This feature is useful for a user interface to proxy to the backend services it requires, avoiding the need to manage CORS and authentication concerns independently for all the backends.

To enable it, annotate a Spring Boot main class with @EnableZuulProxy, and this forwards local calls to the appropriate service. By convention, a service with the ID "users" will receive requests from the proxy located at /users (with the prefix stripped). The proxy uses Ribbon to locate an instance to forward to via discovery, and all requests are executed in a hystrix command, so failures will show up in Hystrix metrics, and once the circuit is open the proxy will not try to contact the service.
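
A minimal main class sketch (the class name is arbitrary):

@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}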

Note
the Zuul starter does not include a discovery client, so for routes based on service IDs you need to provide one of those on the classpath as well (e.g. Eureka is one choice).

To skip having a service automatically added, set zuul.ignored-services to a list of service id patterns. If a service matches a pattern that is ignored, but also included in the explicitly configured routes map, then it will be unignored. Example:

application.yml
 zuul:
  ignoredServices: '*'
  routes:
    users: /myusers/**

In this example, all services are ignored except "users".

To augment or change the proxy routes, you can add external configuration like the following:

application.yml
 zuul:
  routes:
    users: /myusers/**

This means that http calls to "/myusers" get forwarded to the "users" service (for example "/myusers/101" is forwarded to "/101").

To get more fine-grained control over a route you can specify the path and the serviceId independently:

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users_service

This means that http calls to "/myusers" get forwarded to the "users_service" service. The route has to have a "path" which can be specified as an ant-style pattern, so "/myusers/*" only matches one level, but "/myusers/**" matches hierarchically.

The location of the backend can be specified as either a "serviceId" (for a service from discovery) or a "url" (for a physical location), e.g.

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      url: http://example.com/users_service

These simple url-routes don’t get executed as a HystrixCommand nor can you loadbalance multiple URLs with Ribbon. To achieve this, specify a service-route and configure a Ribbon client for the serviceId (this currently requires disabling Eureka support in Ribbon: see above for more information), e.g.

application.yml
zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users

ribbon:
  eureka:
    enabled: false

users:
  ribbon:
    listOfServers: example.com,google.com

You can provide a convention between serviceId and routes using the regexmapper. It uses regular expression named groups to extract variables from serviceId and inject them into a route pattern.

ApplicationConfiguration.java
@Bean
public PatternServiceRouteMapper serviceRouteMapper() {
    return new PatternServiceRouteMapper(
        "(?<name>^.+)-(?<version>v.+$)",
        "${version}/${name}");
}

This means that a serviceId "myusers-v1" will be mapped to route "/v1/myusers/**". Any regular expression is accepted but all named groups must be present in both servicePattern and routePattern. If servicePattern does not match a serviceId, the default behavior is used. In the example above, a serviceId "myusers" will be mapped to route "/myusers/**" (no version detected). This feature is disabled by default and only applies to discovered services.

To add a prefix to all mappings, set zuul.prefix to a value, such as /api. The proxy prefix is stripped from the request before the request is forwarded by default (switch this behaviour off with zuul.stripPrefix=false). You can also switch off the stripping of the service-specific prefix from individual routes, e.g.

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      stripPrefix: false
Note
zuul.stripPrefix only applies to the prefix set in zuul.prefix. It does not have any effect on prefixes defined within a given route’s path.

In this example, requests to "/myusers/101" will be forwarded to "/myusers/101" on the "users" service.

The zuul.routes entries actually bind to an object of type ZuulProperties. If you look at the properties of that object you will see that it also has a "retryable" flag. Set that flag to "true" to have the Ribbon client automatically retry failed requests (and if you need to you can modify the parameters of the retry operations using the Ribbon client configuration).
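
For example, to enable retries for the route defined earlier, a sketch:

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      retryable: true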

The X-Forwarded-Host header is added to the forwarded requests by default. To turn it off set zuul.addProxyHeaders = false. The prefix path is stripped by default, and the request to the backend picks up a header "X-Forwarded-Prefix" ("/myusers" in the examples above).

An application with @EnableZuulProxy could act as a standalone server if you set a default route ("/"), for example zuul.routes.home: / would route all traffic (i.e. "/**") to the "home" service.

If more fine-grained ignoring is needed, you can specify specific patterns to ignore. These patterns are evaluated at the start of the route location process, which means prefixes should be included in the pattern to warrant a match. Ignored patterns span all services and supersede any other route specification.

application.yml
 zuul:
  ignoredPatterns: /**/admin/**
  routes:
    users: /myusers/**

This means that all calls such as "/myusers/101" will be forwarded to "/101" on the "users" service. But calls including "/admin/" will not resolve.

Warning
If you need your routes to have their order preserved you need to use a YAML file as the ordering will be lost using a properties file. For example:
application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
    legacy:
      path: /**

If you were to use a properties file, the legacy path may end up in front of the users path rendering the users path unreachable.

Zuul Http Client

The default HTTP client used by zuul is now backed by the Apache HTTP Client instead of the deprecated Ribbon RestClient. To use RestClient or to use the okhttp3.OkHttpClient set ribbon.restclient.enabled=true or ribbon.okhttp.enabled=true respectively.

Cookies and Sensitive Headers

It’s OK to share headers between services in the same system, but you probably don’t want sensitive headers leaking downstream into external servers. You can specify a list of ignored headers as part of the route configuration. Cookies play a special role because they have well-defined semantics in browsers, and they are always to be treated as sensitive. If the consumer of your proxy is a browser, then cookies for downstream services also cause problems for the user because they all get jumbled up (all downstream services look like they come from the same place).

If you are careful with the design of your services, for example if only one of the downstream services sets cookies, then you might be able to let them flow from the backend all the way up to the caller. Also, if your proxy sets cookies and all your back end services are part of the same system, it can be natural to simply share them (and for instance use Spring Session to link them up to some shared state). Other than that, any cookies that get set by downstream services are likely to be not very useful to the caller, so it is recommended that you make (at least) "Set-Cookie" and "Cookie" into sensitive headers for routes that are not part of your domain. Even for routes that are part of your domain, try to think carefully about what it means before allowing cookies to flow between them and the proxy.

The sensitive headers can be configured as a comma-separated list per route, e.g.

application.yml
 zuul:
  routes:
    users:
      path: /myusers/**
      sensitiveHeaders: Cookie,Set-Cookie,Authorization
      url: https://downstream

Sensitive headers can also be set globally by setting zuul.sensitiveHeaders. If sensitiveHeaders is set on a route, this will override the global sensitiveHeaders setting.
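
For example, to apply the same list to every route, a sketch:

application.yml
 zuul:
  sensitiveHeaders: Cookie,Set-Cookie,Authorization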

Note
this is the default value for sensitiveHeaders, so you don’t need to set it unless you want it to be different. N.B. this is new in Spring Cloud Netflix 1.1 (in 1.0 the user had no control over headers and all cookies flow in both directions).

Ignored Headers

In addition to the per-route sensitive headers, you can set a global value for zuul.ignoredHeaders for values that should be discarded (both request and response) during interactions with downstream services. By default these are empty if Spring Security is not on the classpath, and otherwise they are initialized to a set of well-known "security" headers (e.g. involving caching) as specified by Spring Security. The assumption in this case is that the downstream services might add these headers too, and we want the values from the proxy. To not discard these well known security headers in case Spring Security is on the classpath you can set zuul.ignoreSecurityHeaders to false. This can be useful if you disabled the HTTP Security response headers in Spring Security and want the values provided by downstream services.
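
A sketch with made-up header names:

application.yml
 zuul:
  ignoredHeaders: X-Internal-Trace,X-Backend-Node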

The Routes Endpoint

If you are using @EnableZuulProxy with the Spring Boot Actuator you will enable (by default) an additional endpoint, available via HTTP as /routes. A GET to this endpoint will return a list of the mapped routes. A POST will force a refresh of the existing routes (e.g. in case there have been changes in the service catalog).
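
For example, assuming the proxy listens on port 8080:

$ curl http://localhost:8080/routes
$ curl -X POST http://localhost:8080/routes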

Note
the routes should respond automatically to changes in the service catalog, but the POST to /routes is a way to force the change to happen immediately.

Strangulation Patterns and Local Forwards

A common pattern when migrating an existing application or API is to "strangle" old endpoints, slowly replacing them with different implementations. The Zuul proxy is a useful tool for this because you can use it to handle all traffic from clients of the old endpoints, but redirect some of the requests to new ones.

Example configuration:

application.yml
 zuul:
  routes:
    first:
      path: /first/**
      url: http://first.example.com
    second:
      path: /second/**
      url: forward:/second
    third:
      path: /third/**
      url: forward:/3rd
    legacy:
      path: /**
      url: http://legacy.example.com

In this example we are strangling the "legacy" app which is mapped to all requests that do not match one of the other patterns. Paths in /first/** have been extracted into a new service with an external URL. And paths in /second/** are forwarded so they can be handled locally, e.g. with a normal Spring @RequestMapping. Paths in /third/** are also forwarded, but with a different prefix (i.e. /third/foo is forwarded to /3rd/foo).

Note
The ignored patterns aren’t completely ignored, they just aren’t handled by the proxy (so they are also effectively forwarded locally).

Uploading Files through Zuul

If you @EnableZuulProxy you can use the proxy paths to upload files and it should just work as long as the files are small. For large files there is an alternative path which bypasses the Spring DispatcherServlet (to avoid multipart processing) in "/zuul/*". I.e. if zuul.routes.customers=/customers/** then you can POST large files to "/zuul/customers/*". The servlet path is externalized via zuul.servletPath. Extremely large files will also require elevated timeout settings if the proxy route takes you through a Ribbon load balancer, e.g.

application.yml
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000
ribbon:
  ConnectTimeout: 3000
  ReadTimeout: 60000

Note that for streaming to work with large files, you need to use chunked encoding in the request (which some browsers do not do by default). E.g. on the command line:

$ curl -v -H "Transfer-Encoding: chunked" \
    -F "file=@mylarge.iso" localhost:9999/zuul/simple/file

Plain Embedded Zuul

You can also run a Zuul server without the proxying, or switch on parts of the proxying platform selectively, if you use @EnableZuulServer (instead of @EnableZuulProxy). Any beans that you add to the application of type ZuulFilter will be installed automatically, as they are with @EnableZuulProxy, but without any of the proxy filters being added automatically.

In this case the routes into the Zuul server are still specified by configuring "zuul.routes.*", but there is no service discovery and no proxying, so the "serviceId" and "url" settings are ignored. For example:

application.yml
 zuul:
  routes:
    api: /api/**

maps all paths in "/api/**" to the Zuul filter chain.

Disable Zuul Filters

Zuul for Spring Cloud comes with a number of ZuulFilter beans enabled by default in both proxy and server mode. See the zuul filters package for the possible filters that are enabled. If you want to disable one, simply set zuul.<SimpleClassName>.<filterType>.disable=true. By convention, the package after filters is the Zuul filter type. For example to disable org.springframework.cloud.netflix.zuul.filters.post.SendResponseFilter set zuul.SendResponseFilter.post.disable=true.

Providing Hystrix Fallbacks For Routes

When a circuit for a given route in Zuul is tripped you can provide a fallback response by creating a bean of type ZuulFallbackProvider. Within this bean you need to specify the route ID the fallback is for and provide a ClientHttpResponse to return as a fallback. Here is a very simple ZuulFallbackProvider implementation.

class MyFallbackProvider implements ZuulFallbackProvider {
    @Override
    public String getRoute() {
        return "customers";
    }

    @Override
    public ClientHttpResponse fallbackResponse() {
        return new ClientHttpResponse() {
            @Override
            public HttpStatus getStatusCode() throws IOException {
                return HttpStatus.OK;
            }

            @Override
            public int getRawStatusCode() throws IOException {
                return 200;
            }

            @Override
            public String getStatusText() throws IOException {
                return "OK";
            }

            @Override
            public void close() {

            }

            @Override
            public InputStream getBody() throws IOException {
                return new ByteArrayInputStream("fallback".getBytes());
            }

            @Override
            public HttpHeaders getHeaders() {
                HttpHeaders headers = new HttpHeaders();
                headers.setContentType(MediaType.APPLICATION_JSON);
                return headers;
            }
        };
    }
}

And here is what the route configuration would look like.

zuul:
  routes:
    customers: /customers/**

Polyglot support with Sidecar

Do you have non-jvm languages you want to take advantage of Eureka, Ribbon and Config Server? The Spring Cloud Netflix Sidecar was inspired by Netflix Prana. It includes a simple http api to get all of the instances (i.e. host and port) for a given service. You can also proxy service calls through an embedded Zuul proxy which gets its route entries from Eureka. The Spring Cloud Config Server can be accessed directly via host lookup or through the Zuul Proxy. The non-jvm app should implement a health check so the Sidecar can report to eureka whether the app is up or down.

To include Sidecar in your project use the dependency with group org.springframework.cloud and artifact id spring-cloud-netflix-sidecar.

To enable the Sidecar, create a Spring Boot application with @EnableSidecar. This annotation includes @EnableCircuitBreaker, @EnableDiscoveryClient, and @EnableZuulProxy. Run the resulting application on the same host as the non-jvm application.
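
A minimal sketch (the class name is arbitrary):

@SpringBootApplication
@EnableSidecar
public class SidecarApplication {
    public static void main(String[] args) {
        SpringApplication.run(SidecarApplication.class, args);
    }
}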

To configure the Sidecar, add sidecar.port and sidecar.health-uri to application.yml. The sidecar.port property is the port the non-jvm app is listening on. This is so the Sidecar can properly register the app with Eureka. The sidecar.health-uri is a uri accessible on the non-jvm app that mimics a Spring Boot health indicator. It should return a json document like the following:

health-uri-document
{
  "status":"UP"
}

Here is an example application.yml for a Sidecar application:

application.yml
server:
  port: 5678
spring:
  application:
    name: sidecar

sidecar:
  port: 8000
  health-uri: http://localhost:8000/health.json

The api for the DiscoveryClient.getInstances() method is /hosts/{serviceId}. Here is an example response for /hosts/customers that returns two instances on different hosts. This api is accessible to the non-jvm app (if the sidecar is on port 5678) at http://localhost:5678/hosts/{serviceId}.

/hosts/customers
[
    {
        "host": "myhost",
        "port": 9000,
        "uri": "http://myhost:9000",
        "serviceId": "CUSTOMERS",
        "secure": false
    },
    {
        "host": "myhost2",
        "port": 9000,
        "uri": "http://myhost2:9000",
        "serviceId": "CUSTOMERS",
        "secure": false
    }
]

The Zuul proxy automatically adds routes for each service known in eureka to /<serviceId>, so the customers service is available at /customers. The non-jvm app can access the customer service via http://localhost:5678/customers (assuming the sidecar is listening on port 5678).

If the Config Server is registered with Eureka, the non-jvm application can access it via the Zuul proxy. If the serviceId of the ConfigServer is configserver and the Sidecar is on port 5678, then it can be accessed at http://localhost:5678/configserver.

A non-jvm app can take advantage of the Config Server’s ability to return YAML documents. For example, a call to http://sidecar.local.spring.io:5678/configserver/default-master.yml might result in a YAML document like the following:

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  password: password
info:
  description: Spring Cloud Samples
  url: https://github.com/spring-cloud-samples

RxJava with Spring MVC

Spring Cloud Netflix includes RxJava.

RxJava is a Java VM implementation of Reactive Extensions: a library for composing asynchronous and event-based programs by using observable sequences.

Spring Cloud Netflix provides support for returning rx.Single objects from Spring MVC Controllers. It also supports using rx.Observable objects for Server-sent events (SSE). This can be very convenient if your internal APIs are already built using RxJava (see Feign Hystrix Support for examples).

Here are some examples of using rx.Single:

@RequestMapping(method = RequestMethod.GET, value = "/single")
public Single<String> single() {
	return Single.just("single value");
}

@RequestMapping(method = RequestMethod.GET, value = "/singleWithResponse")
public ResponseEntity<Single<String>> singleWithResponse() {
	return new ResponseEntity<>(Single.just("single value"),
			HttpStatus.NOT_FOUND);
}

@RequestMapping(method = RequestMethod.GET, value = "/singleCreatedWithResponse")
public Single<ResponseEntity<String>> singleOuterWithResponse() {
	return Single.just(new ResponseEntity<>("single value", HttpStatus.CREATED));
}

@RequestMapping(method = RequestMethod.GET, value = "/throw")
public Single<Object> error() {
	return Single.error(new RuntimeException("Unexpected"));
}

If you have an Observable, rather than a Single, you can use .toSingle() or .toList().toSingle(). Here are some examples:

@RequestMapping(method = RequestMethod.GET, value = "/single")
public Single<String> single() {
    return Observable.just("single value").toSingle();
}

@RequestMapping(method = RequestMethod.GET, value = "/multiple")
public Single<List<String>> multiple() {
    return Observable.just("multiple", "values").toList().toSingle();
}

@RequestMapping(method = RequestMethod.GET, value = "/responseWithObservable")
public ResponseEntity<Single<String>> responseWithObservable() {

    Observable<String> observable = Observable.just("single value");
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(APPLICATION_JSON_UTF8);
    return new ResponseEntity<>(observable.toSingle(), headers, HttpStatus.CREATED);
}

@RequestMapping(method = RequestMethod.GET, value = "/timeout")
public Observable<String> timeout() {
    return Observable.timer(1, TimeUnit.MINUTES).map(new Func1<Long, String>() {
        @Override
        public String call(Long aLong) {
            return "single value";
        }
    });
}

If you have a streaming endpoint and client, SSE could be an option. To convert rx.Observable to a Spring SseEmitter use RxResponse.sse(). Here are some examples:

@RequestMapping(method = RequestMethod.GET, value = "/sse")
public SseEmitter single() {
	return RxResponse.sse(Observable.just("single value"));
}

@RequestMapping(method = RequestMethod.GET, value = "/messages")
public SseEmitter messages() {
	return RxResponse.sse(Observable.just("message 1", "message 2", "message 3"));
}

@RequestMapping(method = RequestMethod.GET, value = "/events")
public SseEmitter event() {
	return RxResponse.sse(APPLICATION_JSON_UTF8,
			Observable.just(new EventDto("Spring io", getDate(2016, 5, 19)),
					new EventDto("SpringOnePlatform", getDate(2016, 8, 1))));
}

Metrics: Spectator, Servo, and Atlas

When used together, Spectator/Servo and Atlas provide a near real-time operational insight platform.

Spectator and Servo are Netflix’s metrics collection libraries. Atlas is a Netflix metrics backend to manage dimensional time series data.

Servo served Netflix for several years and is still usable, but is gradually being phased out in favor of Spectator, which is only designed to work with Java 8. Spring Cloud Netflix provides support for both, but Java 8 based applications are encouraged to use Spectator.

Dimensional vs. Hierarchical Metrics

Spring Boot Actuator metrics are hierarchical and metrics are separated only by name. These names often follow a naming convention that embeds key/value attribute pairs (dimensions) into the name separated by periods. Consider the following metrics for two endpoints, root and star-star:

{
    "counter.status.200.root": 20,
    "counter.status.400.root": 3,
    "counter.status.200.star-star": 5
}

The first metric gives us a normalized count of successful requests against the root endpoint per unit of time. But what if the system had 20 endpoints and you want to get a count of successful requests against all the endpoints? Some hierarchical metrics backends would allow you to specify a wild card such as counter.status.200.* that would read all 20 metrics and aggregate the results. Alternatively, you could provide a HandlerInterceptorAdapter that intercepts and records a metric like counter.status.200.all for all successful requests irrespective of the endpoint, but now you must write 20+1 different metrics. Similarly if you want to know the total number of successful requests for all endpoints in the service, you could specify a wild card such as counter.status.2.*.

Even in the presence of wildcarding support on a hierarchical metrics backend, naming consistency can be difficult. Specifically the position of these tags in the name string can slip with time, breaking queries. For example, suppose we add an additional dimension to the hierarchical metrics above for HTTP method. Then counter.status.200.root becomes counter.status.200.method.get.root, etc. Our counter.status.200.* suddenly no longer has the same semantic meaning. Furthermore, if the new dimension is not applied uniformly across the codebase, certain queries may become impossible. This can quickly get out of hand.

Netflix metrics are tagged (a.k.a. dimensional). Each metric has a name, but this single named metric can contain multiple statistics and 'tag' key/value pairs that allows more querying flexibility. In fact, the statistics themselves are recorded in a special tag.

Recorded with Netflix Servo or Spectator, a timer for the root endpoint described above contains 4 statistics per status code, where the count statistic is identical to Spring Boot Actuator’s counter. In the event that we have encountered an HTTP 200 and 400 thus far, there will be 8 available data points:

{
    "root(status=200,statistic=count)": 20,
    "root(status=200,statistic=max)": 0.7265630630000001,
    "root(status=200,statistic=totalOfSquares)": 0.04759702862580789,
    "root(status=200,statistic=totalTime)": 0.2093076914666667,
    "root(status=400,statistic=count)": 1,
    "root(status=400,statistic=max)": 0,
    "root(status=400,statistic=totalOfSquares)": 0,
    "root(status=400,statistic=totalTime)": 0
}

Default Metrics Collection

Without any additional dependencies or configuration, a Spring Cloud based service will autoconfigure a Servo MonitorRegistry and begin collecting metrics on every Spring MVC request. By default, a Servo timer with the name rest will be recorded for each MVC request which is tagged with:

  1. HTTP method

  2. HTTP status (e.g. 200, 400, 500)

  3. URI (or "root" if the URI is empty), sanitized for Atlas

  4. The exception class name, if the request handler threw an exception

  5. The caller, if a request header with a key matching netflix.metrics.rest.callerHeader is set on the request. There is no default key for netflix.metrics.rest.callerHeader. You must add it to your application properties if you wish to collect caller information.

Set the netflix.metrics.rest.metricName property to change the name of the metric from rest to a name you provide.
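
For example (the name http_requests is arbitrary):

application.yml
netflix:
  metrics:
    rest:
      metricName: http_requests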

If Spring AOP is enabled and org.aspectj:aspectjweaver is present on your runtime classpath, Spring Cloud will also collect metrics on every client call made with RestTemplate. A Servo timer with the name restclient will be recorded for each such call, tagged with:

  1. HTTP method

  2. HTTP status (e.g. 200, 400, 500), "CLIENT_ERROR" if the response returned null, or "IO_ERROR" if an IOException occurred during the execution of the RestTemplate method

  3. URI, sanitized for Atlas

  4. Client name

Warning
Avoid using hardcoded url parameters within RestTemplate. When targeting dynamic endpoints use URL variables. This will avoid potential "GC Overhead Limit Reached" issues where ServoMonitorCache treats each url as a unique key.
// recommended
String orderid = "1";
restTemplate.getForObject("http://testeurekabrixtonclient/orders/{orderid}", String.class, orderid);

// avoid
restTemplate.getForObject("http://testeurekabrixtonclient/orders/1", String.class);

Metrics Collection: Spectator

To enable Spectator metrics, include a dependency on spring-cloud-starter-spectator:

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-spectator</artifactId>
    </dependency>

In Spectator parlance, a meter is a named, typed, and tagged configuration and a metric represents the value of a given meter at a point in time. Spectator meters are created and controlled by a registry, which currently has several different implementations. Spectator provides 4 meter types: counter, timer, gauge, and distribution summary.

Spring Cloud Spectator integration configures an injectable com.netflix.spectator.api.Registry instance for you. Specifically, it configures a ServoRegistry instance in order to unify the collection of REST metrics and the exporting of metrics to the Atlas backend under a single Servo API. Practically, this means that your code may use a mixture of Servo monitors and Spectator meters and both will be scooped up by Spring Boot Actuator MetricReader instances and both will be shipped to the Atlas backend.

Spectator Counter

A counter is used to measure the rate at which some event is occurring.

// create a counter with a name and a set of tags
Counter counter = registry.counter("counterName", "tagKey1", "tagValue1", ...);
counter.increment(); // increment when an event occurs
counter.increment(10); // increment by a discrete amount

The counter records a single time-normalized statistic.

Spectator Timer

A timer is used to measure how long some event is taking. Spring Cloud automatically records timers for Spring MVC requests and conditionally RestTemplate requests, which can later be used to create dashboards for request related metrics like latency:

Request Latency


// create a timer with a name and a set of tags
Timer timer = registry.timer("timerName", "tagKey1", "tagValue1", ...);

// execute an operation and time it at the same time
T result = timer.record(() -> fooReturnsT());

// alternatively, if you must manually record the time
Long start = System.nanoTime();
T result = fooReturnsT();
timer.record(System.nanoTime() - start, TimeUnit.NANOSECONDS);

The timer simultaneously records 4 statistics: count, max, totalOfSquares, and totalTime. The count statistic will always match the single normalized value provided by a counter if you had called increment() once on the counter for each time you recorded a timing, so it is rarely necessary to count and time separately for a single operation.

For long running operations, Spectator provides a special LongTaskTimer.
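
A minimal sketch, assuming the Spectator LongTaskTimer pattern (LongTaskTimer.get(Registry, Id)); refreshCaches() stands in for whatever long running operation you are timing:

// obtain (or lazily create) a long task timer for the given id
LongTaskTimer refreshTimer = LongTaskTimer.get(registry, registry.createId("cacheRefresh"));

long taskId = refreshTimer.start(); // mark the task as started
try {
    refreshCaches(); // the long running operation being timed
} finally {
    refreshTimer.stop(taskId); // mark this particular task as complete
}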

Spectator Gauge

Gauges are used to determine some current value like the size of a queue or number of threads in a running state. Since gauges are sampled, they provide no information about how these values fluctuate between samples.

The normal use of a gauge involves registering the gauge once in initialization with an id, a reference to the object to be sampled, and a function to get or compute a numeric value based on the object. The reference to the object is passed in separately and the Spectator registry will keep a weak reference to the object. If the object is garbage collected, then Spectator will automatically drop the registration. See the note in Spectator’s documentation about potential memory leaks if this API is misused.

// the registry will automatically sample this gauge periodically
registry.gauge("gaugeName", pool, Pool::numberOfRunningThreads);

// manually sample a value in code at periodic intervals -- last resort!
registry.gauge("gaugeName", Arrays.asList("tagKey1", "tagValue1", ...), 1000);

Spectator Distribution Summaries

A distribution summary is used to track the distribution of events. It is similar to a timer, but more general in that the size does not have to be a period of time. For example, a distribution summary could be used to measure the payload sizes of requests hitting a server.

// create a distribution summary with a name and a set of tags
DistributionSummary ds = registry.distributionSummary("dsName", "tagKey1", "tagValue1", ...);
ds.record(request.sizeInBytes());

Metrics Collection: Servo

Warning
If your code is compiled on Java 8, please use Spectator instead of Servo as Spectator is destined to replace Servo entirely in the long term.

In Servo parlance, a monitor is a named, typed, and tagged configuration and a metric represents the value of a given monitor at a point in time. Servo monitors are logically equivalent to Spectator meters. Servo monitors are created and controlled by a MonitorRegistry. In spite of the above warning, Servo does have a wider array of monitor options than Spectator has meters.

Spring Cloud integration configures an injectable com.netflix.servo.MonitorRegistry instance for you. Once you have created the appropriate Monitor type in Servo, the process of recording data is wholly similar to Spectator.

Creating Servo Monitors

If you are using the Servo MonitorRegistry instance provided by Spring Cloud (specifically, an instance of DefaultMonitorRegistry), Servo provides convenience classes for retrieving counters and timers. These convenience classes ensure that only one Monitor is registered for each unique combination of name and tags.

To manually create a Monitor type in Servo, especially for the more exotic monitor types for which convenience methods are not provided, instantiate the appropriate type by providing a MonitorConfig instance:

MonitorConfig config = MonitorConfig.builder("timerName").withTag("tagKey1", "tagValue1").build();

// somewhere we should cache this Monitor by MonitorConfig
Timer timer = new BasicTimer(config);
monitorRegistry.register(timer);

Metrics Backend: Atlas

Atlas was developed by Netflix to manage dimensional time series data for near real-time operational insight. Atlas features in-memory data storage, allowing it to gather and report very large numbers of metrics, very quickly.

Atlas captures operational intelligence. Whereas business intelligence is data gathered for analyzing trends over time, operational intelligence provides a picture of what is currently happening within a system.

Spring Cloud provides a spring-cloud-starter-atlas that has all the dependencies you need. Then just annotate your Spring Boot application with @EnableAtlas and provide a location for your running Atlas server with the netflix.atlas.uri property.
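
A sketch, assuming a local standalone Atlas instance on its default port:

application.yml
netflix:
  atlas:
    uri: http://localhost:7101/api/v1/publish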

Global tags

Spring Cloud enables you to add tags to every metric sent to the Atlas backend. Global tags can be used to separate metrics by application name, environment, region, etc.

Each bean implementing AtlasTagProvider will contribute to the global tag list:

@Bean
AtlasTagProvider atlasCommonTags(
    @Value("${spring.application.name}") String appName) {
  return () -> Collections.singletonMap("app", appName);
}

Using Atlas

To bootstrap an in-memory standalone Atlas instance:

$ curl -LO https://github.com/Netflix/atlas/releases/download/v1.4.2/atlas-1.4.2-standalone.jar
$ java -jar atlas-1.4.2-standalone.jar
Tip
An Atlas standalone node running on an r3.2xlarge (61GB RAM) can handle roughly 2 million metrics per minute for a given 6 hour window.

Once it is running and you have collected a handful of metrics, verify that your setup is correct by listing tags on the Atlas server:

$ curl http://ATLAS/api/v1/tags
Tip
After executing several requests against your service, you can gather some very basic information on the request latency of every request by pasting the following url in your browser: http://ATLAS/api/v1/graph?q=name,rest,:eq,:avg

The Atlas wiki contains a compilation of sample queries for various scenarios.

Make sure to check out the alerting philosophy and docs on using double exponential smoothing to generate dynamic alert thresholds.

Spring Cloud Stream

Spring Cloud Stream Reference Guide

Sabby Anandan; Marius Bogoevici; Eric Bottard; Mark Fisher; Ilayaperumal Gopinathan; Gunnar Hillert; Mark Pollack; Patrick Peralta; Glenn Renfro; Thomas Risberg; Dave Syer; David Turanski; Janne Valkealahti; Benjamin Klein

Spring Cloud Stream Core


Binder Implementations

Apache Kafka Binder


RabbitMQ Binder


Appendices

Appendix A: Building

Basic Compile and Test

To build the source you will need to install JDK 1.7.

The build uses the Maven wrapper so you don’t have to install a specific version of Maven. To enable the tests for Redis, Rabbit, and Kafka bindings you should have those servers running before building. See below for more information on running the servers.

The main build command is

$ ./mvnw clean install

You can also add '-DskipTests' if you like, to avoid running the tests.

Note
You can also install Maven (>=3.3.3) yourself and run the mvn command in place of ./mvnw in the examples below. If you do that you also might need to add -P spring if your local Maven settings do not contain repository declarations for spring pre-release artifacts.
Note
Be aware that you might need to increase the amount of memory available to Maven by setting a MAVEN_OPTS environment variable with a value like -Xmx512m -XX:MaxPermSize=128m. We try to cover this in the .mvn configuration, so if you find you have to do it to make a build succeed, please raise a ticket to get the settings added to source control.

The projects that require middleware generally include a docker-compose.yml, so consider using Docker Compose to run the middleware servers in Docker containers. See the README in the scripts demo repository for specific instructions about the common cases of mongo, rabbit and redis.

Documentation

There is a "full" profile that will generate documentation.

Working with the code

If you don’t have an IDE preference we would recommend that you use Spring Tool Suite or Eclipse when working with the code. We use the m2eclipse eclipse plugin for maven support. Other IDEs and tools should also work without issue.

Importing into eclipse with m2eclipse

We recommend the m2eclipse eclipse plugin when working with eclipse. If you don’t already have m2eclipse installed it is available from the "eclipse marketplace".

Unfortunately m2e does not yet support Maven 3.3, so once the projects are imported into Eclipse you will also need to tell m2eclipse to use the .settings.xml file for the projects. If you do not do this you may see many different errors related to the POMs in the projects. Open your Eclipse preferences, expand the Maven preferences, and select User Settings. In the User Settings field click Browse and navigate to the Spring Cloud project you imported selecting the .settings.xml file in that project. Click Apply and then OK to save the preference changes.

Note
Alternatively you can copy the repository settings from .settings.xml into your own ~/.m2/settings.xml.

Importing into eclipse without m2eclipse

If you prefer not to use m2eclipse you can generate eclipse project metadata using the following command:

$ ./mvnw eclipse:eclipse

The generated eclipse projects can be imported by selecting import existing projects from the file menu.

Contributing

Spring Cloud is released under the non-restrictive Apache 2.0 license, and follows a very standard Github development process, using Github tracker for issues and merging pull requests into master. If you want to contribute even something trivial please do not hesitate, but follow the guidelines below.

Sign the Contributor License Agreement

Before we accept a non-trivial patch or pull request we will need you to sign the contributor’s agreement. Signing the contributor’s agreement does not grant anyone commit rights to the main repository, but it does mean that we can accept your contributions, and you will get an author credit if we do. Active contributors might be asked to join the core team, and given the ability to merge pull requests.

Code Conventions and Housekeeping

None of these is essential for a pull request, but they will all help. They can also be added after the original pull request but before a merge.

  • Use the Spring Framework code format conventions. If you use Eclipse you can import formatter settings using the eclipse-code-formatter.xml file from the Spring Cloud Build project. If using IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file.

  • Make sure all new .java files have a simple Javadoc class comment with at least an @author tag identifying you, and preferably at least a paragraph on what the class is for.

  • Add the ASF license header comment to all new .java files (copy from existing files in the project)

  • Add yourself as an @author to the .java files that you modify substantially (more than cosmetic changes).

  • Add some Javadocs and, if you change the namespace, some XSD doc elements.

  • A few unit tests would help a lot as well — someone has to do it.

  • If no-one else is using your branch, please rebase it against the current master (or other target branch in the main project).

  • When writing a commit message please follow these conventions, if you are fixing an existing issue please add Fixes gh-XXXX at the end of the commit message (where XXXX is the issue number).

Spring Cloud Bus

Spring Cloud Bus links nodes of a distributed system with a lightweight message broker. This can then be used to broadcast state changes (e.g. configuration changes) or other management instructions. A key idea is that the Bus is like a distributed Actuator for a Spring Boot application that is scaled out, but it can also be used as a communication channel between apps. The only implementation currently is with an AMQP broker as the transport, but the same basic feature set (and some more depending on the transport) is on the roadmap for other transports.

Note
Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, please find the source code and issue trackers in the project at github.

Quick Start

Spring Cloud Bus works by adding Spring Boot autoconfiguration if it detects itself on the classpath. All you need to do to enable the bus is to add spring-cloud-starter-bus-amqp or spring-cloud-starter-bus-kafka to your dependency management and Spring Cloud takes care of the rest. Make sure the broker (RabbitMQ or Kafka) is available and configured: running on localhost you shouldn’t have to do anything, but if you are running remotely use Spring Cloud Connectors, or Spring Boot conventions to define the broker credentials, e.g. for Rabbit

application.yml
spring:
  rabbitmq:
    host: mybroker.com
    port: 5672
    username: user
    password: secret

The bus currently supports sending messages to all nodes listening or all nodes for a particular service (as defined by Eureka). More selector criteria may be added in the future (i.e. only service X nodes in data center Y, etc.). There are also some http endpoints under the /bus/* actuator namespace. There are currently two implemented. The first, /bus/env, sends key/value pairs to update each node’s Spring Environment. The second, /bus/refresh, will reload each application’s configuration, just as if they had all been pinged on their /refresh endpoint.
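
For example, to trigger a refresh from an instance listening on port 8080:

$ curl -X POST http://localhost:8080/bus/refresh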

Note
The Bus starters cover Rabbit and Kafka, because those are the two most common implementations, but Spring Cloud Stream is quite flexible, and any binder will work combined with spring-cloud-bus.

Addressing an Instance

The HTTP endpoints accept a "destination" parameter, e.g. "/bus/refresh?destination=customers:9000", where the destination is an ApplicationContext ID. If the ID is owned by an instance on the Bus then it will process the message and all other instances will ignore it. Spring Boot sets the ID for you in the ContextIdApplicationContextInitializer to a combination of the spring.application.name, active profiles and server.port by default.

Addressing all instances of a service

The "destination" parameter is used in a Spring PathMatcher (with the path separator as a colon :) to determine if an instance will process the message. Using the example from above, "/bus/refresh?destination=customers:**" will target all instances of the "customers" service regardless of the profiles and ports set as the ApplicationContext ID.

Application Context ID must be unique

The bus tries to eliminate processing an event twice, once from the original ApplicationEvent and once from the queue. To do this, it checks the sending application context id against the current application context id. If multiple instances of a service have the same application context id, events will not be processed. Running on a local machine, each service will be on a different port and that will be part of the application context id. Cloud Foundry supplies an index to differentiate. To ensure that the application context id is unique, set spring.application.index to something unique for each instance of a service. For example, in lattice, set spring.application.index=${INSTANCE_INDEX} in application.properties (or bootstrap.properties if using configserver).

Customizing the Message Broker

Spring Cloud Bus uses Spring Cloud Stream to broadcast the messages, so to get messages to flow you only need to include the binder implementation of your choice in the classpath. There are convenient starters specifically for the bus with AMQP (RabbitMQ) and Kafka (spring-cloud-starter-bus-[amqp,kafka]). Generally speaking Spring Cloud Stream relies on Spring Boot autoconfiguration conventions for configuring middleware, so for instance the AMQP broker address can be changed with spring.rabbitmq.* configuration properties. Spring Cloud Bus has a handful of native configuration properties in spring.cloud.bus.* (e.g. spring.cloud.bus.destination is the name of the topic to use as the external middleware). Normally the defaults will suffice.

To learn more about how to customize the message broker settings, consult the Spring Cloud Stream documentation.

Tracing Bus Events

Bus events (subclasses of RemoteApplicationEvent) can be traced by setting spring.cloud.bus.trace.enabled=true. If you do this then the Spring Boot TraceRepository (if it is present) will show each event sent and all the acks from each service instance. Example (from the /trace endpoint):

[
  {
    "timestamp": "2015-11-26T10:24:44.411+0000",
    "info": {
      "signal": "spring.cloud.bus.ack",
      "type": "RefreshRemoteApplicationEvent",
      "id": "c4d374b7-58ea-4928-a312-31984def293b",
      "origin": "stores:8081",
      "destination": "*:**"
    }
  },
  {
    "timestamp": "2015-11-26T10:24:41.864+0000",
    "info": {
      "signal": "spring.cloud.bus.sent",
      "type": "RefreshRemoteApplicationEvent",
      "id": "c4d374b7-58ea-4928-a312-31984def293b",
      "origin": "customers:9000",
      "destination": "*:**"
    }
  },
  {
    "timestamp": "2015-11-26T10:24:41.862+0000",
    "info": {
      "signal": "spring.cloud.bus.ack",
      "type": "RefreshRemoteApplicationEvent",
      "id": "c4d374b7-58ea-4928-a312-31984def293b",
      "origin": "customers:9000",
      "destination": "*:**"
    }
  }
]

This trace shows that a RefreshRemoteApplicationEvent was sent from customers:9000, broadcast to all services, and it was received (acked) by customers:9000 and stores:8081.

To handle the ack signals yourself you could add an @EventListener for the AckRemoteApplicationEvent and SentApplicationEvent types to your app (and enable tracing). Or you could tap into the TraceRepository and mine the data from there.

Note
Any Bus application can trace acks, but sometimes it will be useful to do this in a central service that can do more complex queries on the data. Or forward it to a specialized tracing service.

Broadcasting Your Own Events

The Bus can carry any event of type RemoteApplicationEvent, but the default transport is JSON and the deserializer needs to know which types are going to be used ahead of time. To register a new type it needs to be in a subpackage of org.springframework.cloud.bus.event.

To customise the event name you can use @JsonTypeName on your custom class or rely on the default strategy which is to use the simple name of the class. Note that both the producer and the consumer will need access to the class definition.
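
A sketch of a custom event using @JsonTypeName (MyCustomEvent and the type name are made-up; the constructors mirror the RemoteApplicationEvent contract):

@JsonTypeName("my.custom.event")
public class MyCustomEvent extends RemoteApplicationEvent {

    // a no-args constructor is needed for JSON deserialization
    private MyCustomEvent() {
    }

    public MyCustomEvent(Object source, String originService, String destinationService) {
        super(source, originService, destinationService);
    }
}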

Registering events in custom packages

If you cannot or don’t want to use a subpackage of org.springframework.cloud.bus.event for your custom events, you must specify which packages to scan for events of type RemoteApplicationEvent using @RemoteApplicationEventScan. Packages specified with @RemoteApplicationEventScan include subpackages.

For example, if you have a custom event called FooEvent:

package com.acme;

public class FooEvent extends RemoteApplicationEvent {
    ...
}

you can register this event with the deserializer in the following way:

package com.acme;

@Configuration
@RemoteApplicationEventScan
public class BusConfiguration {
    ...
}

Without specifying a value, the package of the class where @RemoteApplicationEventScan is used will be registered. In this example com.acme will be registered using the package of BusConfiguration.

You can also explicitly specify the packages to scan using the value, basePackages or basePackageClasses properties on @RemoteApplicationEventScan. For example:

package com.acme;

@Configuration
//@RemoteApplicationEventScan({"com.acme", "foo.bar"})
//@RemoteApplicationEventScan(basePackages = {"com.acme", "foo.bar", "fizz.buzz"})
@RemoteApplicationEventScan(basePackageClasses = BusConfiguration.class)
public class BusConfiguration {
    ...
}

All examples of @RemoteApplicationEventScan above are equivalent, in that the com.acme package will be registered by explicitly specifying the packages on @RemoteApplicationEventScan. Note, you can specify multiple base packages to scan.

Spring Cloud Sleuth

Adrian Cole, Spencer Gibb, Marcin Grzejszczak, Dave Syer

Camden.SR3

Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud.

Terminology

Spring Cloud Sleuth borrows Dapper’s terminology.

Span: The basic unit of work. For example, sending an RPC is a new span, as is sending a response to an RPC. Spans are identified by a unique 64-bit ID for the span and another 64-bit ID for the trace the span is a part of. Spans also have other data, such as descriptions, timestamped events, key-value annotations (tags), the ID of the span that caused them, and process IDs (normally IP addresses).

Spans are started and stopped, and they keep track of their timing information. Once you create a span, you must stop it at some point in the future.

Tip
The initial span that starts a trace is called a root span. For that span the span id is equal to the trace id.

Trace: A set of spans forming a tree-like structure. For example, if you are running a distributed big-data store, a trace might be formed by a put request.

Annotation: used to record the existence of an event in time. Some of the core annotations used to define the start and stop of a request are:

  • cs - Client Sent - The client has made a request. This annotation depicts the start of the span.

  • sr - Server Received - The server side got the request and will start processing it. Subtracting the cs timestamp from this timestamp reveals the network latency.

  • ss - Server Sent - Annotated upon completion of request processing (when the response got sent back to the client). Subtracting the sr timestamp from this timestamp reveals the time needed by the server side to process the request.

  • cr - Client Received - Signifies the end of the span. The client has successfully received the response from the server side. Subtracting the cs timestamp from this timestamp reveals the whole time needed by the client to receive the response from the server.

A visualization of what Span and Trace look like in a system, together with the Zipkin annotations:

Trace Info propagation

Each color of a note signifies a span (there are 7 spans, from A to G). If a note contains the following information:

Trace Id = X
Span Id = D
Client Sent

then the current span has Trace-Id set to X and Span-Id set to D. It has also emitted the Client Sent event.

This is what the visualization of the parent / child relationship of spans looks like:

Parent child relationship

Purpose

The following sections refer to the example from the image above.

Distributed tracing with Zipkin

Altogether there are 10 spans. If you go to traces in Zipkin you will see this number:

Traces

However if you pick a particular trace then you will see 7 spans:

Traces Info propagation
Note
When picking a particular trace you will see merged spans. That means that if there were 2 spans sent to Zipkin with Server Received and Server Sent / Client Received and Client Sent annotations then they will be presented as a single span.

In the image visualizing Span and Trace you can see 20 colorful labels. How is it then that Zipkin received only 10 spans?

  • 2 span A labels signify a span that was started and closed. Upon closing, a single span is sent to Zipkin.

  • 4 span B labels are in fact a single span with 4 annotations. However, this span is composed of two separate instances - one sent from service 1 and one from service 2. So in fact two span instances are sent to Zipkin and merged there.

  • 2 span C labels signify a span that was started and closed. Upon closing, a single span is sent to Zipkin.

  • 4 span D labels are in fact a single span with 4 annotations. However, this span is composed of two separate instances - one sent from service 2 and one from service 3. So in fact two span instances are sent to Zipkin and merged there.

  • 2 span E labels signify a span that was started and closed. Upon closing, a single span is sent to Zipkin.

  • 4 span F labels are in fact a single span with 4 annotations. However, this span is composed of two separate instances - one sent from service 2 and one from service 4. So in fact two span instances are sent to Zipkin and merged there.

  • 2 span G labels signify a span that was started and closed. Upon closing, a single span is sent to Zipkin.

So 1 span from A, 2 spans from B, 1 span from C, 2 spans from D, 1 span from E, 2 spans from F and 1 from G. Altogether 10 spans.


The dependency graph in Zipkin would look like this:

Dependencies

Log correlation

When grepping the logs of those four applications for a trace id equal to e.g. 2485ec27856c56f4, one gets the following:

service1.log:2016-02-26 11:15:47.561  INFO [service1,2485ec27856c56f4,2485ec27856c56f4,true] 68058 --- [nio-8081-exec-1] i.s.c.sleuth.docs.service1.Application   : Hello from service1. Calling service2
service2.log:2016-02-26 11:15:47.710  INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application   : Hello from service2. Calling service3 and then service4
service3.log:2016-02-26 11:15:47.895  INFO [service3,2485ec27856c56f4,1210be13194bfe5,true] 68060 --- [nio-8083-exec-1] i.s.c.sleuth.docs.service3.Application   : Hello from service3
service2.log:2016-02-26 11:15:47.924  INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application   : Got response from service3 [Hello from service3]
service4.log:2016-02-26 11:15:48.134  INFO [service4,2485ec27856c56f4,1b1845262ffba49d,true] 68061 --- [nio-8084-exec-1] i.s.c.sleuth.docs.service4.Application   : Hello from service4
service2.log:2016-02-26 11:15:48.156  INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application   : Got response from service4 [Hello from service4]
service1.log:2016-02-26 11:15:48.182  INFO [service1,2485ec27856c56f4,2485ec27856c56f4,true] 68058 --- [nio-8081-exec-1] i.s.c.sleuth.docs.service1.Application   : Got response from service2 [Hello from service2, response from service3 [Hello from service3] and from service4 [Hello from service4]]

If you're using a log aggregating tool like Kibana, Splunk etc. you can order the events that took place. An example view from Kibana would look like this:

Log correlation with Kibana

If you want to use Logstash here is the Grok pattern for Logstash:

filter {
       # pattern matching logback pattern
       grok {
              match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
       }
}
Note
If you want to use Grok together with the logs from Cloud Foundry you have to use this pattern:
filter {
       # pattern matching logback pattern
       grok {
              match => { "message" => "(?m)OUT\s+%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
       }
}
JSON Logback with Logstash

Often you do not want to store your logs in a text file but in a JSON file that Logstash can pick up immediately. To do that you have to do the following (for readability we're passing the dependencies in the groupId:artifactId:version notation).

Dependencies setup

  • Ensure that Logback is on the classpath (ch.qos.logback:logback-core)

  • Add the Logstash Logback encoder - example for version 4.6: net.logstash.logback:logstash-logback-encoder:4.6

Logback setup

Below you can find an example of a Logback configuration (file named logback-spring.xml) that:

  • logs information from the application in a JSON format to a build/${spring.application.name}.json file

  • has commented out two additional appenders - console and standard log file

  • has the same logging pattern as the one presented in the previous section

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
	<include resource="org/springframework/boot/logging/logback/defaults.xml"/>

	<springProperty scope="context" name="springAppName" source="spring.application.name"/>
	<!-- Example for logging into the build folder of your project -->
	<property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}"/>​

	<property name="CONSOLE_LOG_PATTERN"
			  value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr([${springAppName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}]){yellow} %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

	<!-- Appender to log to console -->
	<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
			<!-- Minimum logging level to be presented in the console logs-->
			<level>INFO</level>
		</filter>
		<encoder>
			<pattern>${CONSOLE_LOG_PATTERN}</pattern>
			<charset>utf8</charset>
		</encoder>
	</appender>

	<!-- Appender to log to file -->
	<appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<file>${LOG_FILE}</file>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
			<maxHistory>7</maxHistory>
		</rollingPolicy>
		<encoder>
			<pattern>${CONSOLE_LOG_PATTERN}</pattern>
			<charset>utf8</charset>
		</encoder>
	</appender>

	<!-- Appender to log to file in a JSON format -->
	<appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<file>${LOG_FILE}.json</file>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
			<maxHistory>7</maxHistory>
		</rollingPolicy>
		<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
			<providers>
				<timestamp>
					<timeZone>UTC</timeZone>
				</timestamp>
				<pattern>
					<pattern>
						{
						"severity": "%level",
						"service": "${springAppName:-}",
						"trace": "%X{X-B3-TraceId:-}",
						"span": "%X{X-B3-SpanId:-}",
						"exportable": "%X{X-Span-Export:-}",
						"pid": "${PID:-}",
						"thread": "%thread",
						"class": "%logger{40}",
						"rest": "%message"
						}
					</pattern>
				</pattern>
			</providers>
		</encoder>
	</appender>

	<root level="INFO">
		<!--<appender-ref ref="console"/>-->
		<appender-ref ref="logstash"/>
		<!--<appender-ref ref="flatfile"/>-->
	</root>
</configuration>
Note
If you're using a custom logback-spring.xml then you have to pass the spring.application.name in the bootstrap property file instead of the application property file. Otherwise your custom Logback file won't read the property properly.
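
For example, a minimal bootstrap.yml (the service name my-service is illustrative):

bootstrap.yml
spring:
  application:
    name: my-service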

Adding to the project

Only Sleuth (log correlation)

If you want to use only Spring Cloud Sleuth without the Zipkin integration, just add the spring-cloud-starter-sleuth module to your project.

Maven
<dependencyManagement> (1)
         <dependencies>
             <dependency>
                 <groupId>org.springframework.cloud</groupId>
                 <artifactId>spring-cloud-dependencies</artifactId>
                 <version>Camden.SR3</version>
                 <type>pom</type>
                 <scope>import</scope>
             </dependency>
         </dependencies>
   </dependencyManagement>

   <dependency> (2)
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-starter-sleuth</artifactId>
   </dependency>
  1. To avoid picking versions yourself, it is better to add the dependency management via the Spring BOM

  2. Add the dependency on spring-cloud-starter-sleuth

Gradle
dependencyManagement { (1)
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:Brixton.RELEASE"
    }
}

dependencies { (2)
    compile "org.springframework.cloud:spring-cloud-starter-sleuth"
}
  1. To avoid picking versions yourself, it is better to add the dependency management via the Spring BOM

  2. Add the dependency on spring-cloud-starter-sleuth

Sleuth with Zipkin via HTTP

If you want both Sleuth and Zipkin just add the spring-cloud-starter-zipkin dependency.

Maven
<dependencyManagement> (1)
         <dependencies>
             <dependency>
                 <groupId>org.springframework.cloud</groupId>
                 <artifactId>spring-cloud-dependencies</artifactId>
                 <version>Camden.SR3</version>
                 <type>pom</type>
                 <scope>import</scope>
             </dependency>
         </dependencies>
   </dependencyManagement>

   <dependency> (2)
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-starter-zipkin</artifactId>
   </dependency>
  1. To avoid picking versions yourself, it is better to add the dependency management via the Spring BOM

  2. Add the dependency on spring-cloud-starter-zipkin

Gradle
dependencyManagement { (1)
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:Brixton.RELEASE"
    }
}

dependencies { (2)
    compile "org.springframework.cloud:spring-cloud-starter-zipkin"
}
  1. To avoid picking versions yourself, it is better to add the dependency management via the Spring BOM

  2. Add the dependency on spring-cloud-starter-zipkin

Sleuth with Zipkin via Spring Cloud Stream

If you want Sleuth with Zipkin reporting over Spring Cloud Stream, just add the spring-cloud-sleuth-stream dependency.

Maven
<dependencyManagement> (1)
         <dependencies>
             <dependency>
                 <groupId>org.springframework.cloud</groupId>
                 <artifactId>spring-cloud-dependencies</artifactId>
                 <version>Camden.SR3</version>
                 <type>pom</type>
                 <scope>import</scope>
             </dependency>
         </dependencies>
   </dependencyManagement>

   <dependency> (2)
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-sleuth-stream</artifactId>
   </dependency>
   <dependency> (3)
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-starter-sleuth</artifactId>
   </dependency>
   <!-- EXAMPLE FOR RABBIT BINDING -->
   <dependency> (4)
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
   </dependency>
  1. To avoid picking versions yourself, it is better to add the dependency management via the Spring BOM

  2. Add the dependency on spring-cloud-sleuth-stream

  3. Add the dependency on spring-cloud-starter-sleuth - that way all transitive dependencies will be downloaded

  4. Add a binder (e.g. the Rabbit binder) to tell Spring Cloud Stream what it should bind to

Gradle
dependencyManagement { (1)
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:Brixton.RELEASE"
    }
}

dependencies {
    compile "org.springframework.cloud:spring-cloud-sleuth-stream" (2)
    compile "org.springframework.cloud:spring-cloud-starter-sleuth" (3)
    // Example for Rabbit binding
    compile "org.springframework.cloud:spring-cloud-stream-binder-rabbit" (4)
}
  1. To avoid picking versions yourself, it is better to add the dependency management via the Spring BOM

  2. Add the dependency on spring-cloud-sleuth-stream

  3. Add the dependency on spring-cloud-starter-sleuth - that way all transitive dependencies will be downloaded

  4. Add a binder (e.g. the Rabbit binder) to tell Spring Cloud Stream what it should bind to

Spring Cloud Sleuth Stream Zipkin Collector

If you want to start a Spring Cloud Sleuth Stream Zipkin collector, just add the spring-cloud-sleuth-zipkin-stream dependency

Maven
<dependencyManagement> (1)
         <dependencies>
             <dependency>
                 <groupId>org.springframework.cloud</groupId>
                 <artifactId>spring-cloud-dependencies</artifactId>
                 <version>Camden.SR3</version>
                 <type>pom</type>
                 <scope>import</scope>
             </dependency>
         </dependencies>
   </dependencyManagement>

   <dependency> (2)
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
   </dependency>
   <dependency> (3)
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-starter-sleuth</artifactId>
   </dependency>
   <!-- EXAMPLE FOR RABBIT BINDING -->
   <dependency> (4)
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
   </dependency>
  1. To avoid picking versions yourself, it is better to add the dependency management via the Spring BOM

  2. Add the dependency on spring-cloud-sleuth-zipkin-stream

  3. Add the dependency on spring-cloud-starter-sleuth - that way all transitive dependencies will be downloaded

  4. Add a binder (e.g. the Rabbit binder) to tell Spring Cloud Stream what it should bind to

Gradle
dependencyManagement { (1)
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:Brixton.RELEASE"
    }
}

dependencies {
    compile "org.springframework.cloud:spring-cloud-sleuth-zipkin-stream" (2)
    compile "org.springframework.cloud:spring-cloud-starter-sleuth" (3)
    // Example for Rabbit binding
    compile "org.springframework.cloud:spring-cloud-stream-binder-rabbit" (4)
}
  1. To avoid picking versions yourself, it is better to add the dependency management via the Spring BOM

  2. Add the dependency on spring-cloud-sleuth-zipkin-stream

  3. Add the dependency on spring-cloud-starter-sleuth - that way all transitive dependencies will be downloaded

  4. Add a binder (e.g. the Rabbit binder) to tell Spring Cloud Stream what it should bind to

and then just annotate your main class with @EnableZipkinStreamServer annotation:

package example;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.sleuth.zipkin.stream.EnableZipkinStreamServer;

@SpringBootApplication
@EnableZipkinStreamServer
public class ZipkinStreamServerApplication {

	public static void main(String[] args) throws Exception {
		SpringApplication.run(ZipkinStreamServerApplication.class, args);
	}

}

Features

  • Adds trace and span ids to the SLF4J MDC, so you can extract all the logs from a given trace or span in a log aggregator. Example logs:

    2016-02-02 15:30:57.902  INFO [bar,6bfd228dc00d216b,6bfd228dc00d216b,false] 23030 --- [nio-8081-exec-3] ...
    2016-02-02 15:30:58.372 ERROR [bar,6bfd228dc00d216b,6bfd228dc00d216b,false] 23030 --- [nio-8081-exec-3] ...
    2016-02-02 15:31:01.936  INFO [bar,46ab0d418373cbc9,46ab0d418373cbc9,false] 23030 --- [nio-8081-exec-4] ...

    notice the [appname,traceId,spanId,exportable] entries from the MDC:

    • appname - the name of the application that logged the span

    • traceId - the id of the latency graph that contains the span

    • spanId - the id of a specific operation that took place

    • exportable - whether the log should be exported to Zipkin or not. When would you like the span not to be exportable? When you want to wrap some operation in a Span and have it written to the logs only.

  • Provides an abstraction over common distributed tracing data models: traces, spans (forming a DAG), annotations, key-value annotations. Loosely based on HTrace, but Zipkin (Dapper) compatible.

  • Sleuth records timing information to aid in latency analysis. Using sleuth, you can pinpoint causes of latency in your applications. Sleuth is written to not log too much, and to not cause your production application to crash.

    • propagates structural data about your call-graph in-band, and the rest out-of-band.

    • includes opinionated instrumentation of layers such as HTTP

    • includes sampling policy to manage volume

    • can report to a Zipkin system for query and visualization

  • Instruments common ingress and egress points from Spring applications (servlet filter, async endpoints, rest template, scheduled actions, message channels, zuul filters, feign client).

  • Sleuth includes default logic to join a trace across http or messaging boundaries. For example, http propagation works via Zipkin-compatible request headers. This propagation logic is defined and customized via SpanInjector and SpanExtractor implementations.

  • Provides simple metrics of accepted / dropped spans.

  • If spring-cloud-sleuth-zipkin is on the classpath then the app will generate and collect Zipkin-compatible traces. By default it sends them via HTTP to a Zipkin server on localhost (port 9411). Configure the location of the service using spring.zipkin.baseUrl.

  • If spring-cloud-sleuth-stream is on the classpath then the app will generate and collect traces via Spring Cloud Stream. Your app automatically becomes a producer of tracer messages that are sent over your broker of choice (e.g. RabbitMQ, Apache Kafka, Redis).

Important
If using Zipkin or Stream, configure the percentage of spans exported using spring.sleuth.sampler.percentage (default 0.1, i.e. 10%). Otherwise you might think that Sleuth is not working because it is omitting some spans.
Note
the SLF4J MDC is always set and logback users will immediately see the trace and span ids in logs per the example above. Other logging systems have to configure their own formatter to get the same result. The default is logging.pattern.level set to %clr(%5p) %clr([${spring.application.name:},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}]){yellow} (this is a Spring Boot feature for logback users). This means that if you’re not using SLF4J this pattern WILL NOT be automatically applied.

Sampling

In distributed tracing the data volumes can be very high so sampling can be important (you usually don’t need to export all spans to get a good picture of what is happening). Spring Cloud Sleuth has a Sampler strategy that you can implement to take control of the sampling algorithm. Samplers do not stop span (correlation) ids from being generated, but they do prevent the tags and events being attached and exported. By default you get a strategy that continues to trace if a span is already active, but new ones are always marked as non-exportable. If all your apps run with this sampler you will see traces in logs, but not in any remote store. For testing the default is often enough, and it probably is all you need if you are only using the logs (e.g. with an ELK aggregator). If you are exporting span data to Zipkin or Spring Cloud Stream, there is also an AlwaysSampler that exports everything and a PercentageBasedSampler that samples a fixed fraction of spans.

Note
the PercentageBasedSampler is the default if you are using spring-cloud-sleuth-zipkin or spring-cloud-sleuth-stream. You can configure the exports using spring.sleuth.sampler.percentage. The passed value needs to be a double from 0.0 to 1.0 so it’s not a percentage. For backwards compatibility reasons we’re not changing the property name.
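
For example, to export every span while testing with one of those dependencies, you could set the fraction to 1.0 (a configuration sketch):

application.yml
spring:
  sleuth:
    sampler:
      percentage: 1.0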

A sampler can be installed just by creating a bean definition, e.g:

@Bean
public Sampler defaultSampler() {
	return new AlwaysSampler();
}

Instrumentation

Spring Cloud Sleuth instruments your Spring application automatically, so you shouldn't have to do anything to activate it. The instrumentation is added using a variety of technologies according to the stack that is available, e.g. for a servlet web application we use a Filter, and for Spring Integration we use ChannelInterceptors.

You can customize the keys used in span tags. To limit the volume of span data, by default an HTTP request will be tagged only with a handful of metadata like the status code, host and URL. You can add request headers by configuring spring.sleuth.keys.http.headers (a list of header names).
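
For example, to tag requests with two headers (the header names here are purely illustrative):

application.yml
spring:
  sleuth:
    keys:
      http:
        headers:
          - x-forwarded-for
          - x-request-id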

Note
Remember that tags are only collected and exported if there is a Sampler that allows it (by default there is not, so there is no danger of accidentally collecting too much data without configuring something).
Note
Currently the instrumentation in Spring Cloud Sleuth is eager - that means we're actively trying to pass the tracing context between threads. Timing events are also captured even when Sleuth isn't exporting data to a tracing system. This approach may change in the future in favor of lazy instrumentation.

Span lifecycle

You can do the following operations on the Span by means of org.springframework.cloud.sleuth.Tracer interface:

  • start - when you start a span its name is assigned and start timestamp is recorded.

  • close - the span gets finished (the end time of the span is recorded) and if the span is exportable then it will be eligible for collection to Zipkin. The span is also removed from the current thread.

  • continue - a new instance of the span will be created, as a copy of the one that it continues.

  • detach - the span doesn’t get stopped or closed. It only gets removed from the current thread.

  • create with explicit parent - you can create a new span and set an explicit parent for it

Tip
Spring creates the instance of Tracer for you. In order to use it, all you need to do is autowire it.
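
For example (the service class is illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.sleuth.Tracer;
import org.springframework.stereotype.Service;

@Service
public class TaxService {

	// Spring provides the Tracer instance; just inject it
	@Autowired
	private Tracer tracer;
}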

Creating and closing spans

You can manually create spans by using the Tracer interface.

// Start a span. If there was a span present in this thread it will become
// the `newSpan`'s parent.
Span newSpan = this.tracer.createSpan("calculateTax");
try {
	// ...
	// You can tag a span
	this.tracer.addTag("taxValue", taxValue);
	// ...
	// You can log an event on a span
	newSpan.logEvent("taxCalculated");
} finally {
	// Once done remember to close the span. This will allow collecting
	// the span to send it to Zipkin
	this.tracer.close(newSpan);
}

This example shows how to create a new instance of a span. If there already was a span present in this thread, it would become the parent of the new span.

Important
Always clean up after you create a span! Don't forget to close a span if you want to send it to Zipkin.

Continuing spans

Sometimes you don't want to create a new span but you want to continue one. Examples of such situations might be (of course it all depends on the use-case):

  • AOP - If there was already a span created before an aspect was reached then you might not want to create a new span.

  • Hystrix - executing a Hystrix command is most likely a logical part of the current processing. It's in fact only a technical implementation detail that you wouldn't necessarily want to reflect in tracing as a separate entity.

The continued instance of span is equal to the one that it continues:

Span continuedSpan = this.tracer.continueSpan(spanToContinue);
assertThat(continuedSpan).isEqualTo(spanToContinue);

To continue a span you can use the Tracer interface.

// let's assume that we're in a thread Y and we've received
// the `initialSpan` from thread X
Span continuedSpan = this.tracer.continueSpan(initialSpan);
try {
	// ...
	// You can tag a span
	this.tracer.addTag("taxValue", taxValue);
	// ...
	// You can log an event on a span
	continuedSpan.logEvent("taxCalculated");
} finally {
	// Once done remember to detach the span. That way you'll
	// safely remove it from the current thread without closing it
	this.tracer.detach(continuedSpan);
}
Important
Always clean up after you create a span! Don't forget to detach a span if some work was started in one thread (e.g. thread X) and it's waiting for other threads (e.g. Y, Z) to finish. The spans in threads Y and Z should then be detached at the end of their work. When the results are collected, the span in thread X should be closed.

Creating spans with an explicit parent

You may want to start a new span and provide an explicit parent for it. For example, assume that the parent of a span is in one thread and you want to start a new span in another thread. The createSpan method of the Tracer interface that takes a parent span is the one you are looking for.

// let's assume that we're in a thread Y and we've received
// the `initialSpan` from thread X. `initialSpan` will be the parent
// of the `newSpan`
Span newSpan = this.tracer.createSpan("calculateCommission", initialSpan);
try {
	// ...
	// You can tag a span
	this.tracer.addTag("commissionValue", commissionValue);
	// ...
	// You can log an event on a span
	newSpan.logEvent("commissionCalculated");
} finally {
	// Once done remember to close the span. This will allow collecting
	// the span to send it to Zipkin. The tags and events set on the
	// newSpan will not be present on the parent
	this.tracer.close(newSpan);
}
Important
After having created such a span remember to close it. Otherwise you will see a lot of warnings in your logs related to the fact that there is a span present in the current thread other than the one you're trying to close. What's worse, your spans won't get closed properly and thus will not get collected to Zipkin.

Naming spans

Picking a span name is not a trivial task. A span name should depict an operation name. The name should have low cardinality (e.g. not include identifiers).

Since there is a lot of instrumentation going on, some of the span names will be artificial, like:

  • controller-method-name when received by a Controller with a method named controllerMethodName

  • async for asynchronous operations done via wrapped Callable and Runnable.

  • for @Scheduled annotated methods the span name will be the simple name of the class.

Fortunately, for the asynchronous processing you can provide explicit naming.

@SpanName annotation

You can name the span explicitly via the @SpanName annotation.

@SpanName("calculateTax")
class TaxCountingRunnable implements Runnable {

	@Override public void run() {
		// perform logic
	}
}

In this case, when processed in the following manner:

Runnable runnable = new TraceRunnable(tracer, spanNamer, new TaxCountingRunnable());
Future<?> future = executorService.submit(runnable);
// ... some additional logic ...
future.get();

The span will be named calculateTax.

toString() method

It's pretty rare to create separate classes for Runnable or Callable. Typically one creates an anonymous instance of those classes. You can't annotate such classes, so if there is no @SpanName annotation present we check whether the class has a custom implementation of the toString() method.

So executing such code:

Runnable runnable = new TraceRunnable(tracer, spanNamer, new Runnable() {
	@Override public void run() {
		// perform logic
	}

	@Override public String toString() {
		return "calculateTax";
	}
});
Future<?> future = executorService.submit(runnable);
// ... some additional logic ...
future.get();

will lead to creating a span named calculateTax.

Customizations

Thanks to the SpanInjector and SpanExtractor you can customize the way spans are created and propagated.

There are currently two built-in ways to pass tracing information between processes:

  • via Spring Integration

  • via HTTP

Span ids are extracted from Zipkin-compatible (B3) headers (either Message or HTTP headers), to start or join an existing trace. Trace information is injected into any outbound requests so the next hop can extract them.

Spring Integration

For Spring Integration, these are the beans responsible for creating a Span from a Message and for filling in the MessageBuilder with tracing information.

@Bean
public SpanExtractor<Message> messagingSpanExtractor() {
    ...
}

@Bean
public SpanInjector<MessageBuilder> messagingSpanInjector() {
    ...
}

You can override them by providing your own implementation and by adding a @Primary annotation to your bean definition.

HTTP

For HTTP, this is the bean responsible for creating a Span from an HttpServletRequest.

@Bean
public SpanExtractor<HttpServletRequest> httpServletRequestSpanExtractor() {
    ...
}

You can override it by providing your own implementation and by adding a @Primary annotation to your bean definition.

Example

Let’s assume that instead of the standard Zipkin compatible tracing HTTP header names you have

  • for trace id - correlationId

  • for span id - mySpanId

This is an example of a SpanExtractor:

static class CustomHttpServletRequestSpanExtractor
		implements SpanExtractor<HttpServletRequest> {

	@Override
	public Span joinTrace(HttpServletRequest carrier) {
		long traceId = Span.hexToId(carrier.getHeader("correlationId"));
		long spanId = Span.hexToId(carrier.getHeader("mySpanId"));
		// extract all necessary headers
		Span.SpanBuilder builder = Span.builder().traceId(traceId).spanId(spanId);
		// build rest of the Span
		return builder.build();
	}
}

And you could register it like this:

@Bean
@Primary
SpanExtractor<HttpServletRequest> customHttpServletRequestSpanExtractor() {
	return new CustomHttpServletRequestSpanExtractor();
}

Spring Cloud Sleuth does not add trace/span related headers to the HTTP response for security reasons. If you need the headers, you can add a custom SpanInjector that injects the headers into the HTTP response, together with a servlet filter that makes use of it, in the following way:

static class CustomHttpServletResponseSpanInjector
		implements SpanInjector<HttpServletResponse> {

	@Override
	public void inject(Span span, HttpServletResponse carrier) {
		carrier.addHeader(Span.TRACE_ID_NAME, span.traceIdString());
		carrier.addHeader(Span.SPAN_ID_NAME, Span.idToHex(span.getSpanId()));
	}
}

static class HttpResponseInjectingTraceFilter extends GenericFilterBean {

	private final Tracer tracer;
	private final SpanInjector<HttpServletResponse> spanInjector;

	public HttpResponseInjectingTraceFilter(Tracer tracer, SpanInjector<HttpServletResponse> spanInjector) {
		this.tracer = tracer;
		this.spanInjector = spanInjector;
	}

	@Override
	public void doFilter(ServletRequest request, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
		HttpServletResponse response = (HttpServletResponse) servletResponse;
		Span currentSpan = this.tracer.getCurrentSpan();
		this.spanInjector.inject(currentSpan, response);
		filterChain.doFilter(request, response);
	}
}

And you could register them like this:

@Bean
SpanInjector<HttpServletResponse> customHttpServletResponseSpanInjector() {
	return new CustomHttpServletResponseSpanInjector();
}

@Bean
HttpResponseInjectingTraceFilter responseInjectingTraceFilter(Tracer tracer) {
	return new HttpResponseInjectingTraceFilter(tracer, customHttpServletResponseSpanInjector());
}

Custom SA tag in Zipkin

Sometimes you want to create a manual Span that wraps a call to an external service which is not instrumented. What you can do is create a span with the peer.service tag containing the name of the service that you want to call. Below you can see an example of a call to Redis that is wrapped in such a span.

org.springframework.cloud.sleuth.Span newSpan = tracer.createSpan("redis");
try {
	newSpan.tag("redis.op", "get");
	newSpan.tag("lc", "redis");
	newSpan.logEvent(org.springframework.cloud.sleuth.Span.CLIENT_SEND);
	// call redis service e.g
	// return (SomeObj) redisTemplate.opsForHash().get("MYHASH", someObjKey);
} finally {
	newSpan.tag("peer.service", "redisService");
	newSpan.tag("peer.ipv4", "1.2.3.4");
	newSpan.tag("peer.port", "1234");
	newSpan.logEvent(org.springframework.cloud.sleuth.Span.CLIENT_RECV);
	tracer.close(newSpan);
}
Important
Remember not to add both peer.service tag and the SA tag! You have to add only peer.service.

Span Data as Messages

You can accumulate and send span data over Spring Cloud Stream by including the spring-cloud-sleuth-stream jar as a dependency, and adding a Channel Binder implementation (e.g. spring-cloud-starter-stream-rabbit for RabbitMQ or spring-cloud-starter-stream-kafka for Kafka). This will automatically turn your app into a producer of messages with payload type Spans.

Zipkin Consumer

There is a special convenience annotation for setting up a message consumer for the Span data and pushing it into a Zipkin SpanStore. This application

@SpringBootApplication
@EnableZipkinStreamServer
public class Consumer {
	public static void main(String[] args) {
		SpringApplication.run(Consumer.class, args);
	}
}

will listen for the Span data on whatever transport you provide via a Spring Cloud Stream Binder (e.g. include spring-cloud-starter-stream-rabbit for RabbitMQ, and similar starters exist for Redis and Kafka). If you add the following UI dependency

<groupId>io.zipkin.java</groupId>
<artifactId>zipkin-autoconfigure-ui</artifactId>

Then your app will be a Zipkin server, which hosts the UI and API on port 9411.

The default SpanStore is in-memory (good for demos and getting started quickly). For a more robust solution you can add MySQL and spring-boot-starter-jdbc to your classpath and enable the JDBC SpanStore via configuration, e.g.:

spring:
  rabbitmq:
    host: ${RABBIT_HOST:localhost}
  datasource:
    schema: classpath:/mysql.sql
    url: jdbc:mysql://${MYSQL_HOST:localhost}/test
    username: root
    password: root
# Switch this on to create the schema on startup:
    initialize: true
    continueOnError: true
  sleuth:
    enabled: false
zipkin:
  storage:
    type: mysql
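
The classpath additions mentioned above might look like this in Maven (a sketch; versions are managed by your Spring Boot parent):

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>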
Note
The @EnableZipkinStreamServer is also annotated with @EnableZipkinServer so the process will also expose the standard Zipkin server endpoints for collecting spans over HTTP, and for querying in the Zipkin Web UI.

Custom Consumer

A custom consumer can also easily be implemented using spring-cloud-sleuth-stream and binding to the SleuthSink. Example:

@EnableBinding(SleuthSink.class)
@SpringBootApplication(exclude = SleuthStreamAutoConfiguration.class)
@MessageEndpoint
public class Consumer {

    @ServiceActivator(inputChannel = SleuthSink.INPUT)
    public void sink(Spans input) throws Exception {
        // ... process spans
    }
}
Note
the sample consumer application above explicitly excludes SleuthStreamAutoConfiguration so it doesn’t send messages to itself, but this is optional (you might actually want to trace requests into the consumer app).

Metrics

Currently Spring Cloud Sleuth registers very simple metrics related to spans. It uses Spring Boot's metrics support to calculate the number of accepted and dropped spans. Each time a span gets sent to Zipkin, the number of accepted spans increases. If there's an error, the number of dropped spans increases.

Integrations

Runnable and Callable

If you're wrapping your logic in Runnable or Callable it's enough to wrap those classes in their Sleuth representatives.

Example for Runnable:

Runnable runnable = new Runnable() {
	@Override
	public void run() {
		// do some work
	}

	@Override
	public String toString() {
		return "spanNameFromToStringMethod";
	}
};
// Manual `TraceRunnable` creation with explicit "calculateTax" Span name
Runnable traceRunnable = new TraceRunnable(tracer, spanNamer, runnable, "calculateTax");
// Wrapping `Runnable` with `Tracer`. The Span name will be taken either from the
// `@SpanName` annotation or from `toString` method
Runnable traceRunnableFromTracer = tracer.wrap(runnable);

Example for Callable:

Callable<String> callable = new Callable<String>() {
	@Override
	public String call() throws Exception {
		return someLogic();
	}

	@Override
	public String toString() {
		return "spanNameFromToStringMethod";
	}
};
// Manual `TraceCallable` creation with explicit "calculateTax" Span name
Callable<String> traceCallable = new TraceCallable<>(tracer, spanNamer, callable, "calculateTax");
// Wrapping `Callable` with `Tracer`. The Span name will be taken either from the
// `@SpanName` annotation or from `toString` method
Callable<String> traceCallableFromTracer = tracer.wrap(callable);

That way you will ensure that a new Span is created and closed for each execution.

Hystrix

Custom Concurrency Strategy

We're registering a custom HystrixConcurrencyStrategy that wraps all Callable instances into their Sleuth representative - the TraceCallable. The strategy either starts or continues a span depending on whether tracing was already going on before the Hystrix command was called. To disable the custom Hystrix Concurrency Strategy, set spring.sleuth.hystrix.strategy.enabled to false.

Manual Command setting

Assuming that you have the following HystrixCommand:

HystrixCommand<String> hystrixCommand = new HystrixCommand<String>(setter) {
	@Override
	protected String run() throws Exception {
		return someLogic();
	}
};

In order to pass the tracing information you have to wrap the same logic in the Sleuth version of the HystrixCommand which is the TraceCommand:

TraceCommand<String> traceCommand = new TraceCommand<String>(tracer, traceKeys, setter) {
	@Override
	public String doRun() throws Exception {
		return someLogic();
	}
};

RxJava

We're registering a custom RxJavaSchedulersHook that wraps all Action0 instances into their Sleuth representative - the TraceAction. The hook either starts or continues a span depending on whether tracing was already going on before the Action was scheduled. To disable the custom RxJavaSchedulersHook, set spring.sleuth.rxjava.schedulers.hook.enabled to false.

You can define a list of regular expressions for thread names, for which you don’t want a Span to be created. Just provide a comma separated list of regular expressions in the spring.sleuth.rxjava.schedulers.ignoredthreads property.
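
For example (the thread name patterns are illustrative):

application.yml
spring:
  sleuth:
    rxjava:
      schedulers:
        ignoredthreads: HystrixMetricPoller,^RxComputation.*$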

HTTP integration

Features from this section can be disabled by setting the spring.sleuth.web.enabled property to false.

HTTP Filter

Via the TraceFilter all sampled incoming requests result in creation of a Span. That Span's name is http: + the path to which the request was sent. E.g. if the request was sent to /foo/bar then the name will be http:/foo/bar. You can configure which URIs you would like to skip via the spring.sleuth.web.skipPattern property. If you have ManagementServerProperties on the classpath then its value of contextPath gets appended to the provided skip pattern.
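
For example, to provide your own skip pattern (the pattern shown is illustrative):

application.yml
spring:
  sleuth:
    web:
      skipPattern: /static.*|/favicon.ico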

HandlerInterceptor

Since we want the span names to be precise we're using a TraceHandlerInterceptor that either wraps an existing HandlerInterceptor or is added directly to the list of existing HandlerInterceptors. The TraceHandlerInterceptor adds a special request attribute to the given HttpServletRequest. If the TraceFilter doesn't see this attribute set, it will create a "fallback" span, which is an additional span created on the server side so that the trace is presented properly in the UI. Seeing one most likely signifies missing instrumentation; in that case please file an issue in Spring Cloud Sleuth.

Async Servlet support

If your controller returns a Callable or a WebAsyncTask Spring Cloud Sleuth will continue the existing span instead of creating a new one.

HTTP client integration

Synchronous Rest Template

We're injecting a RestTemplate interceptor that ensures that all the tracing information is passed to the requests. Each time a call is made, a new Span is created. It gets closed upon receiving the response. To disable the synchronous RestTemplate instrumentation, set spring.sleuth.web.client.enabled to false.

Important
You have to register RestTemplate as a bean so that the interceptors will get injected. If you create a RestTemplate instance with the new keyword then the instrumentation WILL NOT work.
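
A minimal sketch of such a bean registration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfiguration {

	@Bean
	public RestTemplate restTemplate() {
		// registered as a bean, so the tracing interceptor can be injected into it
		return new RestTemplate();
	}
}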

Asynchronous Rest Template

Custom instrumentation is set to create and close Spans upon sending and receiving requests. You can customize the ClientHttpRequestFactory and the AsyncClientHttpRequestFactory by registering your beans. Remember to use tracing compatible implementations (e.g. don’t forget to wrap ThreadPoolTaskScheduler in a TraceAsyncListenableTaskExecutor). Example of custom request factories:

@EnableAutoConfiguration
@Configuration
public static class TestConfiguration {

	@Bean
	ClientHttpRequestFactory mySyncClientFactory() {
		return new MySyncClientHttpRequestFactory();
	}

	@Bean
	AsyncClientHttpRequestFactory myAsyncClientFactory() {
		return new MyAsyncClientHttpRequestFactory();
	}
}

To block the AsyncRestTemplate features set spring.sleuth.web.async.client.enabled to false. To disable creation of the default TraceAsyncClientHttpRequestFactoryWrapper set spring.sleuth.web.async.client.factory.enabled to false. If you don't want to create an AsyncRestTemplate at all, set spring.sleuth.web.async.client.template.enabled to false.

Feign

By default Spring Cloud Sleuth provides integration with Feign via the TraceFeignClientAutoConfiguration. You can disable it entirely by setting spring.sleuth.feign.enabled to false. If you do so then no Feign related instrumentation will take place.

Part of the Feign instrumentation is done via a FeignBeanPostProcessor. You can disable it by setting spring.sleuth.feign.processor.enabled to false. If you do so, Spring Cloud Sleuth will not instrument any of your custom Feign components. All the default instrumentation, however, will still be there.

Asynchronous communication

@Async annotated methods

In Spring Cloud Sleuth we’re instrumenting async related components so that the tracing information is passed between threads. You can disable this behaviour by setting the value of spring.sleuth.async.enabled to false.

If you annotate your method with @Async then we’ll automatically create a new Span with the following characteristics:

  • the Span name will be the annotated method name

  • the Span will be tagged with that method’s class name and the method name too

@Scheduled annotated methods

In Spring Cloud Sleuth we’re instrumenting scheduled method execution so that the tracing information is passed between threads. You can disable this behaviour by setting the value of spring.sleuth.scheduled.enabled to false.

If you annotate your method with @Scheduled then we’ll automatically create a new Span with the following characteristics:

  • the Span name will be the annotated method name

  • the Span will be tagged with that method’s class name and the method name too

If you want to skip Span creation for some @Scheduled annotated classes you can set the spring.sleuth.scheduled.skipPattern with a regular expression that will match the fully qualified name of the @Scheduled annotated class.
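
For example (the class name pattern is illustrative):

application.yml
spring:
  sleuth:
    scheduled:
      skipPattern: com\.example\.jobs\..*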

Executor, ExecutorService and ScheduledExecutorService

We’re providing LazyTraceExecutor, TraceableExecutorService and TraceableScheduledExecutorService. Those implementations are creating Spans each time a new task is submitted, invoked or scheduled.

Here you can see an example of how to pass tracing information with TraceableExecutorService when working with CompletableFuture:

CompletableFuture<Long> completableFuture = CompletableFuture.supplyAsync(() -> {
	// perform some logic
	return 1_000_000L;
}, new TraceableExecutorService(executorService,
		// 'calculateTax' explicitly names the span - this param is optional
		tracer, traceKeys, spanNamer, "calculateTax"));

Messaging

Spring Cloud Sleuth integrates with Spring Integration. It creates spans for publish and subscribe events. To disable Spring Integration instrumentation, set spring.sleuth.integration.enabled to false.

Spring Cloud Sleuth up to version 1.0.4 sent invalid tracing headers when using messaging. Those headers were actually the same as the ones sent over HTTP (they contain a - in their names). For the sake of backwards compatibility, as of 1.0.4 we've started sending both valid and invalid headers. Please upgrade to 1.0.4, because in Spring Cloud Sleuth 1.1 we will remove the support for the deprecated headers.

Since 1.0.4 you can use the spring.sleuth.integration.patterns property to explicitly provide the names of channels that you want to include for tracing. By default all channels are included.

Zuul

We’re registering Zuul filters to propagate the tracing information (the request header is enriched with tracing data). To disable Zuul support set the spring.sleuth.zuul.enabled property to false.

Running examples

You can find running examples deployed on Pivotal Web Services.

Spring Cloud Consul

Camden.SR3

This project provides Consul integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with Consul based components. The patterns provided include Service Discovery, Control Bus and Configuration. Intelligent Routing (Zuul), Client Side Load Balancing (Ribbon) and Circuit Breaker (Hystrix) are provided by integration with Spring Cloud Netflix.

Install Consul

Please see the installation documentation for instructions on how to install Consul.

Consul Agent

A Consul Agent client must be available to all Spring Cloud Consul applications. By default, the Agent client is expected to be at localhost:8500. See the Agent documentation for specifics on how to start an Agent client and how to connect to a cluster of Consul Agent Servers. For development, after you have installed Consul, you may start a Consul Agent using the following command:

./src/main/bash/local_run_consul.sh

This will start an agent in server mode on port 8500, with the UI available at http://localhost:8500.

Service Discovery with Consul

Service Discovery is one of the key tenets of a microservice based architecture. Trying to hand-configure each client, or relying on some form of convention, can be very difficult and very brittle. Consul provides Service Discovery services via an HTTP API and DNS. Spring Cloud Consul leverages the HTTP API for service registration and discovery. This does not prevent non-Spring Cloud applications from leveraging the DNS interface. Consul Agent servers are run in a cluster that communicates via a gossip protocol and uses the Raft consensus protocol.

How to activate

To activate Consul Service Discovery use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-consul-discovery. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

Registering with Consul

When a client registers with Consul, it provides meta-data about itself such as host and port, id, name and tags. By default, an HTTP Check is created that Consul uses to hit the /health endpoint every 10 seconds. If the health check fails, the service instance is marked as critical.

Example Consul client:

@SpringBootApplication
@EnableDiscoveryClient
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }

}

(i.e. an utterly normal Spring Boot app). If the Consul Agent client is located somewhere other than localhost:8500, configuration is required to locate it. Example:

application.yml
spring:
  cloud:
    consul:
      host: localhost
      port: 8500
Caution
If you use Spring Cloud Consul Config, the above values will need to be placed in bootstrap.yml instead of application.yml.

The default service name, instance id and port, taken from the Environment, are ${spring.application.name}, the Spring Context ID and ${server.port} respectively.

@EnableDiscoveryClient makes the app into both a Consul "service" (i.e. it registers itself) and a "client" (i.e. it can query Consul to locate other services).

HTTP Health Check

The health check for a Consul instance defaults to "/health", which is the default location of a useful endpoint in a Spring Boot Actuator application. You need to change this, even for an Actuator application, if you use a non-default context path or servlet path (e.g. server.servletPath=/foo) or management endpoint path (e.g. management.contextPath=/admin). The interval that Consul uses to check the health endpoint may also be configured. "10s" and "1m" represent 10 seconds and 1 minute respectively. Example:

application.yml
spring:
  cloud:
    consul:
      discovery:
        healthCheckPath: ${management.contextPath}/health
        healthCheckInterval: 15s

Metadata and Consul tags

Consul does not yet support metadata on services. Spring Cloud's ServiceInstance has a Map<String, String> metadata field. Spring Cloud Consul uses Consul tags to approximate metadata until Consul officially supports metadata. Tags of the form key=value will be split and used as a Map key and value respectively. Tags without an equals sign will be used as both the key and the value.

application.yml
spring:
  cloud:
    consul:
      discovery:
        tags: foo=bar, baz

The above configuration will result in a map with foo→bar and baz→baz.

Making the Consul Instance ID Unique

By default a Consul instance is registered with an ID that is equal to its Spring Application Context ID. By default, the Spring Application Context ID is ${spring.application.name}:comma,separated,profiles:${server.port}. For most cases, this will allow multiple instances of one service to run on one machine. If further uniqueness is required, you can override this using Spring Cloud by providing a unique identifier in spring.cloud.consul.discovery.instanceId. For example:

application.yml
spring:
  cloud:
    consul:
      discovery:
        instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}

With this configuration, when multiple service instances are deployed on localhost, the random value kicks in to make the instance unique. In Cloud Foundry the vcap.application.instance_id is populated automatically in a Spring Boot application, so the random value is not needed.

Using the DiscoveryClient

Spring Cloud has support for Feign (a REST client builder) and also for Spring RestTemplate, using logical service names instead of physical URLs.
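
For example, a load-balanced RestTemplate sketch that resolves the logical service name STORES (the bean and usage shown are illustrative):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class StoreClientConfiguration {

	@LoadBalanced
	@Bean
	public RestTemplate restTemplate() {
		return new RestTemplate();
	}
}

// elsewhere, the logical name "STORES" is resolved via service discovery:
// String stores = restTemplate.getForObject("http://STORES/stores", String.class);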

You can also use the org.springframework.cloud.client.discovery.DiscoveryClient which provides a simple API for discovery clients that is not specific to Netflix, e.g.

@Autowired
private DiscoveryClient discoveryClient;

public String serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && !list.isEmpty()) {
        // getUri() returns a java.net.URI, so convert it to match the String return type
        return list.get(0).getUri().toString();
    }
    return null;
}

Distributed Configuration with Consul

Consul provides a Key/Value Store for storing configuration and other metadata. Spring Cloud Consul Config is an alternative to the Config Server and Client. Configuration is loaded into the Spring Environment during the special "bootstrap" phase. Configuration is stored in the /config folder by default. Multiple PropertySource instances are created based on the application's name and the active profiles, mimicking the Spring Cloud Config order of resolving properties. For example, an application with the name "testApp" and with the "dev" profile will have the following property sources created:

config/testApp,dev/
config/testApp/
config/application,dev/
config/application/

The most specific property source is at the top, with the least specific at the bottom. Properties in the config/application folder are applicable to all applications using Consul for configuration. Properties in the config/testApp folder are only available to the instances of the service named "testApp".

Configuration is read on startup of the application. Sending an HTTP POST to /refresh will cause the configuration to be reloaded. Configuration changes can also be picked up automatically via the Config Watch described below.

How to activate

To get started with Consul Configuration use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-consul-config. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

This will enable auto-configuration that will set up Spring Cloud Consul Config.

Customizing

Consul Config may be customized using the following properties:

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        enabled: true
        prefix: configuration
        defaultContext: apps
        profileSeparator: '::'
  • enabled setting this value to "false" disables Consul Config

  • prefix sets the base folder for configuration values

  • defaultContext sets the folder name used by all applications

  • profileSeparator sets the value of the separator used to separate the profile name in property sources with profiles

Config Watch

The Consul Config Watch takes advantage of the ability of Consul to watch a key prefix. The Config Watch makes a blocking Consul HTTP API call to determine if any relevant configuration data has changed for the current application. If there is new configuration data, a Refresh Event is published. This is equivalent to calling the /refresh actuator endpoint.

To change the frequency with which the Config Watch is called, change spring.cloud.consul.config.watch.delay. The default value is 1000 milliseconds.

To disable the Config Watch set spring.cloud.consul.config.watch.enabled=false.
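
For example, a minimal sketch that polls every 5 seconds (the delay value here is illustrative, not a recommendation):

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        watch:
          enabled: true
          delay: 5000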

YAML or Properties with Config

It may be more convenient to store a blob of properties in YAML or Properties format as opposed to individual key/value pairs. Set the spring.cloud.consul.config.format property to YAML or PROPERTIES. For example to use YAML:

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        format: YAML

YAML must be set in the appropriate data key in Consul. Using the defaults above the keys would look like:

config/testApp,dev/data
config/testApp/data
config/application,dev/data
config/application/data

You could store a YAML document in any of the keys listed above.

You can change the data key using spring.cloud.consul.config.data-key.
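
For instance, a hypothetical combination of the YAML format with a custom data key (the key name app-config is illustrative):

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        format: YAML
        data-key: app-config

With these settings, the YAML blob for "testApp" would be stored under config/testApp/app-config instead of config/testApp/data.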

git2consul with Config

git2consul is a Consul community project that loads files from a git repository into individual keys in Consul. By default the names of the keys are the names of the files. YAML and Properties files are supported with the file extensions .yml and .properties respectively. Set the spring.cloud.consul.config.format property to FILES. For example:

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        format: FILES

Given the following keys in /config, the development profile and an application name of foo:

.gitignore
application.yml
bar.properties
foo-development.properties
foo-production.yml
foo.properties
master.ref

the following property sources would be created:

config/foo-development.properties
config/foo.properties
config/application.yml

The value of each key needs to be a properly formatted YAML or Properties file.

Fail Fast

It may be convenient in certain circumstances (like local development or certain test scenarios) to not fail if Consul isn’t available for configuration. Setting spring.cloud.consul.config.failFast=false in bootstrap.yml will cause the configuration module to log a warning rather than throw an exception, allowing the application to continue startup normally.
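
A minimal sketch of that setting:

bootstrap.yml
spring:
  cloud:
    consul:
      config:
        failFast: false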

Consul Retry

If you expect that the consul agent may occasionally be unavailable when your app starts, you can ask it to keep trying after a failure. You need to add spring-retry and spring-boot-starter-aop to your classpath. The default behaviour is to retry 6 times with an initial backoff interval of 1000ms and an exponential multiplier of 1.1 for subsequent backoffs. You can configure these properties (and others) using spring.cloud.consul.retry.* configuration properties. This works with both Spring Cloud Consul Config and Discovery registration.

Tip
To take full control of the retry add a @Bean of type RetryOperationsInterceptor with id "consulRetryInterceptor". Spring Retry has a RetryInterceptorBuilder that makes it easy to create one.
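
A minimal sketch of such a bean, reproducing the defaults described above (the 5000ms cap on the back off interval is an illustrative assumption):

ConsulRetryConfiguration.java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.interceptor.RetryInterceptorBuilder;
import org.springframework.retry.interceptor.RetryOperationsInterceptor;

@Configuration
public class ConsulRetryConfiguration {

    @Bean(name = "consulRetryInterceptor")
    public RetryOperationsInterceptor consulRetryInterceptor() {
        // 6 attempts, starting at a 1000ms back off with a 1.1 multiplier
        return RetryInterceptorBuilder.stateless()
                .maxAttempts(6)
                .backOffOptions(1000, 1.1, 5000)
                .build();
    }
}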

Spring Cloud Bus with Consul

How to activate

To get started with the Consul Bus use the starter with group org.springframework.cloud and artifact id spring-cloud-starter-consul-bus. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.

See the Spring Cloud Bus documentation for the available actuator endpoints and how to send custom messages.

Circuit Breaker with Hystrix

Applications can use the Hystrix Circuit Breaker provided by the Spring Cloud Netflix project by including this starter in the project’s pom.xml: spring-cloud-starter-hystrix. Hystrix doesn’t depend on the Netflix Discovery Client. The @EnableHystrix annotation should be placed on a configuration class (usually the main class). Then methods can be annotated with @HystrixCommand to be protected by a circuit breaker, as shown in the sketch below. See the documentation for more details.
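
For illustration, here is a minimal sketch of that setup (the StoreService class, its method, and the fallback are hypothetical names, not part of the starter):

Application.java
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.hystrix.EnableHystrix;
import org.springframework.stereotype.Service;

@EnableHystrix
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Service
class StoreService {

    // protected by a circuit breaker; falls back when the call fails
    @HystrixCommand(fallbackMethod = "defaultStores")
    public String getStores() {
        throw new IllegalStateException("simulated remote failure");
    }

    String defaultStores() {
        return "fallback-stores";
    }
}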

Hystrix metrics aggregation with Turbine and Consul

Turbine (provided by the Spring Cloud Netflix project) aggregates the Hystrix metrics streams of multiple instances, so the dashboard can display an aggregate view. Turbine uses the DiscoveryClient interface to look up relevant instances. To use Turbine with Spring Cloud Consul, configure the Turbine application in a manner similar to the following examples:

pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-netflix-turbine</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

Notice that the Turbine dependency is not a starter. The turbine starter includes support for Netflix Eureka.

application.yml
spring.application.name: turbine
applications: consulhystrixclient
turbine:
  aggregator:
    clusterConfig: ${applications}
  appConfig: ${applications}

The clusterConfig and appConfig sections must match, so it’s useful to put the comma-separated list of service IDs into a separate configuration property.

Turbine.java
@EnableTurbine
@EnableDiscoveryClient
@SpringBootApplication
public class Turbine {
    public static void main(String[] args) {
        SpringApplication.run(Turbine.class, args);
    }
}

Spring Cloud Zookeeper

This project provides Zookeeper integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper based components. The patterns provided include Service Discovery and Configuration. Intelligent Routing (Zuul), Client Side Load Balancing (Ribbon) and Circuit Breaker (Hystrix) are provided by integration with Spring Cloud Netflix.

Install Zookeeper

Please see the installation documentation for instructions on how to install Zookeeper.

Service Discovery with Zookeeper

Service Discovery is one of the key tenets of a microservice based architecture. Trying to hand-configure each client, or relying on some form of convention, can be very difficult and brittle. Curator (a Java library for Zookeeper) provides Service Discovery via its Service Discovery Extension. Spring Cloud Zookeeper leverages this extension for service registration and discovery.

How to activate

Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-discovery will enable auto-configuration that sets up Spring Cloud Zookeeper Discovery.

Registering with Zookeeper

When a client registers with Zookeeper, it provides meta-data about itself such as host and port, id and name.

Example Zookeeper client:

@SpringBootApplication
@EnableDiscoveryClient
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }

}

(i.e. an utterly normal Spring Boot app). If Zookeeper is located somewhere other than localhost:2181, configuration is required to locate the server. Example:

application.yml
spring:
  cloud:
    zookeeper:
      connect-string: localhost:2181
Caution
If you use Spring Cloud Zookeeper Config, the above values will need to be placed in bootstrap.yml instead of application.yml.

The default service name, instance id and port, taken from the Environment, are ${spring.application.name}, the Spring Context ID and ${server.port} respectively.

@EnableDiscoveryClient makes the app into both a Zookeeper "service" (i.e. it registers itself) and a "client" (i.e. it can query Zookeeper to locate other services).

Using the DiscoveryClient

Spring Cloud has support for Feign (a REST client builder) and also Spring RestTemplate using the logical service names instead of physical URLs.

You can also use the org.springframework.cloud.client.discovery.DiscoveryClient which provides a simple API for discovery clients that is not specific to Netflix, e.g.

@Autowired
private DiscoveryClient discoveryClient;

public String serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && !list.isEmpty()) {
        return list.get(0).getUri().toString();
    }
    return null;
}

Zookeeper Dependencies

Using the Zookeeper Dependencies

Spring Cloud Zookeeper lets you provide your application’s dependencies as properties. A dependency is another application that is registered in Zookeeper and that you would like to call via Feign (a REST client builder) or Spring RestTemplate.

You can also benefit from the Zookeeper Dependency Watchers functionality, which lets you monitor the state of your dependencies and decide how to react to changes.

How to activate Zookeeper Dependencies

  • Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-discovery enables auto-configuration that sets up Spring Cloud Zookeeper Dependencies.

  • The feature is active only if the spring.cloud.zookeeper.dependencies section is properly set up - check the subsequent section for more details.

  • You can have the dependencies turned off even if you’ve provided the dependencies in your properties. Just set the property spring.cloud.zookeeper.dependency.enabled to false (it defaults to true).

Setting up Zookeeper Dependencies

Let’s take a closer look at an example of dependencies representation:

application.yml
spring.application.name: yourServiceName
spring.cloud.zookeeper:
  dependencies:
    newsletter:
      path: /path/where/newsletter/has/registered/in/zookeeper
      loadBalancerType: ROUND_ROBIN
      contentTypeTemplate: application/vnd.newsletter.$version+json
      version: v1
      headers:
        header1:
            - value1
        header2:
            - value2
      required: false
      stubs: org.springframework:foo:stubs
    mailing:
      path: /path/where/mailing/has/registered/in/zookeeper
      loadBalancerType: ROUND_ROBIN
      contentTypeTemplate: application/vnd.mailing.$version+json
      version: v1
      required: true

Let’s now go through each part of the dependency one by one. The root property name is spring.cloud.zookeeper.dependencies.

Aliases

Below the root property you have to represent each dependency by an alias. This is due to the constraints of Ribbon: the application id has to be placed in the URL, so you can’t pass a complex path like /foo/bar/name. The alias is the name you will use instead of the serviceId for DiscoveryClient, Feign or RestTemplate.

In the aforementioned examples the aliases are newsletter and mailing. Example of Feign usage with newsletter would be:

@FeignClient("newsletter")
public interface NewsletterService {
        @RequestMapping(method = RequestMethod.GET, value = "/newsletter")
        String getNewsletters();
}

Path

Represented by path yaml property.

Path is the path under which the dependency is registered in Zookeeper. As presented before, Ribbon operates on URLs, so such a path does not comply with its requirements. That is why Spring Cloud Zookeeper maps the alias to the proper path.

Load balancer type

Represented by loadBalancerType yaml property.

If you know what kind of load balancing strategy has to be applied when calling this particular dependency then you can provide it in the yaml file and it will be automatically applied. You can choose one of the following load balancing strategies:

  • STICKY - once chosen the instance will always be called

  • RANDOM - picks an instance randomly

  • ROUND_ROBIN - iterates over instances over and over again

Content-Type template and version

Represented by the contentTypeTemplate and version yaml properties.

If you version your api via the Content-Type header then you don’t want to add this header to each of your requests. Also if you want to call a new version of the API you don’t want to roam around your code to bump up the API version. That’s why you can provide a contentTypeTemplate with a special $version placeholder. That placeholder will be filled by the value of the version yaml property. Let’s take a look at an example.

Having the following contentTypeTemplate:

application/vnd.newsletter.$version+json

and the following version:

v1

will result in the following Content-Type header being set for each request:

application/vnd.newsletter.v1+json

Default headers

Represented by headers map in yaml

Sometimes each call to a dependency requires some default headers to be set. In order not to do that in code, you can set them up in the yaml file. Having the following headers section:

headers:
    Accept:
        - text/html
        - application/xhtml+xml
    Cache-Control:
        - no-cache

Results in adding the Accept and Cache-Control headers, with the appropriate lists of values, to your HTTP request.

Obligatory dependencies

Represented by required property in yaml

If one of your dependencies is required to be up and running when your application is booting then it’s enough to set up the required: true property in the yaml file.

If your application can’t locate the required dependency during boot time, it will throw an exception and the Spring Context will fail to set up. In other words, your application won’t be able to start if the required dependency is not registered in Zookeeper.

You can read more about Spring Cloud Zookeeper Presence Checker in the following sections.

Stubs

You can provide a colon-separated path to the JAR containing stubs of the dependency. Example:

stubs: org.springframework:foo:stubs

means that the stubs for that particular dependency can be found under:

  • groupId: org.springframework

  • artifactId: foo

  • classifier: stubs - this is the default value

This is equivalent to

stubs: org.springframework:foo

since stubs is the default classifier.

Configuring Spring Cloud Zookeeper Dependencies

There are a number of properties that you can set to enable or disable parts of the Zookeeper Dependencies functionality.

  • spring.cloud.zookeeper.dependencies - if you don’t set this property you won’t benefit from Zookeeper Dependencies

  • spring.cloud.zookeeper.dependency.ribbon.enabled (enabled by default) - Ribbon requires explicit global configuration, or a particular one for a dependency. By turning on this property, runtime load balancing strategy resolution becomes possible and you can profit from the loadBalancerType section of the Zookeeper Dependencies. The configuration that needs this property has an implementation of LoadBalancerClient that delegates to the ILoadBalancer presented in the next bullet

  • spring.cloud.zookeeper.dependency.ribbon.loadbalancer (enabled by default) - thanks to this property the custom ILoadBalancer knows that the part of the URI passed to Ribbon might actually be the alias that has to be resolved to a proper path in Zookeeper. Without this property you won’t be able to register applications under nested paths.

  • spring.cloud.zookeeper.dependency.headers.enabled (enabled by default) - this property registers a RibbonClient that automatically appends the appropriate headers and content types with the version, as presented in the Dependency configuration. Without it, setting those two parameters will have no effect.

  • spring.cloud.zookeeper.dependency.resttemplate.enabled (enabled by default) - when enabled, modifies the request headers of a @LoadBalanced-annotated RestTemplate so that it passes the headers and content type with the version set in the Dependency configuration. Without it, setting those two parameters will have no effect.

Spring Cloud Zookeeper Dependency Watcher

The Dependency Watcher mechanism allows you to register listeners for your dependencies. The functionality is in fact an implementation of the Observer pattern. When a dependency changes its state (UP or DOWN), some custom logic can be applied.

How to activate

The Spring Cloud Zookeeper Dependencies functionality needs to be enabled for you to benefit from the Dependency Watcher mechanism.

Registering a listener

In order to register a listener you have to implement the interface org.springframework.cloud.zookeeper.discovery.watcher.DependencyWatcherListener and register it as a bean. The interface gives you one method:

    void stateChanged(String dependencyName, DependencyState newState);

If you want to register a listener for a particular dependency then the dependencyName would be the discriminator for your concrete implementation. newState will provide you with information whether your dependency has changed to CONNECTED or DISCONNECTED.
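
A minimal sketch of such a listener, reusing the mailing alias from the earlier example (the reaction itself is hypothetical, and the import assumes DependencyState lives alongside the listener interface):

MailingListener.java
import org.springframework.cloud.zookeeper.discovery.watcher.DependencyState;
import org.springframework.cloud.zookeeper.discovery.watcher.DependencyWatcherListener;
import org.springframework.stereotype.Component;

@Component
public class MailingListener implements DependencyWatcherListener {

    @Override
    public void stateChanged(String dependencyName, DependencyState newState) {
        // react only when the "mailing" dependency goes down
        if ("mailing".equals(dependencyName)
                && newState == DependencyState.DISCONNECTED) {
            // custom logic, e.g. start buffering outgoing messages locally
        }
    }
}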

Presence Checker

Bound with the Dependency Watcher is the functionality called Presence Checker. It allows you to provide custom behaviour when your application boots, reacting according to the state of your dependencies.

The default implementation of the abstract org.springframework.cloud.zookeeper.discovery.watcher.presence.DependencyPresenceOnStartupVerifier class is the org.springframework.cloud.zookeeper.discovery.watcher.presence.DefaultDependencyPresenceOnStartupVerifier which works in the following way.

  • If the dependency is marked as required and is not in Zookeeper, then upon booting your application will throw an exception and shut down

  • If the dependency is not required, the org.springframework.cloud.zookeeper.discovery.watcher.presence.LogMissingDependencyChecker will log that the dependency is missing, at WARN level

The functionality can be overridden, since the DefaultDependencyPresenceOnStartupVerifier is registered only when there is no bean of type DependencyPresenceOnStartupVerifier.

Distributed Configuration with Zookeeper

Zookeeper provides a hierarchical namespace that allows clients to store arbitrary data, such as configuration data. Spring Cloud Zookeeper Config is an alternative to the Config Server and Client. Configuration is loaded into the Spring Environment during the special "bootstrap" phase. Configuration is stored in the /config namespace by default. Multiple PropertySource instances are created based on the application’s name and the active profiles, mimicking the Spring Cloud Config order of resolving properties. For example, an application with the name "testApp" and with the "dev" profile will have the following property sources created:

config/testApp,dev
config/testApp
config/application,dev
config/application

The most specific property source is at the top, with the least specific at the bottom. Properties in the config/application namespace are applicable to all applications using Zookeeper for configuration. Properties in the config/testApp namespace are only available to the instances of the service named "testApp".

Configuration is currently read on startup of the application. Sending an HTTP POST to /refresh will cause the configuration to be reloaded. Watching the configuration namespace (which Zookeeper supports) is not currently implemented, but will be a future addition to this project.

How to activate

Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-config will enable auto-configuration that sets up Spring Cloud Zookeeper Config.

Customizing

Zookeeper Config may be customized using the following properties:

bootstrap.yml
spring:
  cloud:
    zookeeper:
      config:
        enabled: true
        root: configuration
        defaultContext: apps
        profileSeparator: '::'
  • enabled setting this value to "false" disables Zookeeper Config

  • root sets the base namespace for configuration values

  • defaultContext sets the name used by all applications

  • profileSeparator sets the value of the separator used to separate the profile name in property sources with profiles

Spring Cloud Security

Spring Cloud Security offers a set of primitives for building secure applications and services with minimum fuss. A declarative model which can be heavily configured externally (or centrally) lends itself to the implementation of large systems of co-operating, remote components, usually with a central identity management service. It is also extremely easy to use in a service platform like Cloud Foundry. Building on Spring Boot and Spring Security OAuth2 we can quickly create systems that implement common patterns like single sign on, token relay and token exchange.

Note
Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, please find the source code and issue trackers in the project at github.

Quickstart

OAuth2 Single Sign On

Here’s a Spring Cloud "Hello World" app with HTTP Basic authentication and a single user account:

app.groovy
@Grab('spring-boot-starter-security')
@Controller
class Application {

  @RequestMapping('/')
  String home() {
    'Hello World'
  }

}

You can run it with spring run app.groovy and watch the logs for the password (username is "user"). So far this is just the default for a Spring Boot app.

Here’s a Spring Cloud app with OAuth2 SSO:

app.groovy
@Controller
@EnableOAuth2Sso
class Application {

  @RequestMapping('/')
  String home() {
    'Hello World'
  }

}

Spot the difference? This app will actually behave exactly the same as the previous one, because it doesn’t know its OAuth2 credentials yet.

You can register an app in github quite easily, so try that if you want a production app on your own domain. If you are happy to test on localhost:8080, then set up these properties in your application configuration:

application.yml
spring:
  oauth2:
    client:
      clientId: bd1c0a783ccdd1c9b9e4
      clientSecret: 1a9030fbca47a5b2c28e92f19050bb77824b5ad1
      accessTokenUri: https://github.com/login/oauth/access_token
      userAuthorizationUri: https://github.com/login/oauth/authorize
      clientAuthenticationScheme: form
    resource:
      userInfoUri: https://api.github.com/user
      preferTokenInfo: false

Run the app above and it will redirect to github for authorization. If you are already signed into github you won’t even notice that it has authenticated. These credentials will only work if your app is running on port 8080.

To limit the scope that the client asks for when it obtains an access token you can set spring.oauth2.client.scope (comma separated or an array in YAML). By default the scope is empty and it is up to the Authorization Server to decide what the defaults should be, usually depending on the settings in the client registration that it holds.
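
For example, a sketch requesting two illustrative github scopes:

application.yml
spring:
  oauth2:
    client:
      scope: user,repo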

Note
The examples above are all Groovy scripts. If you want to write the same code in Java (or Groovy) you need to add Spring Security OAuth2 to the classpath (e.g. see the sample here).

OAuth2 Protected Resource

You want to protect an API resource with an OAuth2 token? Here’s a simple example (paired with the client above):

app.groovy
@Grab('spring-cloud-starter-security')
@RestController
@EnableResourceServer
class Application {

  @RequestMapping('/')
  def home() {
    [message: 'Hello World']
  }

}

and

application.yml
spring:
  oauth2:
    resource:
      userInfoUri: https://api.github.com/user
      preferTokenInfo: false

More Detail

Single Sign On

Note
All of the OAuth2 SSO and resource server features moved to Spring Boot in version 1.3. You can find documentation in the Spring Boot user guide.

Token Relay

A Token Relay is where an OAuth2 consumer acts as a Client and forwards the incoming token to outgoing resource requests. The consumer can be a pure Client (like an SSO application) or a Resource Server.

Client Token Relay

If your app is a user facing OAuth2 client (i.e. has declared @EnableOAuth2Sso or @EnableOAuth2Client) then it has an OAuth2ClientContext in request scope from Spring Boot. You can create your own OAuth2RestTemplate from this context and an autowired OAuth2ProtectedResourceDetails, and then the context will always forward the access token downstream, also refreshing the access token automatically if it expires. (These are features of Spring Security and Spring Boot.)
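
A minimal sketch of that wiring (the configuration class name is arbitrary):

MyClientConfiguration.java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2ClientContext;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;

@Configuration
public class MyClientConfiguration {

    @Bean
    public OAuth2RestTemplate oauth2RestTemplate(OAuth2ClientContext context,
            OAuth2ProtectedResourceDetails details) {
        // forwards the current access token downstream, refreshing it on expiry
        return new OAuth2RestTemplate(details, context);
    }
}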

Client Token Relay in Zuul Proxy

If your app also has a Spring Cloud Zuul embedded reverse proxy (using @EnableZuulProxy) then you can ask it to forward OAuth2 access tokens downstream to the services it is proxying. Thus the SSO app above can be enhanced simply like this:

app.groovy
@Controller
@EnableOAuth2Sso
@EnableZuulProxy
class Application {

}

and it will (in addition to logging the user in and grabbing a token) pass the authentication token downstream to the /proxy/* services. If those services are implemented with @EnableOAuth2Resource then they will get a valid token in the correct header.

How does it work? The @EnableOAuth2Sso annotation pulls in spring-cloud-starter-security (which you could do manually in a traditional app), and that in turn triggers some autoconfiguration for a ZuulFilter, which itself is activated because Zuul is on the classpath (via @EnableZuulProxy). The filter just extracts an access token from the currently authenticated user, and puts it in a request header for the downstream requests.

Resource Server Token Relay

If your app has @EnableOAuth2Resource you might want to relay the incoming token downstream to other services. If you use a RestTemplate to contact the downstream services then this is just a matter of how to create the template with the right context.

If your service uses UserInfoTokenServices to authenticate incoming tokens (i.e. it is using the security.oauth2.user-info-uri configuration), then you can simply create an OAuth2RestTemplate using an autowired OAuth2ClientContext (it will be populated by the authentication process before it hits the backend code). Equivalently (with Spring Boot 1.4), you could inject a UserInfoRestTemplateFactory and grab its OAuth2RestTemplate in your configuration. For example:

MyConfiguration.java
@Bean
public OAuth2RestTemplate restTemplate(UserInfoRestTemplateFactory factory) {
    return factory.getUserInfoRestTemplate();
}

This rest template will then have the same OAuth2ClientContext (request-scoped) that is used by the authentication filter, so you can use it to send requests with the same access token.

If your app is not using UserInfoTokenServices but is still a client (i.e. it declares @EnableOAuth2Client or @EnableOAuth2Sso), then with Spring Cloud Security any OAuth2RestOperations that the user creates from an @Autowired OAuth2Context will also forward tokens. This feature is implemented by default as an MVC handler interceptor, so it only works in Spring MVC. If you are not using MVC you could use a custom filter or AOP interceptor wrapping an AccessTokenContextRelay to provide the same feature.

Here’s a basic example showing the use of an autowired rest template created elsewhere ("foo.com" is a Resource Server accepting the same tokens as the surrounding app):

MyController.java
@Autowired
private OAuth2RestOperations restTemplate;

@RequestMapping("/relay")
public String relay() {
    ResponseEntity<String> response =
      restTemplate.getForEntity("https://foo.com/bar", String.class);
    return "Success! (" + response.getBody() + ")";
}

If you don’t want to forward tokens (and that is a valid choice, since you might want to act as yourself, rather than the client that sent you the token), then you only need to create your own OAuth2Context instead of autowiring the default one.

Feign clients will also pick up an interceptor that uses the OAuth2ClientContext if it is available, so they should also do a token relay anywhere where a RestTemplate would.

Configuring Authentication Downstream of a Zuul Proxy

You can control the authorization behaviour downstream of an @EnableZuulProxy through the proxy.auth.* settings. Example:

application.yml
proxy:
  auth:
    routes:
      customers: oauth2
      stores: passthru
      recommendations: none

In this example the "customers" service gets an OAuth2 token relay, the "stores" service gets a passthrough (the authorization header is just passed downstream), and the "recommendations" service has its authorization header removed. The default behaviour is to do a token relay if there is a token available, and passthru otherwise.

See ProxyAuthenticationProperties for full details.

Spring Cloud for Cloud Foundry

Spring Cloud for Cloud Foundry makes it easy to run Spring Cloud apps in Cloud Foundry (the Platform as a Service). Cloud Foundry has the notion of a "service", which is middleware that you "bind" to an app, essentially providing it with an environment variable containing credentials (e.g. the location and username to use for the service).

The spring-cloud-cloudfoundry-web project provides basic support for some enhanced features of webapps in Cloud Foundry: binding automatically to single-sign-on services and optionally enabling sticky routing for discovery.

The spring-cloud-cloudfoundry-discovery project provides an implementation of the Spring Cloud Commons DiscoveryClient, so you can use @EnableDiscoveryClient and provide your credentials as spring.cloud.cloudfoundry.discovery.[email,password] (also *.url if you are not connecting to Pivotal Web Services), and then use the DiscoveryClient directly or via a LoadBalancerClient.
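
For example (all values are placeholders):

application.yml
spring:
  cloud:
    cloudfoundry:
      discovery:
        email: user@example.com
        password: secret
#       url: https://api.my-cf.example.com  # only if not Pivotal Web Services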

The first time you use it the discovery client might be slow, because it has to get an access token from Cloud Foundry.

Discovery

Here’s a Spring Cloud app with Cloud Foundry discovery:

app.groovy
@Grab('org.springframework.cloud:spring-cloud-cloudfoundry')
@RestController
@EnableDiscoveryClient
class Application {

  @Autowired
  DiscoveryClient client

  @RequestMapping('/')
  String home() {
    'Hello from ' + client.getLocalServiceInstance()
  }

}

If you run it without any service bindings:

$ spring jar app.jar app.groovy
$ cf push -p app.jar

It will show its app name on the home page.

The DiscoveryClient can list all the apps in a space, according to the credentials it is authenticated with, where the space defaults to the one the client is running in (if any). If neither org nor space are configured, they default according to the user’s profile in Cloud Foundry.

Single Sign On

Note
All of the OAuth2 SSO and resource server features moved to Spring Boot in version 1.3. You can find documentation in the Spring Boot user guide.

This project provides automatic binding from CloudFoundry service credentials to the Spring Boot features. If you have a CloudFoundry service called "sso", for instance, with credentials containing "client_id", "client_secret" and "auth_domain", it will bind automatically to the Spring OAuth2 client that you enable with @EnableOAuth2Sso (from Spring Boot). The name of the service can be parameterized using spring.oauth2.sso.serviceId.
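
For example, a sketch binding to a differently named service (the name is illustrative):

application.yml
spring:
  oauth2:
    sso:
      serviceId: my-sso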

Spring Cloud Contract

Documentation Authors: Adam Dudczak, Mathias Düsterhöft, Marcin Grzejszczak, Jakub Kubryński, Karol Lassak, Olga Maciaszek-Sharma, Mariusz Smykuła, Dave Syer

Camden.SR3

Spring Cloud Contract

What you always need is confidence in pushing new features into a new application or service in a distributed system. This project provides support for Consumer Driven Contracts and service schemas in Spring applications, covering a range of options for writing tests, publishing them as assets, asserting that a contract is kept by producers and consumers, for HTTP and message-based interactions.

Spring Cloud Contract WireMock

These modules give you the ability to use WireMock with different servers, by using the "ambient" server embedded in a Spring Boot application. Check out the samples for more details.

If you have a Spring Boot application that uses Tomcat as an embedded server, for example (the default with spring-boot-starter-web), then you can simply add spring-cloud-contract-wiremock to your classpath and add @AutoConfigureWireMock in order to be able to use Wiremock in your tests. Wiremock runs as a stub server and you can register stub behaviour using a Java API or via static JSON declarations as part of your test. Here’s a simple example:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@AutoConfigureWireMock(port = 0)
public class WiremockForDocsTests {
	// A service that calls out over HTTP
	@Autowired private Service service;

	// Using the WireMock APIs in the normal way:
	@Test
	public void contextLoads() throws Exception {
		// Stubbing WireMock
		stubFor(get(urlEqualTo("/resource"))
				.willReturn(aResponse().withHeader("Content-Type", "text/plain").withBody("Hello World!")));
		// We're asserting if WireMock responded properly
		assertThat(this.service.go()).isEqualTo("Hello World!");
	}

}

To start the stub server on a different port use @AutoConfigureWireMock(port=9999) (for example), and for a random port use the value 0. The stub server port will be bindable in the test application context as "wiremock.server.port". Using @AutoConfigureWireMock adds a bean of type WiremockConfiguration to your test application context, where it will be cached in between methods and classes having the same context, just like for normal Spring integration tests.

Registering Stubs Automatically

If you use @AutoConfigureWireMock then it will register WireMock JSON stubs from the file system or classpath, by default from file:src/test/resources/mappings. You can customize the locations using the stubs attribute in the annotation, which can be a resource pattern (ant-style) or a directory, in which case **/*.json is appended. Example:

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureWireMock(stubs="classpath:/stubs")
public class WiremockImportApplicationTests {

	@Autowired
	private Service service;

	@Test
	public void contextLoads() throws Exception {
		assertThat(this.service.go()).isEqualTo("Hello World!");
	}

}
Note
Actually WireMock always loads mappings from src/test/resources/mappings as well as the custom locations in the stubs attribute. To change this behaviour you have to also specify a files root as described next.

Using Files to Specify the Stub Bodies

WireMock can read response bodies from files on the classpath or file system. In that case you will see in the JSON DSL that the response has a "bodyFileName" instead of a (literal) "body". The files are resolved relative to a root directory src/test/resources/__files by default. To customize this location you can set the files attribute in the @AutoConfigureWireMock annotation to the location of the parent directory (i.e. the place where __files is a subdirectory). You can use Spring resource notation to refer to file:... or classpath:... locations (but generic URLs are not supported). A list of values can be given and WireMock will resolve the first file that exists when it needs to find a response body.

Note
when you configure the files root, then it affects the automatic loading of stubs as well (they come from the root location in a subdirectory called "mappings"). The value of files has no effect on the stubs loaded explicitly from the stubs attribute.

Alternative: Using JUnit Rules

For a more conventional WireMock experience, using JUnit @Rules to start and stop the server, just use the WireMockSpring convenience class to obtain an Options instance:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@AutoConfigureWireMock
public class WiremockForDocsClassRuleTests {

	// Start WireMock on some dynamic port
	@ClassRule
	public static WireMockClassRule wiremock = new WireMockClassRule(
			WireMockSpring.options().dynamicPort());
	// A service that calls out over HTTP to localhost:${wiremock.port}
	@Autowired
	private Service service;

	// Using the WireMock APIs in the normal way:
	@Test
	public void contextLoads() throws Exception {
		// Stubbing WireMock
		wiremock.stubFor(get(urlEqualTo("/resource"))
				.willReturn(aResponse().withHeader("Content-Type", "text/plain").withBody("Hello World!")));
		// We're asserting if WireMock responded properly
		assertThat(this.service.go()).isEqualTo("Hello World!");
	}

}

The use of @ClassRule means that the server will shut down after all the methods in this class have run.

WireMock and Spring MVC Mocks

Spring Cloud Contract provides a convenience class that can load JSON WireMock stubs into a Spring MockRestServiceServer. Here’s an example:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
public class WiremockForDocsMockServerApplicationTests {

	@Autowired
	private RestTemplate restTemplate;

	@Autowired
	private Service service;

	@Test
	public void contextLoads() throws Exception {
		// will read stubs classpath
		MockRestServiceServer server = WireMockRestServiceServer.with(this.restTemplate)
				.baseUrl("http://example.org").stubs("classpath:/stubs/resource.json")
				.build();
		// We're asserting if WireMock responded properly
		assertThat(this.service.go()).isEqualTo("Hello World");
		server.verify();
	}
}

The baseUrl is prepended to all mock calls, and the stubs() method takes a stub path resource pattern as an argument. So in this example the stub defined at /stubs/resource.json is loaded into the mock server, so if the RestTemplate is asked to visit http://example.org/ it will get the responses as declared there. More than one stub pattern can be specified, and each one can be a directory (for a recursive list of all ".json"), or a fixed filename (like in the example above) or an ant-style pattern. The JSON format is the normal WireMock format which you can read about in the WireMock website.

Currently we support Tomcat, Jetty and Undertow as Spring Boot embedded servers, and Wiremock itself has "native" support for a particular version of Jetty (currently 9.2). To use the native Jetty you need to add the native wiremock dependencies and exclude the Spring Boot container if there is one.

Generating Stubs using RestDocs

Spring RestDocs can be used to generate documentation (e.g. in asciidoctor format) for an HTTP API with Spring MockMvc or REST Assured. At the same time as you generate documentation for your API, you can also generate WireMock stubs by using Spring Cloud Contract WireMock. Just write your normal RestDocs test cases and use @AutoConfigureRestDocs to have stubs generated automatically in the restdocs output directory. For example:

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureRestDocs(outputDir = "target/snippets")
@AutoConfigureMockMvc
public class ApplicationTests {

	@Autowired
	private MockMvc mockMvc;

	@Test
	public void contextLoads() throws Exception {
		mockMvc.perform(get("/resource"))
				.andExpect(content().string("Hello World"))
				.andDo(document("resource"));
	}
}

This test will generate a WireMock stub at "target/snippets/stubs/resource.json". It matches all GET requests to the "/resource" path.

Without any additional configuration this will create a stub with a request matcher for the HTTP method and all headers except "host" and "content-length". To match the request more precisely, for example to match the body of a POST or PUT, we need to explicitly create a request matcher. This will do two things: 1) create a stub that only matches the way you specify, 2) assert that the request in the test case also matches the same conditions.

The main entry point for this is WireMockRestDocs.verify() which can be used as a substitute for the document() convenience method. For example:

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureRestDocs(outputDir = "target/snippets")
@AutoConfigureMockMvc
public class ApplicationTests {

	@Autowired
	private MockMvc mockMvc;

	@Test
	public void contextLoads() throws Exception {
		mockMvc.perform(post("/resource")
                .content("{\"id\":\"123456\",\"message\":\"Hello World\"}"))
				.andExpect(status().isOk())
				.andDo(verify().jsonPath("$.id")
                        .stub("resource"));
	}
}

So this contract is saying: any valid POST with an "id" field will get back the same response as in this test. You can chain together calls to .jsonPath() to add additional matchers. The JayWay documentation can help you get up to speed with JSON Path if it is unfamiliar to you.

Instead of the jsonPath and contentType convenience methods, you can also use the WireMock APIs to verify the request matches the created stub. Example:

@Test
public void contextLoads() throws Exception {
	mockMvc.perform(post("/resource")
               .content("{\"id\":\"123456\",\"message\":\"Hello World\"}"))
			.andExpect(status().isOk())
			.andDo(verify()
					.wiremock(WireMock.post(urlPathEquals("/resource"))
							.withRequestBody(matchingJsonPath("$.id")))
					.stub("post-resource"));
}

The WireMock API is rich - you can match headers, query parameters, and the request body by regex as well as by JSON path - so this can be useful to create stubs with a wider range of parameters. The above example will generate a stub something like this:

post-resource.json
{
  "request" : {
    "url" : "/resource",
    "method" : "POST",
    "bodyPatterns" : [ {
      "matchesJsonPath" : "$.id"
    }]
  },
  "response" : {
    "status" : 200,
    "body" : "Hello World",
    "headers" : {
      "X-Application-Context" : "application:-1",
      "Content-Type" : "text/plain"
    }
  }
}
Note
You can use either the wiremock() method or the jsonPath() and contentType() methods to create request matchers, but not both.

On the consumer side, assuming the resource.json generated above is available on the classpath, you can create a stub using WireMock in a number of different ways, including as described above using @AutoConfigureWireMock(stubs="classpath:resource.json").

Spring Cloud Contract Verifier

Introduction

Tip
The Accurest project was initially started by Marcin Grzejszczak and Jakub Kubrynski (codearte.io)

To make a long story short - Spring Cloud Contract Verifier is a tool that enables Consumer Driven Contract (CDC) development of JVM-based applications. It ships with a Contract Definition Language (DSL). Contract definitions are used to produce the following resources:

  • JSON stub definitions to be used by WireMock when doing integration testing on the client code (client tests). Test code must still be written by hand, test data is produced by Spring Cloud Contract Verifier.

  • Messaging routes, if you’re using messaging. We integrate with Spring Integration, Spring Cloud Stream, Spring AMQP and Apache Camel. You can, however, set up your own integrations if you want to

  • Acceptance tests (in JUnit or Spock) used to verify if server-side implementation of the API is compliant with the contract (server tests). Full test is generated by Spring Cloud Contract Verifier.

Spring Cloud Contract Verifier moves TDD to the level of software architecture.

Spring Cloud Contract Webinar

You can check out the video from the Spring Cloud Contract Webinar to watch the explanation of the project and the concept of Consumer Driven Contracts. The video was recorded on 25.10.2016.

Why?

Let us assume that we have a system comprising multiple microservices:

Microservices Architecture
Testing issues

If we wanted to test whether the application in the top left corner can communicate with other services, we could do one of two things:

  • deploy all microservices and perform end to end tests

  • mock other microservices in unit / integration tests

Both have their advantages but also a lot of disadvantages. Let’s go through both of them.

Deploy all microservices and perform end to end tests

Advantages:

  • simulates production

  • tests real communication between services

Disadvantages:

  • to test one microservice we would have to deploy 6 microservices, a couple of databases etc.

  • the environment where the tests would be conducted would be locked for a single suite of tests (i.e. nobody else would be able to run the tests in the meantime).

  • long to run

  • very late feedback

  • extremely hard to debug

Mock other microservices in unit / integration tests

Advantages:

  • very fast feedback

  • no infrastructure requirements

Disadvantages:

  • the implementor of the service creates the stubs, so they might have nothing to do with reality

  • you can go to production with passing tests and failing production

To solve the aforementioned issues Spring Cloud Contract Verifier with Stub Runner were created. Their main idea is to give you very fast feedback, without the need to set up the whole world of microservices.

Stubbed Services

If you work on stubs then the only applications you need are those that your application is using directly.

Stubbed Services

Spring Cloud Contract Verifier gives you the certainty that the stubs that you’re using were created by the service that you’re calling. Also, the fact that you can use them means that they were tested against the producer’s side. In other words - you can trust those stubs.

Purposes

The main purposes of Spring Cloud Contract Verifier with Stub Runner are:

  • to ensure that WireMock / Messaging stubs (used when developing the client) do exactly what the actual server-side implementation will do,

  • to promote the ATDD method and the microservices architectural style,

  • to provide a way to publish changes in contracts that are immediately visible on both sides,

  • to generate boilerplate test code used on the server side.

Important
Spring Cloud Contract Verifier’s purpose is NOT to start writing business features in the contracts. Let’s assume that we have a business use case of fraud check. If a user can be a fraud for 100 different reasons, we would assume that you would create 2 contracts. One for the positive and one for the negative fraud case. Contract tests are used to test contracts between applications and not to simulate full behaviour.

Client Side

During the tests you want to have a WireMock instance / Messaging route up and running that simulates the service Y. You would like to feed that instance with a proper stub definition. That stub definition would need to be valid and should also be reusable on the server side.

Summing it up: On this side, in the stub definition, you can use patterns for request stubbing and you need exact values for responses.

Server Side

Being service Y, since you are developing your stub, you need to be sure that it actually resembles your concrete implementation. You can’t have a situation where your stub acts in one way and your application behaves in a different way on production.

That’s why acceptance tests are generated from the provided stub, ensuring that your application behaves in the same way as you define in your stub.

Summing it up: On this side, in the stub definition, you need exact values as request and can use patterns/methods for response verification.

Step by step guide to CDC

Let’s take an example of Fraud Detection and Loan Issuance process. The business scenario is such that we want to issue loans to people but don’t want them to steal the money from us. The current implementation of our system grants loans to everybody.

Let’s assume that the Loan Issuance is a client to the Fraud Detection server. In the current sprint we are required to develop a new feature - if a client wants to borrow too much money then we mark them as a fraud.

Technical remark - Fraud Detection will have artifact id http-server, Loan Issuance http-client and both have group id com.example.

Social remark - both client and server development teams need to communicate directly and discuss changes while going through the process. CDC is all about communication.

Tip
In this case the ownership of the contracts lies with the producer. It means that physically all the contracts are present in the producer’s repository
Technical note

If using the SNAPSHOT / Milestone / Release Candidate versions please add the following section to your build file:

Maven
<repositories>
	<repository>
		<id>spring-snapshots</id>
		<name>Spring Snapshots</name>
		<url>https://repo.spring.io/snapshot</url>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</repository>
	<repository>
		<id>spring-milestones</id>
		<name>Spring Milestones</name>
		<url>https://repo.spring.io/milestone</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</repository>
	<repository>
		<id>spring-releases</id>
		<name>Spring Releases</name>
		<url>https://repo.spring.io/release</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</repository>
</repositories>
<pluginRepositories>
	<pluginRepository>
		<id>spring-snapshots</id>
		<name>Spring Snapshots</name>
		<url>https://repo.spring.io/snapshot</url>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</pluginRepository>
	<pluginRepository>
		<id>spring-milestones</id>
		<name>Spring Milestones</name>
		<url>https://repo.spring.io/milestone</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</pluginRepository>
	<pluginRepository>
		<id>spring-releases</id>
		<name>Spring Releases</name>
		<url>https://repo.spring.io/release</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</pluginRepository>
</pluginRepositories>
Gradle
repositories {
	mavenCentral()
	mavenLocal()
	maven { url "http://repo.spring.io/snapshot" }
	maven { url "http://repo.spring.io/milestone" }
	maven { url "http://repo.spring.io/release" }
}
Consumer side (Loan Issuance)

As a developer of the Loan Issuance service (a consumer of the Fraud Detection server):

start doing TDD by writing a test for your feature

@Test
public void shouldBeRejectedDueToAbnormalLoanAmount() {
	// given:
	LoanApplication application = new LoanApplication(new Client("1234567890"),
			99999);
	// when:
	LoanApplicationResult loanApplication = service.loanApplication(application);
	// then:
	assertThat(loanApplication.getLoanApplicationStatus())
			.isEqualTo(LoanApplicationStatus.LOAN_APPLICATION_REJECTED);
	assertThat(loanApplication.getRejectionReason()).isEqualTo("Amount too high");
}

We’ve just written a test of our new feature. If a loan application for a big amount is received we should reject that loan application with some description.

write the missing implementation

At some point in time you need to send a request to the Fraud Detection service. Let’s assume that we’d like to send the request containing the id of the client and the amount he wants to borrow from us. We’d like to send it to the /fraudcheck url via the PUT method.

ResponseEntity<FraudServiceResponse> response =
		restTemplate.exchange("http://localhost:" + port + "/fraudcheck", HttpMethod.PUT,
				new HttpEntity<>(request, httpHeaders),
				FraudServiceResponse.class);

For simplicity we’ve hardcoded the port of the Fraud Detection service at 8080 and our application is running on 8090.

If we ran the test we’ve just written, it would obviously fail, since we have no service running on port 8080.

clone the Fraud Detection service repository locally

We’ll start playing around with the server side contract. That’s why we need to first clone it.

git clone https://your-git-server.com/server-side.git local-http-server-repo

define the contract locally in the repo of Fraud Detection service

As consumers we need to define what exactly we want to achieve. We need to formulate our expectations. That’s why we write the following contract.

package contracts

org.springframework.cloud.contract.spec.Contract.make {
			request { // (1)
				method 'PUT' // (2)
				url '/fraudcheck' // (3)
				body([ // (4)
					clientId: value(consumer(regex('[0-9]{10}'))),
					loanAmount: 99999
					])
				headers { // (5)
					header('Content-Type', 'application/vnd.fraud.v1+json')
				}
			}
			response { // (6)
				status 200 // (7)
				body([ // (8)
					fraudCheckStatus: "FRAUD",
					rejectionReason: "Amount too high"
				])
				headers { // (9)
					 header('Content-Type': value(
							 producer(regex('application/vnd.fraud.v1.json.*')),
							 consumer('application/vnd.fraud.v1+json'))
					 )
				}
			}
}

/*
Since we don't want to force the user to hardcode values of fields that are dynamic
(timestamps, database ids, etc.), one can parametrize those entries by using the
`value(consumer(...), producer(...))` method. That way what's present in the `consumer`
section will end up in the produced stub. What's there in the `producer` will end up in the
autogenerated test. If you provide only the regular expression side without the concrete
value then Spring Cloud Contract will generate one for you.

From the Consumer perspective, when shooting a request in the integration test:

(1) - If the consumer sends a request
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `clientId` that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/vnd.fraud.v1+json`
(6) - then the response will be sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` equal to `application/vnd.fraud.v1+json`

From the Producer perspective, in the autogenerated producer-side test:

(1) - A request will be sent to the producer
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `clientId` that will have a generated value that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/vnd.fraud.v1+json`
(6) - then the test will assert if the response has been sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` matching `application/vnd.fraud.v1+json.*`
 */

The Contract is written using a statically typed Groovy DSL. You might be wondering what those value(consumer(...), producer(...)) parts are. By using this notation, Spring Cloud Contract allows you to define parts of a JSON / URL / etc. which are dynamic. In the case of an identifier or a timestamp you don’t want to hardcode a value. You want to allow some range of values. That’s why for the consumer side you can set regular expressions matching those values. You can provide the body either by means of a map notation or a String with interpolations. Consult the docs for more information. We highly recommend using the map notation!

Tip
It’s really important that you understand the map notation to set up contracts. Please read the Groovy docs regarding JSON

The aforementioned contract is an agreement between two sides that:

  • if an HTTP request is sent with

    • a method PUT on an endpoint /fraudcheck

    • JSON body with clientId matching the regular expression [0-9]{10} and loanAmount equal to 99999

    • and with a header Content-Type equal to application/vnd.fraud.v1+json

  • then an HTTP response would be sent to the consumer that

    • has status 200

    • contains JSON body with the fraudCheckStatus field containing a value FRAUD and the rejectionReason field having value Amount too high

    • and a Content-Type header with a value of application/vnd.fraud.v1+json

Once we’re ready to check the API in practice in the integration tests, we just need to install the stubs locally.

add the Spring Cloud Contract Verifier plugin

We can add either the Maven or the Gradle plugin - in this example we’ll show how to add the Maven one. First we need to add the Spring Cloud Contract BOM.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>${spring-cloud-dependencies.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Next, the Spring Cloud Contract Verifier Maven plugin

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>${spring-cloud-contract.version}</version>
	<extensions>true</extensions>
	<configuration>
		<packageWithBaseClasses>com.example.fraud</packageWithBaseClasses>
	</configuration>
</plugin>

Now that the plugin has been added, we get the Spring Cloud Contract Verifier features, which, from the provided contracts:

  • generate and run tests

  • produce and install stubs

We don’t want to generate tests since we, as consumers, want only to play with the stubs. That’s why we need to skip test generation and execution. When we execute:

cd local-http-server-repo
./mvnw clean install -DskipTests

In the logs we’ll see something like this:

[INFO] --- spring-cloud-contract-maven-plugin:1.0.0.BUILD-SNAPSHOT:generateStubs (default-generateStubs) @ http-server ---
[INFO] Building jar: /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar
[INFO]
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ http-server ---
[INFO] Building jar: /some/path/http-server/target/http-server-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.4.0.BUILD-SNAPSHOT:repackage (default) @ http-server ---
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ http-server ---
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT.jar
[INFO] Installing /some/path/http-server/pom.xml to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT.pom
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar

This line is extremely important

[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar

It’s confirming that the stubs of the http-server have been installed in the local repository.

run the integration tests

In order to profit from the Spring Cloud Contract Stub Runner functionality of automatic stub downloading, you have to do the following in your consumer-side project (Loan Application service).

Add the Spring Cloud Contract BOM

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>${spring-cloud-dependencies.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Add the dependency to Spring Cloud Contract Stub Runner

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>

Annotate your test class with @AutoConfigureStubRunner. In the annotation, provide the group id and artifact id for the Stub Runner to download the stubs of your collaborators. Also provide the workOffline switch, since you’re playing with the collaborators' stubs offline (optional step).

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment=WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"com.example:http-server-dsl:+:stubs:6565"}, workOffline = true)
@DirtiesContext
public class LoanApplicationServiceTests {

Now if you run your tests you’ll see something like this:

2016-07-19 14:22:25.403  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Desired version is + - will try to resolve the latest version
2016-07-19 14:22:25.438  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved version is 0.0.1-SNAPSHOT
2016-07-19 14:22:25.439  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolving artifact com.example:http-server:jar:stubs:0.0.1-SNAPSHOT using remote repositories []
2016-07-19 14:22:25.451  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved artifact com.example:http-server:jar:stubs:0.0.1-SNAPSHOT to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar
2016-07-19 14:22:25.465  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Unpacking stub from JAR [URI: file:/path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar]
2016-07-19 14:22:25.475  INFO 41050 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Unpacked file to [/var/folders/0p/xwq47sq106x1_g3dtv6qfm940000gq/T/contracts100276532569594265]
2016-07-19 14:22:27.737  INFO 41050 --- [           main] o.s.c.c.stubrunner.StubRunnerExecutor    : All stubs are now running RunningStubs [namesAndPorts={com.example:http-server:0.0.1-SNAPSHOT:stubs=8080}]

This means that Stub Runner has found your stubs and started a server for the application with group id com.example and artifact id http-server, using version 0.0.1-SNAPSHOT of the stubs with the stubs classifier, on port 8080.

file a PR

What we did until now is an iterative process. We can play around with the contract, install it locally and work on the consumer side until we’re happy with the contract.

Once we’re satisfied with the results and the tests pass, we publish a PR to the server side. At this point the consumer-side work is done.

Producer side (Fraud Detection server)

As a developer of the Fraud Detection server (a server to the Loan Issuance service):

initial implementation

As a reminder, here you can see the initial implementation:

@RequestMapping(
		value = "/fraudcheck",
		method = PUT,
		consumes = FRAUD_SERVICE_JSON_VERSION_1,
		produces = FRAUD_SERVICE_JSON_VERSION_1)
public FraudCheckResult fraudCheck(@RequestBody FraudCheck fraudCheck) {
	return new FraudCheckResult(FraudCheckStatus.OK, NO_REASON);
}

take over the PR

git checkout -b contract-change-pr master
git pull https://your-git-server.com/server-side-fork.git contract-change-pr

You have to add the dependencies needed by the autogenerated tests:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

In the configuration of the Maven plugin we passed the packageWithBaseClasses property:

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>${spring-cloud-contract.version}</version>
	<extensions>true</extensions>
	<configuration>
		<packageWithBaseClasses>com.example.fraud</packageWithBaseClasses>
	</configuration>
</plugin>

That’s because all the generated tests will extend base classes from that package. There you can set up your Spring context or whatever else is necessary. In our case we’re using Rest Assured MockMvc to start the server-side FraudDetectionController.

package com.example.fraud;

import com.example.fraud.FraudDetectionController;
import com.jayway.restassured.module.mockmvc.RestAssuredMockMvc;

import org.junit.Before;

public class FraudBase {

	@Before
	public void setup() {
		RestAssuredMockMvc.standaloneSetup(new FraudDetectionController());
	}

	public void assertThatRejectionReasonIsNull(Object rejectionReason) {
		assert rejectionReason == null;
	}
}

Now, if you run ./mvnw clean install, you will get something like this:

Results :

Tests in error:
  ContractVerifierTest.validate_shouldMarkClientAsFraud:32 » IllegalState Parsed...

That’s because a test was generated from the new contract, and it failed since you haven’t implemented the feature yet. The autogenerated test would look like this:

@Test
public void validate_shouldMarkClientAsFraud() throws Exception {
    // given:
        MockMvcRequestSpecification request = given()
                .header("Content-Type", "application/vnd.fraud.v1+json")
                .body("{\"clientPesel\":\"1234567890\",\"loanAmount\":99999}");

    // when:
        ResponseOptions response = given().spec(request)
                .put("/fraudcheck");

    // then:
        assertThat(response.statusCode()).isEqualTo(200);
        assertThat(response.header("Content-Type")).matches("application/vnd.fraud.v1.json.*");
    // and:
        DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
        assertThatJson(parsedJson).field("fraudCheckStatus").matches("[A-Z]{5}");
        assertThatJson(parsedJson).field("rejectionReason").isEqualTo("Amount too high");
}

As you can see, all the producer() parts of the Contract that were present in the value(consumer(…​), producer(…​)) blocks got injected into the test.

What’s important to note here is that on the producer side we are also doing TDD. We have expectations in the form of a test. This test sends a request to our own application, with the URL, headers, and body defined in the contract, and expects precisely defined values in the response. In other words, this is the red part of red, green, refactor. Time to turn the red into green.

write the missing implementation

Now that we know the expected input and output, let’s write the missing implementation.

@RequestMapping(
		value = "/fraudcheck",
		method = PUT,
		consumes = FRAUD_SERVICE_JSON_VERSION_1,
		produces = FRAUD_SERVICE_JSON_VERSION_1)
public FraudCheckResult fraudCheck(@RequestBody FraudCheck fraudCheck) {
	if (amountGreaterThanThreshold(fraudCheck)) {
		return new FraudCheckResult(FraudCheckStatus.FRAUD, AMOUNT_TOO_HIGH);
	}
	return new FraudCheckResult(FraudCheckStatus.OK, NO_REASON);
}

If we execute ./mvnw clean install again, the tests will pass. Since the Spring Cloud Contract Verifier plugin adds the tests to generated-test-sources, you can actually run those tests from your IDE.

deploy your app

Once you’ve finished your work it’s time to deploy your change. First merge the branch

git checkout master
git merge --no-ff contract-change-pr
git push origin master

Then we assume that your CI would run something like ./mvnw clean deploy, which would publish both the application and the stub artifacts.

Consumer side (Loan Issuance) final step

As a developer of the Loan Issuance service (a consumer of the Fraud Detection server):

merge branch to master

git checkout master
git merge --no-ff contract-change-pr

work online

Now you can disable the offline work for Spring Cloud Contract Stub Runner and provide the location of the repository containing your stubs. At this moment the stubs of the server side will be automatically downloaded from Nexus / Artifactory. You can set the workOffline parameter in your annotation to false. Below you can see an example of achieving the same by changing the properties.

stubrunner:
  ids: 'com.example:http-server-dsl:+:stubs:8080'
  repositoryRoot: http://repo.spring.io/libs-snapshot

And that’s it!

Dependencies

Spring Cloud Contract Verifier and Stub Runner are using the following libraries

Below you can find some resources related to Spring Cloud Contract Verifier and Stub Runner. Note that some may be outdated, since the Spring Cloud Contract Verifier project is under constant development.

Samples

Here you can find some samples.

FAQ

Why use Spring Cloud Contract Verifier and not X?

For the time being Spring Cloud Contract Verifier is a JVM-based tool, so it could be your first pick when you’re already creating software for the JVM. This project has a lot of really interesting features, and quite a few of them definitely make Spring Cloud Contract Verifier stand out on the "market" of Consumer Driven Contract (CDC) tooling. The most interesting ones are:

  • Possibility to do CDC with messaging

  • Clear and easy to use, statically typed DSL

  • Possibility to copy-paste your current JSON file into the contract and edit only its elements

  • Automatic generation of tests from the defined Contract

  • Stub Runner functionality - the stubs are automatically downloaded at runtime from Nexus / Artifactory

  • Spring Cloud integration - no discovery service is needed for integration tests

What is this value(consumer(), producer()) ?

One of the biggest challenges related to stubs is their reusability. Only if they can be widely reused will they serve their purpose. What typically makes that difficult are the hard-coded values of request / response elements, for example dates or ids. Imagine the following JSON request:

{
    "time" : "2016-10-10 20:10:15",
    "id" : "9febab1c-6f36-4a0b-88d6-3b6a6d81cd4a",
    "body" : "foo"
}

and JSON response

{
    "time" : "2016-10-10 21:10:15",
    "id" : "c4231e1f-3ca9-48d3-b7e7-567d55f0d051",
    "body" : "bar"
}

Imagine the pain required to set the proper value of the time field (let’s assume that this content is generated by the database) by changing the system clock or providing stub implementations of data providers. The same applies to the field called id. Would you create a stubbed implementation of a UUID generator? That makes little sense…​

So as a consumer you would like to send a request that matches any form of time or any UUID. That way your system will work as usual: it will generate data and you won’t have to stub anything out. Let’s assume that in the case of the aforementioned JSON the most important part is the body field. You can focus on that and provide matching for the other fields. In other words, you would like the stub to work like this:

{
    "time" : "SOMETHING THAT MATCHES TIME",
    "id" : "SOMETHING THAT MATCHES UUID",
    "body" : "foo"
}

As far as the response goes, as a consumer you need a concrete value that you can operate on. So such a JSON is valid:

{
    "time" : "2016-10-10 21:10:15",
    "id" : "c4231e1f-3ca9-48d3-b7e7-567d55f0d051",
    "body" : "bar"
}

As you could see in the previous sections, we generate tests from contracts, so from the producer’s side the situation looks much different. We parse the provided contract and, in the test, we want to send a real request to your endpoints. So in the producer’s case we can’t have any sort of matching for the request. We need concrete values that the producer’s backend can work on. Such a JSON would be a valid one:

{
    "time" : "2016-10-10 20:10:15",
    "id" : "9febab1c-6f36-4a0b-88d6-3b6a6d81cd4a",
    "body" : "foo"
}

On the other hand, from the point of view of the validity of the contract, the response doesn’t necessarily have to contain concrete values of time or id. Let’s say that you generate those on the producer side; again, you’d have to do a lot of stubbing to ensure that you always return the same values. That’s why from the producer’s side what you might want is the following response:

{
    "time" : "SOMETHING THAT MATCHES TIME",
    "id" : "SOMETHING THAT MATCHES UUID",
    "body" : "bar"
}

How can you then provide a matcher for the consumer and a concrete value for the producer, and vice versa? In Spring Cloud Contract we allow you to provide a dynamic value, which means that it can differ for each side of the communication. You can pass the values:

Either via the value method

value(consumer(...), producer(...))
value(stub(...), test(...))
value(client(...), server(...))

or using the $() method

$(consumer(...), producer(...))
$(stub(...), test(...))
$(client(...), server(...))

You can read more about this in the Contract DSL section.

Calling value() or $() tells Spring Cloud Contract that you will be passing a dynamic value. Inside the consumer() method you pass the value that should be used on the consumer side (in the generated stub). Inside the producer() method you pass the value that should be used on the producer side (in the generated test).

Tip
If you have passed a regular expression on one side and haven’t passed a value for the other, then the other side will be auto-generated.

Most often you will use that method together with the regex helper method. E.g. consumer(regex('[0-9]{10}')).

To sum it up, the contract for the aforementioned scenario would look more or less like this (the regular expressions for time and UUID are simplified and most likely invalid, but we want to keep things very simple in this example):

org.springframework.cloud.contract.spec.Contract.make {
	request {
		method 'GET'
		url '/someUrl'
		body([
			time : value(consumer(regex('[0-9]{4}-[0-9]{2}-[0-9]{2} [0-2][0-9]-[0-5][0-9]-[0-5][0-9]'))),
			id: value(consumer(regex('[0-9a-zA-z]{8}-[0-9a-zA-z]{4}-[0-9a-zA-z]{4}-[0-9a-zA-z]{12}'))),
			body: "foo"
		])
	}
	response {
		status 200
		body([
			time : value(producer(regex('[0-9]{4}-[0-9]{2}-[0-9]{2} [0-2][0-9]-[0-5][0-9]-[0-5][0-9]'))),
			id: value(producer(regex('[0-9a-zA-z]{8}-[0-9a-zA-z]{4}-[0-9a-zA-z]{4}-[0-9a-zA-z]{12}'))),
			body: "bar"
		])
	}
}
Important
Please read the Groovy docs related to JSON to understand how to properly structure the request / response bodies.

How to do Stubs versioning?

API Versioning

Let’s try to answer the question of what versioning really means. If you’re referring to the API version, then there are different approaches:

  • use Hypermedia, links and do not version your API by any means

  • pass versions through headers / urls

I will not try to answer the question of which approach is better. Whatever suits your needs and allows you to generate business value should be picked.

Let’s assume that you do version your API. In that case you should provide as many contracts as the number of versions you support. You can create a subfolder for every version or append it to the contract name, whichever suits you better.
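
For example, one possible layout with a subfolder per API version (the names are illustrative) could look like this:

src/test/resources/contracts/
├── v1/
│   └── shouldMarkClientAsFraud.groovy
└── v2/
    └── shouldMarkClientAsFraud.groovy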

JAR versioning

If by versioning you mean the version of the JAR that contains the stubs then there are essentially two main approaches.

Let’s assume that you’re doing Continuous Delivery / Deployment, which means that you’re generating a new version of the jar each time you go through the pipeline, and that jar can go to production at any time. For example, your jar version could look like this (it was built on 20.10.2016 at 20:15:21):

1.0.0.20161020-201521-RELEASE

In that case your generated stub jar will look like this:

1.0.0.20161020-201521-RELEASE-stubs.jar

In this case, when referencing stubs inside your application.yml or @AutoConfigureStubRunner, you should provide the latest version of the stubs. You can do that by passing the + sign. Example:

@AutoConfigureStubRunner(ids = {"com.example:http-server-dsl:+:stubs:8080"})

If, however, the versioning is fixed (e.g. 1.0.4.RELEASE or 2.1.1), then you have to set the concrete value of the jar version. Example for 2.1.1:

@AutoConfigureStubRunner(ids = {"com.example:http-server-dsl:2.1.1:stubs:8080"})
Dev or prod stubs

You can manipulate the classifier to run the tests against either the current development version of the stubs of other services or the ones deployed to production. If you alter your build to deploy the stubs with the prod-stubs classifier once you reach production deployment, then you can run tests in one case with dev stubs and in the other with prod stubs.

Example of tests using development version of stubs

@AutoConfigureStubRunner(ids = {"com.example:http-server-dsl:+:stubs:8080"})

Example of tests using production version of stubs

@AutoConfigureStubRunner(ids = {"com.example:http-server-dsl:+:prod-stubs:8080"})

You can also pass those values via properties from your deployment pipeline.
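
For example, instead of hardcoding the coordinates in the annotation, the pipeline could override the stubrunner.ids property at build time (a sketch):

./mvnw test -Dstubrunner.ids=com.example:http-server-dsl:+:prod-stubs:8080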

Common repo with contracts

Another way of storing contracts, other than keeping them with the producer, is keeping them in a common place. This can be related to security issues, where the consumers can’t clone the producer’s code. Also, if you keep contracts in a single place then you, as a producer, will know how many consumers you have and which consumers you will break with your local changes.

Repo structure

Let’s assume that we have a producer with coordinates com.example:server and 3 consumers: client1, client2, client3. Then in the repository with common contracts you would have the following setup (which you can check out here):

├── com
│   └── example
│       └── server
│           ├── client1
│           │   └── expectation.groovy
│           ├── client2
│           │   └── expectation.groovy
│           ├── client3
│           │   └── expectation.groovy
│           └── pom.xml
├── mvnw
├── mvnw.cmd
├── pom.xml
└── src
    └── assembly
        └── contracts.xml

As you can see, under the slash-delimited group id / artifact id folder (com/example/server) you have the expectations of the 3 consumers (client1, client2 and client3). Expectations are the standard Groovy DSL contract files described throughout this documentation. This repository has to produce a JAR file that maps one to one to the contents of the repo.

Example of a pom.xml inside the server folder.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>server</artifactId>
	<version>0.0.1-SNAPSHOT</version>

	<name>Server Stubs</name>
	<description>POM used to install locally stubs for consumer side</description>

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>1.4.2.BUILD-SNAPSHOT</version>
		<relativePath />
	</parent>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<java.version>1.8</java.version>
		<spring-cloud-contract.version>1.0.3.BUILD-SNAPSHOT</spring-cloud-contract.version>
		<spring-cloud-dependencies.version>Camden.BUILD-SNAPSHOT</spring-cloud-dependencies.version>
	</properties>

	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.cloud</groupId>
				<artifactId>spring-cloud-dependencies</artifactId>
				<version>${spring-cloud-dependencies.version}</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.cloud</groupId>
				<artifactId>spring-cloud-contract-maven-plugin</artifactId>
				<version>${spring-cloud-contract.version}</version>
				<extensions>true</extensions>
				<configuration>
					<!-- By default it would search under src/test/resources/ -->
					<contractsDirectory>${project.basedir}</contractsDirectory>
				</configuration>
			</plugin>
		</plugins>
	</build>

	<repositories>
		<repository>
			<id>spring-snapshots</id>
			<name>Spring Snapshots</name>
			<url>https://repo.spring.io/snapshot</url>
			<snapshots>
				<enabled>true</enabled>
			</snapshots>
		</repository>
		<repository>
			<id>spring-milestones</id>
			<name>Spring Milestones</name>
			<url>https://repo.spring.io/milestone</url>
			<snapshots>
				<enabled>false</enabled>
			</snapshots>
		</repository>
		<repository>
			<id>spring-releases</id>
			<name>Spring Releases</name>
			<url>https://repo.spring.io/release</url>
			<snapshots>
				<enabled>false</enabled>
			</snapshots>
		</repository>
	</repositories>
	<pluginRepositories>
		<pluginRepository>
			<id>spring-snapshots</id>
			<name>Spring Snapshots</name>
			<url>https://repo.spring.io/snapshot</url>
			<snapshots>
				<enabled>true</enabled>
			</snapshots>
		</pluginRepository>
		<pluginRepository>
			<id>spring-milestones</id>
			<name>Spring Milestones</name>
			<url>https://repo.spring.io/milestone</url>
			<snapshots>
				<enabled>false</enabled>
			</snapshots>
		</pluginRepository>
		<pluginRepository>
			<id>spring-releases</id>
			<name>Spring Releases</name>
			<url>https://repo.spring.io/release</url>
			<snapshots>
				<enabled>false</enabled>
			</snapshots>
		</pluginRepository>
	</pluginRepositories>

</project>

As you can see, there are no dependencies other than the Spring Cloud Contract Verifier Maven plugin. Those poms are necessary for the consumer side to run mvn clean install -DskipTests to locally install the stubs of the producer project.

The pom.xml in the root folder can look like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
		 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example.standalone</groupId>
	<artifactId>contracts</artifactId>
	<version>0.0.1-SNAPSHOT</version>

	<name>Contracts</name>
	<description>Contains all the Spring Cloud Contracts, well, contracts. JAR used by the producers to generate tests and stubs</description>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	</properties>

	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-assembly-plugin</artifactId>
				<executions>
					<execution>
						<id>contracts</id>
						<phase>prepare-package</phase>
						<goals>
							<goal>single</goal>
						</goals>
						<configuration>
							<attach>true</attach>
							<descriptor>${basedir}/src/assembly/contracts.xml</descriptor>
							<!-- If you want an explicit classifier remove the following line -->
							<appendAssemblyId>false</appendAssemblyId>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>

</project>

It uses the assembly plugin to build the JAR with all the contracts. An example of such a setup is shown here:

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
		  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
		  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">
	<id>project</id>
	<formats>
		<format>jar</format>
	</formats>
	<includeBaseDirectory>false</includeBaseDirectory>
	<fileSets>
		<fileSet>
			<directory>${project.basedir}</directory>
			<outputDirectory>/</outputDirectory>
			<useDefaultExcludes>true</useDefaultExcludes>
			<excludes>
				<exclude>**/${project.build.directory}/**</exclude>
				<exclude>mvnw</exclude>
				<exclude>mvnw.cmd</exclude>
				<exclude>.mvn/**</exclude>
				<exclude>src/**</exclude>
			</excludes>
		</fileSet>
	</fileSets>
</assembly>
Workflow

The workflow would look similar to the one presented in the Step by step guide to CDC. The only difference is that the producer doesn’t own the contracts anymore, so the consumer and the producer have to work on common contracts in a common repository.

Consumer

When the consumer wants to work on the contracts offline, instead of cloning the producer’s code, the consumer team clones the common repository, goes to the required producer’s folder (e.g. com/example/server) and runs mvn clean install -DskipTests to locally install the stubs converted from the contracts.
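
For example (a sketch; the repository URL is illustrative):

git clone https://your-git-server.com/common-contracts.git
cd common-contracts/com/example/server
mvn clean install -DskipTests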

Tip
You need to have Maven installed locally
Producer

As a producer, it’s enough to alter the Spring Cloud Contract Verifier plugin configuration to provide the URL and the dependency of the JAR containing the contracts:

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<configuration>
		<contractsRepositoryUrl>http://link/to/your/nexus/or/artifactory/or/sth</contractsRepositoryUrl>
		<contractDependency>
			<groupId>com.example.standalone</groupId>
			<artifactId>contracts</artifactId>
		</contractDependency>
	</configuration>
</plugin>

With this setup, the JAR with group id com.example.standalone and artifact id contracts will be downloaded from http://link/to/your/nexus/or/artifactory/or/sth. It will then be unpacked in a local temporary folder, and the contracts present under com/example/server will be picked as the ones used to generate the tests and the stubs. Due to this convention the producer team will know which consumer teams will be broken when incompatible changes are made.

The rest of the flow looks the same.

Can I have multiple base classes for tests?

Yes! Check out the Different base classes for contracts sections of either Gradle or Maven plugins.

Spring Cloud Contract Verifier HTTP

Gradle Project

Prerequisites

In order to use Spring Cloud Contract Verifier with WireMock, you have to use either the Gradle or the Maven plugin.

Warning
If you want to use Spock in your projects, you have to separately add the spock-core and spock-spring modules. Check the Spock docs for more information.
Add gradle plugin with dependencies
buildscript {
	repositories {
		mavenCentral()
	}
	dependencies {
	    classpath "org.springframework.boot:spring-boot-gradle-plugin:${springboot_version}"
		classpath "org.springframework.cloud:spring-cloud-contract-gradle-plugin:${verifier_version}"
	}
}

apply plugin: 'groovy'
apply plugin: 'spring-cloud-contract'

dependencyManagement {
	imports {
		mavenBom "org.springframework.cloud:spring-cloud-contract-dependencies:${verifier_version}"
	}
}

dependencies {
	testCompile 'org.codehaus.groovy:groovy-all:2.4.6'
	// example with adding Spock core and Spock Spring
	testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'
	testCompile 'org.spockframework:spock-spring:1.0-groovy-2.4'
	testCompile 'org.springframework.cloud:spring-cloud-starter-contract-verifier'
}
Snapshot versions for Gradle

Add the additional snapshot repository to your build.gradle to use snapshot versions which are automatically uploaded after every successful build:

buildscript {
	repositories {
		mavenCentral()
		mavenLocal()
		maven { url "http://repo.spring.io/snapshot" }
		maven { url "http://repo.spring.io/milestone" }
		maven { url "http://repo.spring.io/release" }
	}
}
Add maven plugin with dependencies
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-contract-dependencies</artifactId>
            <version>${spring-cloud-contract.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-contract-verifier</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>convert</goal>
                <goal>generateStubs</goal>
                <goal>generateTests</goal>
            </goals>
        </execution>
    </executions>
</plugin>
Add stubs

By default Spring Cloud Contract Verifier looks for stubs in the src/test/resources/contracts directory.

The directory containing stub definitions is treated as a class name, and each stub definition is treated as a single test. We assume that it contains at least one directory to be used as the test class name. If there is more than one level of nested directories, all except the last one will be used as the package name. So with the following structure

src/test/resources/contracts/myservice/shouldCreateUser.groovy
src/test/resources/contracts/myservice/shouldReturnUser.groovy

Spring Cloud Contract Verifier will create a test class defaultBasePackage.MyService with two methods:

  • shouldCreateUser()

  • shouldReturnUser()

Run plugin

The plugin registers itself to be invoked before the check task. If you want it to be part of your build process, you have nothing more to do. If you just want to generate the tests, invoke the generateContractTests task.
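
For example, to generate the tests without running the whole build:

./gradlew generateContractTests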

Default setup

The default Gradle plugin setup creates the following Gradle part of the build (in pseudocode):

contracts {
    targetFramework = 'JUNIT'
    testMode = 'MockMvc'
    generatedTestSourcesDir = project.file("${project.buildDir}/generated-test-sources/contracts")
    contractsDslDir = "${project.rootDir}/src/test/resources/contracts"
    basePackageForTests = 'org.springframework.cloud.verifier.tests'
    stubsOutputDir = project.file("${project.buildDir}/stubs")

    // the following properties are used when you want to provide where the JAR with contract lays
    contractDependency {
        stringNotation = ''
    }
    contractsPath = ''
    contractsWorkOffline = false
}

tasks.create(type: Jar, name: 'verifierStubsJar', dependsOn: 'generateWireMockClientStubs') {
    baseName = project.name
    classifier = contracts.stubsSuffix
    from contractVerifier.stubsOutputDir
}

project.artifacts {
    archives task
}

tasks.create(type: Copy, name: 'copyContracts') {
    from contracts.contractsDslDir
    into contracts.stubsOutputDir
}

verifierStubsJar.dependsOn 'copyContracts'

publishing {
    publications {
        stubs(MavenPublication) {
            artifactId project.name
            artifact verifierStubsJar
        }
    }
}
Configure plugin

To change the default configuration, just add a contracts snippet to your Gradle config:

contracts {
	testMode = 'MockMvc'
	baseClassForTests = 'org.mycompany.tests'
	generatedTestSourcesDir = project.file('src/generatedContract')
}
Configuration options
  • testMode - defines mode for acceptance tests. By default MockMvc which is based on Spring’s MockMvc. It can also be changed to JaxRsClient or to Explicit for real HTTP calls.

  • imports - array with imports that should be included in generated tests (for example ['org.myorg.Matchers']). By default empty array []

  • staticImports - array with static imports that should be included in generated tests (for example ['org.myorg.Matchers.*']). By default empty array []

  • basePackageForTests - specifies base package for all generated tests. By default set to org.springframework.cloud.verifier.tests

  • baseClassForTests - base class for all generated tests. By default spock.lang.Specification if using Spock tests.

  • packageWithBaseClasses - instead of providing a fixed value for the base class, you can provide a package where all the base classes lie. Takes precedence over baseClassForTests.

  • baseClassMappings - explicitly map contract package to a FQN of a base class. Takes precedence over packageWithBaseClasses and baseClassForTests.

  • ruleClassForTests - specifies Rule which should be added to generated test classes.

  • ignoredFiles - an Ant matcher that defines stub files for which processing should be skipped. By default empty array []

  • contractsDslDir - directory containing contracts written using the GroovyDSL. By default $rootDir/src/test/resources/contracts

  • generatedTestSourcesDir - test source directory where tests generated from Groovy DSL should be placed. By default $buildDir/generated-test-sources/contractVerifier

  • stubsOutputDir - dir where the generated WireMock stubs from Groovy DSL should be placed

  • targetFramework - the target test framework to be used; currently Spock and JUnit are supported with JUnit being the default framework

The following properties are used when you want to provide the location of the JAR with the contracts (see the sketch after this list):

  • contractDependency - the Dependency that provides groupid:artifactid:version:classifier coordinates. You can use the contractDependency closure to set it up

  • contractsPath - if the contract dependencies are downloaded, defaults to groupid/artifactid, with the groupid slash-separated. Otherwise contracts are scanned under the provided directory

  • contractsWorkOffline - in order not to download the dependencies each time you can download them once and work offline afterwards (reuse local Maven repo)
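
A sketch of such a setup in the contracts closure (the coordinates shown are illustrative):

contracts {
	contractDependency {
		// groupid:artifactid:version(:classifier) notation; + picks the latest version
		stringNotation = 'com.example.standalone:contracts:+'
	}
	// scan only the contracts under this path inside the downloaded JAR
	contractsPath = 'com/example/server'
	// reuse the local Maven repository instead of downloading on each build
	contractsWorkOffline = true
}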

Single base class for all tests

When using Spring Cloud Contract Verifier in the default MockMvc mode, you need to create a base specification for all the generated acceptance tests. In this class you need to point to the endpoint which should be verified.

abstract class BaseMockMvcSpec extends Specification {

	def setup() {
		RestAssuredMockMvc.standaloneSetup(new PairIdController())
	}

	void isProperCorrelationId(Integer correlationId) {
		assert correlationId == 123456
	}

	void isEmpty(String value) {
		assert value == null
	}

}

When using the Explicit mode, you can use the base class to initialize the whole tested application, similarly to regular integration tests. When using the JAXRSCLIENT mode, this base class should also contain a protected WebTarget webTarget field; right now the only option to test the JAX-RS API is to start a web server.
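
A minimal sketch of such a base class for the JAXRSCLIENT mode (the server startup and target URL are illustrative):

import javax.ws.rs.client.ClientBuilder
import javax.ws.rs.client.WebTarget

import spock.lang.Specification

abstract class JaxRsBaseSpec extends Specification {

	// the generated JAX-RS tests expect this field to be present
	protected WebTarget webTarget

	def setup() {
		// start your web server here, then point the webTarget at it
		webTarget = ClientBuilder.newClient().target('http://localhost:8080')
	}
}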

Different base classes for contracts

If your base classes differ between contracts you can tell the Spring Cloud Contract plugin which class should get extended by the autogenerated tests. You have two options:

  • follow a convention by providing the packageWithBaseClasses

  • provide explicit mapping via baseClassMappings

Convention

The convention is such that if you have a contract under e.g. src/test/resources/contract/foo/bar/baz/ and set the packageWithBaseClasses property to com.example.base, then we will assume that there is a BarBazBase class under the com.example.base package. In other words, we take the last two parts of the package, if they exist, and form a class with the Base suffix. This takes precedence over baseClassForTests. Example of usage in the contracts closure:

packageWithBaseClasses = 'com.example.base'

Mapping

You can manually map a regular expression of the contract’s package to the fully qualified name of the base class for the matched contract. Let’s take a look at the following example:

baseClassForTests = "com.example.FooBase"
baseClassMappings {
	baseClassMapping('.*/com/.*', 'com.example.ComBase')
	baseClassMapping('.*/bar/.*':'com.example.BarBase')
}

Let’s assume that you have contracts under:

  • src/test/resources/contract/com/

  • src/test/resources/contract/foo/

By providing the baseClassForTests we have a fallback in case the mapping doesn’t succeed (you could also provide the packageWithBaseClasses as a fallback). That way the tests generated from the src/test/resources/contract/com/ contracts will extend com.example.ComBase, whereas the rest of the tests will extend com.example.FooBase.

Invoking generated tests

To ensure that the provider side is compliant with the defined contracts, you need to invoke:

./gradlew generateContractTests test
Spring Cloud Contract Verifier on consumer side

In the consumer service you need to configure the Spring Cloud Contract Verifier plugin in exactly the same way as for the provider. If you don’t want to use Stub Runner, then you need to copy the contracts stored in src/test/resources/contracts and generate the WireMock JSON stubs using:

./gradlew generateWireMockClientStubs

Note that the stubsOutputDir option has to be set for the stub generation to work.
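
For example:

contracts {
	stubsOutputDir = project.file("${project.buildDir}/stubs")
}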

When present, the JSON stubs can be used in automated consumer tests.

@ContextConfiguration(loader = SpringApplicationContextLoader, classes = Application)
class LoanApplicationServiceSpec extends Specification {

 @ClassRule
 @Shared
 WireMockClassRule wireMockRule = new WireMockClassRule()

 @Autowired
 LoanApplicationService sut

 def 'should successfully apply for loan'() {
   given:
 	LoanApplication application =
			new LoanApplication(client: new Client(clientPesel: '12345678901'), amount: 123.123)
   when:
	LoanApplicationResult loanApplication = sut.loanApplication(application)
   then:
	loanApplication.loanApplicationStatus == LoanApplicationStatus.LOAN_APPLIED
	loanApplication.rejectionReason == null
 }
}

Underneath, LoanApplication makes a call to the FraudDetection service. This request is handled by a WireMock server configured using the stubs generated by Spring Cloud Contract Verifier.

Using in your Maven project

Add maven plugin

Add the Spring Cloud Contract BOM

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>${spring-cloud-dependencies.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Next, the Spring Cloud Contract Verifier Maven plugin

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>${spring-cloud-contract.version}</version>
	<extensions>true</extensions>
	<configuration>
		<packageWithBaseClasses>com.example.fraud</packageWithBaseClasses>
	</configuration>
</plugin>

You can read more in the Spring Cloud Contract Maven Plugin Docs

Snapshot versions for Maven

For Snapshot / Milestone versions you have to add the following section to your pom.xml

<repositories>
	<repository>
		<id>spring-snapshots</id>
		<name>Spring Snapshots</name>
		<url>https://repo.spring.io/snapshot</url>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</repository>
	<repository>
		<id>spring-milestones</id>
		<name>Spring Milestones</name>
		<url>https://repo.spring.io/milestone</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</repository>
	<repository>
		<id>spring-releases</id>
		<name>Spring Releases</name>
		<url>https://repo.spring.io/release</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</repository>
</repositories>
<pluginRepositories>
	<pluginRepository>
		<id>spring-snapshots</id>
		<name>Spring Snapshots</name>
		<url>https://repo.spring.io/snapshot</url>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</pluginRepository>
	<pluginRepository>
		<id>spring-milestones</id>
		<name>Spring Milestones</name>
		<url>https://repo.spring.io/milestone</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</pluginRepository>
	<pluginRepository>
		<id>spring-releases</id>
		<name>Spring Releases</name>
		<url>https://repo.spring.io/release</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</pluginRepository>
</pluginRepositories>
Add stubs

By default Spring Cloud Contract Verifier looks for stubs in the src/test/resources/contracts directory. The directory containing stub definitions is treated as a class name, and each stub definition is treated as a single test. We assume that it contains at least one directory to be used as the test class name. If there is more than one level of nested directories, all except the last one will be used as the package name. So with the following structure

src/test/resources/contracts/myservice/shouldCreateUser.groovy
src/test/resources/contracts/myservice/shouldReturnUser.groovy

Spring Cloud Contract Verifier will create a test class defaultBasePackage.MyService with two methods:

  • shouldCreateUser()

  • shouldReturnUser()

Run plugin

The plugin goal generateTests is bound to the generate-test-sources phase. If you want it to be part of your build process, you have nothing more to do. If you just want to generate the tests, invoke the generateTests goal.
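
For example (assuming the plugin’s default spring-cloud-contract goal prefix):

./mvnw spring-cloud-contract:generateTests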

Configure plugin

To change the default configuration, just add a configuration section to the plugin definition or the execution definition:

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>convert</goal>
                <goal>generateStubs</goal>
                <goal>generateTests</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <basePackageForTests>org.springframework.cloud.verifier.twitter.place</basePackageForTests>
        <baseClassForTests>org.springframework.cloud.verifier.twitter.place.BaseMockMvcSpec</baseClassForTests>
    </configuration>
</plugin>
Important configuration options
  • testMode - defines mode for acceptance tests. By default MockMvc which is based on Spring’s MockMvc. It can also be changed to JaxRsClient or to Explicit for real HTTP calls.

  • basePackageForTests - specifies base package for all generated tests. By default set to org.springframework.cloud.verifier.tests.

  • ruleClassForTests - specifies Rule which should be added to generated test classes.

  • baseClassForTests - base class for generated tests. By default spock.lang.Specification if using Spock tests.

  • contractsDir - directory containing contracts written using the GroovyDSL. By default /src/test/resources/contracts.

  • testFramework - the target test framework to be used; currently Spock and JUnit are supported with JUnit being the default framework

  • packageWithBaseClasses - instead of providing a fixed value for the base class, you can provide a package where all the base classes lie. The convention is such that if you have a contract under src/test/resources/contract/foo/bar/baz/ and set this property to com.example.base, then we will assume that there is a BarBazBase class under the com.example.base package. Takes precedence over baseClassForTests

  • baseClassMappings - a list of base class mappings where you provide a contractPackageRegex, which is checked against the package in which the contract lies, and a baseClassFQN, which maps to the fully qualified name of the base class for the matched contract. If you have a contract under src/test/resources/contract/foo/bar/baz/ and map the pattern .* to com.example.base.BaseClass, then the test classes generated from these contracts will extend com.example.base.BaseClass. Takes precedence over packageWithBaseClasses and baseClassForTests.

If you want to download your contract definitions from a Maven repository, you can use:

  • contractsRepositoryUrl - URL of the repository containing the artifacts with contracts; if not provided, the current Maven repositories are used

  • contractDependency - the contract dependency that contains all the packaged contracts

  • contractsPath - path to the concrete contracts in the JAR with the packaged contracts. Defaults to groupid/artifactid, where the groupid is slash-separated.

  • contractsWorkOffline - whether the dependencies should be downloaded or only the local Maven repository should be reused

For complete information, take a look at the Plugin Documentation.

Single base class for all tests

When using Spring Cloud Contract Verifier in the default MockMvc mode, you need to create a base specification for all the generated acceptance tests. In this class you need to point to the endpoint which should be verified.

package org.mycompany.tests

import org.mycompany.ExampleSpringController
import com.jayway.restassured.module.mockmvc.RestAssuredMockMvc
import spock.lang.Specification

class MvcSpec extends Specification {
  def setup() {
   RestAssuredMockMvc.standaloneSetup(new ExampleSpringController())
  }
}

When using the Explicit mode, you can use the base class to initialize the whole tested application, similarly to regular integration tests. When using the JAXRSCLIENT mode, this base class should also contain a protected WebTarget webTarget field; right now the only option to test the JAX-RS API is to start a web server.

Different base classes for contracts

If your base classes differ between contracts you can tell the Spring Cloud Contract plugin which class should get extended by the autogenerated tests. You have two options:

  • follow a convention by providing the packageWithBaseClasses

  • provide explicit mapping via baseClassMappings

Convention

The convention is such that if you have a contract under e.g. src/test/resources/contract/hello/v1/ and set the packageWithBaseClasses property to hello, then we will assume that there is a HelloV1Base class under the hello package. In other words, we take the last two parts of the package, if they exist, and form a class with the Base suffix. This takes precedence over baseClassForTests. Example of usage:

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<configuration>
		<packageWithBaseClasses>hello</packageWithBaseClasses>
	</configuration>
</plugin>

Mapping

You can manually map a regular expression of the contract’s package to the fully qualified name of the base class for the matched contract. You have to provide a baseClassMappings list of baseClassMapping entries, each of which takes a contractPackageRegex to baseClassFQN mapping. Let’s take a look at the following example:

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<configuration>
		<baseClassForTests>com.example.FooBase</baseClassForTests>
		<baseClassMappings>
			<baseClassMapping>
				<contractPackageRegex>.*com.*</contractPackageRegex>
				<baseClassFQN>com.example.TestBase</baseClassFQN>
			</baseClassMapping>
		</baseClassMappings>
	</configuration>
</plugin>

Let’s assume that you have contracts under:

  • src/test/resources/contract/com/

  • src/test/resources/contract/foo/

By providing the baseClassForTests we have a fallback in case the mapping doesn’t succeed (you could also provide the packageWithBaseClasses as a fallback). That way the tests generated from the src/test/resources/contract/com/ contracts will extend com.example.TestBase, whereas the rest of the tests will extend com.example.FooBase.

Invoking generated tests

The Spring Cloud Contract Verifier Maven Plugin generates verification code in the /generated-test-sources/contractVerifier directory and attaches this directory to the testCompile goal.

For Groovy Spock code use:

<plugin>
	<groupId>org.codehaus.gmavenplus</groupId>
	<artifactId>gmavenplus-plugin</artifactId>
	<version>1.5</version>
	<executions>
		<execution>
			<goals>
				<goal>testCompile</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<testSources>
			<testSource>
				<directory>${project.basedir}/src/test/groovy</directory>
				<includes>
					<include>**/*.groovy</include>
				</includes>
			</testSource>
			<testSource>
				<directory>${project.build.directory}/generated-test-sources/contractVerifier</directory>
				<includes>
					<include>**/*.groovy</include>
				</includes>
			</testSource>
		</testSources>
	</configuration>
</plugin>

To ensure that the provider side is compliant with the defined contracts, you need to invoke mvn generateTests test

Spring Cloud Contract Verifier on consumer side

You can actually use Spring Cloud Contract Verifier for the consumer side as well! You can use the plugin so that it only converts the contracts and generates the stubs. To achieve that, you need to configure the Spring Cloud Contract Verifier plugin in exactly the same way as for the provider. You need to copy the contracts stored in src/test/resources/contracts and generate the WireMock JSON stubs using the mvn generateStubs command. By default the generated WireMock mappings are stored in the target/mappings directory. Your project should create an additional artifact with the stubs classifier from these generated mappings, for easy deployment to a Maven repository.

Sample configuration:

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>${verifier-plugin.version}</version>
    <executions>
        <execution>
            <goals>
                <goal>convert</goal>
                <goal>generateStubs</goal>
            </goals>
        </execution>
    </executions>
</plugin>

When present, the JSON stubs can be used in automated consumer tests.

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureStubRunner
public class LoanApplicationServiceTests {

  @Autowired
  LoanApplicationService service;

  @Test
  public void shouldSuccessfullyApplyForLoan() {
    //given:
 	LoanApplication application =
			new LoanApplication(new Client("12345678901"), 123.123);
    //when:
	LoanApplicationResult loanApplication = service.loanApplication(application);
    // then:
	assertThat(loanApplication.loanApplicationStatus).isEqualTo(LoanApplicationStatus.LOAN_APPLIED);
	assertThat(loanApplication.rejectionReason).isNull();
  }
}

Underneath, LoanApplication makes a call to the FraudDetection service. This request is handled by a WireMock server configured using the stubs generated by Spring Cloud Contract Verifier.

Scenarios

It’s possible to handle scenarios with Spring Cloud Contract Verifier. All you need to do is stick to a proper naming convention when creating your contracts. The convention requires including an order number followed by an underscore.

my_contracts_dir\
  scenario1\
    1_login.groovy
    2_showCart.groovy
    3_logout.groovy

Such a tree will cause Spring Cloud Contract Verifier to generate a WireMock scenario with the name scenario1 and three steps:

  • login, marked as Started, pointing to:

  • showCart, marked as Step1, pointing to:

  • logout, marked as Step2, which will close the scenario.

More details about WireMock scenarios can be found under http://wiremock.org/stateful-behaviour.html

Spring Cloud Contract Verifier will also generate tests with guaranteed order of execution.

Stubs and transitive dependencies

The Maven and Gradle plugins that we’ve created add the tasks that create the stubs jar for you. What can be problematic is that, when reusing the stubs, you can by mistake import all of that stub’s dependencies! When building a Maven artifact, even though you have a couple of different jars, all of them share one pom:

├── github-webhook-0.0.1.BUILD-20160903.075506-1-stubs.jar
├── github-webhook-0.0.1.BUILD-20160903.075506-1-stubs.jar.sha1
├── github-webhook-0.0.1.BUILD-20160903.075655-2-stubs.jar
├── github-webhook-0.0.1.BUILD-20160903.075655-2-stubs.jar.sha1
├── github-webhook-0.0.1.BUILD-SNAPSHOT.jar
├── github-webhook-0.0.1.BUILD-SNAPSHOT.pom
├── github-webhook-0.0.1.BUILD-SNAPSHOT-stubs.jar
├── ...
└── ...

There are three ways of working with those dependencies so as not to have any issues with transitive dependencies.

Mark all application dependencies as optional

If, in the github-webhook application, we mark all of our dependencies as optional, then when you include the github-webhook stubs in another application (or when that dependency gets downloaded by Stub Runner), since all of the dependencies are optional, they will not get downloaded.
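
For example (a sketch; the dependency shown is illustrative):

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
	<!-- optional dependencies are not pulled in transitively by consumers of the stubs jar -->
	<optional>true</optional>
</dependency>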

Create a separate artifactid for stubs

If you create a separate artifact id, then you can set it up in whatever way you wish, for example by having no dependencies at all.

Exclude dependencies on the consumer side

As a consumer, if you add the stub dependency to your classpath, you can explicitly exclude the unwanted transitive dependencies.
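
For example, with Maven (a sketch; the coordinates are illustrative and the wildcard exclusion requires Maven 3.2.1+):

<dependency>
	<groupId>com.example</groupId>
	<artifactId>http-server</artifactId>
	<classifier>stubs</classifier>
	<scope>test</scope>
	<exclusions>
		<!-- keep only the stub mappings and drop all transitive dependencies -->
		<exclusion>
			<groupId>*</groupId>
			<artifactId>*</artifactId>
		</exclusion>
	</exclusions>
</dependency>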

Spring Cloud Contract Verifier Messaging

Spring Cloud Contract Verifier allows you to verify applications that use messaging as a means of communication. All of our integrations work with Spring, but you can also create your own integration and use it.

Integrations

You can use one of the four integration configurations:

  • Apache Camel

  • Spring Integration

  • Spring Cloud Stream

  • Spring AMQP

Since we’re using Spring Boot, if you have added one of the aforementioned libraries to the classpath, then all the messaging configuration will be set up automatically.

Important
Remember to put @AutoConfigureMessageVerifier on the base class of your generated tests. Otherwise the messaging part of Spring Cloud Contract Verifier will not work.

Manual Integration Testing

The main interface used by the tests is the org.springframework.cloud.contract.verifier.messaging.MessageVerifier. It defines how to send and receive messages. You can create your own implementation to achieve the same goal.

In a test you can inject a ContractVerifierMessageExchange to send and receive messages that follow the contract. Then add @AutoConfigureMessageVerifier to your test, e.g.

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMessageVerifier
public static class MessagingContractTests {

  @Autowired
  private MessageVerifier verifier;
  ...
}
Note
If your tests require stubs as well, then @AutoConfigureStubRunner includes the messaging configuration, so you only need the one annotation.

Publisher side test generation

Having the input or outputMessage sections in your DSL will result in the creation of tests on the publisher’s side. By default JUnit tests will be created; however, it is also possible to create Spock tests.

There are 3 main scenarios that we should take into consideration:

  • Scenario 1: there is no input message that produces an output one. The output message is triggered by a component inside the application (e.g. scheduler)

  • Scenario 2: the input message triggers an output message

  • Scenario 3: the input message is consumed and there is no output message

Scenario 1 (no input message)

For the given contract:

def contractDsl = Contract.make {
	label 'some_label'
	input {
		triggeredBy('bookReturnedTriggered()')
	}
	outputMessage {
		sentTo('activemq:output')
		body('''{ "bookName" : "foo" }''')
		headers {
			header('BOOK-NAME', 'foo')
		}
	}
}

The following JUnit test will be created:

 // when:
  bookReturnedTriggered();

 // then:
  ContractVerifierMessage response = contractVerifierMessaging.receive("activemq:output");
  assertThat(response).isNotNull();
  assertThat(response.getHeader("BOOK-NAME")).isEqualTo("foo");
 // and:
  DocumentContext parsedJson = JsonPath.parse(contractVerifierObjectMapper.writeValueAsString(response.getPayload()));
  assertThatJson(parsedJson).field("bookName").isEqualTo("foo");

And the following Spock test would be created:

 when:
  bookReturnedTriggered()

 then:
  ContractVerifierMessage response = contractVerifierMessaging.receive('activemq:output')
  assert response != null
  response.getHeader('BOOK-NAME') == 'foo'
 and:
  DocumentContext parsedJson = JsonPath.parse(contractVerifierObjectMapper.writeValueAsString(response.payload))
  assertThatJson(parsedJson).field("bookName").isEqualTo("foo")
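In the generated tests above, bookReturnedTriggered() must be implemented in the base class of the generated tests. A minimal sketch could look like this (BookService and its returnBook method are hypothetical application code that publishes the message to activemq:output):

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMessageVerifier
public abstract class BookReturnedBase {

	// hypothetical application service that publishes the event
	@Autowired BookService bookService;

	public void bookReturnedTriggered() {
		// triggers the application logic that sends { "bookName" : "foo" }
		this.bookService.returnBook("foo");
	}
}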
Scenario 2 (output triggered by input)

For the given contract:

def contractDsl = Contract.make {
	label 'some_label'
	input {
		messageFrom('jms:input')
		messageBody([
				bookName: 'foo'
		])
		messageHeaders {
			header('sample', 'header')
		}
	}
	outputMessage {
		sentTo('jms:output')
		body([
				bookName: 'foo'
		])
		headers {
			header('BOOK-NAME', 'foo')
		}
	}
}

The following JUnit test will be created:

// given:
 ContractVerifierMessage inputMessage = contractVerifierMessaging.create(
  "{\"bookName\":\"foo\"}"
, headers()
  .header("sample", "header"));

// when:
 contractVerifierMessaging.send(inputMessage, "jms:input");

// then:
 ContractVerifierMessage response = contractVerifierMessaging.receive("jms:output");
 assertThat(response).isNotNull();
 assertThat(response.getHeader("BOOK-NAME")).isEqualTo("foo");
// and:
 DocumentContext parsedJson = JsonPath.parse(contractVerifierObjectMapper.writeValueAsString(response.getPayload()));
 assertThatJson(parsedJson).field("bookName").isEqualTo("foo");

And the following Spock test would be created:

"""\
given:
   ContractVerifierMessage inputMessage = contractVerifierMessaging.create(
    '''{"bookName":"foo"}''',
    ['sample': 'header']
  )

when:
   contractVerifierMessaging.send(inputMessage, 'jms:input')

then:
   ContractVerifierMessage response = contractVerifierMessaging.receive('jms:output')
   assert response !- null
   response.getHeader('BOOK-NAME')  == 'foo'
and:
   DocumentContext parsedJson = JsonPath.parse(contractVerifierObjectMapper.writeValueAsString(response.payload))
   assertThatJson(parsedJson).field("bookName").isEqualTo("foo")
"""
Scenario 3 (no output message)

For the given contract:

def contractDsl = Contract.make {
	label 'some_label'
	input {
		messageFrom('jms:delete')
		messageBody([
				bookName: 'foo'
		])
		messageHeaders {
			header('sample', 'header')
		}
		assertThat('bookWasDeleted()')
	}
}

The following JUnit test will be created:

// given:
 ContractVerifierMessage inputMessage = contractVerifierMessaging.create(
	"{\"bookName\":\"foo\"}"
, headers()
	.header("sample", "header"));

// when:
 contractVerifierMessaging.send(inputMessage, "jms:delete");

// then:
 bookWasDeleted();

And the following Spock test would be created:

given:
	 ContractVerifierMessage inputMessage = contractVerifierMessaging.create(
		'''{"bookName":"foo"}''',
		['sample': 'header']
	)

when:
	 contractVerifierMessaging.send(inputMessage, 'jms:delete')

then:
	 noExceptionThrown()
	 bookWasDeleted()

Consumer Stub Side generation

Unlike the HTTP part, in messaging we need to publish the Groovy DSL inside the JAR with the stubs. It is then parsed on the consumer side and the proper stubbed routes are created.

For more information please consult the Stub Runner Messaging sections.

Maven
<dependencies>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-starter-stream-rabbit</artifactId>
	</dependency>

	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
		<scope>test</scope>
	</dependency>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-stream-test-support</artifactId>
		<scope>test</scope>
	</dependency>
</dependencies>

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Camden.BUILD-SNAPSHOT</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>
Gradle
ext {
	contractsDir = file("mappings")
	stubsOutputDirRoot = file("${project.buildDir}/production/${project.name}-stubs/")
}

// Automatically added by plugin:
// copyContracts - copies contracts to the output folder from which JAR will be created
// verifierStubsJar - JAR with a provided stub suffix
// the presented publication is also added by the plugin but you can modify it as you wish

publishing {
	publications {
		stubs(MavenPublication) {
			artifactId "${project.name}-stubs"
			artifact verifierStubsJar
		}
	}
}

Spring Cloud Contract Stub Runner

One of the issues that you could have encountered while using Spring Cloud Contract Verifier was how to pass the generated WireMock JSON stubs from the server side to the client side (or to various clients). The same problem arises with the client-side generation for messaging.

Copying the JSON files / setting the client side for messaging manually is out of the question.

That's why we introduce Spring Cloud Contract Stub Runner, which can download and run the stubs automatically for you.

Snapshot versions

Add the additional snapshot repository to your pom.xml or build.gradle to use snapshot versions, which are automatically uploaded after every successful build:

Maven
<repositories>
	<repository>
		<id>spring-snapshots</id>
		<name>Spring Snapshots</name>
		<url>https://repo.spring.io/snapshot</url>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</repository>
	<repository>
		<id>spring-milestones</id>
		<name>Spring Milestones</name>
		<url>https://repo.spring.io/milestone</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</repository>
	<repository>
		<id>spring-releases</id>
		<name>Spring Releases</name>
		<url>https://repo.spring.io/release</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</repository>
</repositories>
<pluginRepositories>
	<pluginRepository>
		<id>spring-snapshots</id>
		<name>Spring Snapshots</name>
		<url>https://repo.spring.io/snapshot</url>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</pluginRepository>
	<pluginRepository>
		<id>spring-milestones</id>
		<name>Spring Milestones</name>
		<url>https://repo.spring.io/milestone</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</pluginRepository>
	<pluginRepository>
		<id>spring-releases</id>
		<name>Spring Releases</name>
		<url>https://repo.spring.io/release</url>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</pluginRepository>
</pluginRepositories>
Gradle
buildscript {
	repositories {
		mavenCentral()
		mavenLocal()
		maven { url "http://repo.spring.io/snapshot" }
		maven { url "http://repo.spring.io/milestone" }
		maven { url "http://repo.spring.io/release" }
	}
}

Publishing stubs as JARs

The easiest approach would be to centralize the way stubs are kept, for example by keeping them as JARs in a Maven repository.

Tip
For both Maven and Gradle, the setup works out of the box. However, you can customize it if you want to.
Maven
<!-- First disable the default jar setup in the properties section-->
<!-- we don't want the verifier to do a jar for us -->
<spring.cloud.contract.verifier.skip>true</spring.cloud.contract.verifier.skip>

<!-- Next add the assembly plugin to your build -->
<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-assembly-plugin</artifactId>
	<executions>
		<execution>
			<id>stub</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>single</goal>
			</goals>
			<inherited>false</inherited>
			<configuration>
				<attach>true</attach>
				<descriptor>${project.basedir}/src/assembly/stub.xml</descriptor>
			</configuration>
		</execution>
	</executions>
</plugin>

<!-- Finally set up your assembly. Below you can find the contents of src/assembly/stub.xml -->
<assembly
	xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">
	<id>stubs</id>
	<formats>
		<format>jar</format>
	</formats>
	<includeBaseDirectory>false</includeBaseDirectory>
	<fileSets>
		<fileSet>
			<directory>src/main/java</directory>
			<outputDirectory>/</outputDirectory>
			<includes>
				<include>**/com/example/model/*.*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>${project.build.directory}/classes</directory>
			<outputDirectory>/</outputDirectory>
			<includes>
				<include>**/com/example/model/*.*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>${project.build.directory}/snippets/stubs</directory>
			<outputDirectory>META-INF/${project.groupId}/${project.artifactId}/${project.version}/mappings</outputDirectory>
			<includes>
				<include>**/*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>${project.basedir}/src/test/resources/contracts</directory>
			<outputDirectory>META-INF/${project.groupId}/${project.artifactId}/${project.version}/contracts</outputDirectory>
			<includes>
				<include>**/*.groovy</include>
			</includes>
		</fileSet>
	</fileSets>
</assembly>
Gradle
ext {
	contractsDir = file("mappings")
	stubsOutputDirRoot = file("${project.buildDir}/production/${project.name}-stubs/")
}

// Automatically added by plugin:
// copyContracts - copies contracts to the output folder from which JAR will be created
// verifierStubsJar - JAR with a provided stub suffix
// the presented publication is also added by the plugin but you can modify it as you wish

publishing {
	publications {
		stubs(MavenPublication) {
			artifactId "${project.name}-stubs"
			artifact verifierStubsJar
		}
	}
}

Modules

Stub Runner Core

Runs stubs for service collaborators. Treating stubs as contracts of services allows you to use Stub Runner as an implementation of Consumer-Driven Contracts.

Stub Runner allows you to automatically download the stubs of the provided dependencies, start WireMock servers for them and feed them with proper stub definitions. For messaging, special stub routes are defined.

Running stubs

Running using main app

You can pass the following options to the main class:

-maxp (--maxPort) N            : Maximum port value to be assigned to the
                                 Wiremock instance. Defaults to 15000
                                 (default: 15000)
-minp (--minPort) N            : Minimal port value to be assigned to the
                                 Wiremock instance. Defaults to 10000
                                 (default: 10000)
-s (--stubs) VAL               : Comma separated list of Ivy representation of
                                 jars with stubs. Eg. groupid:artifactid1,
                                 groupid2:artifactid2:version:classifier
-sr (--stubRepositoryRoot) VAL : Location of a Jar containing server where you
                                 keep your stubs (e.g. http://nexus.net/content
                                 /repositories/repository)
-ss (--stubsSuffix) VAL        : Suffix for the jar containing stubs (e.g.
                                 'stubs' if the stub jar would have a 'stubs'
                                 classifier for stubs: foobar-stubs ).
                                 Defaults to 'stubs' (default: stubs)
-wo (--workOffline)            : Switch to work offline. Defaults to 'false'
                                 (default: false)
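For example, an invocation could look like this (the JAR name, repository URL, and stub coordinates are illustrative):

java -jar stub-runner.jar \
    -sr http://nexus.net/content/repositories/repository \
    -s "com.example:service1,com.example:service2:1.0.0:stubs"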
HTTP Stubs

Stubs are defined in JSON documents, whose syntax is defined in the WireMock documentation.

Example:

{
    "request": {
        "method": "GET",
        "url": "/ping"
    },
    "response": {
        "status": 200,
        "body": "pong",
        "headers": {
            "Content-Type": "text/plain"
        }
    }
}
Viewing registered mappings

Every stubbed collaborator exposes a list of defined mappings under the /__admin/ endpoint.

Messaging Stubs

Depending on the provided Stub Runner dependency and the DSL, the messaging routes are automatically set up.

Stub Runner JUnit Rule

Stub Runner comes with a JUnit rule with which you can easily download and run stubs for a given group and artifact id:

@ClassRule public static StubRunnerRule rule = new StubRunnerRule()
		.repoRoot(repoRoot())
		.downloadStub("org.springframework.cloud.contract.verifier.stubs", "loanIssuance")
		.downloadStub("org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer");

After that rule is executed, Stub Runner connects to your Maven repository and, for the given list of dependencies, tries to:

  • download them

  • cache them locally

  • unzip them to a temporary folder

  • start a WireMock server for each Maven dependency on a random port from the provided range of ports, or on the provided port

  • feed the WireMock server with all JSON files that are valid WireMock definitions

Stub Runner uses the Eclipse Aether mechanism to download the Maven dependencies. Check the Aether docs for more information.

Since StubRunnerRule implements the StubFinder interface, it allows you to find the started stubs:

package org.springframework.cloud.contract.stubrunner;

import java.net.URL;
import java.util.Collection;
import java.util.Map;

import org.springframework.cloud.contract.spec.Contract;

public interface StubFinder extends StubTrigger {
	/**
	 * For the given groupId and artifactId tries to find the matching
	 * URL of the running stub.
	 *
	 * @param groupId - might be null. In that case a search only via artifactId takes place
	 * @return URL of a running stub or throws exception if not found
	 */
	URL findStubUrl(String groupId, String artifactId) throws StubNotFoundException;

	/**
	 * For the given Ivy notation {@code [groupId]:artifactId:[version]:[classifier]} tries to
	 * find the matching URL of the running stub. You can also pass only {@code artifactId}.
	 *
	 * @param ivyNotation - Ivy representation of the Maven artifact
	 * @return URL of a running stub or throws exception if not found
	 */
	URL findStubUrl(String ivyNotation) throws StubNotFoundException;

	/**
	 * Returns all running stubs
	 */
	RunningStubs findAllRunningStubs();

	/**
	 * Returns the list of Contracts
	 */
	Map<StubConfiguration, Collection<Contract>> getContracts();
}

Example of usage in Spock tests:

@ClassRule @Shared StubRunnerRule rule = new StubRunnerRule()
		.repoRoot(StubRunnerRuleSpec.getResource("/m2repo/repository").toURI().toString())
		.downloadStub("org.springframework.cloud.contract.verifier.stubs", "loanIssuance")
		.downloadStub("org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer")

def 'should start WireMock servers'() {
	expect: 'WireMocks are running'
		rule.findStubUrl('org.springframework.cloud.contract.verifier.stubs', 'loanIssuance') != null
		rule.findStubUrl('loanIssuance') != null
		rule.findStubUrl('loanIssuance') == rule.findStubUrl('org.springframework.cloud.contract.verifier.stubs', 'loanIssuance')
		rule.findStubUrl('org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer') != null
	and:
		rule.findAllRunningStubs().isPresent('loanIssuance')
		rule.findAllRunningStubs().isPresent('org.springframework.cloud.contract.verifier.stubs', 'fraudDetectionServer')
		rule.findAllRunningStubs().isPresent('org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer')
	and: 'Stubs were registered'
		"${rule.findStubUrl('loanIssuance').toString()}/name".toURL().text == 'loanIssuance'
		"${rule.findStubUrl('fraudDetectionServer').toString()}/name".toURL().text == 'fraudDetectionServer'
}

Example of usage in JUnit tests:

@Test
public void should_start_wiremock_servers() throws Exception {
	// expect: 'WireMocks are running'
		then(rule.findStubUrl("org.springframework.cloud.contract.verifier.stubs", "loanIssuance")).isNotNull();
		then(rule.findStubUrl("loanIssuance")).isNotNull();
		then(rule.findStubUrl("loanIssuance")).isEqualTo(rule.findStubUrl("org.springframework.cloud.contract.verifier.stubs", "loanIssuance"));
		then(rule.findStubUrl("org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer")).isNotNull();
	// and:
		then(rule.findAllRunningStubs().isPresent("loanIssuance")).isTrue();
		then(rule.findAllRunningStubs().isPresent("org.springframework.cloud.contract.verifier.stubs", "fraudDetectionServer")).isTrue();
		then(rule.findAllRunningStubs().isPresent("org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer")).isTrue();
	// and: 'Stubs were registered'
		then(httpGet(rule.findStubUrl("loanIssuance").toString() + "/name")).isEqualTo("loanIssuance");
		then(httpGet(rule.findStubUrl("fraudDetectionServer").toString() + "/name")).isEqualTo("fraudDetectionServer");
}

Check the Common properties for JUnit and Spring for more information on how to apply global configuration of Stub Runner.

Providing fixed ports

You can also run your stubs on fixed ports. You can do it in two different ways: one is to pass the port as part of the stub id in the properties (see the example after the fluent API snippet below), and the other is to use the fluent API of the JUnit rule.

Fluent API

When using the StubRunnerRule you can add a stub to download and then pass the port for the last downloaded stub.

@ClassRule public static StubRunnerRule rule = new StubRunnerRule()
		.repoRoot(repoRoot())
		.downloadStub("org.springframework.cloud.contract.verifier.stubs", "loanIssuance")
		.withPort(12345)
		.downloadStub("org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer:12346");

You can see that for this example the following test is valid:

then(rule.findStubUrl("loanIssuance")).isEqualTo(URI.create("http://localhost:12345").toURL());
then(rule.findStubUrl("fraudDetectionServer")).isEqualTo(URI.create("http://localhost:12346").toURL());

Stub Runner with Spring

Sets up Spring configuration of the Stub Runner project.

By providing a list of stubs inside your configuration file, Stub Runner automatically downloads the selected stubs and registers them in WireMock.

If you want to find the URL of your stubbed dependency, you can autowire the StubFinder interface and use its methods as presented below:

@ContextConfiguration(classes = Config, loader = SpringBootContextLoader)
@IntegrationTest(["stubrunner.cloud.enabled=false", "stubrunner.camel.enabled=false"])
@AutoConfigureStubRunner
@DirtiesContext
@ActiveProfiles("test")
class StubRunnerConfigurationSpec extends Specification {

	@Autowired StubFinder stubFinder

	@BeforeClass
	@AfterClass
	void setupProps() {
		System.clearProperty("stubrunner.repository.root");
		System.clearProperty("stubrunner.classifier");
	}

	def 'should start WireMock servers'() {
		expect: 'WireMocks are running'
			stubFinder.findStubUrl('org.springframework.cloud.contract.verifier.stubs', 'loanIssuance') != null
			stubFinder.findStubUrl('loanIssuance') != null
			stubFinder.findStubUrl('loanIssuance') == stubFinder.findStubUrl('org.springframework.cloud.contract.verifier.stubs', 'loanIssuance')
			stubFinder.findStubUrl('loanIssuance') == stubFinder.findStubUrl('org.springframework.cloud.contract.verifier.stubs:loanIssuance')
			stubFinder.findStubUrl('org.springframework.cloud.contract.verifier.stubs:loanIssuance:0.0.1-SNAPSHOT') == stubFinder.findStubUrl('org.springframework.cloud.contract.verifier.stubs:loanIssuance:0.0.1-SNAPSHOT:stubs')
			stubFinder.findStubUrl('org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer') != null
		and:
			stubFinder.findAllRunningStubs().isPresent('loanIssuance')
			stubFinder.findAllRunningStubs().isPresent('org.springframework.cloud.contract.verifier.stubs', 'fraudDetectionServer')
			stubFinder.findAllRunningStubs().isPresent('org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer')
		and: 'Stubs were registered'
			"${stubFinder.findStubUrl('loanIssuance').toString()}/name".toURL().text == 'loanIssuance'
			"${stubFinder.findStubUrl('fraudDetectionServer').toString()}/name".toURL().text == 'fraudDetectionServer'
	}

	def 'should throw an exception when stub is not found'() {
		when:
			stubFinder.findStubUrl('nonExistingService')
		then:
			thrown(StubNotFoundException)
		when:
			stubFinder.findStubUrl('nonExistingGroupId', 'nonExistingArtifactId')
		then:
			thrown(StubNotFoundException)
	}

	@Configuration
	@EnableAutoConfiguration
	static class Config {}
}

for the following configuration file:

stubrunner:
  repositoryRoot: classpath:m2repo/repository/
  ids:
    - org.springframework.cloud.contract.verifier.stubs:loanIssuance
    - org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer
    - org.springframework.cloud.contract.verifier.stubs:bootService

Instead of using a configuration file, you can also set the same properties inside @AutoConfigureStubRunner. Below you can find an example of achieving the same result by setting values on the annotation.

@AutoConfigureStubRunner(
		ids = ["org.springframework.cloud.contract.verifier.stubs:loanIssuance",
		"org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer",
		"org.springframework.cloud.contract.verifier.stubs:bootService"],
		repositoryRoot = "classpath:m2repo/repository/")

Stub Runner Spring Cloud

Stub Runner can integrate with Spring Cloud.

Stubbing Service Discovery

The most important feature of Stub Runner Spring Cloud is the fact that it stubs:

  • DiscoveryClient

  • Ribbon ServerList

This means that, regardless of whether you use Zookeeper, Consul, Eureka, or anything else, you don't need them in your tests. We start WireMock instances of your dependencies, and we tell your application, whether it uses Feign, a load-balanced RestTemplate, or DiscoveryClient directly, to call those stubbed servers instead of the real Service Discovery tool.

For example, this test will pass

def 'should make service discovery work'() {
	expect: 'WireMocks are running'
		"${stubFinder.findStubUrl('loanIssuance').toString()}/name".toURL().text == 'loanIssuance'
		"${stubFinder.findStubUrl('fraudDetectionServer').toString()}/name".toURL().text == 'fraudDetectionServer'
	and: 'Stubs can be reached via load-balanced service discovery'
		restTemplate.getForObject('http://loanIssuance/name', String) == 'loanIssuance'
		restTemplate.getForObject('http://someNameThatShouldMapFraudDetectionServer/name', String) == 'fraudDetectionServer'
}

for the following configuration file

spring.cloud:
  zookeeper.enabled: false
  consul.enabled: false
eureka.client.enabled: false
stubrunner:
  camel.enabled: false
  idsToServiceIds:
    ivyNotation: someValueInsideYourCode
    fraudDetectionServer: someNameThatShouldMapFraudDetectionServer

Additional Configuration

You can match the artifactId of the stub with the name of your application by using the stubrunner.idsToServiceIds: map. You can disable Stub Runner Ribbon support by setting stubrunner.cloud.ribbon.enabled to false. You can disable Stub Runner Spring Cloud support by setting stubrunner.cloud.enabled to false.

Tip
By default all service discovery is stubbed. This means that, regardless of whether you have an existing DiscoveryClient, its results are ignored. However, if you want to reuse it, just set stubrunner.cloud.delegate.enabled to true; your existing DiscoveryClient results will then be merged with the stubbed ones.

Stub Runner Boot Application

Spring Cloud Contract Verifier Stub Runner Boot is a Spring Boot application that exposes REST endpoints to trigger the messaging labels and to access started WireMock servers.

One of the use cases is to run some smoke (end-to-end) tests on a deployed application. You can read more about this in the "Microservice Deployment" article on the Too Much Coding blog.

How to use it?

Just add the following dependency:

compile "org.springframework.cloud:spring-cloud-starter-stub-runner"

Annotate a class with @EnableStubRunnerServer, build a fat-jar and you’re ready to go!
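A minimal application could look like the following sketch (the class name is illustrative):

@SpringBootApplication
@EnableStubRunnerServer
public class StubRunnerBootApplication {

	public static void main(String[] args) {
		SpringApplication.run(StubRunnerBootApplication.class, args);
	}
}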

For the properties check the Stub Runner Spring section.

Endpoints

HTTP
  • GET /stubs - returns a list of all running stubs in ivy:integer notation

  • GET /stubs/{ivy} - returns a port for the given ivy notation (when calling the endpoint ivy can also be artifactId only)

Messaging


  • GET /triggers - returns a list of all running labels in ivy : [ label1, label2 …​] notation

  • POST /triggers/{label} - executes a trigger with label

  • POST /triggers/{ivy}/{label} - executes a trigger with label for the given ivy notation (when calling the endpoint ivy can also be artifactId only)

Example

@ContextConfiguration(classes = StubRunnerBoot, loader = SpringBootContextLoader)
@IntegrationTest("spring.cloud.zookeeper.enabled=false")
@ActiveProfiles("test")
class StubRunnerBootSpec extends Specification {

	@Autowired StubRunning stubRunning

	def setup() {
		RestAssuredMockMvc.standaloneSetup(new HttpStubsController(stubRunning),
				new TriggerController(stubRunning))
	}

	def 'should return a list of running stub servers in "full ivy:port" notation'() {
		when:
			String response = RestAssuredMockMvc.get('/stubs').body.asString()
		then:
			def root = new JsonSlurper().parseText(response)
			root.'org.springframework.cloud.contract.verifier.stubs:bootService:0.0.1-SNAPSHOT:stubs' instanceof Integer
	}

	def 'should return a port on which a [#stubId] stub is running'() {
		when:
			def response = RestAssuredMockMvc.get("/stubs/${stubId}")
		then:
			response.statusCode == 200
			response.body.as(Integer) > 0
		where:
			stubId << ['org.springframework.cloud.contract.verifier.stubs:bootService:+:stubs',
					   'org.springframework.cloud.contract.verifier.stubs:bootService:0.0.1-SNAPSHOT:stubs',
					   'org.springframework.cloud.contract.verifier.stubs:bootService:+',
					   'org.springframework.cloud.contract.verifier.stubs:bootService',
					   'bootService']
	}

	def 'should return 404 when missing stub was called'() {
		when:
			def response = RestAssuredMockMvc.get("/stubs/a:b:c:d")
		then:
			response.statusCode == 404
	}

	def 'should return a list of messaging labels that can be triggered when version and classifier are passed'() {
		when:
			String response = RestAssuredMockMvc.get('/triggers').body.asString()
		then:
			def root = new JsonSlurper().parseText(response)
			root.'org.springframework.cloud.contract.verifier.stubs:bootService:0.0.1-SNAPSHOT:stubs'?.containsAll(["delete_book","return_book_1","return_book_2"])
	}

	def 'should trigger a messaging label'() {
		given:
			StubRunning stubRunning = Mock()
			RestAssuredMockMvc.standaloneSetup(new HttpStubsController(stubRunning), new TriggerController(stubRunning))
		when:
			def response = RestAssuredMockMvc.post("/triggers/delete_book")
		then:
			response.statusCode == 200
		and:
			1 * stubRunning.trigger('delete_book')
	}

	def 'should trigger a messaging label for a stub with [#stubId] ivy notation'() {
		given:
			StubRunning stubRunning = Mock()
			RestAssuredMockMvc.standaloneSetup(new HttpStubsController(stubRunning), new TriggerController(stubRunning))
		when:
			def response = RestAssuredMockMvc.post("/triggers/$stubId/delete_book")
		then:
			response.statusCode == 200
		and:
			1 * stubRunning.trigger(stubId, 'delete_book')
		where:
			stubId << ['org.springframework.cloud.contract.verifier.stubs:bootService:stubs', 'org.springframework.cloud.contract.verifier.stubs:bootService', 'bootService']
	}

	def 'should throw exception when trigger is missing'() {
		when:
			RestAssuredMockMvc.post("/triggers/missing_label")
		then:
			Exception e = thrown(Exception)
			e.message.contains("Exception occurred while trying to return [missing_label] label.")
			e.message.contains("Available labels are")
			e.message.contains("org.springframework.cloud.contract.verifier.stubs:loanIssuance:0.0.1-SNAPSHOT:stubs=[]")
			e.message.contains("org.springframework.cloud.contract.verifier.stubs:bootService:0.0.1-SNAPSHOT:stubs=")
	}

}

Stub Runner Boot with Service Discovery

One of the possibilities of using Stub Runner Boot is to use it as a feed of stubs for "smoke tests". What does that mean? Let's assume that you don't want to deploy 50 microservices to a test environment in order to check whether your application works fine. You've already executed a suite of tests during the build process, but you would also like to ensure that the packaging of your application is fine. What you can do is deploy your application to an environment, start it, and run a couple of tests on it to see whether it works. We can call those tests smoke tests, since their idea is to check only a handful of test scenarios.

The problem with this approach is that, if you're doing microservices, you're most likely using a service discovery tool. Stub Runner Boot allows you to solve this issue by starting the required stubs and registering them in a service discovery tool. Let's take a look at an example of such a setup with Eureka, assuming that Eureka is already running.

@SpringBootApplication
@EnableStubRunnerServer
@EnableEurekaClient
@AutoConfigureStubRunner
public class StubRunnerBootEurekaExample {

	public static void main(String[] args) {
		SpringApplication.run(StubRunnerBootEurekaExample.class, args);
	}

}

As you can see, we want to start a Stub Runner Boot server (@EnableStubRunnerServer), enable the Eureka client (@EnableEurekaClient), and have the Stub Runner feature turned on (@AutoConfigureStubRunner).

Now let's assume that we want to start this application so that the stubs get registered automatically. We can do it by running the app with java -jar ${SYSTEM_PROPS} stub-runner-boot-eureka-example.jar, where ${SYSTEM_PROPS} contains the following list of properties:

-Dstubrunner.repositoryRoot=http://repo.spring.io/snapshots (1)
-Dstubrunner.cloud.stubbed.discovery.enabled=false (2)
-Dstubrunner.ids=org.springframework.cloud.contract.verifier.stubs:loanIssuance,org.springframework.cloud.contract.verifier.stubs:fraudDetectionServer,org.springframework.cloud.contract.verifier.stubs:bootService (3)
-Dstubrunner.idsToServiceIds.fraudDetectionServer=someNameThatShouldMapFraudDetectionServer (4)

(1) - we tell Stub Runner where all the stubs reside
(2) - we don't want the default behaviour, in which service discovery is stubbed; instead, the stubs will be registered in the real service discovery tool
(3) - we provide a list of stubs to download
(4) - we provide a list of artifactId to serviceId mapping

That way your deployed application can send requests to the started WireMock servers via service discovery. Points 1-3 could most likely be set by default in application.yml, because they are not likely to change. That way you only need to provide the list of stubs to download whenever you start Stub Runner Boot.

Common properties for JUnit and Spring

Some of the repetitive properties can be set using system properties or (for Spring) configuration properties. Here are their names with their default values:

Property name              Default value   Description

stubrunner.minPort         10000           Minimal value of a port for a started WireMock with stubs

stubrunner.maxPort         15000           Maximal value of a port for a started WireMock with stubs

stubrunner.repositoryRoot                  Maven repo URL. If blank, the local Maven repo is used

stubrunner.classifier      stubs           Default classifier for the stub artifacts

stubrunner.workOffline     false           If true, no remote repositories are contacted to download stubs

stubrunner.ids                             Array of Ivy-notation stubs to download

Stub runner stubs ids

You can provide the stubs to download via the stubrunner.ids system property. They follow this pattern:

groupId:artifactId:version:classifier:port

version, classifier and port are optional.

  • If you don't provide the port then a random one will be picked

  • If you don't provide the classifier then the default one (stubs) will be taken

  • If you don't provide the version then + will be passed and the latest version will be downloaded

Where port means the port of the WireMock server.
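For example, each of the following would be a valid entry in stubrunner.ids (group and artifact names are illustrative):

com.example:fooService                  # latest version (+), default classifier, random port
com.example:fooService:1.2.3            # fixed version, default classifier, random port
com.example:fooService:+:stubs:8091     # latest version, 'stubs' classifier, fixed port 8091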

Stub Runner for Messaging

Stub Runner can run the published stubs in memory. It can integrate with the following frameworks out of the box:

  • Spring Integration

  • Spring Cloud Stream

  • Apache Camel

  • Spring AMQP

It also provides entry points to integrate with any other solution on the market.

Stub triggering

To trigger a message it’s enough to use the StubTrigger interface:

package org.springframework.cloud.contract.stubrunner;

import java.util.Collection;
import java.util.Map;

public interface StubTrigger {

	/**
	 * Triggers an event by a given label for a given {@code groupid:artifactid} notation. You can use only {@code artifactId} too.
	 *
	 * Feature related to messaging.
	 *
	 * @return true - if managed to run a trigger
	 */
	boolean trigger(String ivyNotation, String labelName);

	/**
	 * Triggers an event by a given label.
	 *
	 * Feature related to messaging.
	 *
	 * @return true - if managed to run a trigger
	 */
	boolean trigger(String labelName);

	/**
	 * Triggers all possible events.
	 *
	 * Feature related to messaging.
	 *
	 * @return true - if managed to run a trigger
	 */
	boolean trigger();

	/**
	 * Returns a mapping of ivy notation of a dependency to all the labels it has.
	 *
	 * Feature related to messaging.
	 */
	Map<String, Collection<String>> labels();
}

For convenience, the StubFinder interface extends StubTrigger, so it's enough to use only one of them in your tests.

StubTrigger gives you the following options to trigger a message:

Trigger by label
stubFinder.trigger('return_book_1')
Trigger by group and artifact ids
stubFinder.trigger('org.springframework.cloud.contract.verifier.stubs:camelService', 'return_book_1')
Trigger by artifact ids
stubFinder.trigger('camelService', 'return_book_1')
Trigger all messages
stubFinder.trigger()

Stub Runner Camel

Spring Cloud Contract Verifier Stub Runner’s messaging module gives you an easy way to integrate with Apache Camel. For the provided artifacts it will automatically download the stubs and register the required routes.

Adding it to the project

It's enough to have both Apache Camel and Spring Cloud Contract Stub Runner on the classpath. Remember to annotate your test class with @AutoConfigureMessageVerifier.

Examples

Stubs structure

Let us assume that we have the following Maven repository with deployed stubs for the camelService application.

└── .m2
    └── repository
        └── io
            └── codearte
                └── accurest
                    └── stubs
                        └── camelService
                            ├── 0.0.1-SNAPSHOT
                            │   ├── camelService-0.0.1-SNAPSHOT.pom
                            │   ├── camelService-0.0.1-SNAPSHOT-stubs.jar
                            │   └── maven-metadata-local.xml
                            └── maven-metadata-local.xml

And the stubs contain the following structure:

├── META-INF
│   └── MANIFEST.MF
└── repository
    ├── accurest
    │   ├── bookDeleted.groovy
    │   ├── bookReturned1.groovy
    │   └── bookReturned2.groovy
    └── mappings

Let's consider the following contracts, numbering the first one as 1:

Contract.make {
	label 'return_book_1'
	input {
		triggeredBy('bookReturnedTriggered()')
	}
	outputMessage {
		sentTo('jms:output')
		body('''{ "bookName" : "foo" }''')
		headers {
			header('BOOK-NAME', 'foo')
		}
	}
}

and number 2

Contract.make {
	label 'return_book_2'
	input {
		messageFrom('jms:input')
		messageBody([
				bookName: 'foo'
		])
		messageHeaders {
			header('sample', 'header')
		}
	}
	outputMessage {
		sentTo('jms:output')
		body([
				bookName: 'foo'
		])
		headers {
			header('BOOK-NAME', 'foo')
		}
	}
}
Scenario 1 (no input message)

So as to trigger a message via the return_book_1 label, we'll use the StubTrigger interface as follows:

stubFinder.trigger('return_book_1')

Next we’ll want to listen to the output of the message sent to jms:output

Exchange receivedMessage = camelContext.createConsumerTemplate().receive('jms:output', 5000)

And the received message would pass the following assertions

receivedMessage != null
assertThatBodyContainsBookNameFoo(receivedMessage.in.body)
receivedMessage.in.headers.get('BOOK-NAME') == 'foo'
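The assertThatBodyContainsBookNameFoo method used above is a helper you define yourself. A possible (hypothetical) implementation, assuming the body is a JSON String:

void assertThatBodyContainsBookNameFoo(Object body) {
	// parse the JSON payload with Groovy's built-in JsonSlurper and check the book name
	def parsedJson = new groovy.json.JsonSlurper().parseText(body as String)
	assert parsedJson.bookName == 'foo'
}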
Scenario 2 (output triggered by input)

Since the route is set for you, it's enough to just send a message to the jms:input destination.

camelContext.createProducerTemplate().sendBodyAndHeaders('jms:input', new BookReturned('foo'), [sample: 'header'])

Next we’ll want to listen to the output of the message sent to jms:output

Exchange receivedMessage = camelContext.createConsumerTemplate().receive('jms:output', 5000)

And the received message would pass the following assertions

receivedMessage != null
assertThatBodyContainsBookNameFoo(receivedMessage.in.body)
receivedMessage.in.headers.get('BOOK-NAME') == 'foo'
Scenario 3 (input with no output)

Since the route is set for you, it's enough to just send a message to the jms:delete destination.

camelContext.createProducerTemplate().sendBodyAndHeaders('jms:delete', new BookReturned('foo'), [sample: 'header'])

Stub Runner Integration

Spring Cloud Contract Verifier Stub Runner’s messaging module gives you an easy way to integrate with Spring Integration. For the provided artifacts it will automatically download the stubs and register the required routes.

Adding it to the project

It's enough to have both Spring Integration and Spring Cloud Contract Stub Runner on the classpath. Remember to annotate your test class with @AutoConfigureMessageVerifier.

Examples

Stubs structure

Let us assume that we have the following Maven repository with deployed stubs for the integrationService application.

└── .m2
    └── repository
        └── io
            └── codearte
                └── accurest
                    └── stubs
                        └── integrationService
                            ├── 0.0.1-SNAPSHOT
                            │   ├── integrationService-0.0.1-SNAPSHOT.pom
                            │   ├── integrationService-0.0.1-SNAPSHOT-stubs.jar
                            │   └── maven-metadata-local.xml
                            └── maven-metadata-local.xml

And the stubs contain the following structure:

├── META-INF
│   └── MANIFEST.MF
└── repository
    ├── accurest
    │   ├── bookDeleted.groovy
    │   ├── bookReturned1.groovy
    │   └── bookReturned2.groovy
    └── mappings

Let's consider the following contracts, numbering the first one as 1:

Contract.make {
	label 'return_book_1'
	input {
		triggeredBy('bookReturnedTriggered()')
	}
	outputMessage {
		sentTo('output')
		body('''{ "bookName" : "foo" }''')
		headers {
			header('BOOK-NAME', 'foo')
		}
	}
}

and number 2

Contract.make {
	label 'return_book_2'
	input {
		messageFrom('input')
		messageBody([
				bookName: 'foo'
		])
		messageHeaders {
			header('sample', 'header')
		}
	}
	outputMessage {
		sentTo('output')
		body([
				bookName: 'foo'
		])
		headers {
			header('BOOK-NAME', 'foo')
		}
	}
}

and the following Spring Integration Route:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~  Copyright 2013-2016 the original author or authors.
  ~
  ~  Licensed under the Apache License, Version 2.0 (the "License");
  ~  you may not use this file except in compliance with the License.
  ~  You may obtain a copy of the License at
  ~
  ~       http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~  Unless required by applicable law or agreed to in writing, software
  ~  distributed under the License is distributed on an "AS IS" BASIS,
  ~  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~  See the License for the specific language governing permissions and
  ~  limitations under the License.
  -->

<beans:beans xmlns="http://www.springframework.org/schema/integration"
			 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
			 xmlns:beans="http://www.springframework.org/schema/beans"
			 xsi:schemaLocation="http://www.springframework.org/schema/beans
			http://www.springframework.org/schema/beans/spring-beans.xsd
			http://www.springframework.org/schema/integration
			http://www.springframework.org/schema/integration/spring-integration.xsd">


	<!-- REQUIRED FOR TESTING -->
	<bridge input-channel="output"
			output-channel="outputTest"/>

	<channel id="outputTest">
		<queue/>
	</channel>

</beans:beans>
Scenario 1 (no input message)

So as to trigger a message via the return_book_1 label, we'll use the StubTrigger interface as follows:

stubFinder.trigger('return_book_1')

Next we’ll want to listen to the output of the message sent to output

Message<?> receivedMessage = messaging.receive('outputTest')

And the received message would pass the following assertions

receivedMessage != null
assertJsons(receivedMessage.payload)
receivedMessage.headers.get('BOOK-NAME') == 'foo'
Scenario 2 (output triggered by input)

Since the route is set for you, it's enough to just send a message to the input destination.

messaging.send(new BookReturned('foo'), [sample: 'header'], 'input')

Next we’ll want to listen to the output of the message sent to output

Message<?> receivedMessage = messaging.receive('outputTest')

And the received message would pass the following assertions

receivedMessage != null
assertJsons(receivedMessage.payload)
receivedMessage.headers.get('BOOK-NAME') == 'foo'
Scenario 3 (input with no output)

Since the route is set for you, it's enough to just send a message to the delete destination.

messaging.send(new BookReturned('foo'), [sample: 'header'], 'delete')

Stub Runner Stream

Spring Cloud Contract Verifier Stub Runner's messaging module gives you an easy way to integrate with Spring Cloud Stream. For the provided artifacts it will automatically download the stubs and register the required routes.

Warning
In Stub Runner’s integration with Stream the messageFrom or sentTo Strings are resolved first as a destination of a channel, and then if there is no such destination it’s resolved as a channel name.

Adding it to the project

It's enough to have both Spring Cloud Stream and Spring Cloud Contract Stub Runner on the classpath. Remember to annotate your test class with @AutoConfigureMessageVerifier.

Examples

Stubs structure

Let us assume that we have the following Maven repository with deployed stubs for the streamService application.

└── .m2
    └── repository
        └── io
            └── codearte
                └── accurest
                    └── stubs
                        └── streamService
                            ├── 0.0.1-SNAPSHOT
                            │   ├── streamService-0.0.1-SNAPSHOT.pom
                            │   ├── streamService-0.0.1-SNAPSHOT-stubs.jar
                            │   └── maven-metadata-local.xml
                            └── maven-metadata-local.xml

And the stubs contain the following structure:

├── META-INF
│   └── MANIFEST.MF
└── repository
    ├── accurest
    │   ├── bookDeleted.groovy
    │   ├── bookReturned1.groovy
    │   └── bookReturned2.groovy
    └── mappings

Let's consider the following contracts, numbering the first one as 1:

Contract.make {
	label 'return_book_1'
	input { triggeredBy('bookReturnedTriggered()') }
	outputMessage {
		sentTo('returnBook')
		body('''{ "bookName" : "foo" }''')
		headers { header('BOOK-NAME', 'foo') }
	}
}

and number 2

Contract.make {
	label 'return_book_2'
	input {
		messageFrom('bookStorage')
		messageBody([
			bookName: 'foo'
		])
		messageHeaders { header('sample', 'header') }
	}
	outputMessage {
		sentTo('returnBook')
		body([
			bookName: 'foo'
		])
		headers { header('BOOK-NAME', 'foo') }
	}
}

and the following Spring configuration:

stubrunner.repositoryRoot: classpath:m2repo/repository/
stubrunner.ids: org.springframework.cloud.contract.verifier.stubs:streamService:0.0.1-SNAPSHOT:stubs

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: returnBook
        input:
          destination: bookStorage

server:
  port: 0
Scenario 1 (no input message)

So as to trigger a message via the return_book_1 label we’ll use the StubTrigger interface as follows

stubFinder.trigger('return_book_1')

Next we’ll want to listen to the output of the message sent to a channel whose destination is returnBook

Message<?> receivedMessage = messaging.receive('returnBook')

And the received message would pass the following assertions

receivedMessage != null
assertJsons(receivedMessage.payload)
receivedMessage.headers.get('BOOK-NAME') == 'foo'
Scenario 2 (output triggered by input)

Since the route is set for you it’s enough to just send a message to the bookStorage destination.

messaging.send(new BookReturned('foo'), [sample: 'header'], 'bookStorage')

Next we’ll want to listen to the output of the message sent to returnBook

Message<?> receivedMessage = messaging.receive('returnBook')

And the received message would pass the following assertions

receivedMessage != null
assertJsons(receivedMessage.payload)
receivedMessage.headers.get('BOOK-NAME') == 'foo'
Scenario 3 (input with no output)

Since the route is set for you, it's enough to just send a message to the delete destination.

messaging.send(new BookReturned('foo'), [sample: 'header'], 'delete')

Stub Runner Spring AMQP

Spring Cloud Contract Verifier Stub Runner’s messaging module provides an easy way to integrate with Spring AMQP’s Rabbit Template. For the provided artifacts it will automatically download the stubs and register the required routes.

The integration tries to work standalone, that is, without interaction with a running RabbitMQ message broker. It expects a RabbitTemplate on the application context and uses it as a Spring Boot test @SpyBean. Thus it can use the Mockito spy functionality to verify and introspect messages sent by the application.

On the message consumer side, it considers all @RabbitListener annotated endpoints as well as all SimpleMessageListenerContainer instances on the application context.

As messages are usually sent to exchanges in AMQP, the message contract contains the exchange name as the destination. Message listeners on the other side are bound to queues, and bindings connect an exchange to a queue. When a message contract is triggered, the Spring AMQP Stub Runner integration looks for bindings on the application context that match the contract's exchange, collects the queues from those bindings, and tries to find message listeners bound to these queues. The message is triggered for all matching message listeners.

Adding it to the project

It’s enough to have both Spring AMQP and Spring Cloud Contract Stub Runner on the classpath and set the property stubrunner.amqp.enabled=true. Remember to annotate your test class with @AutoConfigureMessageVerifier.

Examples

Stubs structure

Let us assume that we have the following Maven repository with deployed stubs for the spring-cloud-contract-amqp-test application.

└── .m2
    └── repository
        └── com
            └── example
                └── spring-cloud-contract-amqp-test
                    ├── 0.4.0-SNAPSHOT
                    │   ├── spring-cloud-contract-amqp-test-0.4.0-SNAPSHOT.pom
                    │   ├── spring-cloud-contract-amqp-test-0.4.0-SNAPSHOT-stubs.jar
                    │   └── maven-metadata-local.xml
                    └── maven-metadata-local.xml

And the stubs contain the following structure:

├── META-INF
│   └── MANIFEST.MF
└── contracts
    └── shouldProduceValidPersonData.groovy

Let’s consider the following contract:

Contract.make {
    // Human readable description
    description 'Should produce valid person data'
    // Label by means of which the output message can be triggered
    label 'contract-test.person.created.event'
    // input to the contract
    input {
        // the contract will be triggered by a method
        triggeredBy('createPerson()')
    }
    // output message of the contract
    outputMessage {
        // destination to which the output message will be sent
        sentTo 'contract-test.exchange'
        headers {
            header('contentType': 'application/json')
            header('__TypeId__': 'org.springframework.cloud.contract.stubrunner.messaging.amqp.Person')
        }
        // the body of the output message
        body ([
                id: $(consumer(9), producer(regex("[0-9]+"))),
                name: "me"
        ])
    }
}

and the following Spring configuration:

stubrunner:
  repositoryRoot: classpath:m2repo/repository/
  ids: org.springframework.cloud.contract.verifier.stubs.amqp:spring-cloud-contract-amqp-test:0.4.0-SNAPSHOT:stubs
  amqp:
    enabled: true
server:
  port: 0
Triggering the message

So to trigger a message using the contract above we’ll use the StubTrigger interface as follows.

stubTrigger.trigger("contract-test.person.created.event")

The message has the destination contract-test.exchange so the Spring AMQP stub runner integration looks for bindings related to this exchange.

@Bean
public Binding binding() {
	return BindingBuilder.bind(new Queue("test.queue")).to(new DirectExchange("contract-test.exchange")).with("#");
}

The binding definition binds the queue test.queue. So the following listener definition is a match and is invoked with the contract message.

@Bean
public SimpleMessageListenerContainer simpleMessageListenerContainer(ConnectionFactory connectionFactory,
																		MessageListenerAdapter listenerAdapter) {
	SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
	container.setConnectionFactory(connectionFactory);
	container.setQueueNames("test.queue");
	container.setMessageListener(listenerAdapter);

	return container;
}

Also, the following annotated listener represents a match and would be invoked.

@RabbitListener(bindings = @QueueBinding(
		value = @Queue(value = "test.queue"),
		exchange = @Exchange(value = "contract-test.exchange", ignoreDeclarationExceptions = "true")))
public void handlePerson(Person person) {
	this.person = person;
}
Note
The message is directly handed over to the onMessage method of the MessageListener associated with the matching SimpleMessageListenerContainer.
Spring AMQP Test Configuration

To prevent Spring AMQP from trying to connect to a running broker during our tests, we configure a mock ConnectionFactory.

To disable the mocked ConnectionFactory, set the property stubrunner.amqp.mockConnection=false:

stubrunner:
  amqp:
    mockConnection: false

Contract DSL

Important
Remember that inside the contract file you have to use the fully qualified name of the Contract class together with the make static method, i.e. org.springframework.cloud.contract.spec.Contract.make { …​ }. You can also provide an import for the Contract class (import org.springframework.cloud.contract.spec.Contract) and then call Contract.make { …​ }.
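For example, a minimal contract file using the explicit import could look like this (the endpoint is illustrative):

import org.springframework.cloud.contract.spec.Contract

Contract.make {
	request {
		method 'GET'
		urlPath('/ping')
	}
	response {
		status 200
	}
}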

The Contract DSL is written in Groovy, but don't be alarmed if you haven't used Groovy before. Knowledge of the language is not really needed, as our DSL uses only a tiny subset of it (namely literals, method calls, and closures). What's more, the DSL is designed to be programmer-readable without any knowledge of the DSL itself, as it's statically typed.

The Contract class is present in the spring-cloud-contract-spec module of the Spring Cloud Contract Verifier repository.

Let's look at a full example of a contract definition:

org.springframework.cloud.contract.spec.Contract.make {
	request {
		method 'PUT'
		url '/api/12'
		headers {
			header 'Content-Type': 'application/vnd.org.springframework.cloud.contract.verifier.twitter-places-analyzer.v1+json'
		}
		body '''\
		[{
			"created_at": "Sat Jul 26 09:38:57 +0000 2014",
			"id": 492967299297845248,
			"id_str": "492967299297845248",
			"text": "Gonna see you at Warsaw",
			"place":
			{
				"attributes":{},
				"bounding_box":
				{
					"coordinates":
						[[
							[-77.119759,38.791645],
							[-76.909393,38.791645],
							[-76.909393,38.995548],
							[-77.119759,38.995548]
						]],
					"type":"Polygon"
				},
				"country":"United States",
				"country_code":"US",
				"full_name":"Washington, DC",
				"id":"01fbe706f872cb32",
				"name":"Washington",
				"place_type":"city",
				"url": "http://api.twitter.com/1/geo/id/01fbe706f872cb32.json"
			}
		}]
	'''
	}
	response {
		status 200
	}
}

Not all features of the DSL are used in the example above. If you didn't find what you are looking for, please check the next paragraphs on this page.

You can easily compile contracts to WireMock stub mappings using the standalone Maven command: mvn org.springframework.cloud:spring-cloud-contract-maven-plugin:convert.
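For the full example above, the generated WireMock mapping would look roughly like this (abbreviated: the body matcher is omitted, and the exact output depends on the plugin version):

{
  "request" : {
    "method" : "PUT",
    "url" : "/api/12",
    "headers" : {
      "Content-Type" : {
        "equalTo" : "application/vnd.org.springframework.cloud.contract.verifier.twitter-places-analyzer.v1+json"
      }
    }
  },
  "response" : {
    "status" : 200
  }
}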

Limitations

Warning
Spring Cloud Contract Verifier doesn’t support XML properly. Please use JSON or help us implement this feature.
Warning
Spring Cloud Contract Verifier supports equality checks on text responses. Regular expressions are not yet available.
Warning
The support for verifying the size of JSON arrays is experimental. If you want to turn it on, set the system property spring.cloud.contract.verifier.assert.size to true. By default this feature is set to false. You can also set the assertJsonSize property in the plugin configuration.
Warning
Due to the fact that a JSON structure can have any form, it's sometimes impossible to parse it properly when the value(consumer(…​), producer(…​)) notation is used inside a GString. That's why we highly recommend using the Groovy map notation.

Common Top-Level elements

Description

You can add a description to your contract. A description is nothing more than arbitrary text. Example:

		org.springframework.cloud.contract.spec.Contract.make {
			description('''
given:
	An input
when:
	Sth happens
then:
	Output
''')
		}
Ignoring contracts

If you want to ignore a contract, you can either set the value of ignored contracts in the plugin configuration, or set the ignored property on the contract itself:

org.springframework.cloud.contract.spec.Contract.make {
	ignored()
}

HTTP Top-Level Elements

The following methods can be called in the top-level closure of a contract definition. request and response are mandatory; priority is optional.

org.springframework.cloud.contract.spec.Contract.make {
	// Definition of HTTP request part of the contract
	// (this can be a valid request or invalid depending
	// on type of contract being specified).
	request {
		//...
	}

	// Definition of HTTP response part of the contract
	// (a service implementing this contract should respond
	// with following response after receiving request
	// specified in "request" part above).
	response {
		//...
	}

	// Contract priority, which can be used for overriding
	// contracts (1 is highest). Priority is optional.
	priority 1
}

Request

The HTTP protocol requires only the method and the address to be specified in a request. The same information is mandatory in the request definition of a contract.

org.springframework.cloud.contract.spec.Contract.make {
	request {
		// HTTP request method (GET/POST/PUT/DELETE).
		method 'GET'

		// Path component of request URL is specified as follows.
		urlPath('/users')
	}

	response {
		//...
	}
}

It is possible to specify the whole url instead of just the path, but urlPath is the recommended way, as it makes the tests host-independent.

org.springframework.cloud.contract.spec.Contract.make {
	request {
		method 'GET'

		// Specifying `url` and `urlPath` in one contract is illegal.
		url('http://localhost:8888/users')
	}

	response {
		//...
	}
}

A request may contain query parameters, which are specified in a closure nested in a call to urlPath or url.

org.springframework.cloud.contract.spec.Contract.make {
	request {
		//...

		urlPath('/users') {

			// Each parameter is specified in form
			// `'paramName' : paramValue` where parameter value
			// may be a simple literal or one of matcher functions,
			// all of which are used in this example.
			queryParameters {

				// If a simple literal is used as value
				// default matcher function is used (equalTo)
				parameter 'limit': 100

				// `equalTo` function simply compares the passed value
				// using the equality operator (==).
				parameter 'filter': equalTo("email")

				// `containing` function matches strings
				// that contain the passed substring.
				parameter 'gender': value(consumer(containing("[mf]")), producer('mf'))

				// `matching` function tests parameter
				// against passed regular expression.
				parameter 'offset': value(consumer(matching("[0-9]+")), producer(123))

				// `notMatching` function tests if parameter
				// does not match passed regular expression.
				parameter 'loginStartsWith': value(consumer(notMatching(".{0,2}")), producer(3))
			}
		}

		//...
	}

	response {
		//...
	}
}

A request may contain additional headers…​

org.springframework.cloud.contract.spec.Contract.make {
	request {
		//...

		// Each header is added in form `'Header-Name' : 'Header-Value'`.
		// There are also some helper methods
		headers {
			header 'key': 'value'
			contentType(applicationJson())
		}

		//...
	}

	response {
		//...
	}
}

…​and a request body.

org.springframework.cloud.contract.spec.Contract.make {
	request {
		//...

		// Currently only JSON format of request body is supported.
		// Format will be determined from a header or body's content.
		body '''{ "login" : "john", "name": "John The Contract" }'''
	}

	response {
		//...
	}
}

Response

A minimal response must contain an HTTP status code.

org.springframework.cloud.contract.spec.Contract.make {
	request {
		//...
	}
	response {
		// Status code sent by the server
		// in response to request specified above.
		status 200
	}
}

Besides the status, a response may contain headers and a body, which are specified the same way as in the request (see the previous section).

Dynamic properties

A contract can contain dynamic properties, such as timestamps and IDs. You do not want to force the consumers to stub their clocks to always return the same time value so that it gets matched by the stub. That is why we let you provide the dynamic parts in your contracts in the following ways:

either via the value method:

value(consumer(...), producer(...))
value(c(...), p(...))
value(stub(...), test(...))
value(client(...), server(...))

or, if you are using the Groovy map notation for the body, via the $() method:

$(consumer(...), producer(...))
$(c(...), p(...))
$(stub(...), test(...))
$(client(...), server(...))

All of the aforementioned approaches are equivalent: the stub and client methods are aliases of consumer, and the test and server methods are aliases of producer. Let’s take a closer look at what we can do with those values in the subsequent sections.
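
For example, the two entries in the following body are equivalent ways of expressing a dynamic value (a minimal sketch; the field names are illustrative):

body(
		// long form
		id: value(consumer(regex('[0-9]+')), producer('123')),
		// short form, using the aliases
		name: $(stub(regex('[a-zA-Z]+')), test('john'))
)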

Regular expressions

You can use regular expressions to write your requests in the Contract DSL. Doing so is particularly useful when you want to indicate that a given response should be provided for requests that follow a given pattern. Also, you can use regular expressions when you need patterns, rather than exact values, both in your tests and in your server-side tests.

Please see the example below:

org.springframework.cloud.contract.spec.Contract.make {
	request {
		method('GET')
		url $(consumer(~/\/[0-9]{2}/), producer('/12'))
	}
	response {
		status 200
		body(
				id: $(anyNumber()),
				surname: $(
						consumer('Kowalsky'),
						producer(regex('[a-zA-Z]+'))
				),
				name: 'Jan',
				created: $(consumer('2014-02-02 12:23:43'), producer(execute('currentDate(it)'))),
				correlationId: value(consumer('5d1f9fef-e0dc-4f3d-a7e4-72d2220dd827'),
						producer(regex('[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}'))
				)
		)
		headers {
			header 'Content-Type': 'text/plain'
		}
	}
}

You can also provide only one side of the communication with a regular expression. If you do so, we automatically generate a string that matches the provided regular expression for the other side. For example:

org.springframework.cloud.contract.spec.Contract.make {
	request {
		method 'PUT'
		url value(consumer(regex('/foo/[0-9]{5}')))
		body([
			requestElement: $(consumer(regex('[0-9]{5}')))
		])
		headers {
			header('header', $(consumer(regex('application\\/vnd\\.fraud\\.v1\\+json;.*'))))
		}
	}
	response {
		status 200
		body([
			responseElement: $(producer(regex('[0-9]{7}')))
		])
		headers {
			contentType("application/vnd.fraud.v1.json")
		}
	}
}

In this example, the opposite side of the communication has the respective data generated for both the request and the response.

Spring Cloud Contract comes with a series of predefined regular expressions that you can use in your contracts.

protected static final Pattern TRUE_OR_FALSE = Pattern.compile(/(true|false)/)
protected static final Pattern ONLY_ALPHA_UNICODE = Pattern.compile(/[\p{L}]*/)
protected static final Pattern NUMBER = Pattern.compile('-?\\d*(\\.\\d+)?')
protected static final Pattern IP_ADDRESS = Pattern.compile('([01]?\\d\\d?|2[0-4]\\d|25[0-5])\\.([01]?\\d\\d?|2[0-4]\\d|25[0-5])\\.([01]?\\d\\d?|2[0-4]\\d|25[0-5])\\.([01]?\\d\\d?|2[0-4]\\d|25[0-5])');
protected static final Pattern HOSTNAME_PATTERN = Pattern.compile('((http[s]?|ftp):\\/)\\/?([^:\\/\\s]+)(:[0-9]{1,5})?');
protected static final Pattern EMAIL = Pattern.compile('[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,4}');
protected static final Pattern URL = Pattern.compile('((www\\.|(http|https|ftp|news|file)+\\:\\/\\/)[_.a-z0-9-]+\\.[a-z0-9\\/_:@=.+?,##%&~-]*[^.|\\\'|\\# |!|\\(|?|,| |>|<|;|\\)])')
protected static final Pattern UUID = Pattern.compile('[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}')

String onlyAlphaUnicode() {
	return ONLY_ALPHA_UNICODE.pattern()
}

String number() {
	return NUMBER.pattern()
}

String anyBoolean() {
	return TRUE_OR_FALSE.pattern()
}

String ipAddress() {
	return IP_ADDRESS.pattern()
}

String hostname() {
	return HOSTNAME_PATTERN.pattern()
}

String email() {
	return EMAIL.pattern()
}

String url() {
	return URL.pattern()
}

String uuid(){
	return UUID.pattern()
}

so in your contract you can use them like this:

Contract dslWithOptionalsInString = Contract.make {
	priority 1
	request {
		method 'POST'
		url '/users/password'
		headers {
			contentType(applicationJson())
		}
		body(
				email: $(consumer(optional(regex(email()))), producer('abc@abc.com')),
				callback_url: $(consumer(regex(hostname())), producer('http://partners.com'))
		)
	}
	response {
		status 404
		headers {
			contentType(applicationJson())
		}
		body(
				code: value(consumer("123123"), producer(optional("123123"))),
				message: "User not found by email = [${value(producer(regex(email())), consumer('[email protected]'))}]"
		)
	}
}

Passing optional parameters

It is possible to provide optional parameters in your contract. Optional parameters are available only for the:

  • STUB side of the Request

  • TEST side of the Response

Example:

org.springframework.cloud.contract.spec.Contract.make {
	priority 1
	request {
		method 'POST'
		url '/users/password'
		headers {
			contentType(applicationJson())
		}
		body(
				email: $(consumer(optional(regex(email()))), producer('abc@abc.com')),
				callback_url: $(consumer(regex(hostname())), producer('http://partners.com'))
		)
	}
	response {
		status 404
		headers {
			header 'Content-Type': 'application/json'
		}
		body(
				code: value(consumer("123123"), producer(optional("123123")))
		)
	}
}

By wrapping a part of the body with the optional() method, you create, in effect, a regular expression that should be present 0 or 1 times.

That way, for the example above, the following test would be generated if you pick Spock:

"""
 given:
  def request = given()
    .header("Content-Type", "application/json")
    .body('''{"email":"[email protected]","callback_url":"http://partners.com"}''')

 when:
  def response = given().spec(request)
    .post("/users/password")

 then:
  response.statusCode == 404
  response.header('Content-Type')  == 'application/json'
 and:
  DocumentContext parsedJson = JsonPath.parse(response.body.asString())
  assertThatJson(parsedJson).field("code").matches("(123123)?")
"""

and the following stub:

'''
{
  "request" : {
    "url" : "/users/password",
    "method" : "POST",
    "bodyPatterns" : [ {
      "matchesJsonPath" : "$[?(@.email =~ /([a-zA-Z0-9._%+-][email protected][a-zA-Z0-9.-]+\\\\.[a-zA-Z]{2,4})?/)]"
    }, {
      "matchesJsonPath" : "$[?(@.callback_url =~ /((http[s]?|ftp):\\\\/)\\\\/?([^:\\\\/\\\\s]+)(:[0-9]{1,5})?/)]"
    } ],
    "headers" : {
      "Content-Type" : {
        "equalTo" : "application/json"
      }
    }
  },
  "response" : {
    "status" : 404,
    "body" : "{\\"code\\":\\"123123\\",\\"message\\":\\"User not found by email == [[email protected]]\\"}",
    "headers" : {
      "Content-Type" : "application/json"
    }
  },
  "priority" : 1
}
'''

Executing custom methods on server side

It is also possible to define a method call to be executed on the server side during the test. Such a method can be added to the class defined as "baseClassForTests" in the configuration. Please see the examples below:

Contract DSL
org.springframework.cloud.contract.spec.Contract.make {
	request {
		method 'PUT'
		url $(consumer(regex('^/api/[0-9]{2}$')), producer('/api/12'))
		headers {
			header 'Content-Type': 'application/json'
		}
		body '''\
				[{
					"text": "Gonna see you at Warsaw"
				}]
			'''
	}
	response {
		body (
				path: $(consumer('/api/12'), producer(regex('^/api/[0-9]{2}$'))),
				correlationId: $(consumer('1223456'), producer(execute('isProperCorrelationId($it)')))
		)
		status 200
	}
}
Base Mock Spec
abstract class BaseMockMvcSpec extends Specification {

	def setup() {
		RestAssuredMockMvc.standaloneSetup(new PairIdController())
	}

	void isProperCorrelationId(Integer correlationId) {
		assert correlationId == 123456
	}

	void isEmpty(String value) {
		assert value == null
	}

}

JAX-RS support

Starting with release 0.8.0, we support the JAX-RS 2 Client API. The base class needs to define protected WebTarget webTarget and handle server initialization; right now the only option for testing a JAX-RS API is to start a web server.

A request with a body needs to have a content type set; otherwise, application/octet-stream is used.

In order to use JAX-RS mode, use the following settings:

testMode = 'JAXRSCLIENT'
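
With the Gradle plugin, for example, this is a single entry in the plugin configuration (a sketch, assuming the plugin exposes a contracts configuration closure with a testMode property):

contracts {
	testMode = 'JAXRSCLIENT'
}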

Example of a generated test:

'''
 // when:
  Response response = webTarget
    .path("/users")
    .queryParam("limit", "10")
    .queryParam("offset", "20")
    .queryParam("filter", "email")
    .queryParam("sort", "name")
    .queryParam("search", "55")
    .queryParam("age", "99")
    .queryParam("name", "Denis.Stepanov")
    .queryParam("email", "[email protected]")
    .request()
    .method("GET");

  String responseAsString = response.readEntity(String.class);

 // then:
  assertThat(response.getStatus()).isEqualTo(200);
 // and:
  DocumentContext parsedJson = JsonPath.parse(responseAsString);
  assertThatJson(parsedJson).field("property1").isEqualTo("a");
'''

Async support

If you are using asynchronous communication on the server side (your controllers return Callable, DeferredResult, and so on), then, inside your contract, you have to provide an async() method in the response section. Example:

org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'GET'
        url '/get'
    }
    response {
        status 200
        body 'Passed'
        async()
    }
}

Messaging Top-Level Elements

The DSL for messaging looks a little bit different from the one that focuses on HTTP.

Output triggered by a method

The output message can be triggered by calling a method (e.g. a Scheduler was started and a message was sent):

def dsl = Contract.make {
	// Human readable description
	description 'Some description'
	// Label by means of which the output message can be triggered
	label 'some_label'
	// input to the contract
	input {
		// the contract will be triggered by a method
		triggeredBy('bookReturnedTriggered()')
	}
	// output message of the contract
	outputMessage {
		// destination to which the output message will be sent
		sentTo('output')
		// the body of the output message
		body('''{ "bookName" : "foo" }''')
		// the headers of the output message
		headers {
			header('BOOK-NAME', 'foo')
		}
	}
}

In this case, the output message is sent to output if a method called bookReturnedTriggered is executed. On the message publisher’s side, we generate a test that calls that method to trigger the message. On the consumer side, you can use some_label to trigger the message.
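
On the consumer side, with Stub Runner messaging on the classpath, triggering by label boils down to a single call. A minimal sketch, assuming an autowired org.springframework.cloud.contract.stubrunner.StubTrigger:

// trigger the message defined under 'some_label'
stubTrigger.trigger('some_label')
// ...then receive and assert the message on the 'output' destination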

Output triggered by a message

The output message can be triggered by receiving a message.

def dsl = Contract.make {
	description 'Some Description'
	label 'some_label'
	// input is a message
	input {
		// the message was received from this destination
		messageFrom('input')
		// has the following body
		messageBody([
		        bookName: 'foo'
		])
		// and the following headers
		messageHeaders {
			header('sample', 'header')
		}
	}
	outputMessage {
		sentTo('output')
		body([
		        bookName: 'foo'
		])
		headers {
			header('BOOK-NAME', 'foo')
		}
	}
}

In this case, the output message is sent to output if a proper message is received on the input destination. On the message publisher’s side, we generate a test that sends the input message to the defined destination. On the consumer side, you can either send a message to the input destination or use some_label to trigger the message.

Consumer / Producer

In HTTP you have the notion of client/stub and server/test notation. You can also use them in messaging, but we additionally provide the consumer and producer methods, as presented below (note that you can use either the $ or value methods to provide the consumer and producer parts):

Contract.make {
	label 'some_label'
	input {
		messageFrom value(consumer('jms:output'), producer('jms:input'))
		messageBody([
				bookName: 'foo'
		])
		messageHeaders {
			header('sample', 'header')
		}
	}
	outputMessage {
		sentTo $(consumer('jms:input'), producer('jms:output'))
		body([
				bookName: 'foo'
		])
	}
}

Extending the DSL

It is possible to provide your own functions to the DSL. The key requirement for this feature was to maintain static compatibility. Below you can see an example of:

  • creation of a JAR with reusable classes

  • referencing of these classes in the DSLs

The full example can be found here.

Common JAR

Below you can find three classes that we will reuse in the DSLs.

PatternUtils contains functions used by both the consumer and the producer.

package com.example;

import java.util.regex.Pattern;

/**
 * If you want to use {@link Pattern} directly in your tests
 * then you can create a class resembling this one. It can
 * contain all the {@link Pattern} you want to use in the DSL.
 *
 * <pre>
 * {@code
 * request {
 *     body(
 *         [ age: $(c(PatternUtils.oldEnough()))]
 *     )
 * }
 * </pre>
 *
 * Notice that we're using both {@code $()} for dynamic values
 * and {@code c()} for the consumer side.
 *
 * @author Marcin Grzejszczak
 */
public class PatternUtils {
	public static Pattern tooYoung() {
		return Pattern.compile("[0-1][0-9]");
	}

	public static Pattern oldEnough() {
		return Pattern.compile("[2-9][0-9]");
	}

	public static Pattern anyName() {
		return Pattern.compile("[a-zA-Z]+");
	}

	/**
	 * Makes little sense but it's just an example ;)
	 */
	public static Pattern ok() {
		return Pattern.compile("OK");
	}
}

ConsumerUtils contains functions used by the consumer.

package com.example;

import org.springframework.cloud.contract.spec.internal.ClientDslProperty;
import org.springframework.cloud.contract.spec.internal.DslProperty;

/**
 * DSL Properties passed to the DSL from the consumer's perspective.
 * That means that on the input side {@code Request} for HTTP
 * or {@code Input} for messaging you can have a regular expression.
 * On the {@code Response} for HTTP or {@code Output} for messaging
 * you have to have a concrete value.
 *
 * @author Marcin Grzejszczak
 */
public class ConsumerUtils {
	/**
	 * Consumer side property. By using the {@link ClientDslProperty}
	 * you can omit most of boilerplate code from the perspective
	 * of dynamic values. Example
	 *
	 * <pre>
	 * {@code
	 * request {
	 *     body(
	 *         [ age: $(ConsumerUtils.oldEnough())]
	 *     )
	 * }
	 * </pre>
	 *
	 * That way the consumer side value of the age field will be
	 * a regular expression and the producer side will be generated.
	 *
	 * @author Marcin Grzejszczak
	 */
	public static ClientDslProperty oldEnough() {
		return new ClientDslProperty(PatternUtils.oldEnough());
	}

	/**
	 * Consumer side property. By using the {@link ClientDslProperty}
	 * you can omit most of boilerplate code from the perspective
	 * of dynamic values. Example
	 *
	 * <pre>
	 * {@code
	 * request {
	 *     body(
	 *         [ name: $(ConsumerUtils.anyName())]
	 *     )
	 * }
	 * </pre>
	 *
	 * That way the consumer side value will be a regular expression and the
	 * producer side value will be equal to {@code marcin}
	 */
	public static DslProperty anyName() {
		return new DslProperty<>(PatternUtils.anyName(), "marcin");
	}
}

ProducerUtils contains functions used by the producer.

package com.example;

import org.springframework.cloud.contract.spec.internal.ServerDslProperty;

/**
 * DSL Properties passed to the DSL from the producer's perspective.
 * That means that on the input side {@code Request} for HTTP
 * or {@code Input} for messaging you have to have a concrete value.
 * On the {@code Response} for HTTP or {@code Output} for messaging
 * you can have a regular expression.
 *
 * @author Marcin Grzejszczak
 */
public class ProducerUtils {

	/**
	 * Producer side property. By using the {@link ProducerUtils}
	 * you can omit most of boilerplate code from the perspective
	 * of dynamic values. Example
	 *
	 * <pre>
	 * {@code
	 * response {
	 *     body(
	 *         [ status: $(ProducerUtils.ok())]
	 *     )
	 * }
	 * </pre>
	 *
	 * That way the producer side value of the status field will be
	 * a regular expression and the consumer side will be generated.
	 */
	public static ServerDslProperty ok() {
		return new ServerDslProperty(PatternUtils.ok());
	}
}

Adding the dependency to project

In order for the plugins and the IDE to be able to reference the common JAR classes, you need to add the dependency to your project.

Test dependency in project’s dependencies

First, add the common JAR dependency as a test dependency. Because your contract files are available on the test resources path, the common JAR classes are automatically visible in your Groovy files.

Maven
<dependency>
	<groupId>com.example</groupId>
	<artifactId>beer-common</artifactId>
	<version>${project.version}</version>
	<scope>test</scope>
</dependency>
Gradle
testCompile("com.example:beer-common:0.0.1-SNAPSHOT")
Test dependency in plugin’s dependencies

Now you have to add the dependency so that the plugin can reuse it at runtime.

Maven
<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>${spring-cloud-contract.version}</version>
	<extensions>true</extensions>
	<configuration>
		<packageWithBaseClasses>com.example</packageWithBaseClasses>
	</configuration>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-contract-verifier</artifactId>
			<version>${spring-cloud-contract.version}</version>
		</dependency>
		<dependency>
			<groupId>com.example</groupId>
			<artifactId>beer-common</artifactId>
			<version>${project.version}</version>
			<scope>compile</scope>
		</dependency>
	</dependencies>
</plugin>
Gradle
classpath "com.example:beer-common:0.0.1-SNAPSHOT"
Referencing classes in DSLs

Now you can reference your classes in your DSL. Example:

package contracts.beer.rest

import org.springframework.cloud.contract.spec.Contract

import static com.example.ConsumerUtils.oldEnough
import static com.example.ProducerUtils.ok

Contract.make {
	request {
		description("""
Represents a successful scenario of getting a beer

given:
	client is old enough
when:
	he applies for a beer
then:
	we'll grant him the beer
""")
		method 'POST'
		url '/check'
		body(
				age: $(oldEnough())
		)
		headers {
			header 'Content-Type', 'application/json'
		}
	}
	response {
		status 200
		body("""
			{
				"status": "${value(ok())}"
			}
			""")
		headers {
			header(
					'Content-Type', value(consumer('application/json'),producer(regex('application/json.*')))
			)
		}
	}
}

Appendix: Compendium of Configuration Properties

Name Default Description

encrypt.fail-on-error

true

Flag to say that a process should fail if there is an encryption or decryption error.

encrypt.key

A symmetric key. As a stronger alternative consider using a keystore.

encrypt.key-store.alias

Alias for a key in the store.

encrypt.key-store.location

Location of the key store file, e.g. classpath:/keystore.jks.

encrypt.key-store.password

Password that locks the keystore.

encrypt.key-store.secret

Secret protecting the key (defaults to the same as the password).

encrypt.rsa.algorithm

The RSA algorithm to use (DEFAULT or OAEP). Once it is set, do not change it (or existing ciphers will not be decryptable).

encrypt.rsa.salt

deadbeef

Salt for the random secret used to encrypt cipher text. Once it is set, do not change it (or existing ciphers will not be decryptable).

encrypt.rsa.strong

false

Flag to indicate that "strong" AES encryption should be used internally. If true then the GCM algorithm is applied to the AES encrypted bytes. Default is false (in which case "standard" CBC is used instead). Once it is set do not change it (or existing ciphers will not a decryptable).

endpoints.bus.enabled

endpoints.bus.id

endpoints.bus.sensitive

endpoints.consul.enabled

endpoints.consul.id

endpoints.consul.sensitive

endpoints.env.post.enabled

true

Enable changing the Environment through a POST to /env.

endpoints.features.enabled

endpoints.features.id

endpoints.features.sensitive

endpoints.pause.enabled

true

Enable the /pause endpoint (to send Lifecycle.stop()).

endpoints.pause.id

endpoints.pause.sensitive

endpoints.refresh.enabled

true

Enable the /refresh endpoint to refresh configuration and re-initialize refresh scoped beans.

endpoints.refresh.id

endpoints.refresh.sensitive

endpoints.restart.enabled

true

Enable the /restart endpoint to restart the application context.

endpoints.restart.id

endpoints.restart.pause-endpoint.enabled

endpoints.restart.pause-endpoint.id

endpoints.restart.pause-endpoint.sensitive

endpoints.restart.resume-endpoint.enabled

endpoints.restart.resume-endpoint.id

endpoints.restart.resume-endpoint.sensitive

endpoints.restart.sensitive

endpoints.restart.timeout

0

endpoints.resume.enabled

true

Enable the /resume endpoint (to send Lifecycle.start()).

endpoints.resume.id

endpoints.resume.sensitive

endpoints.zookeeper.enabled

true

Enable the /zookeeper endpoint to inspect the state of zookeeper.

eureka.client.allow-redirects

false

Indicates whether the server can redirect a client request to a backup server/cluster. If set to false, the server handles the request directly. If set to true, it may send an HTTP redirect to the client, with a new server location.

eureka.client.availability-zones

Gets the list of availability zones (used in AWS data centers) for the region in which this instance resides.

The changes are effective at runtime at the next registry fetch cycle as specified by registryFetchIntervalSeconds.

eureka.client.backup-registry-impl

Gets the name of the implementation which implements BackupRegistry to fetch the registry information as a fall back option for only the first time when the eureka client starts.

This may be needed for applications which need additional resiliency for registry information, without which they cannot operate.

eureka.client.cache-refresh-executor-exponential-back-off-bound

10

Cache refresh executor exponential back off related property. It is a maximum multiplier value for retry delay, in case where a sequence of timeouts occurred.

eureka.client.cache-refresh-executor-thread-pool-size

2

The thread pool size for the cacheRefreshExecutor to initialise with

eureka.client.client-data-accept

EurekaAccept name for client data accept

eureka.client.decoder-name

This is a transient config and once the latest codecs are stable, can be removed (as there will only be one)

eureka.client.disable-delta

false

Indicates whether the eureka client should disable fetching of delta and should rather resort to getting the full registry information.

Note that the delta fetches can reduce the traffic tremendously, because the rate of change with the eureka server is normally much lower than the rate of fetches.

The changes are effective at runtime at the next registry fetch cycle as specified by registryFetchIntervalSeconds

eureka.client.dollar-replacement

_-

Get a replacement string for the dollar sign $ during serializing/deserializing information in the eureka server.

eureka.client.enabled

true

Flag to indicate that the Eureka client is enabled.

eureka.client.encoder-name

This is a transient config and once the latest codecs are stable, can be removed (as there will only be one)

eureka.client.escape-char-replacement

__

Get a replacement string for the underscore sign _ during serializing/deserializing information in the eureka server.

eureka.client.eureka-connection-idle-timeout-seconds

30

Indicates how much time (in seconds) the HTTP connections to the eureka server can stay idle before they are closed.

In the AWS environment, it is recommended that the value is 30 seconds or less, since the firewall cleans up the connection information after a few minutes, leaving the connection hanging in limbo.

eureka.client.eureka-server-connect-timeout-seconds

5

Indicates how long to wait (in seconds) before a connection to eureka server needs to timeout. Note that the connections in the client are pooled by org.apache.http.client.HttpClient and this setting affects the actual connection creation and also the wait time to get the connection from the pool.

eureka.client.eureka-server-d-n-s-name

Gets the DNS name to be queried to get the list of eureka servers. This information is not required if the contract returns the service urls by implementing serviceUrls.

The DNS mechanism is used when useDnsForFetchingServiceUrls is set to true and the eureka client expects the DNS to be configured a certain way so that it can fetch changing eureka servers dynamically.

The changes are effective at runtime.

eureka.client.eureka-server-port

Gets the port to be used to construct the service url to contact the eureka server when the list of eureka servers comes from the DNS. This information is not required if the contract returns the service urls via eurekaServerServiceUrls(String).

The DNS mechanism is used when useDnsForFetchingServiceUrls is set to true and the eureka client expects the DNS to be configured a certain way so that it can fetch changing eureka servers dynamically.

The changes are effective at runtime.

eureka.client.eureka-server-read-timeout-seconds

8

Indicates how long to wait (in seconds) before a read from eureka server needs to timeout.

eureka.client.eureka-server-total-connections

200

Gets the total number of connections that is allowed from eureka client to all eureka servers.

eureka.client.eureka-server-total-connections-per-host

50

Gets the total number of connections that is allowed from eureka client to a eureka server host.

eureka.client.eureka-server-u-r-l-context

Gets the URL context to be used to construct the service url to contact the eureka server when the list of eureka servers comes from the DNS. This information is not required if the contract returns the service urls from eurekaServerServiceUrls.

The DNS mechanism is used when useDnsForFetchingServiceUrls is set to true and the eureka client expects the DNS to be configured a certain way so that it can fetch changing eureka servers dynamically. The changes are effective at runtime.

eureka.client.eureka-service-url-poll-interval-seconds

0

Indicates how often (in seconds) to poll for changes to eureka server information. Eureka servers could be added or removed, and this setting controls how soon the eureka clients should know about it.

eureka.client.fetch-registry

true

Indicates whether this client should fetch eureka registry information from eureka server.

eureka.client.fetch-remote-regions-registry

Comma-separated list of regions for which the eureka registry information will be fetched. It is mandatory to define the availability zones for each of these regions, as returned by availabilityZones. Failing to do so will result in a failure of discovery client startup.

eureka.client.filter-only-up-instances

true

Indicates whether to get the applications after filtering the applications for instances with only InstanceStatus UP states.

eureka.client.g-zip-content

true

Indicates whether the content fetched from eureka server has to be compressed whenever it is supported by the server. The registry information from the eureka server is compressed for optimum network traffic.

eureka.client.heartbeat-executor-exponential-back-off-bound

10

Heartbeat executor exponential back off related property. It is a maximum multiplier value for retry delay, in case where a sequence of timeouts occurred.

eureka.client.heartbeat-executor-thread-pool-size

2

The thread pool size for the heartbeatExecutor to initialise with

eureka.client.initial-instance-info-replication-interval-seconds

40

Indicates how long initially (in seconds) to replicate instance info to the eureka server

eureka.client.instance-info-replication-interval-seconds

30

Indicates how often (in seconds) to replicate instance changes to the eureka server.

eureka.client.log-delta-diff

false

Indicates whether to log differences between the eureka server and the eureka client in terms of registry information.

Eureka client tries to retrieve only delta changes from the eureka server to minimize network traffic. After receiving the deltas, the eureka client reconciles the information from the server to verify that it has not missed some information. Reconciliation failures could happen when the client has had network issues communicating with the server. If the reconciliation fails, the eureka client gets the full registry information.

While getting the full registry information, the eureka client can log the differences between the client and the server and this setting controls that.

The changes are effective at runtime at the next registry fetch cycle as specified by registryFetchIntervalSeconds.

eureka.client.on-demand-update-status-change

true

If set to true, local status updates via ApplicationInfoManager will trigger on-demand (but rate limited) register/updates to remote eureka servers

eureka.client.prefer-same-zone-eureka

true

Indicates whether or not this instance should try to use the eureka server in the same zone for latency and/or other reasons.

Ideally eureka clients are configured to talk to servers in the same zone

The changes are effective at runtime at the next registry fetch cycle as specified by registryFetchIntervalSeconds

eureka.client.property-resolver

eureka.client.proxy-host

Gets the proxy host to eureka server if any.

eureka.client.proxy-password

Gets the proxy password if any.

eureka.client.proxy-port

Gets the proxy port to eureka server if any.

eureka.client.proxy-user-name

Gets the proxy user name if any.

eureka.client.region

us-east-1

Gets the region (used in AWS datacenters) where this instance resides.

eureka.client.register-with-eureka

true

Indicates whether or not this instance should register its information with eureka server for discovery by others.

In some cases, you do not want your instances to be discovered, whereas you just want to discover other instances.

eureka.client.registry-fetch-interval-seconds

30

Indicates how often (in seconds) to fetch the registry information from the eureka server.

eureka.client.registry-refresh-single-vip-address

Indicates whether the client is only interested in the registry information for a single VIP.

eureka.client.service-url

Map of availability zone to list of fully qualified URLs to communicate with eureka server. Each value can be a single URL or a comma separated list of alternative locations.

Typically the eureka server URLs carry protocol, host, port, context and version information, if any. Example: http://ec2-256-156-243-129.compute-1.amazonaws.com:7001/eureka/

The changes are effective at runtime at the next service url refresh cycle as specified by eurekaServiceUrlPollIntervalSeconds.

eureka.client.transport

eureka.client.use-dns-for-fetching-service-urls

false

Indicates whether the eureka client should use the DNS mechanism to fetch a list of eureka servers to talk to. When the DNS name is updated to have additional servers, that information is used immediately after the eureka client polls for that information as specified in eurekaServiceUrlPollIntervalSeconds.

Alternatively, the service urls can be returned from serviceUrls, but the users should implement their own mechanism to return the updated list in case of changes.

The changes are effective at runtime.

eureka.dashboard.enabled

true

Flag to enable the Eureka dashboard. Default true.

eureka.dashboard.path

/

The path to the Eureka dashboard (relative to the servlet path). Defaults to "/".

eureka.instance.a-s-g-name

Gets the AWS autoscaling group name associated with this instance. This information is specifically used in an AWS environment to automatically put an instance out of service after the instance is launched and it has been disabled for traffic.

eureka.instance.app-group-name

Get the name of the application group to be registered with eureka.

eureka.instance.appname

unknown

Get the name of the application to be registered with eureka.

eureka.instance.data-center-info

Returns the data center this instance is deployed in. This information is used to get some AWS-specific instance information if the instance is deployed in AWS.

eureka.instance.default-address-resolution-order

[]

eureka.instance.health-check-url

Gets the absolute health check page URL for this instance. The users can provide the healthCheckUrlPath if the health check page resides in the same instance talking to eureka, else in the cases where the instance is a proxy for some other server, users can provide the full URL. If the full URL is provided it takes precedence.

It is normally used for making educated decisions based on the health of the instance - for example, it can be used to determine whether to proceed with deployments to an entire farm or stop the deployments without causing further damage. The full URL should follow the format http://${eureka.hostname}:7001/ where the value ${eureka.hostname} is replaced at runtime.

eureka.instance.health-check-url-path

/health

Gets the relative health check URL path for this instance. The health check page URL is then constructed out of the hostname and the type of communication - secure or unsecure as specified in securePort and nonSecurePort.

It is normally used for making educated decisions based on the health of the instance - for example, it can be used to determine whether to proceed with deployments to an entire farm or stop the deployments without causing further damage.

eureka.instance.home-page-url

Gets the absolute home page URL for this instance. The users can provide the homePageUrlPath if the home page resides in the same instance talking to eureka, else in the cases where the instance is a proxy for some other server, users can provide the full URL. If the full URL is provided it takes precedence.

It is normally used for informational purposes for other services to use it as a landing page. The full URL should follow the format http://${eureka.hostname}:7001/ where the value ${eureka.hostname} is replaced at runtime.

eureka.instance.home-page-url-path

/

Gets the relative home page URL Path for this instance. The home page URL is then constructed out of the hostName and the type of communication - secure or unsecure.

It is normally used for informational purposes for other services to use it as a landing page.

eureka.instance.host-info

eureka.instance.hostname

The hostname if it can be determined at configuration time (otherwise it will be guessed from OS primitives).

eureka.instance.inet-utils

eureka.instance.initial-status

Initial status to register with the remote Eureka server.

eureka.instance.instance-enabled-onit

false

Indicates whether the instance should be enabled for taking traffic as soon as it is registered with eureka. Sometimes the application might need to do some pre-processing before it is ready to take traffic.

eureka.instance.instance-id

Get the unique Id (within the scope of the appName) of this instance to be registered with eureka.

eureka.instance.ip-address

Get the IP address of the instance. This information is for academic purposes only, as the communication from other instances primarily happens using the information supplied in {@link #getHostName(boolean)}.

eureka.instance.lease-expiration-duration-in-seconds

90

Indicates the time in seconds that the eureka server waits, since it received the last heartbeat, before it can remove this instance from its view, thereby disallowing traffic to this instance.

Setting this value too long could mean that traffic could be routed to the instance even though the instance is not alive. Setting this value too small could mean the instance may be taken out of traffic because of temporary network glitches. This value should be set to at least a value higher than the one specified in leaseRenewalIntervalInSeconds.

eureka.instance.lease-renewal-interval-in-seconds

30

Indicates how often (in seconds) the eureka client needs to send heartbeats to the eureka server to indicate that it is still alive. If the heartbeats are not received for the period specified in leaseExpirationDurationInSeconds, the eureka server removes the instance from its view, thereby disallowing traffic to this instance.

Note that the instance could still not take traffic if it implements HealthCheckCallback and then decides to make itself unavailable.

eureka.instance.metadata-map

Gets the metadata name/value pairs associated with this instance. This information is sent to eureka server and can be used by other instances.

eureka.instance.namespace

eureka

Get the namespace used to find properties. Ignored in Spring Cloud.

eureka.instance.non-secure-port

80

Get the non-secure port on which the instance should receive traffic.

eureka.instance.non-secure-port-enabled

true

Indicates whether the non-secure port should be enabled for traffic or not.

eureka.instance.prefer-ip-address

false

Flag to say that, when guessing a hostname, the IP address of the server should be used in preference to the hostname reported by the OS.

eureka.instance.secure-health-check-url

Gets the absolute secure health check page URL for this instance. The users can provide the secureHealthCheckUrl if the health check page resides in the same instance talking to eureka, else in the cases where the instance is a proxy for some other server, users can provide the full URL. If the full URL is provided it takes precedence.

It is normally used for making educated decisions based on the health of the instance - for example, it can be used to determine whether to proceed with deployments to an entire farm or stop the deployments without causing further damage. The full URL should follow the format http://${eureka.hostname}:7001/ where the value ${eureka.hostname} is replaced at runtime.

eureka.instance.secure-port

443

Get the Secure port on which the instance should receive traffic.

eureka.instance.secure-port-enabled

false

Indicates whether the secure port should be enabled for traffic or not.

eureka.instance.secure-virtual-host-name

Gets the secure virtual host name defined for this instance.

This is typically the way other instances would find this instance, by using the secure virtual host name. Think of this as similar to the fully qualified domain name that the users of your services will need in order to find this instance.

eureka.instance.status-page-url

Gets the absolute status page URL path for this instance. The users can provide the statusPageUrlPath if the status page resides in the same instance talking to eureka, else in the cases where the instance is a proxy for some other server, users can provide the full URL. If the full URL is provided it takes precedence.

It is normally used for informational purposes for other services to find out about the status of this instance. Users can provide a simple HTML page indicating the current status of the instance.

eureka.instance.status-page-url-path

/info

Gets the relative status page URL path for this instance. The status page URL is then constructed out of the hostName and the type of communication - secure or unsecure as specified in securePort and nonSecurePort.

It is normally used for informational purposes for other services to find out about the status of this instance. Users can provide a simple HTML page indicating the current status of the instance.

eureka.instance.virtual-host-name

Gets the virtual host name defined for this instance.

This is typically the way other instances would find this instance, by using the virtual host name. Think of this as similar to the fully qualified domain name that the users of your services will need in order to find this instance.

eureka.server.a-s-g-cache-expiry-timeout-ms

0

eureka.server.a-s-g-query-timeout-ms

300

eureka.server.a-s-g-update-interval-ms

0

eureka.server.a-w-s-access-id

eureka.server.a-w-s-secret-key

eureka.server.batch-replication

false

eureka.server.binding-strategy

eureka.server.delta-retention-timer-interval-in-ms

0

eureka.server.disable-delta

false

eureka.server.disable-delta-for-remote-regions

false

eureka.server.disable-transparent-fallback-to-other-region

false

eureka.server.e-i-p-bind-rebind-retries

3

eureka.server.e-i-p-binding-retry-interval-ms

0

eureka.server.e-i-p-binding-retry-interval-ms-when-unbound

0

eureka.server.enable-replicated-request-compression

false

eureka.server.enable-self-preservation

true

eureka.server.eviction-interval-timer-in-ms

0

eureka.server.g-zip-content-from-remote-region

true

eureka.server.json-codec-name

eureka.server.list-auto-scaling-groups-role-name

ListAutoScalingGroups

eureka.server.log-identity-headers

true

eureka.server.max-elements-in-peer-replication-pool

10000

eureka.server.max-elements-in-status-replication-pool

10000

eureka.server.max-idle-thread-age-in-minutes-for-peer-replication

15

eureka.server.max-idle-thread-in-minutes-age-for-status-replication

10

eureka.server.max-threads-for-peer-replication

20

eureka.server.max-threads-for-status-replication

1

eureka.server.max-time-for-replication

30000

eureka.server.min-threads-for-peer-replication

5

eureka.server.min-threads-for-status-replication

1

eureka.server.number-of-replication-retries

5

eureka.server.peer-eureka-nodes-update-interval-ms

0

eureka.server.peer-eureka-status-refresh-time-interval-ms

0

eureka.server.peer-node-connect-timeout-ms

200

eureka.server.peer-node-connection-idle-timeout-seconds

30

eureka.server.peer-node-read-timeout-ms

200

eureka.server.peer-node-total-connections

1000

eureka.server.peer-node-total-connections-per-host

500

eureka.server.prime-aws-replica-connections

true

eureka.server.property-resolver

eureka.server.rate-limiter-burst-size

10

eureka.server.rate-limiter-enabled

false

eureka.server.rate-limiter-full-fetch-average-rate

100

eureka.server.rate-limiter-privileged-clients

eureka.server.rate-limiter-registry-fetch-average-rate

500

eureka.server.rate-limiter-throttle-standard-clients

false

eureka.server.registry-sync-retries

0

eureka.server.registry-sync-retry-wait-ms

0

eureka.server.remote-region-app-whitelist

eureka.server.remote-region-connect-timeout-ms

1000

eureka.server.remote-region-connection-idle-timeout-seconds

30

eureka.server.remote-region-fetch-thread-pool-size

20

eureka.server.remote-region-read-timeout-ms

1000

eureka.server.remote-region-registry-fetch-interval

30

eureka.server.remote-region-total-connections

1000

eureka.server.remote-region-total-connections-per-host

500

eureka.server.remote-region-trust-store

eureka.server.remote-region-trust-store-password

changeit

eureka.server.remote-region-urls

eureka.server.remote-region-urls-with-name

eureka.server.renewal-percent-threshold

0.85

eureka.server.renewal-threshold-update-interval-ms

0

eureka.server.response-cache-auto-expiration-in-seconds

180

eureka.server.response-cache-update-interval-ms

0

eureka.server.retention-time-in-m-s-in-delta-queue

0

eureka.server.route53-bind-rebind-retries

3

eureka.server.route53-binding-retry-interval-ms

0

eureka.server.route53-domain-t-t-l

30

eureka.server.sync-when-timestamp-differs

true

eureka.server.use-read-only-response-cache

true

eureka.server.wait-time-in-ms-when-sync-empty

0

eureka.server.xml-codec-name

feign.compression.request.mime-types

[text/xml, application/xml, application/json]

The list of supported mime types.

feign.compression.request.min-request-size

2048

The minimum threshold content size.

health.config.enabled

false

Flag to indicate that the config server health indicator should be installed.

hystrix.metrics.enabled

true

Enable Hystrix metrics polling. Defaults to true.

hystrix.metrics.polling-interval-ms

2000

Interval between subsequent polling of metrics. Defaults to 2000 ms.

management.health.refresh.enabled

true

Enable the health endpoint for the refresh scope.

management.health.zookeeper.enabled

true

Enable the health endpoint for zookeeper.

netflix.atlas.batch-size

10000

netflix.atlas.enabled

true

netflix.atlas.uri

netflix.metrics.servo.cache-warning-threshold

1000

When the ServoMonitorCache reaches this size, a warning is logged. This will be useful if you are using string concatenation in RestTemplate urls.

netflix.metrics.servo.registry-class

com.netflix.servo.BasicMonitorRegistry

Fully qualified class name for monitor registry used by Servo.

spring.cloud.bus.ack.destination-service

Service that wants to listen to acks. By default null (meaning all services).

spring.cloud.bus.ack.enabled

true

Flag to switch off acks (default on).

spring.cloud.bus.destination

springCloudBus

Name of Spring Cloud Stream destination for messages.

spring.cloud.bus.enabled

true

Flag to indicate that the bus is enabled.

spring.cloud.bus.env.enabled

true

Flag to switch off environment change events (default on).

spring.cloud.bus.refresh.enabled

true

Flag to switch off refresh events (default on).

spring.cloud.bus.trace.enabled

false

Flag to switch on tracing of acks (default off).

spring.cloud.cloudfoundry.discovery.enabled

true

Flag to indicate that discovery is enabled.

spring.cloud.cloudfoundry.discovery.heartbeat-frequency

5000

Frequency in milliseconds of poll for heart beat. The client will poll on this frequency and broadcast a list of service ids.

spring.cloud.cloudfoundry.discovery.org

Organization name to authenticate with (default to user’s default).

spring.cloud.cloudfoundry.discovery.password

Password for user to authenticate and obtain token.

spring.cloud.cloudfoundry.discovery.space

Space name to authenticate with (default to user’s default).

spring.cloud.cloudfoundry.discovery.url

https://api.run.pivotal.io

URL of Cloud Foundry API (Cloud Controller).

spring.cloud.cloudfoundry.discovery.username

Username to authenticate (usually an email address).

spring.cloud.config.allow-override

true

Flag to indicate that {@link #isSystemPropertiesOverride() systemPropertiesOverride} can be used. Set to false to prevent users from changing the default accidentally. Default true.

spring.cloud.config.authorization

Authorization token used by the client to connect to the server.

spring.cloud.config.discovery.enabled

false

Flag to indicate that config server discovery is enabled (config server URL will be looked up via discovery).

spring.cloud.config.discovery.service-id

configserver

Service id to locate config server.

spring.cloud.config.enabled

true

Flag to say that remote configuration is enabled. Default true.

spring.cloud.config.fail-fast

false

Flag to indicate that failure to connect to the server is fatal (default false).

spring.cloud.config.label

The label name to use to pull remote configuration properties. The default is set on the server (generally "master" for a git based server).

spring.cloud.config.name

Name of application used to fetch remote properties.

spring.cloud.config.override-none

false

Flag to indicate that when {@link #setAllowOverride(boolean) allowOverride} is true, external properties should take lowest priority, and not override any existing property sources (including local config files). Default false.

spring.cloud.config.override-system-properties

true

Flag to indicate that the external properties should override system properties. Default true.

spring.cloud.config.password

The password to use (HTTP Basic) when contacting the remote server.

spring.cloud.config.profile

default

The default profile to use when fetching remote configuration (comma-separated). Default is "default".

spring.cloud.config.retry.initial-interval

1000

Initial retry interval in milliseconds.

spring.cloud.config.retry.max-attempts

6

Maximum number of attempts.

spring.cloud.config.retry.max-interval

2000

Maximum interval for backoff.

spring.cloud.config.retry.multiplier

1.1

Multiplier for next interval.

spring.cloud.config.token

Security token passed through to the underlying environment repository.

spring.cloud.config.uri

http://localhost:8888

The URI of the remote server (default http://localhost:8888).

spring.cloud.config.username

The username to use (HTTP Basic) when contacting the remote server.

spring.cloud.consul.config.acl-token

spring.cloud.consul.config.data-key

data

If the format is Format.PROPERTIES or Format.YAML, then this field is used as the key to look up configuration in consul.

spring.cloud.consul.config.default-context

application

spring.cloud.consul.config.enabled

true

spring.cloud.consul.config.fail-fast

true

Throw exceptions during config lookup if true; otherwise, log warnings.

spring.cloud.consul.config.format

spring.cloud.consul.config.prefix

config

spring.cloud.consul.config.profile-separator

,

spring.cloud.consul.config.watch.delay

1000

The value of the fixed delay for the watch in millis. Defaults to 1000.

spring.cloud.consul.config.watch.enabled

true

If the watch is enabled. Defaults to true.

spring.cloud.consul.config.watch.wait-time

60

The number of seconds to wait (or block) for watch query. Defaults to 60.

spring.cloud.consul.discovery.acl-token

spring.cloud.consul.discovery.catalog-services-watch-delay

10

spring.cloud.consul.discovery.catalog-services-watch-timeout

2

spring.cloud.consul.discovery.default-query-tag

Tag to query for in service list if one is not listed in serverListQueryTags.

spring.cloud.consul.discovery.enabled

true

Is service discovery enabled?

spring.cloud.consul.discovery.health-check-interval

10s

How often to perform the health check (e.g. 10s)

spring.cloud.consul.discovery.health-check-path

/health

Alternate server path to invoke for health checking

spring.cloud.consul.discovery.health-check-timeout

Timeout for health check (e.g. 10s)

spring.cloud.consul.discovery.health-check-url

Custom health check url to override default

spring.cloud.consul.discovery.heartbeat.enabled

false

spring.cloud.consul.discovery.heartbeat.heartbeat-interval

spring.cloud.consul.discovery.heartbeat.interval-ratio

spring.cloud.consul.discovery.heartbeat.ttl-unit

s

spring.cloud.consul.discovery.heartbeat.ttl-value

30

spring.cloud.consul.discovery.host-info

spring.cloud.consul.discovery.hostname

Hostname to use when accessing server

spring.cloud.consul.discovery.instance-id

Unique service instance id

spring.cloud.consul.discovery.ip-address

IP address to use when accessing service (must also set preferIpAddress to use)

spring.cloud.consul.discovery.lifecycle.enabled

true

spring.cloud.consul.discovery.management-port

Port to register the management service under (defaults to management port)

spring.cloud.consul.discovery.management-suffix

management

Suffix to use when registering management service

spring.cloud.consul.discovery.management-tags

Tags to use when registering management service

spring.cloud.consul.discovery.port

Port to register the service under (defaults to listening port)

spring.cloud.consul.discovery.prefer-agent-address

false

Source of how we will determine the address to use

spring.cloud.consul.discovery.prefer-ip-address

false

Use ip address rather than hostname during registration

spring.cloud.consul.discovery.query-passing

false

Add the 'passing' parameter to /v1/health/service/serviceName. This pushes the health check 'passing' status to the server.

spring.cloud.consul.discovery.register

true

Register as a service in consul.

spring.cloud.consul.discovery.register-health-check

true

Register health check in consul. Useful during development of a service.

spring.cloud.consul.discovery.scheme

http

Whether to register an http or https service

spring.cloud.consul.discovery.server-list-query-tags

Map of serviceId → tag to query for in the server list. This allows filtering services by a single tag.

spring.cloud.consul.discovery.service-name

Service name

spring.cloud.consul.discovery.tags

Tags to use when registering service

spring.cloud.consul.enabled

true

Is spring cloud consul enabled

spring.cloud.consul.host

localhost

Consul agent hostname. Defaults to 'localhost'.

spring.cloud.consul.port

8500

Consul agent port. Defaults to '8500'.

spring.cloud.consul.retry.initial-interval

1000

Initial retry interval in milliseconds.

spring.cloud.consul.retry.max-attempts

6

Maximum number of attempts.

spring.cloud.consul.retry.max-interval

2000

Maximum interval for backoff.

spring.cloud.consul.retry.multiplier

1.1

Multiplier for next interval.

spring.cloud.hypermedia.refresh.fixed-delay

5000

spring.cloud.hypermedia.refresh.initial-delay

10000

spring.cloud.inetutils.default-hostname

localhost

The default hostname. Used in case of errors.

spring.cloud.inetutils.default-ip-address

127.0.0.1

The default IP address. Used in case of errors.

spring.cloud.inetutils.ignored-interfaces

List of Java regex expressions for network interfaces that will be ignored.

spring.cloud.inetutils.timeout-seconds

1

Timeout in seconds for calculating hostname.

spring.cloud.stream.binders

spring.cloud.stream.bindings

spring.cloud.stream.consul.binder.event-timeout

5

spring.cloud.stream.consumer-defaults

spring.cloud.stream.default-binder

spring.cloud.stream.dynamic-destinations

[]

spring.cloud.stream.ignore-unknown-properties

true

spring.cloud.stream.instance-count

1

spring.cloud.stream.instance-index

0

spring.cloud.stream.producer-defaults

spring.cloud.stream.rabbit.binder.admin-adresses

[]

spring.cloud.stream.rabbit.binder.compression-level

0

spring.cloud.stream.rabbit.binder.nodes

[]

spring.cloud.stream.rabbit.bindings

spring.cloud.zookeeper.base-sleep-time-ms

50

Initial amount of time to wait between retries

spring.cloud.zookeeper.block-until-connected-unit

The unit of time related to blocking on connection to Zookeeper

spring.cloud.zookeeper.block-until-connected-wait

10

Wait time to block on connection to Zookeeper

spring.cloud.zookeeper.connect-string

localhost:2181

Connection string to the Zookeeper cluster

spring.cloud.zookeeper.default-health-endpoint

Default health endpoint that will be checked to verify that a dependency is alive

spring.cloud.zookeeper.dependencies

Mapping of alias to ZookeeperDependency. From the Ribbon perspective, the alias is actually the serviceID, since Ribbon can’t accept nested structures in the serviceID.

spring.cloud.zookeeper.dependency-configurations

spring.cloud.zookeeper.dependency-names

spring.cloud.zookeeper.discovery.enabled

true

spring.cloud.zookeeper.discovery.instance-host

Predefined host with which a service can register itself in Zookeeper. Corresponds to the {@code address} from the URI spec.

spring.cloud.zookeeper.discovery.instance-port

Port to register the service under (defaults to listening port)

spring.cloud.zookeeper.discovery.metadata

Gets the metadata name/value pairs associated with this instance. This information is sent to zookeeper and can be used by other instances.

spring.cloud.zookeeper.discovery.register

true

Register as a service in Zookeeper.

spring.cloud.zookeeper.discovery.root

/services

Root Zookeeper folder in which all instances are registered

spring.cloud.zookeeper.discovery.uri-spec

{scheme}://{address}:{port}

The URI specification to resolve during service registration in Zookeeper

spring.cloud.zookeeper.enabled

true

Is Zookeeper enabled

spring.cloud.zookeeper.max-retries

10

Max number of times to retry

spring.cloud.zookeeper.max-sleep-ms

500

Max time in ms to sleep on each retry

spring.cloud.zookeeper.prefix

Common prefix that will be applied to all Zookeeper dependencies' paths
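
As a sketch, the Zookeeper connection and retry properties above combine as follows; the connect string and retry timings are illustrative values:

application.yml

spring:
  cloud:
    zookeeper:
      connect-string: localhost:2181   # Zookeeper cluster to connect to
      base-sleep-time-ms: 50           # initial wait between retries
      max-retries: 10                  # retry at most 10 times
      max-sleep-ms: 500                # cap the per-retry sleep at 500ms
      discovery:
        root: /services                # root folder for registrations
        uri-spec: '{scheme}://{address}:{port}'
        register: true                 # register this application in Zookeeper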

spring.integration.poller.fixed-delay

1000

Fixed delay for default poller.

spring.integration.poller.max-messages-per-poll

1

Maximum messages per poll for the default poller.
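
For example, the default poller values shown above correspond to this configuration:

application.yml

spring:
  integration:
    poller:
      fixed-delay: 1000          # poll every 1000ms
      max-messages-per-poll: 1   # handle at most one message per poll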

spring.sleuth.integration.enabled

true

Enable Spring Integration sleuth instrumentation.

spring.sleuth.integration.patterns

*

An array of simple patterns against which channel names will be matched. Default is * (all channels). See org.springframework.util.PatternMatchUtils.simpleMatch(String, String).

spring.sleuth.keys.async.class-name-key

class

Simple name of the class with a method annotated with @Async from which the asynchronous process started. See org.springframework.scheduling.annotation.Async.

spring.sleuth.keys.async.method-name-key

method

Name of the method annotated with @Async. See org.springframework.scheduling.annotation.Async.

spring.sleuth.keys.async.prefix

Prefix for header names if they are added as tags.

spring.sleuth.keys.async.thread-name-key

thread

Name of the thread that executed the async method. See org.springframework.scheduling.annotation.Async.

spring.sleuth.keys.http.headers

Additional headers that should be added as tags if they exist. If the header value is multi-valued, the tag value will be a comma-separated, single-quoted list.

spring.sleuth.keys.http.host

http.host

The domain portion of the URL or host header. Example: "mybucket.s3.amazonaws.com". Used to filter by host as opposed to IP address.

spring.sleuth.keys.http.method

http.method

The HTTP method, or verb, such as "GET" or "POST". Used to filter against an http route.

spring.sleuth.keys.http.path

http.path

The absolute http path, without any query parameters. Example: "/objects/abcd-ff". Used to filter against an http route, portably with Zipkin v1. In Zipkin v1 only equality filters are supported, so dropping the query parameters reduces the number of distinct URIs: for example, one can query for the same resource regardless of the signing parameters encoded in the query line. This does not, however, reduce cardinality to a single HTTP route. For example, it is common to express a route as an http URI template like "/resource/{resource_id}", and in systems where only equality queries are available, searching for http.uri=/resource won't match if the actual request was "/resource/abcd-ff". Historical note: this was commonly expressed as "http.uri" in Zipkin, even though it was most often just a path.

spring.sleuth.keys.http.prefix

http.

Prefix for header names if they are added as tags.

spring.sleuth.keys.http.request-size

http.request.size

The size of the non-empty HTTP request body, in bytes. Ex. "16384". Large uploads can exceed limits or contribute directly to latency.

spring.sleuth.keys.http.response-size

http.response.size

The size of the non-empty HTTP response body, in bytes. Ex. "16384". Large downloads can exceed limits or contribute directly to latency.

spring.sleuth.keys.http.status-code

http.status_code

The HTTP response code, when not in the 2xx range. Ex. "503". Used to filter for error status. Codes in the 2xx range are not logged, as success codes are less interesting for latency troubleshooting and omitting them saves at least 20 bytes per span.

spring.sleuth.keys.http.url

http.url

The entire URL, including the scheme, host and query parameters if available. Ex. "https://mybucket.s3.amazonaws.com/objects/abcd-ff?X-Amz-Algorithm=AWS4-HMAC-SHA256…​" Combined with the http.method tag, you can understand the fully-qualified request line. This tag is optional, as it may include private data or be of considerable length.

spring.sleuth.keys.hystrix.command-group

commandGroup

Name of the command group. Hystrix uses the command group key to group together commands, for purposes such as reporting, alerting, dashboards, or team/library ownership. See com.netflix.hystrix.HystrixCommandGroupKey.

spring.sleuth.keys.hystrix.command-key

commandKey

Name of the command key. Describes the name for the given command. A key to represent a com.netflix.hystrix.HystrixCommand for monitoring, circuit-breakers, metrics publishing, caching and other such uses. See com.netflix.hystrix.HystrixCommandKey.

spring.sleuth.keys.hystrix.prefix

Prefix for header names if they are added as tags.

spring.sleuth.keys.hystrix.thread-pool-key

threadPoolKey

Name of the thread pool key. The thread-pool key represents a com.netflix.hystrix.HystrixThreadPool for monitoring, metrics publishing, caching, and other such uses. A HystrixCommand is associated with a single HystrixThreadPool, as retrieved by the HystrixThreadPoolKey injected into it, or it defaults to one created using the HystrixCommandGroupKey it is created with. See com.netflix.hystrix.HystrixThreadPoolKey.

spring.sleuth.keys.message.headers

Additional headers that should be added as tags if they exist. If the header value is not a String it will be converted to a String using its toString() method.

spring.sleuth.keys.message.payload.size

message/payload-size

An estimate of the size of the payload if available.

spring.sleuth.keys.message.payload.type

message/payload-type

The type of the payload.

spring.sleuth.keys.message.prefix

message/

Prefix for header names if they are added as tags.

spring.sleuth.keys.mvc.controller-class

mvc.controller.class

The lower case, hyphen delimited name of the class that processes the request. Ex. a class named "BookController" results in the tag value "book-controller".

spring.sleuth.keys.mvc.controller-method

mvc.controller.method

The lower case, hyphen delimited name of the method that processes the request. Ex. a method named "listOfBooks" results in the tag value "list-of-books".

spring.sleuth.metric.span.accepted-name

counter.span.accepted

spring.sleuth.metric.span.dropped-name

counter.span.dropped

spring.sleuth.sampler.percentage

0.1

Percentage of requests that should be sampled, expressed as a fraction. E.g. 1.0 means 100% of requests should be sampled. The precision is whole percentages only (i.e. there is no support for sampling 0.1% of the traces).
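
Putting the Sleuth properties together: the sketch below samples half of all requests, restricts channel instrumentation to a single pattern, and records one extra HTTP header as a tag. The channel pattern and header name are illustrative, not defaults:

application.yml

spring:
  sleuth:
    sampler:
      percentage: 0.5       # sample 50% of requests (whole-percentage precision)
    integration:
      enabled: true
      patterns:
        - 'request-channel*'  # only instrument channels matching this simple pattern
    keys:
      http:
        headers:
          - x-request-id      # hypothetical header, tagged when present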

zuul.add-host-header

false

Flag to determine whether the proxy forwards the Host header.

zuul.add-proxy-headers

true

Flag to determine whether the proxy adds X-Forwarded-* headers.

zuul.host.max-per-route-connections

20

The maximum number of connections that can be used by a single route.

zuul.host.max-total-connections

200

The maximum number of total connections the proxy can hold open to backends.

zuul.ignore-local-service

true

zuul.ignored-headers

Names of HTTP headers to ignore completely (i.e. leave them out of downstream requests and drop them from downstream responses).

zuul.ignored-patterns

zuul.ignored-services

Set of service names not to consider for proxying automatically. By default all services in the discovery client will be proxied.

zuul.prefix

A common prefix for all routes.

zuul.remove-semicolon-content

true

Flag to say that path elements past the first semicolon can be dropped.

zuul.retryable

Flag for whether retry is supported by default (assuming the routes themselves support it).

zuul.ribbon-isolation-strategy

zuul.routes

Map of route names to properties.

zuul.security_headers

Headers that are generally expected to be added by Spring Security, and hence often duplicated if the proxy and the backend are secured with Spring. By default they are added to the ignored headers if Spring Security is present.

zuul.semaphore.max-semaphores

100

The maximum number of total semaphores for Hystrix.

zuul.sensitive-headers

List of sensitive headers that are not passed to downstream requests. Defaults to a "safe" set of headers that commonly contain user credentials. It's OK to remove those from the list if the downstream service is part of the same system as the proxy, so they are sharing authentication data. If you are proxying to a physical URL outside your own domain, it is generally a bad idea to leak user credentials.

zuul.servlet-path

/zuul

Path to install Zuul as a servlet (not part of Spring MVC). The servlet is more memory efficient for requests with large bodies, e.g. file uploads.

zuul.ssl-hostname-validation-enabled

true

Flag to say whether the hostname for SSL connections should be verified. Default is true. Disabling verification should only be done in test setups!

zuul.strip-prefix

true

Flag saying whether to strip the prefix from the path before forwarding.

zuul.trace-request-body

true

Flag to say that request bodies can be traced.
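
Finally, many of the zuul.* properties above are typically combined in a single reverse-proxy configuration. In the sketch below the prefix, route name, service id and connection limits are illustrative values:

application.yml

zuul:
  prefix: /api                 # common prefix for all routes
  strip-prefix: true           # remove the prefix before forwarding
  add-proxy-headers: true      # add X-Forwarded-* headers
  ignored-services: '*'        # do not proxy discovered services automatically
  routes:
    users:                     # hypothetical route name
      path: /users/**
      serviceId: users-service # hypothetical backend service
  sensitive-headers:           # headers never passed downstream
    - Cookie
    - Set-Cookie
    - Authorization
  host:
    max-total-connections: 200
    max-per-route-connections: 20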