Spring Cloud provides tools for developers to quickly build some of the common patterns in distributed systems (e.g. configuration management, service discovery, circuit breakers, intelligent routing, micro-proxy, control bus). Coordination of distributed systems leads to boilerplate patterns, and using Spring Cloud developers can quickly stand up services and applications that implement those patterns. They will work well in any distributed environment, including the developer’s own laptop, bare metal data centres, and managed platforms such as Cloud Foundry.
Version: Dalston.SR5
Spring Cloud focuses on providing a good out-of-the-box experience for typical use cases, and extensibility mechanisms to cover other cases.
Cloud Native is a style of application development that encourages easy adoption of best practices in the areas of continuous delivery and value-driven development. A related discipline is that of building 12-factor Apps in which development practices are aligned with delivery and operations goals, for instance by using declarative programming and management and monitoring. Spring Cloud facilitates these styles of development in a number of specific ways and the starting point is a set of features that all components in a distributed system either need or need easy access to when required.
Many of those features are covered by Spring Boot, which we build on in Spring Cloud. Some more are delivered by Spring Cloud as two libraries: Spring Cloud Context and Spring Cloud Commons. Spring Cloud Context provides utilities and special services for the ApplicationContext
of a Spring Cloud application (bootstrap context, encryption, refresh scope and environment endpoints). Spring Cloud Commons is a set of abstractions and common classes used in different Spring Cloud implementations (e.g. Spring Cloud Netflix vs. Spring Cloud Consul).
If you are getting an exception due to "Illegal key size" and you are using Sun’s JDK, you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files. See the following links for more information:
Extract files into JDK/jre/lib/security folder (whichever version of JRE/JDK x64/x86 you are using).
Note: Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, please find the source code and issue trackers in the project at github.
Spring Boot has an opinionated view of how to build an application with Spring: for instance it has conventional locations for common configuration files, and endpoints for common management and monitoring tasks. Spring Cloud builds on top of that and adds a few features that probably all components in a system would use or occasionally need.
A Spring Cloud application operates by creating a "bootstrap"
context, which is a parent context for the main application. Out of
the box it is responsible for loading configuration properties from
the external sources, and also decrypting properties in the local
external configuration files. The two contexts share an Environment
which is the source of external properties for any Spring
application. Bootstrap properties are added with high precedence, so
they cannot be overridden by local configuration, by default.
The bootstrap context uses a different convention for locating
external configuration than the main application context, so instead
of application.yml
(or .properties
) you use bootstrap.yml
,
keeping the external configuration for bootstrap and main context
nicely separate. Example:
bootstrap.yml.
spring:
  application:
    name: foo
  cloud:
    config:
      uri: ${SPRING_CONFIG_URI:http://localhost:8888}
It is a good idea to set the spring.application.name
(in
bootstrap.yml
or application.yml
) if your application needs any
application-specific configuration from the server.
You can disable the bootstrap process completely by setting
spring.cloud.bootstrap.enabled=false
(e.g. in System properties).
If you build an application context from SpringApplication
or
SpringApplicationBuilder
, then the Bootstrap context is added as a
parent to that context. It is a feature of Spring that child contexts
inherit property sources and profiles from their parent, so the "main"
application context will contain additional property sources, compared
to building the same context without Spring Cloud Config. The
additional property sources are:
- A CompositePropertySource named "bootstrap" appears with high priority if any PropertySourceLocators are found in the Bootstrap context, and they have non-empty properties. An example would be properties from the Spring Cloud Config Server. See below for instructions on how to customize the contents of this property source.
- If you have a bootstrap.yml (or properties) then those properties are used to configure the Bootstrap context, and then they get added to the child context when its parent is set. They have lower precedence than the application.yml (or properties) and any other property sources that are added to the child as a normal part of the process of creating a Spring Boot application. See below for instructions on how to customize the contents of these property sources.

Because of the ordering rules of property sources the "bootstrap" entries take precedence, but note that these do not contain any data from bootstrap.yml, which has very low precedence, but can be used to set defaults.
You can extend the context hierarchy by simply setting the parent
context of any ApplicationContext
you create, e.g. using its own
interface, or with the SpringApplicationBuilder
convenience methods
(parent()
, child()
and sibling()
). The bootstrap context will be
the parent of the most senior ancestor that you create yourself.
Every context in the hierarchy will have its own "bootstrap" property
source (possibly empty) to avoid promoting values inadvertently from
parents down to their descendants. Every context in the hierarchy can
also (in principle) have a different spring.application.name
and
hence a different remote property source if there is a Config
Server. Normal Spring application context behaviour rules apply to
property resolution: properties from a child context override those in
the parent, by name and also by property source name (if the child has
a property source with the same name as the parent, the one from the
parent is not included in the child).
Note that the SpringApplicationBuilder
allows you to share an
Environment
amongst the whole hierarchy, but that is not the
default. Thus, sibling contexts in particular do not need to have the
same profiles or property sources, even though they will share common
things with their parent.
The bootstrap.yml
(or .properties
) location can be specified using
spring.cloud.bootstrap.name
(default "bootstrap") or
spring.cloud.bootstrap.location
(default empty), e.g. in System
properties. Those properties behave like the spring.config.*
variants with the same name, in fact they are used to set up the
bootstrap ApplicationContext
by setting those properties in its
Environment
. If there is an active profile (from
spring.profiles.active
or through the Environment
API in the
context you are building) then properties in that profile will be
loaded as well, just like in a regular Spring Boot app, e.g. from
bootstrap-development.properties
for a "development" profile.
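As an illustrative sketch (the file name "cloud" and the profile are made up for this example), launching with System properties like the following would make the bootstrap phase read cloud.yml (or cloud.properties) plus cloud-development.yml instead of bootstrap.yml:

java -Dspring.cloud.bootstrap.name=cloud -Dspring.profiles.active=development -jar myapp.jar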
The property sources that are added to your application by the
bootstrap context are often "remote" (e.g. from a Config Server), and
by default they cannot be overridden locally, except on the command
line. If you want to allow your applications to override the remote
properties with their own System properties or config files, the
remote property source has to grant it permission by setting
spring.cloud.config.allowOverride=true
(it doesn’t work to set this
locally). Once that flag is set there are some finer grained settings
to control the location of the remote properties in relation to System
properties and the application’s local configuration:
spring.cloud.config.overrideNone=true
to override with any local
property source, and
spring.cloud.config.overrideSystemProperties=false
if only System
properties and env vars should override the remote settings, but not
the local config files.
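For example, a remote property source served by the Config Server might contain a sketch like the following (the combination of flags is illustrative); with these settings clients may override the remote values from their own System properties and environment variables, but not from their local config files:

spring:
  cloud:
    config:
      allowOverride: true
      overrideNone: false
      overrideSystemProperties: false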
The bootstrap context can be trained to do anything you like by adding
entries to /META-INF/spring.factories
under the key
org.springframework.cloud.bootstrap.BootstrapConfiguration
. This is
a comma-separated list of Spring @Configuration
classes which will
be used to create the context. Any beans that you want to be available
to the main application context for autowiring can be created here,
and also there is a special contract for @Beans
of type
ApplicationContextInitializer
. Classes can be marked with an @Order
if you want to control the startup sequence (the default order is
"last").
Warning: Be careful when adding custom BootstrapConfiguration classes that they are not accidentally component-scanned into your "main" application context, where they might not be needed. Use a separate package name for bootstrap configuration classes, and make sure that package is not already covered by your @ComponentScan or @SpringBootApplication configuration.
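As a minimal sketch (the class, package and property names below are invented for illustration), a custom bootstrap configuration that contributes an ApplicationContextInitializer bean might look like this:

@Configuration
public class MyBootstrapConfiguration {

    // Beans defined here live in the bootstrap context; ApplicationContextInitializer
    // beans are applied to the main SpringApplication before it starts.
    @Bean
    public ApplicationContextInitializer<ConfigurableApplicationContext> myInitializer() {
        return applicationContext -> applicationContext.getEnvironment()
                .getPropertySources()
                .addFirst(new MapPropertySource("myBootstrap",
                        Collections.<String, Object>singletonMap("custom.flag", "from-bootstrap")));
    }

}

registered in META-INF/spring.factories under the key mentioned above:

org.springframework.cloud.bootstrap.BootstrapConfiguration=com.example.bootstrap.MyBootstrapConfiguration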
The bootstrap process ends by injecting initializers into the main
SpringApplication
instance (i.e. the normal Spring Boot startup
sequence, whether it is running as a standalone app or deployed in an
application server). First a bootstrap context is created from the
classes found in spring.factories
and then all @Beans
of type
ApplicationContextInitializer
are added to the main
SpringApplication
before it is started.
The default property source for external configuration added by the
bootstrap process is the Config Server, but you can add additional
sources by adding beans of type PropertySourceLocator
to the
bootstrap context (via spring.factories
). You could use this to
insert additional properties from a different server, or from a
database, for instance.
As an example, consider the following trivial custom locator:
@Configuration
public class CustomPropertySourceLocator implements PropertySourceLocator {

    @Override
    public PropertySource<?> locate(Environment environment) {
        return new MapPropertySource("customProperty",
                Collections.<String, Object>singletonMap("property.from.sample.custom.source",
                        "worked as intended"));
    }

}
The Environment
that is passed in is the one for the
ApplicationContext
about to be created, i.e. the one that we are
supplying additional property sources for. It will already have its
normal Spring Boot-provided property sources, so you can use those to
locate a property source specific to this Environment
(e.g. by
keying it on the spring.application.name
, as is done in the default
Config Server property source locator).
If you create a jar with this class in it and then add a
META-INF/spring.factories
containing:
org.springframework.cloud.bootstrap.BootstrapConfiguration=sample.custom.CustomPropertySourceLocator
then the "customProperty" PropertySource
will show up in any
application that includes that jar on its classpath.
The application will listen for an EnvironmentChangeEvent
and react
to the change in a couple of standard ways (additional
ApplicationListeners
can be added as @Beans
by the user in the
normal way). When an EnvironmentChangeEvent
is observed it will
have a list of key values that have changed, and the application will
use those to:
- re-bind any @ConfigurationProperties beans in the context
- set the logger levels for any properties in logging.level.*
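The event can also be published from your own code. A minimal sketch of one way to do that (the class and property source names here are invented for illustration):

@Component
public class PropertyUpdater {

    private final ConfigurableEnvironment environment;
    private final ApplicationEventPublisher publisher;

    public PropertyUpdater(ConfigurableEnvironment environment, ApplicationEventPublisher publisher) {
        this.environment = environment;
        this.publisher = publisher;
    }

    public void update(String key, String value) {
        // Add a high-priority property source holding the new value...
        this.environment.getPropertySources().addFirst(
                new MapPropertySource("manual", Collections.<String, Object>singletonMap(key, value)));
        // ...then tell listeners (logger levels, @ConfigurationProperties re-binding) which keys changed.
        this.publisher.publishEvent(new EnvironmentChangeEvent(Collections.singleton(key)));
    }

}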
Note that the Config Client does not by default poll for changes in
the Environment
, and generally we would not recommend that approach
for detecting changes (although you could set it up with a
@Scheduled
annotation). If you have a scaled-out client application
then it is better to broadcast the EnvironmentChangeEvent
to all
the instances instead of having them polling for changes (e.g. using
the Spring Cloud
Bus).
The EnvironmentChangeEvent
covers a large class of refresh use
cases, as long as you can actually make a change to the Environment
and publish the event (those APIs are public and part of core
Spring). You can verify the changes are bound to
@ConfigurationProperties
beans by visiting the /configprops
endpoint (normal Spring Boot Actuator feature). For instance a
DataSource
can have its maxPoolSize
changed at runtime (the
default DataSource
created by Spring Boot is an
@ConfigurationProperties
bean) and grow capacity
dynamically. Re-binding @ConfigurationProperties
does not cover
another large class of use cases, where you need more control over the
refresh, and where you need a change to be atomic over the whole
ApplicationContext
. To address those concerns we have
@RefreshScope
.
A Spring @Bean
that is marked as @RefreshScope
will get special
treatment when there is a configuration change. This addresses the
problem of stateful beans that only get their configuration injected
when they are initialized. For instance if a DataSource
has open
connections when the database URL is changed via the Environment
, we
probably want the holders of those connections to be able to complete
what they are doing. Then the next time someone borrows a connection
from the pool he gets one with the new URL.
Refresh scope beans are lazy proxies that initialize when they are used (i.e. when a method is called), and the scope acts as a cache of initialized values. To force a bean to re-initialize on the next method call you just need to invalidate its cache entry.
The RefreshScope
is a bean in the context and it has a public method
refreshAll()
to refresh all beans in the scope by clearing the
target cache. There is also a refresh(String)
method to refresh an
individual bean by name. This functionality is exposed in the
/refresh
endpoint (over HTTP or JMX).
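For example, a bean like the following sketch (the property name my.message is illustrative) is re-created lazily after a refresh, so the next method call sees the latest configuration:

@RefreshScope
@Component
public class MessageService {

    @Value("${my.message:hello}")   // illustrative property; default is "hello"
    private String message;

    public String getMessage() {
        // After RefreshScope#refreshAll() (e.g. via POST /refresh) the proxy target is
        // re-initialized on the next call, picking up a changed my.message value.
        return this.message;
    }

}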
Spring Cloud has an Environment
pre-processor for decrypting
property values locally. It follows the same rules as the Config
Server, and has the same external configuration via encrypt.*
. Thus
you can use encrypted values in the form {cipher}*
and as long as
there is a valid key then they will be decrypted before the main
application context gets the Environment
. To use the encryption
features in an application you need to include Spring Security RSA in
your classpath (Maven co-ordinates
"org.springframework.security:spring-security-rsa") and you also need
the full strength JCE extensions in your JVM.
If you are getting an exception due to "Illegal key size" and you are using Sun’s JDK, you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files. See the following links for more information:
Extract files into JDK/jre/lib/security folder (whichever version of JRE/JDK x64/x86 you are using).
For a Spring Boot Actuator application there are some additional management endpoints:
- /env to update the Environment and rebind @ConfigurationProperties and log levels
- /refresh for re-loading the bootstrap context and refreshing the @RefreshScope beans
- /restart for closing the ApplicationContext and restarting it (disabled by default)
- /pause and /resume for calling the Lifecycle methods (stop() and start() on the ApplicationContext)

Patterns such as service discovery, load balancing and circuit breakers lend themselves to a common abstraction layer that can be consumed by all Spring Cloud clients, independent of the implementation (e.g. discovery via Eureka or Consul).
Commons provides the @EnableDiscoveryClient
annotation. This looks for implementations of the DiscoveryClient
interface via META-INF/spring.factories
. Implementations of Discovery Client will add a configuration class to spring.factories
under the org.springframework.cloud.client.discovery.EnableDiscoveryClient
key. Examples of DiscoveryClient
implementations are Spring Cloud Netflix Eureka, Spring Cloud Consul Discovery, and Spring Cloud Zookeeper Discovery.
By default, implementations of DiscoveryClient
will auto-register the local Spring Boot server with the remote discovery server. This can be disabled by setting autoRegister=false
in @EnableDiscoveryClient
.
Commons now provides a ServiceRegistry
interface which provides methods like register(Registration)
and deregister(Registration)
which allow you to provide custom registered services. Registration
is a marker interface.
@Configuration
@EnableDiscoveryClient(autoRegister=false)
public class MyConfiguration {
    private ServiceRegistry registry;

    public MyConfiguration(ServiceRegistry registry) {
        this.registry = registry;
    }

    // called via some external process, such as an event or a custom actuator endpoint
    public void register() {
        Registration registration = constructRegistration();
        this.registry.register(registration);
    }
}
Each ServiceRegistry
implementation has its own Registration
implementation.
By default, the ServiceRegistry
implementation will auto-register the running service. To disable that behavior, there are two methods. You can set @EnableDiscoveryClient(autoRegister=false)
to permanently disable auto-registration. You can also set spring.cloud.service-registry.auto-registration.enabled=false
to disable the behavior via configuration.
A /service-registry
actuator endpoint is provided by Commons. This endpoint relies on a Registration
bean in the Spring Application Context. Calling /service-registry/instance-status
via a GET will return the status of the Registration
. A POST to the same endpoint with a String
body will change the status of the current Registration
to the new value. Please see the documentation of the ServiceRegistry
implementation you are using for the allowed values when updating the status and the values returned for the status.
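A sketch of driving the endpoint with curl (the port is illustrative, and OUT_OF_SERVICE is a Eureka-style status; consult your ServiceRegistry implementation's documentation for the values it accepts):

$ curl localhost:8080/service-registry/instance-status
$ curl -X POST -H "Content-Type: text/plain" -d OUT_OF_SERVICE localhost:8080/service-registry/instance-status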
RestTemplate
can be automatically configured to use ribbon. To create a load balanced RestTemplate
create a RestTemplate
@Bean
and use the @LoadBalanced
qualifier.
Warning: A RestTemplate bean is no longer created via auto-configuration. It must be created by individual applications, as shown below.
@Configuration
public class MyConfiguration {

    @LoadBalanced
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

public class MyClass {
    @Autowired
    private RestTemplate restTemplate;

    public String doOtherStuff() {
        String results = restTemplate.getForObject("http://stores/stores", String.class);
        return results;
    }
}
The URI needs to use a virtual host name (i.e. the service name, not a host name).
The Ribbon client is used to create a full physical address. See
RibbonAutoConfiguration
for details of how the RestTemplate
is set up.
A load balanced RestTemplate
can be configured to retry failed requests.
By default this logic is disabled, you can enable it by adding Spring Retry to your application’s classpath. The load balanced RestTemplate
will
honor some of the Ribbon configuration values related to retrying failed requests. If
you would like to disable the retry logic with Spring Retry on the classpath
you can set spring.cloud.loadbalancer.retry.enabled=false
.
The properties you can use are client.ribbon.MaxAutoRetries
,
client.ribbon.MaxAutoRetriesNextServer
, and client.ribbon.OkToRetryOnAllOperations
.
See the Ribbon documentation
for a description of what these properties do.
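For example, for a service whose Ribbon client name is stores (the client name and the values below are illustrative):

stores:
  ribbon:
    MaxAutoRetries: 1
    MaxAutoRetriesNextServer: 2
    OkToRetryOnAllOperations: true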
If you want a RestTemplate
that is not load balanced, create a RestTemplate
bean and inject it as normal. To access the load balanced RestTemplate
use
the @LoadBalanced
qualifier when you create your @Bean
.
Important: Notice the @Primary annotation on the plain RestTemplate declaration in the example below; it disambiguates the unqualified @Autowired injection.
@Configuration
public class MyConfiguration {

    @LoadBalanced
    @Bean
    RestTemplate loadBalanced() {
        return new RestTemplate();
    }

    @Primary
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

public class MyClass {
    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    @LoadBalanced
    private RestTemplate loadBalanced;

    public String doOtherStuff() {
        return loadBalanced.getForObject("http://stores/stores", String.class);
    }

    public String doStuff() {
        return restTemplate.getForObject("http://example.com", String.class);
    }
}
Tip: If you see errors like java.lang.IllegalArgumentException: Can not set ... RestTemplate field ... to com.sun.proxy.$Proxy, try injecting RestOperations instead, or set spring.aop.proxyTargetClass=true.
Sometimes it is useful to ignore certain named network interfaces so they can be excluded from Service Discovery registration (e.g. when running in a Docker container). A list of regular expressions can be set that will cause the matching network interfaces to be ignored. The following configuration will ignore the "docker0" interface and all interfaces that start with "veth".
application.yml.
spring:
  cloud:
    inetutils:
      ignoredInterfaces:
        - docker0
        - veth.*
You can also force the use of only specified network addresses, using a list of regular expressions:
application.yml.
spring:
  cloud:
    inetutils:
      preferredNetworks:
        - 192.168
        - 10.0
You can also force the use of only site-local addresses. See Inet4Address.isSiteLocalAddress() for details of what constitutes a site-local address.
application.yml.
spring:
  cloud:
    inetutils:
      useOnlySiteLocalInterfaces: true
Dalston.SR5
Spring Cloud Config provides server and client-side support for externalized configuration in a distributed system. With the Config Server you have a central place to manage external properties for applications across all environments. The concepts on both client and server map identically to the Spring Environment
and PropertySource
abstractions, so they fit very well with Spring applications, but can be used with any application running in any language. As an application moves through the deployment pipeline from dev to test and into production you can manage the configuration between those environments and be certain that applications have everything they need to run when they migrate. The default implementation of the server storage backend uses git so it easily supports labelled versions of configuration environments, as well as being accessible to a wide range of tooling for managing the content. It is easy to add alternative implementations and plug them in with Spring configuration.
Start the server:
$ cd spring-cloud-config-server
$ ../mvnw spring-boot:run
The server is a Spring Boot application so you can run it from your
IDE instead if you prefer (the main class is
ConfigServerApplication
). Then try out a client:
$ curl localhost:8888/foo/development
{"name":"development","label":"master","propertySources":[
  {"name":"https://github.com/scratches/config-repo/foo-development.properties","source":{"bar":"spam"}},
  {"name":"https://github.com/scratches/config-repo/foo.properties","source":{"foo":"bar"}}
]}
The default strategy for locating property sources is to clone a git
repository (at spring.cloud.config.server.git.uri
) and use it to
initialize a mini SpringApplication
. The mini-application’s
Environment
is used to enumerate property sources and publish them
via a JSON endpoint.
The HTTP service has resources in the form:
/{application}/{profile}[/{label}]
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
/{application}-{profile}.properties
/{label}/{application}-{profile}.properties
where the "application" is injected as the spring.config.name
in the
SpringApplication
(i.e. what is normally "application" in a regular
Spring Boot app), "profile" is an active profile (or comma-separated
list of profiles), and "label" is an optional git label (defaults to
"master".)
Spring Cloud Config Server pulls configuration for remote clients from a git repository (which must be provided):
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
To use these features in an application, just build it as a Spring
Boot application that depends on spring-cloud-config-client (e.g. see
the test cases for the config-client, or the sample app). The most
convenient way to add the dependency is via a Spring Boot starter
org.springframework.cloud:spring-cloud-starter-config
. There is also a
parent pom and BOM (spring-cloud-starter-parent
) for Maven users and a
Spring IO version management properties file for Gradle and Spring CLI
users. Example Maven configuration:
pom.xml.
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.3.5.RELEASE</version>
    <relativePath /> <!-- lookup parent from repository -->
</parent>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Brixton.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

<!-- repositories also needed for snapshots and milestones -->
Then you can create a standard Spring Boot application, like this simple HTTP server:
@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}
When it runs it will pick up the external configuration from the
default local config server on port 8888 if it is running. To modify
the startup behaviour you can change the location of the config server
using bootstrap.properties
(like application.properties
but for
the bootstrap phase of an application context), e.g.
spring.cloud.config.uri: http://myconfigserver.com
The bootstrap properties will show up in the /env
endpoint as a
high-priority property source, e.g.
$ curl localhost:8080/env
{
  "profiles":[],
  "configService:https://github.com/spring-cloud-samples/config-repo/bar.properties":{"foo":"bar"},
  "servletContextInitParams":{},
  "systemProperties":{...},
  ...
}
(a property source called "configService:<URL of remote repository>/<file name>" contains the property "foo" with value "bar" and is highest priority).
Note: the URL in the property source name is the git repository, not the config server URL.
The Server provides an HTTP, resource-based API for external
configuration (name-value pairs, or equivalent YAML content). The
server is easily embeddable in a Spring Boot application using the
@EnableConfigServer
annotation. So this app is a config server:
ConfigServer.java.
@SpringBootApplication
@EnableConfigServer
public class ConfigServer {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }
}
Like all Spring Boot apps it runs on port 8080 by default, but you
can switch it to the conventional port 8888 in various ways. The
easiest, which also sets a default configuration repository,
is by launching it with spring.config.name=configserver
(there
is a configserver.yml
in the Config Server jar). Another is
to use your own application.properties
, e.g.
application.properties.
server.port: 8888
spring.cloud.config.server.git.uri: file://${user.home}/config-repo
where ${user.home}/config-repo
is a git repository containing
YAML and properties files.
Note: in Windows you need an extra "/" in the file URL if it is absolute with a drive prefix, e.g. file:///${user.home}/config-repo.
Tip: Here’s a recipe for creating the git repository in the example above:

$ cd $HOME
$ mkdir config-repo
$ cd config-repo
$ git init .
$ echo info.foo: bar > application.properties
$ git add -A .
$ git commit -m "Add application.properties"
Warning: using the local filesystem for your git repository is intended for testing only. Use a server to host your configuration repositories in production.
Warning: the initial clone of your configuration repository will be quick and efficient if you only keep text files in it. If you start to store binary files, especially large ones, you may experience delays on the first request for configuration and/or out of memory errors in the server.
Where do you want to store the configuration data for the Config
Server? The strategy that governs this behaviour is the
EnvironmentRepository
, serving Environment
objects. This
Environment
is a shallow copy of the domain from the Spring
Environment
(including propertySources
as the main feature). The
Environment
resources are parametrized by three variables:
- {application}, which maps to "spring.application.name" on the client side;
- {profile}, which maps to "spring.profiles.active" on the client (comma separated list); and
- {label}, which is a server side feature labelling a "versioned" set of config files.

Repository implementations generally behave just like a Spring Boot
application loading configuration files from a "spring.config.name"
equal to the {application}
parameter, and "spring.profiles.active"
equal to the {profiles}
parameter. Precedence rules for profiles are
also the same as in a regular Boot application: active profiles take
precedence over defaults, and if there are multiple profiles the last
one wins (like adding entries to a Map
).
Example: a client application has this bootstrap configuration:
bootstrap.yml.
spring:
  application:
    name: foo
  profiles:
    active: dev,mysql
(as usual with a Spring Boot application, these properties could also be set as environment variables or command line arguments).
If the repository is file-based, the server will create an
Environment
from application.yml
(shared between all clients), and
foo.yml
(with foo.yml
taking precedence). If the YAML files have
documents inside them that point to Spring profiles, those are applied
with higher precedence (in order of the profiles listed), and if
there are profile-specific YAML (or properties) files these are also
applied with higher precedence than the defaults. Higher precedence
translates to a PropertySource
listed earlier in the
Environment
. (These are the same rules as apply in a standalone
Spring Boot application.)
The default implementation of EnvironmentRepository
uses a Git
backend, which is very convenient for managing upgrades and physical
environments, and also for auditing changes. To change the location of
the repository you can set the "spring.cloud.config.server.git.uri"
configuration property in the Config Server (e.g. in
application.yml
). If you set it with a file:
prefix it should work
from a local repository so you can get started quickly and easily
without a server, but in that case the server operates directly on the
local repository without cloning it (it doesn’t matter if it’s not
bare because the Config Server never makes changes to the "remote"
repository). To scale the Config Server up and make it highly
available, you would need to have all instances of the server pointing
to the same repository, so only a shared file system would work. Even
in that case it is better to use the ssh:
protocol for a shared
filesystem repository, so that the server can clone it and use a local
working copy as a cache.
This repository implementation maps the {label}
parameter of the
HTTP resource to a git label (commit id, branch name or tag). If the
git branch or tag name contains a slash ("/") then the label in the
HTTP URL should be specified with the special string "(_)" instead (to
avoid ambiguity with other URL paths). For example, if the label is
foo/bar
, replacing the slash would result in a label that looks like
foo(_)bar
. Be careful with the brackets in
the URL if you are using a command line client like curl (e.g. escape
them from the shell with quotes '').
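For example, to fetch configuration for the foo application in the development profile from a hypothetical branch named feature/mybranch (the single quotes stop the shell from interpreting the parentheses):

$ curl 'http://localhost:8888/foo/development/feature(_)mybranch'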
Spring Cloud Config Server supports a git repository URL with
placeholders for the {application}
and {profile}
(and {label}
if
you need it, but remember that the label is applied as a git label
anyway). So you can easily support a "one repo per application" policy
using (for example):
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/myorg/{application}
or a "one repo per profile" policy using a similar pattern but with
{profile}
.
There is also support for more complex requirements with pattern
matching on the application and profile name. The pattern format is a
comma-separated list of {application}/{profile}
names with wildcards
(where a pattern beginning with a wildcard may need to be
quoted). Example:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          repos:
            simple: https://github.com/simple/config-repo
            special:
              pattern: special*/dev*,*special*/dev*
              uri: https://github.com/special/config-repo
            local:
              pattern: local*
              uri: file:/home/configsvc/config-repo
If {application}/{profile}
does not match any of the patterns, it
will use the default uri defined under
"spring.cloud.config.server.git.uri". In the above example, for the
"simple" repository, the pattern is simple/*
(i.e. it only matches
one application named "simple" in all profiles). The "local"
repository matches all application names beginning with "local" in all
profiles (the /*
suffix is added automatically to any pattern that
doesn’t have a profile matcher).
Note: the "one-liner" short cut used in the "simple" example above can only be used if the only property to be set is the URI. If you need to set anything else (credentials, pattern, etc.) you need to use the full form.
The pattern
property in the repo is actually an array, so you can
use a YAML array (or [0]
, [1]
, etc. suffixes in properties files)
to bind to multiple patterns. You may need to do this if you are going
to run apps with multiple profiles. Example:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          repos:
            development:
              pattern:
                - '*/development'
                - '*/staging'
              uri: https://github.com/development/config-repo
            staging:
              pattern:
                - '*/qa'
                - '*/production'
              uri: https://github.com/staging/config-repo
Note: Spring Cloud will guess that a pattern containing a profile that doesn’t end in * implies that you actually want to match a list of profiles starting with this pattern (so */staging is a shortcut for */staging, */staging,*, and so on).
Every repository can also optionally store config files in
sub-directories, and patterns to search for those directories can be
specified as searchPaths
. For example at the top level:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          searchPaths: foo,bar*
In this example the server searches for config files in the top level and in the "foo/" sub-directory and also any sub-directory whose name begins with "bar".
By default the server clones remote repositories when configuration is first requested. The server can be configured to clone the repositories at startup. For example at the top level:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://git/common/config-repo.git
          repos:
            team-a:
              pattern: team-a-*
              cloneOnStart: true
              uri: http://git/team-a/config-repo.git
            team-b:
              pattern: team-b-*
              cloneOnStart: false
              uri: http://git/team-b/config-repo.git
            team-c:
              pattern: team-c-*
              uri: http://git/team-a/config-repo.git
In this example the server clones team-a’s config-repo on startup before it accepts any requests. All other repositories will not be cloned until configuration from the repository is requested.
Note: Setting a repository to be cloned when the Config Server starts up can help to identify a misconfigured configuration source (e.g. an invalid repository URI) quickly, while the Config Server is starting up. With cloneOnStart not enabled for a configuration source, the Config Server may start successfully with a misconfigured or invalid configuration source and not detect an error until an application requests configuration from that source.
To use HTTP basic authentication on the remote repository add the "username" and "password" properties separately (not in the URL), e.g.
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          username: trolley
          password: strongpassword
If you don’t use HTTPS and user credentials, SSH should also work out
of the box when you store keys in the default directories (~/.ssh
)
and the uri points to an SSH location,
e.g. "[email protected]:configuration/cloud-configuration". It is important that an entry for the Git server be present in the ~/.ssh/known_hosts
file and that it is in ssh-rsa
format. Other formats (like ecdsa-sha2-nistp256
) are not supported. To avoid surprises, you should ensure that only one entry is present in the known_hosts
file for the Git server and that it matches the URL you provided to the config server. If you used a hostname in the URL, you want to have exactly that hostname in the known_hosts
file, not the IP.
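One way to collect an ssh-rsa entry is ssh-keyscan (the hostname below is illustrative); verify the key fingerprint out of band before trusting it:

$ ssh-keyscan -t rsa gitserver.example.com >> ~/.ssh/known_hosts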
The repository is accessed using JGit, so any documentation you find on
that should be applicable. HTTPS proxy settings can be set in
~/.git/config
or in the same way as for any other JVM process via
system properties (-Dhttps.proxyHost
and -Dhttps.proxyPort
).
Tip: If you don’t know where your ~/.git directory is, use git config --global to manipulate the settings (e.g. git config --global http.sslVerify false).
AWS CodeCommit authentication can also be done. AWS CodeCommit uses an authentication helper when using Git from the command line. This helper is not used with the JGit library, so a JGit CredentialProvider for AWS CodeCommit will be created if the Git URI matches the AWS CodeCommit pattern. AWS CodeCommit URIs always look like https://git-codecommit.${AWS_REGION}.amazonaws.com/${repopath}.
If you provide a username and password with an AWS CodeCommit URI, then these must be the AWS accessKeyId and secretAccessKey to be used to access the repository. If you do not specify a username and password, then the accessKeyId and secretAccessKey will be retrieved using the AWS Default Credential Provider Chain.
If your Git URI matches the CodeCommit URI pattern (above) then you must provide valid AWS credentials in the username and password, or in one of the locations supported by the default credential provider chain. AWS EC2 instances may use IAM Roles for EC2 Instances.
Note: The aws-java-sdk-core jar is an optional dependency. If the aws-java-sdk-core jar is not on your classpath, then the AWS Code Commit credential provider will not be created regardless of the git server URI.
By default, the JGit library used by Spring Cloud Config Server uses SSH configuration files such as ~/.ssh/known_hosts
and /etc/ssh/ssh_config
when connecting to Git repositories using an SSH URI.
In cloud environments such as Cloud Foundry, the local filesystem may be ephemeral or not easily accessible. For cases such as these, SSH configuration can be set using
Java properties. In order to activate property based SSH configuration, the property spring.cloud.config.server.git.ignoreLocalSshSettings
must be set to true
.
Example:
spring: cloud: config: server: git: uri: git@gitserver.com:team/repo1.git ignoreLocalSshSettings: true hostKey: someHostKey hostKeyAlgorithm: ssh-rsa privateKey: | -----BEGIN RSA PRIVATE KEY----- MIIEpgIBAAKCAQEAx4UbaDzY5xjW6hc9jwN0mX33XpTDVW9WqHp5AKaRbtAC3DqX IXFMPgw3K45jxRb93f8tv9vL3rD9CUG1Gv4FM+o7ds7FRES5RTjv2RT/JVNJCoqF ol8+ngLqRZCyBtQN7zYByWMRirPGoDUqdPYrj2yq+ObBBNhg5N+hOwKjjpzdj2Ud 1l7R+wxIqmJo1IYyy16xS8WsjyQuyC0lL456qkd5BDZ0Ag8j2X9H9D5220Ln7s9i oezTipXipS7p7Jekf3Ywx6abJwOmB0rX79dV4qiNcGgzATnG1PkXxqt76VhcGa0W DDVHEEYGbSQ6hIGSh0I7BQun0aLRZojfE3gqHQIDAQABAoIBAQCZmGrk8BK6tXCd fY6yTiKxFzwb38IQP0ojIUWNrq0+9Xt+NsypviLHkXfXXCKKU4zUHeIGVRq5MN9b BO56/RrcQHHOoJdUWuOV2qMqJvPUtC0CpGkD+valhfD75MxoXU7s3FK7yjxy3rsG EmfA6tHV8/4a5umo5TqSd2YTm5B19AhRqiuUVI1wTB41DjULUGiMYrnYrhzQlVvj 5MjnKTlYu3V8PoYDfv1GmxPPh6vlpafXEeEYN8VB97e5x3DGHjZ5UrurAmTLTdO8 +AahyoKsIY612TkkQthJlt7FJAwnCGMgY6podzzvzICLFmmTXYiZ/28I4BX/mOSe pZVnfRixAoGBAO6Uiwt40/PKs53mCEWngslSCsh9oGAaLTf/XdvMns5VmuyyAyKG ti8Ol5wqBMi4GIUzjbgUvSUt+IowIrG3f5tN85wpjQ1UGVcpTnl5Qo9xaS1PFScQ xrtWZ9eNj2TsIAMp/svJsyGG3OibxfnuAIpSXNQiJPwRlW3irzpGgVx/AoGBANYW dnhshUcEHMJi3aXwR12OTDnaLoanVGLwLnkqLSYUZA7ZegpKq90UAuBdcEfgdpyi PhKpeaeIiAaNnFo8m9aoTKr+7I6/uMTlwrVnfrsVTZv3orxjwQV20YIBCVRKD1uX VhE0ozPZxwwKSPAFocpyWpGHGreGF1AIYBE9UBtjAoGBAI8bfPgJpyFyMiGBjO6z FwlJc/xlFqDusrcHL7abW5qq0L4v3R+FrJw3ZYufzLTVcKfdj6GelwJJO+8wBm+R gTKYJItEhT48duLIfTDyIpHGVm9+I1MGhh5zKuCqIhxIYr9jHloBB7kRm0rPvYY4 VAykcNgyDvtAVODP+4m6JvhjAoGBALbtTqErKN47V0+JJpapLnF0KxGrqeGIjIRV cYA6V4WYGr7NeIfesecfOC356PyhgPfpcVyEztwlvwTKb3RzIT1TZN8fH4YBr6Ee KTbTjefRFhVUjQqnucAvfGi29f+9oE3Ei9f7wA+H35ocF6JvTYUsHNMIO/3gZ38N CPjyCMa9AoGBAMhsITNe3QcbsXAbdUR00dDsIFVROzyFJ2m40i4KCRM35bC/BIBs q0TY3we+ERB40U8Z2BvU61QuwaunJ2+uGadHo58VSVdggqAo0BSkH58innKKt96J 69pcVH/4rmLbXdcmNYGm6iu+MlPQk4BUZknHSmVHIFdJ0EPupVaQ8RHT -----END RSA PRIVATE KEY-----
Table 5.1. SSH Configuration properties
Property Name | Remarks
---|---
ignoreLocalSshSettings | If true, use property-based SSH config instead of file-based. Must be set as spring.cloud.config.server.git.ignoreLocalSshSettings, not inside a repository definition.
privateKey | Valid SSH private key. Must be set if ignoreLocalSshSettings is true and the Git URI is an SSH format.
hostKey | Valid SSH host key. Must be set if hostKeyAlgorithm is also set.
hostKeyAlgorithm | One of ssh-dss, ssh-rsa, ecdsa-sha2-nistp256, ecdsa-sha2-nistp384 or ecdsa-sha2-nistp521. Must be set if hostKey is also set.
strictHostKeyChecking | true or false. If false, ignore errors with the host key.
Spring Cloud Config Server also supports a search path with
placeholders for the {application}
and {profile}
(and {label}
if
you need it). Example:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          searchPaths: '{application}'
searches the repository for files in a directory with the same name as the application (as well as in the top level). Wildcards are also valid in a search path with placeholders (any matching directory is included in the search).
As mentioned before, Spring Cloud Config Server makes a clone of the remote git repository. If somehow the local copy gets dirty (e.g. folder content is changed by an OS process), Spring Cloud Config Server may be unable to update the local copy from the remote repository.
To solve this there is a force-pull
property that will make Spring Cloud
Config Server force pull from remote repository if the local copy is dirty.
Example:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          force-pull: true
If you have a multiple repositories configuration you can configure the
force-pull
property per repository. Example:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://git/common/config-repo.git
          force-pull: true
          repos:
            team-a:
              pattern: team-a-*
              uri: http://git/team-a/config-repo.git
              force-pull: true
            team-b:
              pattern: team-b-*
              uri: http://git/team-b/config-repo.git
              force-pull: true
            team-c:
              pattern: team-c-*
              uri: http://git/team-a/config-repo.git
Note: The default value for the force-pull property is false.
Warning: With VCS-based backends (git, svn) files are checked out or cloned to the local filesystem. By default they are put in the system temporary directory with a prefix of config-repo-. Some operating systems routinely clean out temporary directories, which can lead to unexpected behaviour such as missing properties; to avoid this, change the directory the Config Server uses by setting spring.cloud.config.server.git.basedir (or the svn equivalent) to a directory that does not live in the system temp structure.
There is also a "native" profile in the Config Server that doesn’t use Git, but just loads the config files from the local classpath or file system (any static URL you want to point to with "spring.cloud.config.server.native.searchLocations"). To use the native profile just launch the Config Server with "spring.profiles.active=native".
Note: Remember to use the file: prefix for file resources (the default without a prefix is usually the classpath).
Warning: The default value of the searchLocations is identical to a local Spring Boot application (i.e. [classpath:/, classpath:/config, file:./, file:./config]). This does not expose the application.properties from the server to all clients, because any property sources present in the server are removed before being sent to the client.
Tip: A filesystem backend is great for getting started quickly and for testing. To use it in production you need to be sure that the file system is reliable, and shared across all instances of the Config Server.
The search locations can contain placeholders for {application}
,
{profile}
and {label}
. In this way you can segregate the
directories in the path, and choose a strategy that makes sense for
you (e.g. sub-directory per application, or sub-directory per
profile).
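A sketch of a native-backend configuration using a placeholder (the paths and the sub-directory-per-application layout are illustrative):

spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          searchLocations: file:///opt/config/{application}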
If you don’t use placeholders in the search locations, this repository
also appends the {label}
parameter of the HTTP resource to a suffix
on the search path, so properties files are loaded from each search
location and a subdirectory with the same name as the label (the
labelled properties take precedence in the Spring Environment). Thus
the default behaviour with no placeholders is the same as adding a
search location ending with /{label}/
. For example file:/tmp/config
is the same as file:/tmp/config,file:/tmp/config/{label}
. This behavior can be
disabled by setting spring.cloud.config.server.native.addLabelLocations=false
.
Spring Cloud Config Server also supports Vault as a backend.
For more information on Vault see the Vault quickstart guide.
To enable the config server to use a Vault backend you must run your config server
with the vault
profile. For example in your config server’s application.properties
you can add spring.profiles.active=vault
.
By default the config server will assume your Vault server is running at
http://127.0.0.1:8200
. It also will assume that the name of backend
is secret
and the key is application
. All of these defaults can be
configured in your config server’s application.properties
. Below is a
table of configurable Vault properties. All properties are prefixed with
spring.cloud.config.server.vault
.
Name | Default Value |
---|---|
host | 127.0.0.1 |
port | 8200 |
scheme | http |
backend | secret |
defaultKey | application |
profileSeparator | , |
All configurable properties can be found in
org.springframework.cloud.config.server.environment.VaultEnvironmentRepository
.
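For example, to point the config server at a Vault server on another host with non-default connection settings (the host and scheme below are illustrative; the property names are the ones from the table above):

spring:
  profiles:
    active: vault
  cloud:
    config:
      server:
        vault:
          host: vault.example.com
          port: 8200
          scheme: https
          backend: secret
          defaultKey: application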
With your config server running you can make HTTP requests to the server to retrieve values from the Vault backend. To do this you will need a token for your Vault server.
First, place some data in your Vault. For example:
$ vault write secret/application foo=bar baz=bam
$ vault write secret/myapp foo=myappsbar
Now make the HTTP request to your config server to retrieve the values.
$ curl -X "GET" "http://localhost:8888/myapp/default" -H "X-Config-Token: yourtoken"
You should see a response similar to this after making the above request.
{ "name":"myapp", "profiles":[ "default" ], "label":null, "version":null, "state":null, "propertySources":[ { "name":"vault:myapp", "source":{ "foo":"myappsbar" } }, { "name":"vault:application", "source":{ "baz":"bam", "foo":"bar" } } ] }
When using Vault you can provide your applications with multiple properties sources. For example, assume you have written data to the following paths in Vault.
secret/myApp,dev
secret/myApp
secret/application,dev
secret/application
Properties written to secret/application
are available to
all applications using the Config Server. An
application with the name myApp
would have any properties
written to secret/myApp
and secret/application
available to it.
When myApp
has the dev
profile enabled then properties written to
all of the above paths would be available to it, with properties in
the first path in the list taking priority over the others.
With file-based (i.e. git, svn and native) repositories, resources
with file names in application*
are shared between all client
applications (so application.properties
, application.yml
,
application-*.properties
etc.). You can use resources with these
file names to configure global defaults and have them overridden by
application-specific files as necessary.
The property overrides feature (described below) can also be used for setting global defaults, and with placeholders applications are allowed to override them locally.
Tip: With the "native" profile (local file system backend) it is recommended that you use an explicit search location that isn’t part of the server’s own configuration. Otherwise the application* resources in the default search locations get removed because they are part of the server.
When using Vault as a backend you can share configuration with
all applications by placing configuration in
secret/application
. For example, if you run this Vault command
$ vault write secret/application foo=bar baz=bam
All applications using the config server will have the properties
foo
and baz
available to them.
In some scenarios you may wish to pull configuration data from multiple environment repositories. To do this just enable multiple profiles in your config server’s application properties or YAML file. If, for example, you want to pull configuration data from a Git repository as well as a SVN repository you would set the following properties for your configuration server.
spring:
  profiles:
    active: git, svn
  cloud:
    config:
      server:
        svn:
          uri: file:///path/to/svn/repo
          order: 2
        git:
          uri: file:///path/to/git/repo
          order: 1
In addition to each repo specifying a URI, you can also specify an order
property.
The order
property allows you to specify the priority order for all your repositories.
The lower the numerical value of the order
property the higher priority it will have.
The priority order of a repository will help resolve any potential conflicts between
repositories that contain values for the same properties.
Note: Any type of failure when retrieving values from an environment repository will result in a failure for the entire composite environment.
Note: When using a composite environment it is important that all repos contain the same label(s). If you have an environment similar to the one above and you request configuration data with the master label, but the SVN repository does not contain a branch called master, the entire request will fail.
It is also possible to provide your own EnvironmentRepository
bean
to be included as part of a composite environment in addition to
using one of the environment repositories from Spring Cloud. To do this your bean
must implement the EnvironmentRepository
interface. If you would like to control
the priority of your custom EnvironmentRepository
within the composite
environment you should also implement the Ordered
interface and override the
getOrder()
method. If you do not implement the Ordered
interface then your
EnvironmentRepository
will be given the lowest priority.
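A minimal sketch of such a repository (registered as a @Bean in the Config Server application; the property values are invented, but findOne is the method EnvironmentRepository defines in this release):

public class CustomConfigurationRepository implements EnvironmentRepository, Ordered {

    @Override
    public Environment findOne(String application, String profile, String label) {
        // Serve a single hard-coded property source; a real implementation would look
        // the values up in a database or some other external system.
        Environment environment = new Environment(application, profile);
        Map<String, Object> properties = new HashMap<>();
        properties.put("example.property", "from-custom-repository");
        environment.add(new PropertySource("customRepository", properties));
        return environment;
    }

    @Override
    public int getOrder() {
        return Ordered.LOWEST_PRECEDENCE - 10; // just ahead of the lowest priority
    }

}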
The Config Server has an "overrides" feature that allows the operator
to provide configuration properties to all applications that cannot be
accidentally changed by the application using the normal Spring Boot
hooks. To declare overrides just add a map of name-value pairs to
spring.cloud.config.server.overrides
. For example
spring:
  cloud:
    config:
      server:
        overrides:
          foo: bar
will cause all applications that are config clients to read foo=bar
independent of their own configuration. (Of course an application can
use the data in the Config Server in any way it likes, so overrides
are not enforceable, but they do provide useful default behaviour if
they are Spring Cloud Config clients.)
Tip: Normal Spring environment placeholders with "${}" can be escaped (and resolved on the client) by using backslash ("\") to escape the "$" or the "{", e.g. \${app.foo:bar} resolves to "bar" unless the app provides its own "app.foo".
You can change the priority of all overrides in the client to be more
like default values, allowing applications to supply their own values
in environment variables or System properties, by setting the flag
spring.cloud.config.overrideNone=true
(default is false) in the
remote repository.
Config Server comes with a Health Indicator that checks if the configured
EnvironmentRepository
is working. By default it asks the EnvironmentRepository
for an application named app
, the default
profile and the default
label provided by the EnvironmentRepository
implementation.
You can configure the Health Indicator to check more applications along with custom profiles and custom labels, e.g.
spring:
  cloud:
    config:
      server:
        health:
          repositories:
            myservice:
              label: mylabel
            myservice-dev:
              name: myservice
              profiles: development
You can disable the Health Indicator by setting spring.cloud.config.server.health.enabled=false
.
You are free to secure your Config Server in any way that makes sense to you (from physical network security to OAuth2 bearer tokens), and Spring Security and Spring Boot make it easy to do pretty much anything.
To use the default Spring Boot configured HTTP Basic security, just
include Spring Security on the classpath (e.g. through
spring-boot-starter-security
). The default is a username of "user"
and a randomly generated password, which isn’t going to be very useful
in practice, so we recommend you configure the password (via
security.user.password
) and encrypt it (see below for instructions
on how to do that).
Important: Prerequisites: to use the encryption and decryption features you need the full-strength JCE installed in your JVM (it’s not there by default). You can download the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files" from Oracle, and follow the instructions for installation (essentially replace the 2 policy files in the JRE lib/security directory with the ones that you downloaded).
If the remote property sources contain encrypted content (values
starting with {cipher}
) they will be decrypted before sending to
clients over HTTP. The main advantage of this set up is that the
property values don’t have to be in plain text when they are "at rest"
(e.g. in a git repository). If a value cannot be decrypted it is
removed from the property source and an additional property is added
with the same key, but prefixed with "invalid." and a value that means
"not applicable" (usually "<n/a>"). This is largely to prevent cipher
text being used as a password and accidentally leaking.
If you are setting up a remote config repository for config client
applications it might contain an application.yml
like this, for
instance:
application.yml.
spring:
  datasource:
    username: dbuser
    password: '{cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ'
Encrypted values in a .properties file must not be wrapped in quotes, otherwise the value will not be decrypted:
application.properties.
spring.datasource.username: dbuser
spring.datasource.password: {cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ
You can safely push this plain text to a shared git repository and the secret password is protected.
The server also exposes /encrypt
and /decrypt
endpoints (on the
assumption that these will be secured and only accessed by authorized
agents). If you are editing a remote config file you can use the Config Server
to encrypt values by POSTing to the /encrypt
endpoint, e.g.
$ curl localhost:8888/encrypt -d mysecret
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
Note: If the value you are encrypting has characters in it that need to be URL encoded, you should use the --data-urlencode option of curl to make sure they are encoded properly.
Tip: Be sure not to include any of the curl command statistics in the encrypted value. Outputting the value to a file can help avoid this problem.
The inverse operation is also available via /decrypt
(provided the server is
configured with a symmetric key or a full key pair):
$ curl localhost:8888/decrypt -d 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret
Tip: If you are testing like this with curl, then use --data-urlencode (instead of -d) or set an explicit Content-Type: text/plain to make sure curl encodes the data correctly when there are special characters ('+' is particularly tricky).
Take the encrypted value and add the {cipher}
prefix before you put
it in the YAML or properties file, and before you commit and push it
to a remote, potentially insecure store.
The /encrypt
and /decrypt
endpoints also both accept paths of the
form /*/{name}/{profiles}
which can be used to control cryptography
per application (name) and profile when clients call into the main
Environment resource.
Note: to control the cryptography in this granular way you must also provide a @Bean of type TextEncryptorLocator that creates a different encryptor per name and profiles.
The spring
command line client (with Spring Cloud CLI extensions
installed) can also be used to encrypt and decrypt, e.g.
$ spring encrypt mysecret --key foo
682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
$ spring decrypt --key foo 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
mysecret
To use a key in a file (e.g. an RSA public key for encryption) prepend the key value with "@" and provide the file path, e.g.
$ spring encrypt mysecret --key @${HOME}/.ssh/id_rsa.pub
AQAjPgt3eFZQXwt8tsHAVv/QHiY5sI2dRcR+...
The key argument is mandatory (despite having a --
prefix).
The Config Server can use a symmetric (shared) key or an asymmetric one (RSA key pair). The asymmetric choice is superior in terms of security, but it is often more convenient to use a symmetric key since it is just a single property value to configure.
To configure a symmetric key you just need to set encrypt.key
to a
secret String (or use an environment variable ENCRYPT_KEY
to keep it
out of plain text configuration files).
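For example, in the Config Server's bootstrap.yml (the key value below is a placeholder only; in a real deployment prefer the ENCRYPT_KEY environment variable so the key stays out of source control):

encrypt:
  key: my-symmetric-key-value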
To configure an asymmetric key you can either set the key as a
PEM-encoded text value (in encrypt.key
), or via a keystore (e.g. as
created by the keytool
utility that comes with the JDK). The
keystore properties are encrypt.keyStore.*
with *
equal to
- location (a Resource location),
- password (to unlock the keystore) and
- alias (to identify which key in the store is to be used).

The encryption is done with the public key, and a private key is needed for decryption. Thus in principle you can configure only the public key in the server if you only want to do encryption (and are prepared to decrypt the values yourself locally with the private key). In practice you might not want to do that because it spreads the key management process around all the clients, instead of concentrating it in the server. On the other hand it’s a useful option if your config server really is relatively insecure and only a handful of clients need the encrypted properties.
To create a keystore for testing you can do something like this:
$ keytool -genkeypair -alias mytestkey -keyalg RSA \
  -dname "CN=Web Server,OU=Unit,O=Organization,L=City,S=State,C=US" \
  -keypass changeme -keystore server.jks -storepass letmein
Put the server.jks
file in the classpath (for instance) and then in
your application.yml
for the Config Server:
encrypt:
  keyStore:
    location: classpath:/server.jks
    password: letmein
    alias: mytestkey
    secret: changeme
In addition to the {cipher}
prefix in encrypted property values, the
Config Server looks for {name:value}
prefixes (zero or many) before
the start of the (Base64 encoded) cipher text. The keys are passed to
a TextEncryptorLocator
which can do whatever logic it needs to
locate a TextEncryptor
for the cipher. If you have configured a
keystore (encrypt.keystore.location
) the default locator will look
for keys in the store with aliases as supplied by the "key" prefix,
i.e. with a cipher text like this:
foo:
  bar: `{cipher}{key:testkey}...`
the locator will look for a key named "testkey". A secret can also be supplied via a {secret:…} value in the prefix, but if it is not supplied the default is to use the keystore password (which is what you get when you build a keystore and don’t specify a secret). If you do supply a secret it is recommended that you also encrypt the secrets using a custom SecretLocator.
Key rotation is hardly ever necessary on cryptographic grounds if the
keys are only being used to encrypt a few bytes of configuration data
(i.e. they are not being used elsewhere), but occasionally you might
need to change the keys if there is a security breach for instance. In
that case all the clients would need to change their source config
files (e.g. in git) and use a new {key:…}
prefix in all the
ciphers, checking beforehand of course that the key alias is available
in the Config Server keystore.
![]() | Tip |
---|---|
the |
Sometimes you want the clients to decrypt the configuration locally,
instead of doing it in the server. In that case you can still have
/encrypt and /decrypt endpoints (if you provide the encrypt.*
configuration to locate a key), but you need to explicitly switch off
the decryption of outgoing properties using
spring.cloud.config.server.encrypt.enabled=false
. If you don’t care
about the endpoints, then it should work if you configure neither the
key nor the enabled flag.
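A sketch of the server-side setting for that mode, so that cipher text is passed through to clients unchanged:

application.yml (Config Server).

spring:
  cloud:
    config:
      server:
        encrypt:
          enabled: false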
The default JSON format from the environment endpoints is perfect for
consumption by Spring applications because it maps directly onto the
Environment
abstraction. If you prefer you can consume the same data
as YAML or Java properties by adding a suffix to the resource path
(".yml", ".yaml" or ".properties"). This can be useful for consumption
by applications that do not care about the structure of the JSON
endpoints, or the extra metadata they provide; for example, an
application that is not using Spring might benefit from the simplicity
of this approach.
The YAML and properties representations have an additional flag
(provided as a boolean query parameter resolvePlaceholders
) to
signal that placeholders in the source documents, in the standard
Spring ${…}
form, should be resolved in the output where possible
before rendering. This is a useful feature for consumers that don’t
know about the Spring placeholder conventions.
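As an illustration, for a hypothetical application named "myapp" with the "default" profile, the same environment can be fetched in the alternative formats (and with placeholders resolved) like this:

$ curl localhost:8888/myapp-default.yml
$ curl "localhost:8888/myapp-default.properties?resolvePlaceholders=true"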
![]() | Note |
---|---|
there are limitations in using the YAML or properties formats, mainly in relation to the loss of metadata. The JSON is structured as an ordered list of property sources, for example, with names that correlate with the source. The YAML and properties forms are coalesced into a single map, even if the origin of the values has multiple sources, and the names of the original source files are lost. The YAML representation is not necessarily a faithful representation of the YAML source in a backing repository either: it is constructed from a list of flat property sources, and assumptions have to be made about the form of the keys. |
Instead of using the Environment
abstraction (or one of the
alternative representations of it in YAML or properties format) your
applications might need generic plain text configuration files,
tailored to their environment. The Config Server provides these
through an additional endpoint at /{name}/{profile}/{label}/{path}
where "name", "profile" and "label" have the same meaning as the
regular environment endpoint, but "path" is a file name
(e.g. log.xml
). The source files for this endpoint are located in
the same way as for the environment endpoints: the same search path is
used as for properties or YAML files, but instead of aggregating all
matching resources, only the first one to match is returned.
After a resource is located, placeholders in the normal format
(${…}
) are resolved using the effective Environment
for the
application name, profile and label supplied. In this way the resource
endpoint is tightly integrated with the environment
endpoints. For example, if you have this layout for a GIT (or SVN)
repository:
application.yml nginx.conf
where nginx.conf
looks like this:
server {
    listen 80;
    server_name ${nginx.server.name};
}
and application.yml
like this:
nginx:
  server:
    name: example.com
---
spring:
  profiles: development
nginx:
  server:
    name: develop.com
then the /foo/default/master/nginx.conf
resource looks like this:
server {
    listen 80;
    server_name example.com;
}
and /foo/development/master/nginx.conf
like this:
server {
    listen 80;
    server_name develop.com;
}
![]() | Note |
---|---|
just like the source files for environment configuration, the
"profile" is used to resolve the file name, so if you want a
profile-specific file then |
The Config Server runs best as a standalone application, but if you
need to you can embed it in another application. Just use the
@EnableConfigServer
annotation. An optional property that can be
useful in this case is spring.cloud.config.server.bootstrap
which is
a flag to indicate that the server should configure itself from its
own remote repository. The flag is off by default because it can delay
startup, but when embedded in another application it makes sense to
initialize the same way as any other application.
![]() | Note |
---|---|
It should be obvious, but remember that if you use the bootstrap
flag the config server will need to have its name and repository URI
configured in |
To change the location of the server endpoints you can (optionally)
set spring.cloud.config.server.prefix
, e.g. "/config", to serve the
resources under a prefix. The prefix should start but not end with a
"/". It is applied to the @RequestMappings
in the Config Server
(i.e. underneath the Spring Boot prefixes server.servletPath
and
server.contextPath
).
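For example, a sketch that serves everything under "/config":

application.yml (Config Server).

spring:
  cloud:
    config:
      server:
        prefix: /config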
If you want to read the configuration for an application directly from
the backend repository (instead of from the config server) that’s
basically an embedded config server with no endpoints. You can switch
off the endpoints entirely if you don’t use the @EnableConfigServer
annotation (just set spring.cloud.config.server.bootstrap=true
).
Many source code repository providers (like Github, Gitlab or Bitbucket
for instance) will notify you of changes in a repository through a
webhook. You can configure the webhook via the provider’s user
interface as a URL and a set of events in which you are
interested. For instance
Github
will POST to the webhook with a JSON body containing a list of
commits, and a header "X-Github-Event" equal to "push". If you add a
dependency on the spring-cloud-config-monitor
library and activate
the Spring Cloud Bus in your Config Server, then a "/monitor" endpoint
is enabled.
When the webhook is activated the Config Server will send a
RefreshRemoteApplicationEvent
targeted at the applications it thinks
might have changed. The change detection can be strategized, but by
default it just looks for changes in files that match the application
name (e.g. "foo.properties" is targeted at the "foo" application, and
"application.properties" is targeted at all applications). The strategy
if you want to override the behaviour is PropertyPathNotificationExtractor
which accepts the request headers and body as parameters and returns a list
of file paths that changed.
The default configuration works out of the box with Github, Gitlab or
Bitbucket. In addition to the JSON notifications from Github, Gitlab
or Bitbucket you can trigger a change notification by POSTing to
"/monitor" with a form-encoded body parameters path={name}
. This will
broadcast to applications matching the "{name}" pattern (can contain
wildcards).
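For example, to simulate a notification for a hypothetical application named "foo" (assuming the monitor and bus are set up as described above):

$ curl -X POST localhost:8888/monitor -d path=foo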
![]() | Note |
---|---|
the |
![]() | Note |
---|---|
the default configuration also detects filesystem changes in local git repositories (the webhook is not used in that case but as soon as you edit a config file a refresh will be broadcast). |
A Spring Boot application can take immediate advantage of the Spring
Config Server (or other external property sources provided by the
application developer), and it will also pick up some additional
useful features related to Environment
change events.
This is the default behaviour for any application which has the Spring
Cloud Config Client on the classpath. When a config client starts up
it binds to the Config Server (via the bootstrap configuration
property spring.cloud.config.uri
) and initializes Spring
Environment
with remote property sources.
The net result of this is that all client apps that want to consume
the Config Server need a bootstrap.yml
(or an environment variable)
with the server address in spring.cloud.config.uri
(defaults to
"http://localhost:8888").
If you are using a DiscoveryClient implementation, such as Spring Cloud Netflix and Eureka Service Discovery or Spring Cloud Consul (Spring Cloud Zookeeper does not support this yet), then you can have the Config Server register with the Discovery Service if you want to, but in the default "Config First" mode, clients won’t be able to take advantage of the registration.
If you prefer to use DiscoveryClient
to locate the Config Server, you can do
that by setting spring.cloud.config.discovery.enabled=true
(default
"false"). The net result of that is that client apps all need a
bootstrap.yml
(or an environment variable) with the appropriate discovery
configuration. For example, with Spring Cloud Netflix, you need to define the
Eureka server address, e.g. in eureka.client.serviceUrl.defaultZone
. The
price for using this option is an extra network round trip on start up to
locate the service registration. The benefit is that the Config Server
can change its co-ordinates, as long as the Discovery Service is a fixed point. The
default service id is "configserver" but you can change that on the
client with spring.cloud.config.discovery.serviceId
(and on the server
in the usual way for a service, e.g. by setting spring.application.name
).
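A sketch of a "Discovery First" client bootstrap, assuming a Eureka server on localhost and the default "configserver" service id:

bootstrap.yml.

spring:
  cloud:
    config:
      discovery:
        enabled: true
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/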
The discovery client implementations all support some kind of metadata
map (e.g. for Eureka we have eureka.instance.metadataMap
). Some
additional properties of the Config Server may need to be configured
in its service registration metadata so that clients can connect
correctly. If the Config Server is secured with HTTP Basic you can
configure the credentials as "username" and "password". And if the
Config Server has a context path you can set "configPath". For example,
for a Config Server that is a Eureka client:
bootstrap.yml.
eureka:
  instance:
    ...
    metadataMap:
      user: osufhalskjrtl
      password: lviuhlszvaorhvlo5847
      configPath: /config
In some cases, it may be desirable to fail startup of a service if
it cannot connect to the Config Server. If this is the desired
behavior, set the bootstrap configuration property
spring.cloud.config.failFast=true
and the client will halt with
an Exception.
If you expect that the config server may occasionally be unavailable when
your app starts, you can ask it to keep trying after a failure. First you need
to set spring.cloud.config.failFast=true
, and then you need to add
spring-retry
and spring-boot-starter-aop
to your classpath. The default
behaviour is to retry 6 times with an initial backoff interval of 1000ms and an
exponential multiplier of 1.1 for subsequent backoffs. You can configure these
properties (and others) using spring.cloud.config.retry.*
configuration properties.
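For example, a sketch of tuning the retry behaviour (the values are illustrative, and the property names assume the spring.cloud.config.retry.* binding mentioned above):

bootstrap.yml.

spring:
  cloud:
    config:
      failFast: true
      retry:
        initial-interval: 2000
        multiplier: 1.2
        max-attempts: 10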
![]() | Tip |
---|---|
To take full control of the retry add a |
The Config Service serves property sources from /{name}/{profile}/{label}, where the default bindings in the client app are:

- "name" = ${spring.application.name}
- "profile" = ${spring.profiles.active} (actually Environment.getActiveProfiles())

All of them can be overridden by setting spring.cloud.config.*
(where *
is "name", "profile" or "label"). The "label" is useful for
rolling back to previous versions of configuration; with the default
Config Server implementation it can be a git label, branch name or
commit id. Label can also be provided as a comma-separated list, in
which case the items in the list are tried one-by-one until one succeeds.
This can be useful when working on a feature branch, for instance,
when you might want to align the config label with your branch, but
make it optional (e.g. spring.cloud.config.label=myfeature,develop
).
If you use HTTP Basic security on the server then clients just need to know the password (and username if it isn’t the default). You can do that via the config server URI, or via separate username and password properties, e.g.
bootstrap.yml.
spring:
  cloud:
    config:
      uri: https://user:secret@myconfig.mycompany.com
or
bootstrap.yml.
spring:
  cloud:
    config:
      uri: https://myconfig.mycompany.com
      username: user
      password: secret
The spring.cloud.config.password
and spring.cloud.config.username
values override anything that is provided in the URI.
If you deploy your apps on Cloud Foundry then the best way to provide the password is through service credentials, e.g. in the URI, since then it doesn’t even need to be in a config file. An example which works locally and for a user-provided service on Cloud Foundry named "configserver":
bootstrap.yml.
spring:
  cloud:
    config:
      uri: ${vcap.services.configserver.credentials.uri:http://user:password@localhost:8888}
If you use another form of security you might need to provide a
RestTemplate
to the ConfigServicePropertySourceLocator
(e.g. by
grabbing it in the bootstrap context and injecting one).
The Config Client supplies a Spring Boot Health Indicator that attempts to load configuration from Config Server. The health indicator can be disabled by setting health.config.enabled=false
. The response is also cached for performance reasons. The default cache time to live is 5 minutes. To change that value set the health.config.time-to-live
property (in milliseconds).
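For example, to keep the indicator enabled but cache its result for one minute (an illustrative value):

application.yml.

health:
  config:
    enabled: true
    time-to-live: 60000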
In some cases you might need to customize the requests made to the config server from
the client. Typically this involves passing special Authorization
headers to
authenticate requests to the server. To provide a custom RestTemplate
follow the
steps below.
Create a new configuration bean with an implementation of PropertySourceLocator.

CustomConfigServiceBootstrapConfiguration.java.
@Configuration
public class CustomConfigServiceBootstrapConfiguration {
    @Bean
    public ConfigServicePropertySourceLocator configServicePropertySourceLocator() {
        ConfigClientProperties clientProperties = configClientProperties();
        ConfigServicePropertySourceLocator configServicePropertySourceLocator =
                new ConfigServicePropertySourceLocator(clientProperties);
        // configClientProperties() and customRestTemplate(..) are assumed to be defined
        // elsewhere in this configuration class (e.g. as additional @Bean/helper methods).
        configServicePropertySourceLocator.setRestTemplate(customRestTemplate(clientProperties));
        return configServicePropertySourceLocator;
    }
}
In resources/META-INF create a file called spring.factories and specify your custom configuration.

spring.factories.
org.springframework.cloud.bootstrap.BootstrapConfiguration = com.my.config.client.CustomConfigServiceBootstrapConfiguration
When using Vault as a backend to your config server the client will need to
supply a token for the server to retrieve values from Vault. This token
can be provided within the client by setting spring.cloud.config.token
in bootstrap.yml
.
bootstrap.yml.
spring:
  cloud:
    config:
      token: YourVaultToken
Vault supports the ability to nest keys in a value stored in Vault. For example
echo -n '{"appA": {"secret": "appAsecret"}, "bar": "baz"}' | vault write secret/myapp -
This command will write a JSON object to your Vault. To access these values in Spring you would use the traditional dot (.) notation. For example:
@Value("${appA.secret}") String name = "World";
The above code would set the name
variable to appAsecret
.
Dalston.SR5
This project provides Netflix OSS integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with battle-tested Netflix components. The patterns provided include Service Discovery (Eureka), Circuit Breaker (Hystrix), Intelligent Routing (Zuul) and Client Side Load Balancing (Ribbon).
Service Discovery is one of the key tenets of a microservice based architecture. Trying to hand configure each client or some form of convention can be very difficult to do and can be very brittle. Eureka is the Netflix Service Discovery Server and Client. The server can be configured and deployed to be highly available, with each server replicating state about the registered services to the others.
To include Eureka Client in your project use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-eureka
. See the Spring Cloud Project page
for details on setting up your build system with the current Spring Cloud Release Train.
When a client registers with Eureka, it provides meta-data about itself such as host and port, health indicator URL, home page etc. Eureka receives heartbeat messages from each instance belonging to a service. If heartbeats fail over a configurable period of time, the instance is normally removed from the registry.
Example eureka client:
@Configuration @ComponentScan @EnableAutoConfiguration @EnableEurekaClient @RestController public class Application { @RequestMapping("/") public String home() { return "Hello world"; } public static void main(String[] args) { new SpringApplicationBuilder(Application.class).web(true).run(args); } }
(i.e. utterly normal Spring Boot app). In this example we use
@EnableEurekaClient
explicitly, but with only Eureka available you
could also use @EnableDiscoveryClient
. Configuration is required to
locate the Eureka server. Example:
application.yml.
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
where "defaultZone" is a magic string fallback value that provides the service URL for any client that doesn’t express a preference (i.e. it’s a useful default).
The default application name (service ID), virtual host and non-secure
port, taken from the Environment
, are ${spring.application.name}
,
${spring.application.name}
and ${server.port}
respectively.
@EnableEurekaClient
makes the app into both a Eureka "instance"
(i.e. it registers itself) and a "client" (i.e. it can query the
registry to locate other services). The instance behaviour is driven
by eureka.instance.*
configuration keys, but the defaults will be
fine if you ensure that your application has a
spring.application.name
(this is the default for the Eureka service
ID, or VIP).
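For example, a minimal sketch with an illustrative service name:

application.yml.

spring:
  application:
    name: stores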
See EurekaInstanceConfigBean and EurekaClientConfigBean for more details of the configurable options.
HTTP basic authentication will be automatically added to your eureka
client if one of the eureka.client.serviceUrl.defaultZone
URLs has
credentials embedded in it (curl style, like
http://user:password@localhost:8761/eureka
). For more complex needs
you can create a @Bean
of type DiscoveryClientOptionalArgs
and
inject ClientFilter
instances into it, all of which will be applied
to the calls from the client to the server.
![]() | Note |
---|---|
Because of a limitation in Eureka it isn’t possible to support per-server basic auth credentials, so only the first set that are found will be used. |
The status page and health indicators for a Eureka instance default to
"/info" and "/health" respectively, which are the default locations of
useful endpoints in a Spring Boot Actuator application. You need to
change these, even for an Actuator application if you use a
non-default context path or servlet path
(e.g. server.servletPath=/foo
) or management endpoint path
(e.g. management.contextPath=/admin
). Example:
application.yml.
eureka:
  instance:
    statusPageUrlPath: ${management.context-path}/info
    healthCheckUrlPath: ${management.context-path}/health
These links show up in the metadata that is consumed by clients, and used in some scenarios to decide whether to send requests to your application, so it’s helpful if they are accurate.
If your app wants to be contacted over HTTPS you can set two flags in
the EurekaInstanceConfig
, viz
eureka.instance.[nonSecurePortEnabled,securePortEnabled]=[false,true]
respectively. This will make Eureka publish instance information
showing an explicit preference for secure communication. The Spring
Cloud DiscoveryClient
will always return a URI starting with https
for a
service configured this way, and the Eureka (native) instance
information will have a secure health check URL.
Because of the way Eureka works internally, it will still publish a non-secure URL for status and home page unless you also override those explicitly. You can use placeholders to configure the eureka instance urls, e.g.
application.yml.
eureka:
  instance:
    statusPageUrl: https://${eureka.hostname}/info
    healthCheckUrl: https://${eureka.hostname}/health
    homePageUrl: https://${eureka.hostname}/
(Note that ${eureka.hostname}
is a native placeholder only available
in later versions of Eureka. You could achieve the same thing with
Spring placeholders as well, e.g. using ${eureka.instance.hostName}
.)
![]() | Note |
---|---|
If your app is running behind a proxy, and the SSL termination is in the proxy (e.g. if you run in Cloud Foundry or other platforms as a service) then you will need to ensure that the proxy "forwarded" headers are intercepted and handled by the application. An embedded Tomcat container in a Spring Boot app does this automatically if it has explicit configuration for the `X-Forwarded-*` headers. A sign that you got this wrong will be that the links rendered by your app to itself will be wrong (the wrong host, port or protocol). |
By default, Eureka uses the client heartbeat to determine if a client is up. Unless specified otherwise, the Discovery Client will not propagate the current health check status of the application per the Spring Boot Actuator. This means that after successful registration Eureka will always announce that the application is in the 'UP' state. This behaviour can be altered by enabling Eureka health checks, which results in propagating the application status to Eureka. As a consequence, every other application won’t send traffic to an application in a state other than 'UP'.
application.yml.
eureka:
  client:
    healthcheck:
      enabled: true
![]() | Warning |
---|---|
|
If you require more control over the health checks, you may consider
implementing your own com.netflix.appinfo.HealthCheckHandler
.
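A minimal sketch of such a handler is shown below; the condition it checks is hypothetical, and the only contract is the single getStatus method of com.netflix.appinfo.HealthCheckHandler:

import com.netflix.appinfo.HealthCheckHandler;
import com.netflix.appinfo.InstanceInfo.InstanceStatus;
import org.springframework.stereotype.Component;

@Component
public class MyHealthCheckHandler implements HealthCheckHandler {

    @Override
    public InstanceStatus getStatus(InstanceStatus currentStatus) {
        // Report DOWN to Eureka when an application-specific check fails,
        // otherwise keep announcing UP.
        return backendIsReachable() ? InstanceStatus.UP : InstanceStatus.DOWN;
    }

    private boolean backendIsReachable() {
        // placeholder for a real check (database ping, downstream call, etc.)
        return true;
    }
}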
It’s worth spending a bit of time understanding how the Eureka metadata works, so you can use it in a way that makes sense in your platform. There is standard metadata for things like hostname, IP address, port numbers, status page and health check. These are published in the service registry and used by clients to contact the services in a straightforward way. Additional metadata can be added to the instance registration in the eureka.instance.metadataMap
, and this will be accessible in the remote clients, but in general will not change the behaviour of the client, unless it is made aware of the meaning of the metadata. There are a couple of special cases described below where Spring Cloud already assigns meaning to the metadata map.
Cloudfoundry has a global router so that all instances of the same app have the same hostname (it’s the same in other PaaS solutions with a similar architecture). This isn’t necessarily a barrier to using Eureka, but if you use the router (recommended, or even mandatory depending on the way your platform was set up), you need to explicitly set the hostname and port numbers (secure or non-secure) so that they use the router. You might also want to use instance metadata so you can distinguish between the instances on the client (e.g. in a custom load balancer). By default, the eureka.instance.instanceId
is vcap.application.instance_id
. For example:
application.yml.
eureka:
  instance:
    hostname: ${vcap.application.uris[0]}
    nonSecurePort: 80
Depending on the way the security rules are set up in your Cloudfoundry instance, you might be able to register and use the IP address of the host VM for direct service-to-service calls. This feature is not (yet) available on Pivotal Web Services (PWS).
If the application is planned to be deployed to an AWS cloud, then the Eureka instance will have to be configured to be AWS aware and this can be done by customizing the EurekaInstanceConfigBean in the following way:
@Bean @Profile("!default") public EurekaInstanceConfigBean eurekaInstanceConfig(InetUtils inetUtils) { EurekaInstanceConfigBean b = new EurekaInstanceConfigBean(inetUtils); AmazonInfo info = AmazonInfo.Builder.newBuilder().autoBuild("eureka"); b.setDataCenterInfo(info); return b; }
A vanilla Netflix Eureka instance is registered with an ID that is equal to its host name (i.e. only one service per host). Spring Cloud Eureka provides a sensible default that looks like this: ${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instance_id:${server.port}}
. For example myhost:myappname:8080
.
Using Spring Cloud you can override this by providing a unique identifier in eureka.instance.instanceId
. For example:
application.yml.
eureka:
  instance:
    instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
With this metadata, and multiple service instances deployed on
localhost, the random value will kick in there to make the instance
unique. In Cloudfoundry the vcap.application.instance_id
will be
populated automatically in a Spring Boot application, so the
random value will not be needed.
Once you have an app that is @EnableDiscoveryClient
(or @EnableEurekaClient
) you can use it to
discover service instances from the Eureka Server. One way to do that is to use the native
com.netflix.discovery.EurekaClient
(as opposed to the Spring
Cloud DiscoveryClient
), e.g.
@Autowired
private EurekaClient discoveryClient;

public String serviceUrl() {
    InstanceInfo instance = discoveryClient.getNextServerFromEureka("STORES", false);
    return instance.getHomePageUrl();
}
![]() | Tip |
---|---|
Don’t use the |
You don’t have to use the raw Netflix EurekaClient
and usually it
is more convenient to use it behind a wrapper of some sort. Spring
Cloud has support for Feign (a REST client
builder) and also Spring RestTemplate
using
the logical Eureka service identifiers (VIPs) instead of physical
URLs. To configure Ribbon with a fixed list of physical servers you
can simply set <client>.ribbon.listOfServers
to a comma-separated
list of physical addresses (or hostnames), where <client>
is the ID
of the client.
You can also use the org.springframework.cloud.client.discovery.DiscoveryClient
which provides a simple API for discovery clients that is not specific
to Netflix, e.g.
@Autowired
private DiscoveryClient discoveryClient;

public URI serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && list.size() > 0) {
        return list.get(0).getUri();
    }
    return null;
}
Being an instance also involves a periodic heartbeat to the registry
(via the client’s serviceUrl
) with default duration 30 seconds. A
service is not available for discovery by clients until the instance,
the server and the client all have the same metadata in their local
cache (so it could take 3 heartbeats). You can change the period using
eureka.instance.leaseRenewalIntervalInSeconds
and this will speed up
the process of getting clients connected to other services. In
production it’s probably better to stick with the default because
there are some computations internally in the server that make
assumptions about the lease renewal period.
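For example, to shorten the heartbeat interval in a development environment (not recommended in production, as noted above):

application.yml.

eureka:
  instance:
    leaseRenewalIntervalInSeconds: 10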
If you have deployed Eureka clients to multiple zones, then you may prefer that those clients leverage services within the same zone before trying services in another zone. To do this you need to configure your Eureka clients correctly.
First, you need to make sure you have Eureka servers deployed to each zone and that they are peers of each other. See the section on zones and regions for more information.
Next you need to tell Eureka which zone your service is in. You can do this using
the metadataMap
property. For example if service 1
is deployed to both zone 1
and zone 2
you would need to set the following Eureka properties in service 1
Service 1 in Zone 1
eureka.instance.metadataMap.zone = zone1
eureka.client.preferSameZoneEureka = true
Service 1 in Zone 2
eureka.instance.metadataMap.zone = zone2
eureka.client.preferSameZoneEureka = true
To include Eureka Server in your project use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-eureka-server
. See the Spring Cloud Project page
for details on setting up your build system with the current Spring Cloud Release Train.
Example eureka server:
@SpringBootApplication @EnableEurekaServer public class Application { public static void main(String[] args) { new SpringApplicationBuilder(Application.class).web(true).run(args); } }
The server has a home page with a UI, and HTTP API endpoints per the
normal Eureka functionality under /eureka/*
.
Eureka background reading: see flux capacitor and google group discussion.
![]() | Tip |
---|---|
Due to Gradle’s dependency resolution rules and the lack of a parent bom feature, simply depending on spring-cloud-starter-eureka-server can cause failures on application startup. To remedy this the Spring Boot Gradle plugin must be added and the Spring cloud starter parent bom must be imported like so: build.gradle. buildscript { dependencies { classpath("org.springframework.boot:spring-boot-gradle-plugin:1.3.5.RELEASE") } } apply plugin: "spring-boot" dependencyManagement { imports { mavenBom "org.springframework.cloud:spring-cloud-dependencies:Brixton.RELEASE" } }
|
The Eureka server does not have a backend store, but the service instances in the registry all have to send heartbeats to keep their registrations up to date (so this can be done in memory). Clients also have an in-memory cache of eureka registrations (so they don’t have to go to the registry for every single request to a service).
By default every Eureka server is also a Eureka client and requires (at least one) service URL to locate a peer. If you don’t provide it the service will run and work, but it will shower your logs with a lot of noise about not being able to register with the peer.
See also below for details of Ribbon support on the client side for Zones and Regions.
The combination of the two caches (client and server) and the heartbeats make a standalone Eureka server fairly resilient to failure, as long as there is some sort of monitor or elastic runtime keeping it alive (e.g. Cloud Foundry). In standalone mode, you might prefer to switch off the client side behaviour, so it doesn’t keep trying and failing to reach its peers. Example:
application.yml (Standalone Eureka Server).
server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
Notice that the serviceUrl
is pointing to the same host as the local
instance.
Eureka can be made even more resilient and available by running
multiple instances and asking them to register with each other. In
fact, this is the default behaviour, so all you need to do to make it
work is add a valid serviceUrl
to a peer, e.g.
application.yml (Two Peer Aware Eureka Servers).
---
spring:
  profiles: peer1
eureka:
  instance:
    hostname: peer1
  client:
    serviceUrl:
      defaultZone: http://peer2/eureka/

---
spring:
  profiles: peer2
eureka:
  instance:
    hostname: peer2
  client:
    serviceUrl:
      defaultZone: http://peer1/eureka/
In this example we have a YAML file that can be used to run the same
server on 2 hosts (peer1 and peer2), by running it in different
Spring profiles. You could use this configuration to test the peer
awareness on a single host (there’s not much value in doing that in
production) by manipulating /etc/hosts
to resolve the host names. In
fact, the eureka.instance.hostname
is not needed if you are running
on a machine that knows its own hostname (it is looked up using
java.net.InetAddress
by default).
You can add multiple peers to a system, and as long as they are all connected to each other by at least one edge, they will synchronize the registrations amongst themselves. If the peers are physically separated (inside a data centre or between multiple data centres) then the system can in principle survive split-brain type failures.
Netflix has created a library called Hystrix that implements the circuit breaker pattern. In a microservice architecture it is common to have multiple layers of service calls.
A service failure in the lower level of services can cause cascading failures all the way up to the user. When the number of calls to a particular service exceeds circuitBreaker.requestVolumeThreshold (default: 20 requests) and the failure percentage is greater than circuitBreaker.errorThresholdPercentage (default: >50%) in a rolling window defined by metrics.rollingStats.timeInMilliseconds (default: 10 seconds), the circuit opens and the call is not made. In cases of error and an open circuit a fallback can be provided by the developer.
Having an open circuit stops cascading failures and allows overwhelmed or failing services time to heal. The fallback can be another Hystrix protected call, static data or a sane empty value. Fallbacks may be chained so the first fallback makes some other business call which in turn falls back to static data.
To include Hystrix in your project use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-hystrix
. See the Spring Cloud Project page
for details on setting up your build system with the current Spring Cloud Release Train.
Example boot app:
@SpringBootApplication
@EnableCircuitBreaker
public class Application {

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }
}

@Component
public class StoreIntegration {

    @HystrixCommand(fallbackMethod = "defaultStores")
    public Object getStores(Map<String, Object> parameters) {
        //do stuff that might fail
    }

    public Object defaultStores(Map<String, Object> parameters) {
        return /* something useful */;
    }
}
The @HystrixCommand
is provided by a Netflix contrib library called
"javanica".
Spring Cloud automatically wraps Spring beans with that
annotation in a proxy that is connected to the Hystrix circuit
breaker. The circuit breaker calculates when to open and close the
circuit, and what to do in case of a failure.
To configure the @HystrixCommand
you can use the commandProperties
attribute with a list of @HystrixProperty
annotations. See
here
for more details. See the Hystrix wiki
for details on the properties available.
If you want some thread local context to propagate into a @HystrixCommand
the default declaration will not work because it executes the command in a thread pool (in case of timeouts). You can switch Hystrix to use the same thread as the caller using some configuration, or directly in the annotation, by asking it to use a different "Isolation Strategy". For example:
@HystrixCommand(fallbackMethod = "stubMyService",
commandProperties = {
@HystrixProperty(name="execution.isolation.strategy", value="SEMAPHORE")
}
)
...
The same thing applies if you are using @SessionScope
or @RequestScope
. You will know when you need to do this because of a runtime exception that says it can’t find the scoped context.
You also have the option to set the hystrix.shareSecurityContext
property to true
. Doing so will auto-configure a Hystrix concurrency strategy plugin hook that will transfer the SecurityContext from your main thread to the one used by the Hystrix command. Hystrix does not allow multiple Hystrix concurrency strategies to be registered, so an extension mechanism is available by declaring your own HystrixConcurrencyStrategy as a Spring bean. Spring Cloud will look up your implementation within the Spring context and wrap it inside its own plugin.
The state of the connected circuit breakers is also exposed in the
/health
endpoint of the calling application.
{ "hystrix": { "openCircuitBreakers": [ "StoreIntegration::getStoresByLocationLink" ], "status": "CIRCUIT_OPEN" }, "status": "UP" }
To enable the Hystrix metrics stream include a dependency on spring-boot-starter-actuator
. This will expose the /hystrix.stream
as a management endpoint.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
One of the main benefits of Hystrix is the set of metrics it gathers about each HystrixCommand. The Hystrix Dashboard displays the health of each circuit breaker in an efficient manner.
When using Hystrix commands that wrap Ribbon clients you want to make sure your Hystrix timeout is configured to be longer than the configured Ribbon timeout, including any potential retries that might be made. For example, if your Ribbon connection timeout is one second and the Ribbon client might retry the request three times, then your Hystrix timeout should be slightly more than three seconds.
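As an illustration for a hypothetical Ribbon client named "stores" (all values are made up): with the Ribbon settings below the worst case across connect, read and retries can add up to several seconds, so the Hystrix timeout is set well above that.

stores:
  ribbon:
    ConnectTimeout: 1000
    ReadTimeout: 1000
    MaxAutoRetries: 1
    MaxAutoRetriesNextServer: 1
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 10000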
To include the Hystrix Dashboard in your project use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-hystrix-dashboard
. See the Spring Cloud Project page
for details on setting up your build system with the current Spring Cloud Release Train.
To run the Hystrix Dashboard annotate your Spring Boot main class with @EnableHystrixDashboard
. You then visit /hystrix
and point the dashboard to an individual instances /hystrix.stream
endpoint in a Hystrix client application.
![]() | Note |
---|---|
When connecting to a |
Looking at an individual instance’s Hystrix data is not very useful in terms of the overall health of the system. Turbine is an application that aggregates all of the relevant /hystrix.stream
endpoints into a combined /turbine.stream
for use in the Hystrix Dashboard. Individual instances are located via Eureka. Running Turbine is as simple as annotating your main class with the @EnableTurbine
annotation (e.g. using spring-cloud-starter-turbine to set up the classpath). All of the documented configuration properties from the Turbine 1 wiki apply. The only difference is that the turbine.instanceUrlSuffix
does not need the port prepended as this is handled automatically unless turbine.instanceInsertPort=false
.
![]() | Note |
---|---|
By default, Turbine looks for the |
eureka:
  instance:
    metadata-map:
      management.port: ${management.port:8081}
The configuration key turbine.appConfig
is a list of eureka serviceIds that turbine will use to look up instances. The turbine stream is then used in the Hystrix dashboard using a URL that looks like: http://my.turbine.server:8080/turbine.stream?cluster=CLUSTERNAME
(the cluster parameter can be omitted if the name is "default"). The cluster
parameter must match an entry in turbine.aggregator.clusterConfig
. Values returned from eureka are uppercase, thus we expect this example to work if there is an app registered with Eureka called "customers":
turbine:
  aggregator:
    clusterConfig: CUSTOMERS
  appConfig: customers
The clusterName
can be customized by a SPEL expression in turbine.clusterNameExpression
with root an instance of InstanceInfo
. The default value is appName
, which means that the Eureka serviceId ends up as the cluster key (i.e. the InstanceInfo
for customers has an appName
of "CUSTOMERS"). A different example would be turbine.clusterNameExpression=aSGName
, which would get the cluster name from the AWS ASG name. Another example:
turbine:
  aggregator:
    clusterConfig: SYSTEM,USER
  appConfig: customers,stores,ui,admin
  clusterNameExpression: metadata['cluster']
In this case, the cluster name from 4 services is pulled from their metadata map, and is expected to have values that include "SYSTEM" and "USER".
To use the "default" cluster for all apps you need a string literal expression (with single quotes, and escaped with double quotes if it is in YAML as well):
turbine:
  appConfig: customers,stores
  clusterNameExpression: "'default'"
Spring Cloud provides a spring-cloud-starter-turbine
that has all the dependencies you need to get a Turbine server running. Just create a Spring Boot application and annotate it with @EnableTurbine
.
![]() | Note |
---|---|
by default Spring Cloud allows Turbine to use the host and port to allow multiple processes per host, per cluster. If you want the native Netflix behaviour built into Turbine that does not allow multiple processes per host, per cluster (the key to the instance id is the hostname), then set the property |
In some environments (e.g. in a PaaS setting), the classic Turbine model of pulling metrics from all the distributed Hystrix commands doesn’t work. In that case you might want to have your Hystrix commands push metrics to Turbine, and Spring Cloud enables that with messaging. All you need to do on the client is add a dependency to spring-cloud-netflix-hystrix-stream
and the spring-cloud-starter-stream-*
of your choice (see Spring Cloud Stream documentation for details on the brokers, and how to configure the client credentials, but it should work out of the box for a local broker).
On the server side, just create a Spring Boot application and annotate it with @EnableTurbineStream
and by default it will come up on port 8989 (point your Hystrix dashboard to that port, any path). You can customize the port using either server.port
or turbine.stream.port
. If you have spring-boot-starter-web
and spring-boot-starter-actuator
on the classpath as well, then you can open up the Actuator endpoints on a separate port (with Tomcat by default) by providing a management.port
which is different.
You can then point the Hystrix Dashboard to the Turbine Stream Server instead of individual Hystrix streams. If Turbine Stream is running on port 8989 on myhost, then put http://myhost:8989
in the stream input field in the Hystrix Dashboard. Circuits will be prefixed by their respective serviceId, followed by a dot, then the circuit name.
Spring Cloud provides a spring-cloud-starter-turbine-stream
that has all the dependencies you need to get a Turbine Stream server running - just add the Stream binder of your choice, e.g. spring-cloud-starter-stream-rabbit
. You need Java 8 to run the app because it is Netty-based.
Ribbon is a client side load balancer which gives you a lot of control
over the behaviour of HTTP and TCP clients. Feign already uses Ribbon,
so if you are using @FeignClient
then this section also applies.
A central concept in Ribbon is that of the named client. Each load
balancer is part of an ensemble of components that work together to
contact a remote server on demand, and the ensemble has a name that
you give it as an application developer (e.g. using the @FeignClient
annotation). Spring Cloud creates a new ensemble as an
ApplicationContext
on demand for each named client using
RibbonClientConfiguration
. This contains (amongst other things) an
ILoadBalancer
, a RestClient
, and a ServerListFilter
.
To include Ribbon in your project use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-ribbon
. See the Spring Cloud Project page
for details on setting up your build system with the current Spring Cloud Release Train.
You can configure some bits of a Ribbon client using external
properties in <client>.ribbon.*
, which is no different than using
the Netflix APIs natively, except that you can use Spring Boot
configuration files. The native options can
be inspected as static fields in CommonClientConfigKey
(part of
ribbon-core).
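For example, a sketch of adjusting timeouts for a hypothetical client named "stores" through external properties (values are illustrative):

application.yml.

stores:
  ribbon:
    ConnectTimeout: 3000
    ReadTimeout: 60000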
Spring Cloud also lets you take full control of the client by
declaring additional configuration (on top of the
RibbonClientConfiguration
) using @RibbonClient
. Example:
@Configuration @RibbonClient(name = "foo", configuration = FooConfiguration.class) public class TestConfiguration { }
In this case the client is composed from the components already in
RibbonClientConfiguration
together with any in FooConfiguration
(where the latter generally will override the former).
![]() | Warning |
---|---|
The |
Spring Cloud Netflix provides the following beans by default for ribbon
(BeanType
beanName: ClassName
):
- IClientConfig ribbonClientConfig: DefaultClientConfigImpl
- IRule ribbonRule: ZoneAvoidanceRule
- IPing ribbonPing: NoOpPing
- ServerList<Server> ribbonServerList: ConfigurationBasedServerList
- ServerListFilter<Server> ribbonServerListFilter: ZonePreferenceServerListFilter
- ILoadBalancer ribbonLoadBalancer: ZoneAwareLoadBalancer
- ServerListUpdater ribbonServerListUpdater: PollingServerListUpdater
Creating a bean of one of those types and placing it in a @RibbonClient
configuration (such as FooConfiguration
above) allows you to override each
one of the beans described. Example:
@Configuration public class FooConfiguration { @Bean public IPing ribbonPing(IClientConfig config) { return new PingUrl(); } }
This replaces the NoOpPing
with PingUrl
.
Starting with version 1.2.0, Spring Cloud Netflix now supports customizing Ribbon clients using properties to be compatible with the Ribbon documentation.
This allows you to change behavior at start up time in different environments.
The supported properties are listed below and should be prefixed by <clientName>.ribbon.
:
- NFLoadBalancerClassName: should implement ILoadBalancer
- NFLoadBalancerRuleClassName: should implement IRule
- NFLoadBalancerPingClassName: should implement IPing
- NIWSServerListClassName: should implement ServerList
- NIWSServerListFilterClassName: should implement ServerListFilter
![]() | Note |
---|---|
Classes defined in these properties have precedence over beans defined using |
To set the IRule
for a service name users
you could set the following:
application.yml.
users:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.WeightedResponseTimeRule
See the Ribbon documentation for implementations provided by Ribbon.
When Eureka is used in conjunction with Ribbon (i.e., both are on the classpath) the ribbonServerList
is overridden with an extension of DiscoveryEnabledNIWSServerList
which populates the list of servers from Eureka. It also replaces the
IPing
interface with NIWSDiscoveryPing
which delegates to Eureka
to determine if a server is up. The ServerList
that is installed by
default is a DomainExtractingServerList
and the purpose of this is
to make physical metadata available to the load balancer without using
AWS AMI metadata (which is what Netflix relies on). By default the
server list will be constructed with "zone" information as provided in
the instance metadata (so on the remote clients set
eureka.instance.metadataMap.zone
), and if that is missing it can use
the domain name from the server hostname as a proxy for zone (if the
flag approximateZoneFromHostname
is set). Once the zone information
is available it can be used in a ServerListFilter
. By default it
will be used to locate a server in the same zone as the client because
the default is a ZonePreferenceServerListFilter
. The zone of the
client is determined the same way as the remote instances by default,
i.e. via eureka.instance.metadataMap.zone
.
![]() | Note |
---|---|
The orthodox "archaius" way to set the client zone is via a configuration property called "@zone", and Spring Cloud will use that in preference to all other settings if it is available (note that the key will have to be quoted in YAML configuration). |
![]() | Note |
---|---|
If there is no other source of zone data then a guess is made
based on the client configuration (as opposed to the instance
configuration). We take |
Eureka is a convenient way to abstract the discovery of remote servers
so you don’t have to hard code their URLs in clients, but if you
prefer not to use it, Ribbon and Feign are still quite
amenable. Suppose you have declared a @RibbonClient
for "stores",
and Eureka is not in use (and not even on the classpath). The Ribbon
client defaults to a configured server list, and you can supply the
configuration like this:
application.yml.
stores:
  ribbon:
    listOfServers: example.com,google.com
Setting the property ribbon.eureka.enabled = false
will explicitly
disable the use of Eureka in Ribbon.
application.yml.
ribbon:
  eureka:
    enabled: false
You can also use the LoadBalancerClient
directly. Example:
public class MyClass {
    @Autowired
    private LoadBalancerClient loadBalancer;

    public void doStuff() {
        ServiceInstance instance = loadBalancer.choose("stores");
        URI storesUri = URI.create(String.format("http://%s:%s", instance.getHost(), instance.getPort()));
        // ... do something with the URI
    }
}
Each Ribbon named client has a corresponding child Application Context that Spring Cloud maintains; this application context is lazily loaded on the first request to the named client. This lazy loading behavior can be changed to instead eagerly load these child Application contexts at startup by specifying the names of the Ribbon clients.
application.yml.
ribbon:
  eager-load:
    enabled: true
    clients: client1, client2, client3
Feign is a declarative web service client. It makes writing web service clients easier. To use Feign create an interface and annotate it. It has pluggable annotation support including Feign annotations and JAX-RS annotations. Feign also supports pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters
used by default in Spring Web. Spring Cloud integrates Ribbon and Eureka to provide a load balanced http client when using Feign.
To include Feign in your project use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-feign
. See the Spring Cloud Project page
for details on setting up your build system with the current Spring Cloud Release Train.
Example Spring Boot app:
@Configuration @ComponentScan @EnableAutoConfiguration @EnableEurekaClient @EnableFeignClients public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } }
StoreClient.java.
@FeignClient("stores") public interface StoreClient { @RequestMapping(method = RequestMethod.GET, value = "/stores") List<Store> getStores(); @RequestMapping(method = RequestMethod.POST, value = "/stores/{storeId}", consumes = "application/json") Store update(@PathVariable("storeId") Long storeId, Store store); }
In the @FeignClient
annotation the String value ("stores" above) is
an arbitrary client name, which is used to create a Ribbon load
balancer (see below for details of Ribbon
support). You can also specify a URL using the url
attribute
(absolute value or just a hostname). The name of the bean in the
application context is the fully qualified name of the interface.
To specify your own alias value you can use the qualifier
value
of the @FeignClient
annotation.
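For example, a sketch of declaring an alias and injecting the client by that qualifier (the names are illustrative):

@FeignClient(name = "stores", qualifier = "storesClient")
public interface StoreClient {
    // ...
}

// elsewhere, inject by the alias instead of the default (fully qualified) bean name
@Autowired
@Qualifier("storesClient")
private StoreClient storeClient;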
The Ribbon client above will want to discover the physical addresses for the "stores" service. If your application is a Eureka client then it will resolve the service in the Eureka service registry. If you don’t want to use Eureka, you can simply configure a list of servers in your external configuration (see above for example).
A central concept in Spring Cloud’s Feign support is that of the named client. Each feign client is part of an ensemble of components that work together to contact a remote server on demand, and the ensemble has a name that you give it as an application developer using the @FeignClient
annotation. Spring Cloud creates a new ensemble as an
ApplicationContext
on demand for each named client using FeignClientsConfiguration
. This contains (amongst other things) a feign.Decoder
, a feign.Encoder
, and a feign.Contract
.
Spring Cloud lets you take full control of the feign client by declaring additional configuration (on top of the FeignClientsConfiguration
) using @FeignClient
. Example:
@FeignClient(name = "stores", configuration = FooConfiguration.class) public interface StoreClient { //.. }
In this case the client is composed from the components already in FeignClientsConfiguration
together with any in FooConfiguration
(where the latter will override the former).
![]() | Note |
---|---|
|
![]() | Note |
---|---|
The |
![]() | Warning |
---|---|
Previously, using the |
Placeholders are supported in the name
and url
attributes.
@FeignClient(name = "${feign.name}", url = "${feign.url}") public interface StoreClient { //.. }
Spring Cloud Netflix provides the following beans by default for feign (BeanType
beanName: ClassName
):
- Decoder feignDecoder: ResponseEntityDecoder (which wraps a SpringDecoder)
- Encoder feignEncoder: SpringEncoder
- Logger feignLogger: Slf4jLogger
- Contract feignContract: SpringMvcContract
- Feign.Builder feignBuilder: HystrixFeign.Builder
- Client feignClient: if Ribbon is enabled it is a LoadBalancerFeignClient, otherwise the default feign client is used.

The OkHttpClient and ApacheHttpClient feign clients can be used by setting feign.okhttp.enabled or feign.httpclient.enabled to true, respectively, and having them on the classpath.
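For example, a sketch of switching to the OkHttp client, assuming the corresponding OkHttp feign module is on the classpath:

application.yml.

feign:
  okhttp:
    enabled: true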
Spring Cloud Netflix does not provide the following beans by default for feign, but still looks up beans of these types from the application context to create the feign client:
- Logger.Level
- Retryer
- ErrorDecoder
- Request.Options
- Collection<RequestInterceptor>
- SetterFactory
Creating a bean of one of those types and placing it in a @FeignClient
configuration (such as FooConfiguration
above) allows you to override each one of the beans described. Example:
@Configuration public class FooConfiguration { @Bean public Contract feignContract() { return new feign.Contract.Default(); } @Bean public BasicAuthRequestInterceptor basicAuthRequestInterceptor() { return new BasicAuthRequestInterceptor("user", "password"); } }
This replaces the SpringMvcContract
with feign.Contract.Default
and adds a RequestInterceptor
to the collection of RequestInterceptor
.
Default configurations can be specified in the @EnableFeignClients
attribute defaultConfiguration
in a similar manner as described above. The difference is that this configuration will apply to all feign clients.
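For example, a sketch applying a hypothetical default configuration class to all Feign clients in the application:

@SpringBootApplication
@EnableFeignClients(defaultConfiguration = DefaultFeignConfiguration.class)
public class Application {
    // DefaultFeignConfiguration is a hypothetical @Configuration class declaring
    // beans shared by all clients, e.g. a RequestInterceptor or a Logger.Level.
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}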
![]() | Note |
---|---|
If you need to use |
application.yml
# To disable Hystrix in Feign
feign:
  hystrix:
    enabled: false

# To set thread isolation to SEMAPHORE
hystrix:
  command:
    default:
      execution:
        isolation:
          strategy: SEMAPHORE
In some cases it might be necessary to customize your Feign Clients in a way that is not possible using the methods above. In this case you can create Clients using the Feign Builder API. Below is an example which creates two Feign Clients with the same interface but configures each one with a separate request interceptor.
@Import(FeignClientsConfiguration.class)
class FooController {

    private FooClient fooClient;

    private FooClient adminClient;

    @Autowired
    public FooController(Decoder decoder, Encoder encoder, Client client) {
        this.fooClient = Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .requestInterceptor(new BasicAuthRequestInterceptor("user", "user"))
                .target(FooClient.class, "http://PROD-SVC");

        this.adminClient = Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .requestInterceptor(new BasicAuthRequestInterceptor("admin", "admin"))
                .target(FooClient.class, "http://PROD-SVC");
    }
}
![]() | Note |
---|---|
In the above example |
![]() | Note |
---|---|
|
If Hystrix is on the classpath and feign.hystrix.enabled=true
, Feign will wrap all methods with a circuit breaker. Returning a com.netflix.hystrix.HystrixCommand is also available. This lets you use reactive patterns (with a call to .toObservable() or .observe()) or asynchronous use (with a call to .queue()).
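For example, a sketch of a client method that returns the command itself so the caller can choose how to execute it (the interface and types are illustrative):

@FeignClient("hello")
public interface HelloClient {

    @RequestMapping(method = RequestMethod.GET, value = "/hello")
    HystrixCommand<Hello> helloCommand();
}

// the caller decides between synchronous, asynchronous or reactive execution
Hello hello = client.helloCommand().execute();
Future<Hello> future = client.helloCommand().queue();
Observable<Hello> observable = client.helloCommand().observe();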
To disable Hystrix support on a per-client basis create a vanilla Feign.Builder
with the "prototype" scope, e.g.:
@Configuration public class FooConfiguration { @Bean @Scope("prototype") public Feign.Builder feignBuilder() { return Feign.builder(); } }
![]() | Warning |
---|---|
Prior to the Spring Cloud Dalston release, if Hystrix was on the classpath Feign would have wrapped all methods in a circuit breaker by default. This default behavior was changed in Spring Cloud Dalston in favor for an opt-in approach. |
Hystrix supports the notion of a fallback: a default code path that is executed when the circuit is open or there is an error. To enable fallbacks for a given @FeignClient
set the fallback
attribute to the class name that implements the fallback. You also need to declare your implementation as a Spring bean.
@FeignClient(name = "hello", fallback = HystrixClientFallback.class) protected interface HystrixClient { @RequestMapping(method = RequestMethod.GET, value = "/hello") Hello iFailSometimes(); } static class HystrixClientFallback implements HystrixClient { @Override public Hello iFailSometimes() { return new Hello("fallback"); } }
If one needs access to the cause that made the fallback trigger, one can use the fallbackFactory
attribute inside @FeignClient
.
@FeignClient(name = "hello", fallbackFactory = HystrixClientFallbackFactory.class) protected interface HystrixClient { @RequestMapping(method = RequestMethod.GET, value = "/hello") Hello iFailSometimes(); } @Component static class HystrixClientFallbackFactory implements FallbackFactory<HystrixClient> { @Override public HystrixClient create(Throwable cause) { return new HystrixClientWithFallBackFactory() { @Override public Hello iFailSometimes() { return new Hello("fallback; reason was: " + cause.getMessage()); } }; } }
![]() | Warning |
---|---|
There is a limitation with the implementation of fallbacks in Feign and how Hystrix fallbacks work. Fallbacks are currently not supported for methods that return com.netflix.hystrix.HystrixCommand and rx.Observable. |
When using Feign with Hystrix fallbacks, there are multiple beans in the ApplicationContext
of the same type. This will cause @Autowired
to not work because there isn’t exactly one bean, or one marked as primary. To work around this, Spring Cloud Netflix marks all Feign instances as @Primary
, so Spring Framework will know which bean to inject. In some cases, this may not be desirable. To turn off this behavior set the primary
attribute of @FeignClient
to false.
@FeignClient(name = "hello", primary = false) public interface HelloClient { // methods here }
Feign supports boilerplate APIs via single-inheritance interfaces. This allows grouping common operations into convenient base interfaces.
UserService.java.
public interface UserService { @RequestMapping(method = RequestMethod.GET, value ="/users/{id}") User getUser(@PathVariable("id") long id); }
UserResource.java.
@RestController public class UserResource implements UserService { }
UserClient.java.
package project.user; @FeignClient("users") public interface UserClient extends UserService { }
![]() | Note |
---|---|
It is generally not advisable to share an interface between a server and a client. It introduces tight coupling, and also actually doesn’t work with Spring MVC in its current form (method parameter mapping is not inherited). |
You may consider enabling the request or response GZIP compression for your Feign requests. You can do this by enabling one of the properties:
feign.compression.request.enabled=true
feign.compression.response.enabled=true
Feign request compression gives you settings similar to what you may set for your web server:
feign.compression.request.enabled=true
feign.compression.request.mime-types=text/xml,application/xml,application/json
feign.compression.request.min-request-size=2048
These properties allow you to be selective about the compressed media types and minimum request threshold length.
A logger is created for each Feign client. By default the name of the logger is the full class name of the interface used to create the Feign client. Feign logging only responds to the DEBUG
level.
application.yml.
logging.level.project.user.UserClient: DEBUG
The Logger.Level
object that you may configure per client, tells Feign how much to log. Choices are:
NONE, No logging (DEFAULT).
BASIC, Log only the request method and URL and the response status code and execution time.
HEADERS, Log the basic information along with request and response headers.
FULL, Log the headers, body, and metadata for both requests and responses.

For example, the following would set the Logger.Level
to FULL
:
@Configuration public class FooConfiguration { @Bean Logger.Level feignLoggerLevel() { return Logger.Level.FULL; } }
Archaius is the Netflix client side configuration library. It is the library used by all of the Netflix OSS components for configuration. Archaius is an extension of the Apache Commons Configuration project. It allows updates to configuration by either polling a source for changes or for a source to push changes to the client. Archaius uses Dynamic<Type>Property classes as handles to properties.
Archaius Example.
class ArchaiusTest { DynamicStringProperty myprop = DynamicPropertyFactory .getInstance() .getStringProperty("my.prop"); void doSomething() { OtherClass.someMethod(myprop.get()); } }
Archaius has its own set of configuration files and loading priorities. Spring applications should generally not use Archaius directly, but the need to configure the Netflix tools natively remains. Spring Cloud has a Spring Environment Bridge so Archaius can read properties from the Spring Environment. This allows Spring Boot projects to use the normal configuration toolchain, while allowing them to configure the Netflix tools, for the most part, as documented.
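As a minimal sketch of the bridge in action (assuming the property my.prop is defined in the Spring Environment, for example in application.yml; the default value "none" is only returned if the property is missing):

application.yml.

my:
  prop: hello world

// the Archaius property resolves through the Spring Environment bridge
DynamicStringProperty myprop = DynamicPropertyFactory.getInstance()
        .getStringProperty("my.prop", "none");
String value = myprop.get(); // "hello world"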
Routing is an integral part of a microservice architecture. For example, /
may be mapped to your web application, /api/users
is mapped to the user service and /api/shop
is mapped to the shop service. Zuul is a JVM based router and server side load balancer by Netflix.
Netflix uses Zuul for authentication, insights, stress testing, canary testing, dynamic routing, service migration, load shedding, and static response handling.
Zuul’s rule engine allows rules and filters to be written in essentially any JVM language, with built in support for Java and Groovy.
![]() | Note |
---|---|
The configuration property zuul.max.host.connections has been replaced by two new properties, zuul.host.maxTotalConnections and zuul.host.maxPerRouteConnections, which default to 200 and 20 respectively. |
![]() | Note |
---|---|
Default Hystrix isolation pattern (ExecutionIsolationStrategy) for all routes is SEMAPHORE. |
To include Zuul in your project use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-zuul
. See the Spring Cloud Project page
for details on setting up your build system with the current Spring Cloud Release Train.
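For example, with Maven (a sketch; the version is managed by the Spring Cloud release train BOM):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zuul</artifactId>
</dependency>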
Spring Cloud has created an embedded Zuul proxy to ease the development of a very common use case where a UI application wants to proxy calls to one or more back end services. This feature is useful for a user interface to proxy to the backend services it requires, avoiding the need to manage CORS and authentication concerns independently for all the backends.
To enable it, annotate a Spring Boot main class with
@EnableZuulProxy
, and this forwards local calls to the appropriate
service. By convention, a service with the ID "users" will
receive requests from the proxy located at /users
(with the prefix
stripped). The proxy uses Ribbon to locate an instance to forward to
via discovery, and all requests are executed in a
hystrix command, so
failures will show up in Hystrix metrics, and once the circuit is open
the proxy will not try to contact the service.
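A minimal sketch of such an application class:

@SpringBootApplication
@EnableZuulProxy
public class ZuulProxyApplication {
    public static void main(String[] args) {
        SpringApplication.run(ZuulProxyApplication.class, args);
    }
}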
![]() | Note |
---|---|
the Zuul starter does not include a discovery client, so for routes based on service IDs you need to provide one of those on the classpath as well (e.g. Eureka is one choice). |
To skip having a service automatically added, set
zuul.ignored-services
to a list of service id patterns. If a service
matches a pattern that is ignored, but also included in the explicitly
configured routes map, then it will be unignored. Example:
application.yml.
zuul:
  ignoredServices: '*'
  routes:
    users: /myusers/**
In this example, all services are ignored except "users".
To augment or change the proxy routes, you can add external configuration like the following:
application.yml.
zuul: routes: users: /myusers/**
This means that http calls to "/myusers" get forwarded to the "users" service (for example "/myusers/101" is forwarded to "/101").
To get more fine-grained control over a route you can specify the path and the serviceId independently:
application.yml.
zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users_service
This means that http calls to "/myusers" get forwarded to the "users_service" service. The route has to have a "path" which can be specified as an ant-style pattern, so "/myusers/*" only matches one level, but "/myusers/**" matches hierarchically.
The location of the backend can be specified as either a "serviceId" (for a service from discovery) or a "url" (for a physical location), e.g.
application.yml.
zuul:
  routes:
    users:
      path: /myusers/**
      url: http://example.com/users_service
These simple url-routes don’t get executed as a HystrixCommand,
nor can you load-balance multiple URLs with Ribbon.
To achieve this, specify a service-route and configure a Ribbon client for the
serviceId (this currently requires disabling Eureka support in Ribbon:
see above for more information), e.g.
application.yml.
zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users

ribbon:
  eureka:
    enabled: false

users:
  ribbon:
    listOfServers: example.com,google.com
You can provide convention between serviceId and routes using regexmapper. It uses regular expression named groups to extract variables from serviceId and inject them into a route pattern.
ApplicationConfiguration.java.
@Bean public PatternServiceRouteMapper serviceRouteMapper() { return new PatternServiceRouteMapper( "(?<name>^.+)-(?<version>v.+$)", "${version}/${name}"); }
This means that a serviceId "myusers-v1" will be mapped to route "/v1/myusers/**". Any regular expression is accepted but all named groups must be present in both servicePattern and routePattern. If servicePattern does not match a serviceId, the default behavior is used. In the example above, a serviceId "myusers" will be mapped to route "/myusers/**" (no version detected). This feature is disabled by default and only applies to discovered services.
To add a prefix to all mappings, set zuul.prefix
to a value, such as
/api
. The proxy prefix is stripped from the request before the
request is forwarded by default (switch this behaviour off with
zuul.stripPrefix=false
). You can also switch off the stripping of
the service-specific prefix from individual routes, e.g.
application.yml.
zuul:
  routes:
    users:
      path: /myusers/**
      stripPrefix: false
![]() | Note |
---|---|
zuul.stripPrefix only applies to the prefix set in zuul.prefix. It does not have any effect on prefixes defined within a given route’s path. |
In this example, requests to "/myusers/101" will be forwarded to "/myusers/101" on the "users" service.
The zuul.routes
entries actually bind to an object of type ZuulProperties
. If you
look at the properties of that object you will see that it also has a "retryable" flag.
Set that flag to "true" to have the Ribbon client automatically retry failed requests
(and if you need to you can modify the parameters of the retry operations using
the Ribbon client configuration).
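For example, a sketch of a route with retries enabled:

application.yml.

zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users
      retryable: true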
The X-Forwarded-Host
header is added to the forwarded requests by
default. To turn it off set zuul.addProxyHeaders = false
. The
prefix path is stripped by default, and the request to the backend
picks up a header "X-Forwarded-Prefix" ("/myusers" in the examples
above).
An application with @EnableZuulProxy
could act as a standalone
server if you set a default route ("/"), for example zuul.routes.home:
/
would route all traffic (i.e. "/**") to the "home" service.
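For example, a sketch of such a catch-all route:

application.yml.

zuul:
  routes:
    home: /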
If more fine-grained ignoring is needed, you can specify specific patterns to ignore. These patterns are evaluated at the start of the route location process, which means prefixes should be included in the pattern to warrant a match. Ignored patterns span all services and supersede any other route specification.
application.yml.
zuul:
  ignoredPatterns: /**/admin/**
  routes:
    users: /myusers/**
This means that all calls such as "/myusers/101" will be forwarded to "/101" on the "users" service. But calls including "/admin/" will not resolve.
![]() | Warning |
---|---|
If you need your routes to have their order preserved you need to use a YAML file as the ordering will be lost using a properties file. For example: |
application.yml.
zuul:
  routes:
    users:
      path: /myusers/**
    legacy:
      path: /**
If you were to use a properties file, the legacy
path may end up in front of the users
path rendering the users
path unreachable.
The default HTTP client used by zuul is now backed by the Apache HTTP Client instead of the
deprecated Ribbon RestClient
. To use RestClient
or to use the okhttp3.OkHttpClient
set
ribbon.restclient.enabled=true
or ribbon.okhttp.enabled=true
respectively.
It’s OK to share headers between services in the same system, but you probably don’t want sensitive headers leaking downstream into external servers. You can specify a list of ignored headers as part of the route configuration. Cookies play a special role because they have well-defined semantics in browsers, and they are always to be treated as sensitive. If the consumer of your proxy is a browser, then cookies for downstream services also cause problems for the user because they all get jumbled up (all downstream services look like they come from the same place).
If you are careful with the design of your services, for example if only one of the downstream services sets cookies, then you might be able to let them flow from the backend all the way up to the caller. Also, if your proxy sets cookies and all your back end services are part of the same system, it can be natural to simply share them (and for instance use Spring Session to link them up to some shared state). Other than that, any cookies that get set by downstream services are likely to be not very useful to the caller, so it is recommended that you make (at least) "Set-Cookie" and "Cookie" into sensitive headers for routes that are not part of your domain. Even for routes that are part of your domain, try to think carefully about what it means before allowing cookies to flow between them and the proxy.
The sensitive headers can be configured as a comma-separated list per route, e.g.
application.yml.
zuul:
  routes:
    users:
      path: /myusers/**
      sensitiveHeaders: Cookie,Set-Cookie,Authorization
      url: https://downstream
![]() | Note |
---|---|
this is the default value for sensitiveHeaders, so you do not need to set it unless you want it to be different. |
The sensitiveHeaders
are a blacklist and the default is not empty,
so to make Zuul send all headers (except the "ignored" ones) you would
have to explicitly set it to the empty list. This is necessary if you
want to pass cookie or authorization headers to your back end. Example:
application.yml.
zuul:
  routes:
    users:
      path: /myusers/**
      sensitiveHeaders:
      url: https://downstream
Sensitive headers can also be set globally by setting zuul.sensitiveHeaders
. If sensitiveHeaders
is set on a route, this will override the global sensitiveHeaders
setting.
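For example, a sketch of a global setting:

application.yml.

zuul:
  sensitiveHeaders: Cookie,Set-Cookie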
In addition to the per-route sensitive headers, you can set a global
value for zuul.ignoredHeaders
for values that should be discarded
(both request and response) during interactions with downstream
services. By default these are empty, if Spring Security is not on the
classpath, and otherwise they are initialized to a set of well-known
"security" headers (e.g. involving caching) as specified by Spring
Security. The assumption in this case is that the downstream services
might add these headers too, and we want the values from the proxy.
To keep these well-known security headers when Spring Security is on the classpath, you can set zuul.ignoreSecurityHeaders
to false
. This can be useful if you have disabled the HTTP security response headers in Spring Security and want the values provided by downstream services.
If you are using @EnableZuulProxy
with the Spring Boot Actuator you
will enable (by default) an additional endpoint, available via HTTP as
/routes
. A GET to this endpoint will return a list of the mapped
routes. A POST will force a refresh of the existing routes (e.g. in
case there have been changes in the service catalog). You can disable
this endpoint by setting endpoints.routes.enabled
to false
.
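For example (assuming the application listens on port 8080 and the endpoint is not secured):

$ curl localhost:8080/routes            # list the currently mapped routes
$ curl -X POST localhost:8080/routes    # force a refresh of the routes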
![]() | Note |
---|---|
the routes should respond automatically to changes in the service catalog, but the POST to /routes is a way to force the change to happen immediately. |
A common pattern when migrating an existing application or API is to "strangle" old endpoints, slowly replacing them with different implementations. The Zuul proxy is a useful tool for this because you can use it to handle all traffic from clients of the old endpoints, but redirect some of the requests to new ones.
Example configuration:
application.yml.
zuul:
  routes:
    first:
      path: /first/**
      url: http://first.example.com
    second:
      path: /second/**
      url: forward:/second
    third:
      path: /third/**
      url: forward:/3rd
    legacy:
      path: /**
      url: http://legacy.example.com
In this example we are strangling the "legacy" app which is mapped to
all requests that do not match one of the other patterns. Paths in
/first/**
have been extracted into a new service with an external
URL. And paths in /second/**
are forwarded so they can be handled
locally, e.g. with a normal Spring @RequestMapping
. Paths in
/third/**
are also forwarded, but with a different prefix
(i.e. /third/foo
is forwarded to /3rd/foo
).
![]() | Note |
---|---|
The ignored patterns aren’t completely ignored, they just aren’t handled by the proxy (so they are also effectively forwarded locally). |
If you @EnableZuulProxy
you can use the proxy paths to
upload files and it should just work as long as the files
are small. For large files there is an alternative path
which bypasses the Spring DispatcherServlet
(to
avoid multipart processing) in "/zuul/*". I.e. if
zuul.routes.customers=/customers/**
then you can
POST large files to "/zuul/customers/*". The servlet
path is externalized via zuul.servletPath
. Extremely
large files will also require elevated timeout settings
if the proxy route takes you through a Ribbon load
balancer, e.g.
application.yml.
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000

ribbon:
  ConnectTimeout: 3000
  ReadTimeout: 60000
Note that for streaming to work with large files, you need to use chunked encoding in the request (which some browsers do not do by default). E.g. on the command line:
$ curl -v -H "Transfer-Encoding: chunked" \ -F "[email protected]" localhost:9999/zuul/simple/file
When processing the incoming request, query params are decoded so they can be available for possible modifications in
Zuul filters. They are then re-encoded when building the backend request in the route filters. The result
can be different than the original input if it was encoded using Javascript’s encodeURIComponent()
method for example.
While this causes no issues in most cases, some web servers can be picky with the encoding of complex query strings.
To force the original encoding of the query string, it is possible to pass a special flag to ZuulProperties
so
that the query string is taken as is with the HttpServletRequest::getQueryString
method :
application.yml.
zuul: forceOriginalQueryStringEncoding: true
Note: This special flag only works with SimpleHostRoutingFilter
and you lose the ability to easily override
query parameters with RequestContext.getCurrentContext().setRequestQueryParams(someOverriddenParameters)
since
the query string is now fetched directly on the original HttpServletRequest
.
You can also run a Zuul server without the proxying, or switch on parts of the proxying platform selectively, if you
use @EnableZuulServer
(instead of @EnableZuulProxy
). Any beans that you add to the application of type ZuulFilter
will be installed automatically, as they are with @EnableZuulProxy
, but without any of the proxy filters being added
automatically.
In this case the routes into the Zuul server are still specified by configuring "zuul.routes.*", but there is no service discovery and no proxying, so the "serviceId" and "url" settings are ignored. For example:
application.yml.
zuul: routes: api: /api/**
maps all paths in "/api/**" to the Zuul filter chain.
Zuul for Spring Cloud comes with a number of ZuulFilter
beans enabled by default
in both proxy and server mode. See the zuul filters package for the
possible filters that are enabled. If you want to disable one, simply set
zuul.<SimpleClassName>.<filterType>.disable=true
. By convention, the package after
filters
is the Zuul filter type. For example to disable
org.springframework.cloud.netflix.zuul.filters.post.SendResponseFilter
set
zuul.SendResponseFilter.post.disable=true
.
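For example, a sketch that disables the SendResponseFilter mentioned above:

application.yml.

zuul:
  SendResponseFilter:
    post:
      disable: true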
When a circuit for a given route in Zuul is tripped you can provide a fallback response
by creating a bean of type ZuulFallbackProvider
. Within this bean you need to specify
the route ID the fallback is for and provide a ClientHttpResponse
to return
as a fallback. Here is a very simple ZuulFallbackProvider
implementation.
class MyFallbackProvider implements ZuulFallbackProvider { @Override public String getRoute() { return "customers"; } @Override public ClientHttpResponse fallbackResponse() { return new ClientHttpResponse() { @Override public HttpStatus getStatusCode() throws IOException { return HttpStatus.OK; } @Override public int getRawStatusCode() throws IOException { return 200; } @Override public String getStatusText() throws IOException { return "OK"; } @Override public void close() { } @Override public InputStream getBody() throws IOException { return new ByteArrayInputStream("fallback".getBytes()); } @Override public HttpHeaders getHeaders() { HttpHeaders headers = new HttpHeaders(); headers.setContentType(MediaType.APPLICATION_JSON); return headers; } }; } }
And here is what the route configuration would look like.
zuul: routes: customers: /customers/**
If you would like to provide a default fallback for all routes then you can create a bean of
type ZuulFallbackProvider
and have the getRoute
method return *
or null
.
class MyFallbackProvider implements ZuulFallbackProvider { @Override public String getRoute() { return "*"; } @Override public ClientHttpResponse fallbackResponse() { return new ClientHttpResponse() { @Override public HttpStatus getStatusCode() throws IOException { return HttpStatus.OK; } @Override public int getRawStatusCode() throws IOException { return 200; } @Override public String getStatusText() throws IOException { return "OK"; } @Override public void close() { } @Override public InputStream getBody() throws IOException { return new ByteArrayInputStream("fallback".getBytes()); } @Override public HttpHeaders getHeaders() { HttpHeaders headers = new HttpHeaders(); headers.setContentType(MediaType.APPLICATION_JSON); return headers; } }; } }
For a general overview of how Zuul works, please see the Zuul Wiki.
Zuul is implemented as a Servlet. For the general cases, Zuul is embedded into the Spring Dispatch mechanism. This allows Spring MVC to be in control of the routing. In this case, Zuul is configured to buffer requests. If there is a need to go through Zuul without buffering requests (e.g. for large file uploads), the Servlet is also installed outside of the Spring Dispatcher. By default, this is located at /zuul
. This path can be changed with the zuul.servlet-path
property.
To pass information between filters, Zuul uses a RequestContext
. Its data is held in a ThreadLocal
specific to each request. Information about where to route requests, errors and the actual HttpServletRequest
and HttpServletResponse
are stored there. The RequestContext
extends ConcurrentHashMap
, so anything can be stored in the context. FilterConstants
contains the keys that are used by the filters installed by Spring Cloud Netflix (more on these later).
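As a minimal sketch, inside a filter's run() method you might read and write the context like this (the key myCustomKey is hypothetical):

RequestContext ctx = RequestContext.getCurrentContext();
HttpServletRequest request = ctx.getRequest();   // the original servlet request
ctx.put("myCustomKey", "some value");            // anything can be stored in the context
Object value = ctx.get("myCustomKey");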
Spring Cloud Netflix installs a number of filters based on which annotation was used to enable Zuul. @EnableZuulProxy
is a superset of @EnableZuulServer
. In other words, @EnableZuulProxy
contains all filters installed by @EnableZuulServer
. The additional filters in the "proxy" enable routing functionality. If you want a "blank" Zuul, you should use @EnableZuulServer
.
Creates a SimpleRouteLocator
that loads route definitions from Spring Boot configuration files.
The following filters are installed (as normal Spring Beans):
Pre filters:
ServletDetectionFilter: Detects if the request is through the Spring Dispatcher. Sets a boolean with the key FilterConstants.IS_DISPATCHER_SERVLET_REQUEST_KEY.
FormBodyWrapperFilter: Parses form data and re-encodes it for downstream requests.
DebugFilter: If the debug request parameter is set, this filter sets RequestContext.setDebugRouting() and RequestContext.setDebugRequest() to true.

Route filters:
SendForwardFilter: This filter forwards requests using the Servlet RequestDispatcher. The forwarding location is stored in the RequestContext attribute FilterConstants.FORWARD_TO_KEY. This is useful for forwarding to endpoints in the current application.

Post filters:
SendResponseFilter: Writes responses from proxied requests to the current response.

Error filters:
SendErrorFilter: Forwards to /error (by default) if RequestContext.getThrowable() is not null. The default forwarding path (/error) can be changed by setting the error.path property.

Creates a DiscoveryClientRouteLocator
that loads route definitions from a DiscoveryClient
(like Eureka), as well as from properties. A route is created for each serviceId
from the DiscoveryClient
. As new services are added, the routes will be refreshed.
In addition to the filters described above, the following filters are installed (as normal Spring Beans):
Pre filters:
PreDecorationFilter: This filter determines where and how to route based on the supplied RouteLocator. It also sets various proxy-related headers for downstream requests.

Route filters:
RibbonRoutingFilter: This filter uses Ribbon, Hystrix and pluggable HTTP clients to send requests. Service ids are found in the RequestContext attribute FilterConstants.SERVICE_ID_KEY. This filter can use different HTTP clients:
Apache HttpClient. This is the default client.
OkHttpClient v3. This is enabled by having the com.squareup.okhttp3:okhttp library on the classpath and setting ribbon.okhttp.enabled=true.
The deprecated Ribbon RestClient. This is enabled by setting ribbon.restclient.enabled=true. This client has limitations, such as it doesn’t support the PATCH method, but also has built-in retry.
SimpleHostRoutingFilter: This filter sends requests to predetermined URLs via an Apache HttpClient. URLs are found in RequestContext.getRouteHost().

Most of the following "How to Write" examples are included in the Sample Zuul Filters project. There are also examples of manipulating the request or response body in that repository.
Pre filters are used to set up data in the RequestContext
for use in filters downstream. The main use case is to set information required for route filters.
public class QueryParamPreFilter extends ZuulFilter {
    @Override
    public int filterOrder() {
        return PRE_DECORATION_FILTER_ORDER - 1; // run before PreDecoration
    }

    @Override
    public String filterType() {
        return PRE_TYPE;
    }

    @Override
    public boolean shouldFilter() {
        RequestContext ctx = RequestContext.getCurrentContext();
        return !ctx.containsKey(FORWARD_TO_KEY) // a filter has already forwarded
                && !ctx.containsKey(SERVICE_ID_KEY); // a filter has already determined serviceId
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        if (request.getParameter("foo") != null) {
            // put the serviceId in `RequestContext`
            ctx.put(SERVICE_ID_KEY, request.getParameter("foo"));
        }
        return null;
    }
}
The filter above populates SERVICE_ID_KEY
from the foo
request parameter. In reality, you should not do that kind of direct mapping; the service ID should be looked up from the value of foo
instead.
Now that SERVICE_ID_KEY
is populated, PreDecorationFilter
won’t run and RibbonRoutingFilter
will. If you want to route to a full URL instead, call ctx.setRouteHost(url).
To modify the path that routing filters will forward to, set the REQUEST_URI_KEY
.
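For example, a sketch inside a pre filter's run() method (the new path is hypothetical):

RequestContext ctx = RequestContext.getCurrentContext();
// route filters will now forward to this path on the target service
ctx.put(FilterConstants.REQUEST_URI_KEY, "/new/path");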
Route filters are run after pre filters and are used to make requests to other services. Much of the work here is to translate request and response data to and from the client required model.
public class OkHttpRoutingFilter extends ZuulFilter {
    @Autowired
    private ProxyRequestHelper helper;

    @Override
    public String filterType() {
        return ROUTE_TYPE;
    }

    @Override
    public int filterOrder() {
        return SIMPLE_HOST_ROUTING_FILTER_ORDER - 1;
    }

    @Override
    public boolean shouldFilter() {
        return RequestContext.getCurrentContext().getRouteHost() != null
                && RequestContext.getCurrentContext().sendZuulResponse();
    }

    @Override
    public Object run() {
        OkHttpClient httpClient = new OkHttpClient.Builder()
                // customize
                .build();

        RequestContext context = RequestContext.getCurrentContext();
        HttpServletRequest request = context.getRequest();

        String method = request.getMethod();
        String uri = this.helper.buildZuulRequestURI(request);

        Headers.Builder headers = new Headers.Builder();
        Enumeration<String> headerNames = request.getHeaderNames();
        while (headerNames.hasMoreElements()) {
            String name = headerNames.nextElement();
            Enumeration<String> values = request.getHeaders(name);
            while (values.hasMoreElements()) {
                String value = values.nextElement();
                headers.add(name, value);
            }
        }

        InputStream inputStream = request.getInputStream();
        RequestBody requestBody = null;
        if (inputStream != null && HttpMethod.permitsRequestBody(method)) {
            MediaType mediaType = null;
            if (headers.get("Content-Type") != null) {
                mediaType = MediaType.parse(headers.get("Content-Type"));
            }
            requestBody = RequestBody.create(mediaType, StreamUtils.copyToByteArray(inputStream));
        }

        Request.Builder builder = new Request.Builder()
                .headers(headers.build())
                .url(uri)
                .method(method, requestBody);

        Response response = httpClient.newCall(builder.build()).execute();

        LinkedMultiValueMap<String, String> responseHeaders = new LinkedMultiValueMap<>();
        for (Map.Entry<String, List<String>> entry : response.headers().toMultimap().entrySet()) {
            responseHeaders.put(entry.getKey(), entry.getValue());
        }

        this.helper.setResponse(response.code(), response.body().byteStream(), responseHeaders);
        context.setRouteHost(null); // prevent SimpleHostRoutingFilter from running
        return null;
    }
}
The above filter translates Servlet request information into OkHttp3 request information, executes an HTTP request, then translates OkHttp3 response information to the Servlet response. WARNING: this filter might have bugs and not function correctly.
Post filters typically manipulate the response. In the filter below, we add a random UUID
as the X-Foo
header. Other manipulations, such as transforming the response body, are much more complex and compute-intensive.
public class AddResponseHeaderFilter extends ZuulFilter { @Override public String filterType() { return POST_TYPE; } @Override public int filterOrder() { return SEND_RESPONSE_FILTER_ORDER - 1; } @Override public boolean shouldFilter() { return true; } @Override public Object run() { RequestContext context = RequestContext.getCurrentContext(); HttpServletResponse servletResponse = context.getResponse(); servletResponse.addHeader("X-Foo", UUID.randomUUID().toString()); return null; } }
If an exception is thrown during any portion of the Zuul filter lifecycle, the error filters are executed. The SendErrorFilter
is only run if RequestContext.getThrowable()
is not null
. It then sets specific javax.servlet.error.*
attributes in the request and forwards the request to the Spring Boot error page.
Zuul internally uses Ribbon for calling the remote URLs, and Ribbon clients are by default lazily loaded by Spring Cloud on the first call. This behavior can be changed for Zuul using the following configuration, which results in the child Ribbon-related application contexts being eagerly loaded at application startup time.
application.yml.
zuul: ribbon: eager-load: enabled: true
Do you have non-JVM languages that you want to take advantage of Eureka, Ribbon, and Config Server? The Spring Cloud Netflix Sidecar was inspired by Netflix Prana. It includes a simple HTTP API to get all of the instances (i.e. host and port) for a given service. You can also proxy service calls through an embedded Zuul proxy which gets its route entries from Eureka. The Spring Cloud Config Server can be accessed directly via host lookup or through the Zuul Proxy. The non-JVM app should implement a health check so the Sidecar can report to Eureka whether the app is up or down.
To include Sidecar in your project use the dependency with group org.springframework.cloud
and artifact id spring-cloud-netflix-sidecar
.
To enable the Sidecar, create a Spring Boot application with @EnableSidecar
.
This annotation includes @EnableCircuitBreaker
, @EnableDiscoveryClient
,
and @EnableZuulProxy
. Run the resulting application on the same host as the
non-jvm application.
To configure the Sidecar, add sidecar.port
and sidecar.health-uri
to application.yml
.
The sidecar.port
property is the port the non-jvm app is listening on. This
is so the Sidecar can properly register the app with Eureka. The sidecar.health-uri
is a URI accessible on the non-JVM app that mimics a Spring Boot health
indicator. It should return a JSON document like the following:
health-uri-document.
{ "status":"UP" }
Here is an example application.yml for a Sidecar application:
application.yml.
server:
  port: 5678
spring:
  application:
    name: sidecar
sidecar:
  port: 8000
  health-uri: http://localhost:8000/health.json
The api for the DiscoveryClient.getInstances()
method is /hosts/{serviceId}
.
Here is an example response for /hosts/customers
that returns two instances on
different hosts. This api is accessible to the non-jvm app (if the sidecar is
on port 5678) at http://localhost:5678/hosts/{serviceId}
.
/hosts/customers.
[ { "host": "myhost", "port": 9000, "uri": "http://myhost:9000", "serviceId": "CUSTOMERS", "secure": false }, { "host": "myhost2", "port": 9000, "uri": "http://myhost2:9000", "serviceId": "CUSTOMERS", "secure": false } ]
The Zuul proxy automatically adds routes for each service known in eureka to
/<serviceId>
, so the customers service is available at /customers
. The
Non-jvm app can access the customer service via http://localhost:5678/customers
(assuming the sidecar is listening on port 5678).
If the Config Server is registered with Eureka, a non-JVM application can access
it via the Zuul proxy. If the serviceId of the ConfigServer is configserver
and the Sidecar is on port 5678, then it can be accessed at
http://localhost:5678/configserver.
A non-JVM app can take advantage of the Config Server’s ability to return YAML documents. For example, a call to http://sidecar.local.spring.io:5678/configserver/default-master.yml might result in a YAML document like the following:
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
password: password
info:
  description: Spring Cloud Samples
  url: https://github.com/spring-cloud-samples
Spring Cloud Netflix includes RxJava.
RxJava is a Java VM implementation of Reactive Extensions: a library for composing asynchronous and event-based programs by using observable sequences.
Spring Cloud Netflix provides support for returning rx.Single
objects from Spring MVC Controllers. It also supports using rx.Observable
objects for Server-sent events (SSE). This can be very convenient if your internal APIs are already built using RxJava (see Section 17.4, “Feign Hystrix Support” for examples).
Here are some examples of using rx.Single
:
@RequestMapping(method = RequestMethod.GET, value = "/single") public Single<String> single() { return Single.just("single value"); } @RequestMapping(method = RequestMethod.GET, value = "/singleWithResponse") public ResponseEntity<Single<String>> singleWithResponse() { return new ResponseEntity<>(Single.just("single value"), HttpStatus.NOT_FOUND); } @RequestMapping(method = RequestMethod.GET, value = "/singleCreatedWithResponse") public Single<ResponseEntity<String>> singleOuterWithResponse() { return Single.just(new ResponseEntity<>("single value", HttpStatus.CREATED)); } @RequestMapping(method = RequestMethod.GET, value = "/throw") public Single<Object> error() { return Single.error(new RuntimeException("Unexpected")); }
If you have an Observable
, rather than a single, you can use .toSingle()
or .toList().toSingle()
. Here are some examples:
@RequestMapping(method = RequestMethod.GET, value = "/single") public Single<String> single() { return Observable.just("single value").toSingle(); } @RequestMapping(method = RequestMethod.GET, value = "/multiple") public Single<List<String>> multiple() { return Observable.just("multiple", "values").toList().toSingle(); } @RequestMapping(method = RequestMethod.GET, value = "/responseWithObservable") public ResponseEntity<Single<String>> responseWithObservable() { Observable<String> observable = Observable.just("single value"); HttpHeaders headers = new HttpHeaders(); headers.setContentType(APPLICATION_JSON_UTF8); return new ResponseEntity<>(observable.toSingle(), headers, HttpStatus.CREATED); } @RequestMapping(method = RequestMethod.GET, value = "/timeout") public Observable<String> timeout() { return Observable.timer(1, TimeUnit.MINUTES).map(new Func1<Long, String>() { @Override public String call(Long aLong) { return "single value"; } }); }
If you have a streaming endpoint and client, SSE could be an option. To convert rx.Observable
to a Spring SseEmitter
use RxResponse.sse()
. Here are some examples:
@RequestMapping(method = RequestMethod.GET, value = "/sse") public SseEmitter single() { return RxResponse.sse(Observable.just("single value")); } @RequestMapping(method = RequestMethod.GET, value = "/messages") public SseEmitter messages() { return RxResponse.sse(Observable.just("message 1", "message 2", "message 3")); } @RequestMapping(method = RequestMethod.GET, value = "/events") public SseEmitter event() { return RxResponse.sse(APPLICATION_JSON_UTF8, Observable.just(new EventDto("Spring io", getDate(2016, 5, 19)), new EventDto("SpringOnePlatform", getDate(2016, 8, 1)))); }
When used together, Spectator/Servo and Atlas provide a near real-time operational insight platform.
Spectator and Servo are Netflix’s metrics collection libraries. Atlas is a Netflix metrics backend to manage dimensional time series data.
Servo served Netflix for several years and is still usable, but is gradually being phased out in favor of Spectator, which is only designed to work with Java 8. Spring Cloud Netflix provides support for both, but Java 8 based applications are encouraged to use Spectator.
Spring Boot Actuator metrics are hierarchical and metrics are separated only by name. These names often follow a naming convention that embeds key/value attribute pairs (dimensions) into the name separated by periods. Consider the following metrics for two endpoints, root and star-star:
{ "counter.status.200.root": 20, "counter.status.400.root": 3, "counter.status.200.star-star": 5, }
The first metric gives us a normalized count of successful requests against the root endpoint per unit of time. But what if the system had 20 endpoints and you want to get a count of successful requests against all the endpoints? Some hierarchical metrics backends would allow you to specify a wild card such as counter.status.200.*
that would read all 20 metrics and aggregate the results. Alternatively, you could provide a HandlerInterceptorAdapter
that intercepts and records a metric like counter.status.200.all
for all successful requests irrespective of the endpoint, but now you must write 20+1 different metrics. Similarly if you want to know the total number of successful requests for all endpoints in the service, you could specify a wild card such as counter.status.2*.*
.
Even in the presence of wildcarding support on a hierarchical metrics backend, naming consistency can be difficult. Specifically the position of these tags in the name string can slip with time, breaking queries. For example, suppose we add an additional dimension to the hierarchical metrics above for HTTP method. Then counter.status.200.root
becomes counter.status.200.method.get.root
, etc. Our counter.status.200.*
suddenly no longer has the same semantic meaning. Furthermore, if the new dimension is not applied uniformly across the codebase, certain queries may become impossible. This can quickly get out of hand.
Netflix metrics are tagged (a.k.a. dimensional). Each metric has a name, but this single named metric can contain multiple statistics and 'tag' key/value pairs that allows more querying flexibility. In fact, the statistics themselves are recorded in a special tag.
Recorded with Netflix Servo or Spectator, a timer for the root endpoint described above contains 4 statistics per status code, where the count statistic is identical to Spring Boot Actuator’s counter. In the event that we have encountered an HTTP 200 and 400 thus far, there will be 8 available data points:
{ "root(status=200,stastic=count)": 20, "root(status=200,stastic=max)": 0.7265630630000001, "root(status=200,stastic=totalOfSquares)": 0.04759702862580789, "root(status=200,stastic=totalTime)": 0.2093076914666667, "root(status=400,stastic=count)": 1, "root(status=400,stastic=max)": 0, "root(status=400,stastic=totalOfSquares)": 0, "root(status=400,stastic=totalTime)": 0, }
Without any additional dependencies or configuration, a Spring Cloud based service will autoconfigure a Servo MonitorRegistry
and begin collecting metrics on every Spring MVC request. By default, a Servo timer with the name rest
will be recorded for each MVC request which is tagged with:
The caller, if a request header with a key matching netflix.metrics.rest.callerHeader is set on the request. There is no default key for netflix.metrics.rest.callerHeader; you must add it to your application properties if you wish to collect caller information.

Set the netflix.metrics.rest.metricName
property to change the name of the metric from rest
to a name you provide.
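For example, a sketch (the metric name http_requests is an arbitrary choice):

application.yml.

netflix:
  metrics:
    rest:
      metricName: http_requests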
If Spring AOP is enabled and org.aspectj:aspectjweaver
is present on your runtime classpath, Spring Cloud will also collect metrics on every client call made with RestTemplate
. A Servo timer with the name of restclient
will be recorded for each MVC request which is tagged with:
The exception class name, if an IOException occurred during the execution of the RestTemplate method.
![]() | Warning |
---|---|
Avoid using hardcoded url parameters within RestTemplate; use URL variables instead, otherwise every distinct URL is recorded as a separate metric, as shown in the example below. |
// recommended
String orderid = "1";
restTemplate.getForObject("http://testeurekabrixtonclient/orders/{orderid}", String.class, orderid);

// avoid
restTemplate.getForObject("http://testeurekabrixtonclient/orders/1", String.class);
To enable Spectator metrics, include a dependency on spring-cloud-starter-spectator
:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-spectator</artifactId>
</dependency>
In Spectator parlance, a meter is a named, typed, and tagged configuration and a metric represents the value of a given meter at a point in time. Spectator meters are created and controlled by a registry, which currently has several different implementations. Spectator provides 4 meter types: counter, timer, gauge, and distribution summary.
Spring Cloud Spectator integration configures an injectable com.netflix.spectator.api.Registry
instance for you. Specifically, it configures a ServoRegistry
instance in order to unify the collection of REST metrics and the exporting of metrics to the Atlas backend under a single Servo API. Practically, this means that your code may use a mixture of Servo monitors and Spectator meters and both will be scooped up by Spring Boot Actuator MetricReader
instances and both will be shipped to the Atlas backend.
A counter is used to measure the rate at which some event is occurring.
// create a counter with a name and a set of tags
Counter counter = registry.counter("counterName", "tagKey1", "tagValue1", ...);
counter.increment();   // increment when an event occurs
counter.increment(10); // increment by a discrete amount
The counter records a single time-normalized statistic.
A timer is used to measure how long some event is taking. Spring Cloud automatically records timers for Spring MVC requests and conditionally RestTemplate
requests, which can later be used to create dashboards for request related metrics like latency:
// create a timer with a name and a set of tags
Timer timer = registry.timer("timerName", "tagKey1", "tagValue1", ...);

// execute an operation and time it at the same time
T result = timer.record(() -> fooReturnsT());

// alternatively, if you must manually record the time
Long start = System.nanoTime();
T result = fooReturnsT();
timer.record(System.nanoTime() - start, TimeUnit.NANOSECONDS);
The timer simultaneously records 4 statistics: count, max, totalOfSquares, and totalTime. The count statistic will always match the single normalized value provided by a counter if you had called increment()
once on the counter for each time you recorded a timing, so it is rarely necessary to count and time separately for a single operation.
For long running operations, Spectator provides a special LongTaskTimer
.
Gauges are used to determine some current value like the size of a queue or number of threads in a running state. Since gauges are sampled, they provide no information about how these values fluctuate between samples.
The normal use of a gauge involves registering the gauge once in initialization with an id, a reference to the object to be sampled, and a function to get or compute a numeric value based on the object. The reference to the object is passed in separately and the Spectator registry will keep a weak reference to the object. If the object is garbage collected, then Spectator will automatically drop the registration. See the note in Spectator’s documentation about potential memory leaks if this API is misused.
// the registry will automatically sample this gauge periodically
registry.gauge("gaugeName", pool, Pool::numberOfRunningThreads);

// manually sample a value in code at periodic intervals -- last resort!
registry.gauge("gaugeName", Arrays.asList("tagKey1", "tagValue1", ...), 1000);
A distribution summary is used to track the distribution of events. It is similar to a timer, but more general in that the size does not have to be a period of time. For example, a distribution summary could be used to measure the payload sizes of requests hitting a server.
// record a value in the distribution summary whenever an event occurs
DistributionSummary ds = registry.distributionSummary("dsName", "tagKey1", "tagValue1", ...);
ds.record(request.sizeInBytes());
![]() | Warning |
---|---|
If your code is compiled on Java 8, please use Spectator instead of Servo as Spectator is destined to replace Servo entirely in the long term. |
In Servo parlance, a monitor is a named, typed, and tagged configuration and a metric represents the value of a given monitor at a point in time. Servo monitors are logically equivalent to Spectator meters. Servo monitors are created and controlled by a MonitorRegistry
. In spite of the above warning, Servo does have a wider array of monitor options than Spectator has meters.
Spring Cloud integration configures an injectable com.netflix.servo.MonitorRegistry
instance for you. Once you have created the appropriate Monitor
type in Servo, the process of recording data is wholly similar to Spectator.
If you are using the Servo MonitorRegistry
instance provided by Spring Cloud (specifically, an instance of DefaultMonitorRegistry
), Servo provides convenience classes for retrieving counters and timers. These convenience classes ensure that only one Monitor
is registered for each unique combination of name and tags.
To manually create a Monitor type in Servo, especially for the more exotic monitor types for which convenience methods are not provided, instantiate the appropriate type by providing a MonitorConfig
instance:
MonitorConfig config = MonitorConfig.builder("timerName").withTag("tagKey1", "tagValue1").build();

// somewhere we should cache this Monitor by MonitorConfig
Timer timer = new BasicTimer(config);
monitorRegistry.register(timer);
Atlas was developed by Netflix to manage dimensional time series data for near real-time operational insight. Atlas features in-memory data storage, allowing it to gather and report very large numbers of metrics, very quickly.
Atlas captures operational intelligence. Whereas business intelligence is data gathered for analyzing trends over time, operational intelligence provides a picture of what is currently happening within a system.
Spring Cloud provides a spring-cloud-starter-atlas
that has all the dependencies you need. Then just annotate your Spring Boot application with @EnableAtlas
and provide a location for your running Atlas server with the netflix.atlas.uri
property.
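A minimal sketch (the Atlas URI shown is an assumption; point it at your own Atlas server):

@SpringBootApplication
@EnableAtlas
public class MetricsApplication {
    public static void main(String[] args) {
        SpringApplication.run(MetricsApplication.class, args);
    }
}

application.yml.

netflix:
  atlas:
    uri: http://localhost:7101/api/v1/publish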
Spring Cloud enables you to add tags to every metric sent to the Atlas backend. Global tags can be used to separate metrics by application name, environment, region, etc.
Each bean implementing AtlasTagProvider
will contribute to the global tag list:
@Bean AtlasTagProvider atlasCommonTags( @Value("${spring.application.name}") String appName) { return () -> Collections.singletonMap("app", appName); }
To bootstrap an in-memory standalone Atlas instance:
$ curl -LO https://github.com/Netflix/atlas/releases/download/v1.4.2/atlas-1.4.2-standalone.jar
$ java -jar atlas-1.4.2-standalone.jar
![]() | Tip |
---|---|
An Atlas standalone node running on an r3.2xlarge (61GB RAM) can handle roughly 2 million metrics per minute for a given 6 hour window. |
Once running and you have collected a handful of metrics, verify that your setup is correct by listing tags on the Atlas server:
$ curl http://ATLAS/api/v1/tags
![]() | Tip |
---|---|
After executing several requests against your service, you can gather some very basic information on the request latency of every request by pasting the following url in your browser: |
The Atlas wiki contains a compilation of sample queries for various scenarios.
Make sure to check out the alerting philosophy and docs on using double exponential smoothing to generate dynamic alert thresholds.
Spring Cloud Netflix offers a variety of ways to make HTTP requests. You can use a load balanced
RestTemplate
, Ribbon, or Feign. No matter how you choose to make your HTTP requests, there is always
a chance the request may fail. When a request fails you may want to have the request retried
automatically. To accomplish this when using Spring Cloud Netflix you need to include
Spring Retry on your application’s classpath.
When Spring Retry is present load balanced RestTemplates
, Feign, and Zuul will automatically
retry any failed requests (assuming your configuration allows it).
Anytime Ribbon is used with Spring Retry you can control the retry functionality by configuring
certain Ribbon properties. The properties you can use are
client.ribbon.MaxAutoRetries
, client.ribbon.MaxAutoRetriesNextServer
, and
client.ribbon.OkToRetryOnAllOperations
. See the Ribbon documentation
for a description of what these properties do.
In addition you may want to retry requests when certain status codes are returned in the
response. You can list the response codes you would like the Ribbon client to retry using the
property clientName.ribbon.retryableStatusCodes
. For example
clientName: ribbon: retryableStatusCodes: 404,502
You can also create a bean of type LoadBalancedRetryPolicy
and implement the retryableStatusCode
method to determine whether you want to retry a request given the status code.
This section goes into more detail about how you can work with Spring Cloud Stream. It covers topics such as creating and running stream applications.
Spring Cloud Stream is a framework for building message-driven microservice applications. Spring Cloud Stream builds upon Spring Boot to create standalone, production-grade Spring applications, and uses Spring Integration to provide connectivity to message brokers. It provides opinionated configuration of middleware from several vendors, introducing the concepts of persistent publish-subscribe semantics, consumer groups, and partitions.
You can add the @EnableBinding
annotation to your application to get immediate connectivity to a message broker, and you can add @StreamListener
to a method to cause it to receive events for stream processing.
The following is a simple sink application which receives external messages.
@SpringBootApplication @EnableBinding(Sink.class) public class VoteRecordingSinkApplication { public static void main(String[] args) { SpringApplication.run(VoteRecordingSinkApplication.class, args); } @StreamListener(Sink.INPUT) public void processVote(Vote vote) { votingService.recordVote(vote); } }
The @EnableBinding
annotation takes one or more interfaces as parameters (in this case, the parameter is a single Sink
interface).
An interface declares input and/or output channels.
Spring Cloud Stream provides the interfaces Source
, Sink
, and Processor
; you can also define your own interfaces.
The following is the definition of the Sink
interface:
public interface Sink { String INPUT = "input"; @Input(Sink.INPUT) SubscribableChannel input(); }
The @Input
annotation identifies an input channel, through which received messages enter the application; the @Output
annotation identifies an output channel, through which published messages leave the application.
The @Input
and @Output
annotations can take a channel name as a parameter; if a name is not provided, the name of the annotated method will be used.
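For example, a sketch of a custom binding interface (the interface and channel names here are hypothetical):

public interface OrderChannels {
    @Input("incomingOrders")
    SubscribableChannel incomingOrders();

    @Output("shippedOrders")
    MessageChannel shippedOrders();
}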
Spring Cloud Stream will create an implementation of the interface for you. You can use this in the application by autowiring it, as in the following example of a test case.
@RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = VoteRecordingSinkApplication.class) @WebAppConfiguration @DirtiesContext public class StreamApplicationTests { @Autowired private Sink sink; @Test public void contextLoads() { assertNotNull(this.sink.input()); } }
Spring Cloud Stream provides a number of abstractions and primitives that simplify the writing of message-driven microservice applications. This section gives an overview of the following:
A Spring Cloud Stream application consists of a middleware-neutral core. The application communicates with the outside world through input and output channels injected into it by Spring Cloud Stream. Channels are connected to external brokers through middleware-specific Binder implementations.
Spring Cloud Stream provides Binder implementations for Kafka and Rabbit MQ. Spring Cloud Stream also includes a TestSupportBinder, which leaves a channel unmodified so that tests can interact with channels directly and reliably assert on what is received. You can use the extensible API to write your own Binder.
Spring Cloud Stream uses Spring Boot for configuration, and the Binder abstraction makes it possible for a Spring Cloud Stream application to be flexible in how it connects to middleware.
For example, deployers can dynamically choose, at runtime, the destinations (e.g., the Kafka topics or RabbitMQ exchanges) to which channels connect.
Such configuration can be provided through external configuration properties and in any form supported by Spring Boot (including application arguments, environment variables, and application.yml
or application.properties
files).
In the sink example from the Chapter 23, Introducing Spring Cloud Stream section, setting the application property spring.cloud.stream.bindings.input.destination
to raw-sensor-data
will cause it to read from the raw-sensor-data
Kafka topic, or from a queue bound to the raw-sensor-data
RabbitMQ exchange.
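For example, a sketch in application.yml:

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: raw-sensor-data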
Spring Cloud Stream automatically detects and uses a binder found on the classpath. You can easily use different types of middleware with the same code: just include a different binder at build time. For more complex use cases, you can also package multiple binders with your application and have it choose the binder, and even whether to use different binders for different channels, at runtime.
Communication between applications follows a publish-subscribe model, where data is broadcast through shared topics. This can be seen in the following figure, which shows a typical deployment for a set of interacting Spring Cloud Stream applications.
Data reported by sensors to an HTTP endpoint is sent to a common destination named raw-sensor-data
.
From the destination, it is independently processed by a microservice application that computes time-windowed averages and by another microservice application that ingests the raw data into HDFS.
In order to process the data, both applications declare the topic as their input at runtime.
The publish-subscribe communication model reduces the complexity of both the producer and the consumer, and allows new applications to be added to the topology without disruption of the existing flow. For example, downstream from the average-calculating application, you can add an application that calculates the highest temperature values for display and monitoring. You can then add another application that interprets the same flow of averages for fault detection. Doing all communication through shared topics rather than point-to-point queues reduces coupling between microservices.
While the concept of publish-subscribe messaging is not new, Spring Cloud Stream takes the extra step of making it an opinionated choice for its application model. By using native middleware support, Spring Cloud Stream also simplifies use of the publish-subscribe model across different platforms.
While the publish-subscribe model makes it easy to connect applications through shared topics, the ability to scale up by creating multiple instances of a given application is equally important. When doing this, different instances of an application are placed in a competing consumer relationship, where only one of the instances is expected to handle a given message.
Spring Cloud Stream models this behavior through the concept of a consumer group.
(Spring Cloud Stream consumer groups are similar to and inspired by Kafka consumer groups.)
Each consumer binding can use the spring.cloud.stream.bindings.<channelName>.group
property to specify a group name.
For the consumers shown in the following figure, this property would be set as spring.cloud.stream.bindings.<channelName>.group=hdfsWrite
or spring.cloud.stream.bindings.<channelName>.group=average
.
All groups which subscribe to a given destination receive a copy of published data, but only one member of each group receives a given message from that destination. By default, when a group is not specified, Spring Cloud Stream assigns the application to an anonymous and independent single-member consumer group that is in a publish-subscribe relationship with all other consumer groups.
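As a sketch, the two consumers described above could be configured as follows, assuming both read the raw-sensor-data destination through an input channel named input:

# averaging application
spring.cloud.stream.bindings.input.destination=raw-sensor-data
spring.cloud.stream.bindings.input.group=average

# HDFS ingest application
spring.cloud.stream.bindings.input.destination=raw-sensor-data
spring.cloud.stream.bindings.input.group=hdfsWrite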
Consistent with the opinionated application model of Spring Cloud Stream, consumer group subscriptions are durable. That is, a binder implementation ensures that group subscriptions are persistent, and once at least one subscription for a group has been created, the group will receive messages, even if they are sent while all applications in the group are stopped.
![]() | Note |
---|---|
Anonymous subscriptions are non-durable by nature. For some binder implementations (e.g., RabbitMQ), it is possible to have non-durable group subscriptions. |
In general, it is preferable to always specify a consumer group when binding an application to a given destination. When scaling up a Spring Cloud Stream application, you must specify a consumer group for each of its input bindings. This prevents the application’s instances from receiving duplicate messages (unless that behavior is desired, which is unusual).
Spring Cloud Stream provides support for partitioning data between multiple instances of a given application. In a partitioned scenario, the physical communication medium (e.g., the broker topic) is viewed as being structured into multiple partitions. One or more producer application instances send data to multiple consumer application instances and ensure that data identified by common characteristics are processed by the same consumer instance.
Spring Cloud Stream provides a common abstraction for implementing partitioned processing use cases in a uniform fashion. Partitioning can thus be used whether the broker itself is naturally partitioned (e.g., Kafka) or not (e.g., RabbitMQ).
Partitioning is a critical concept in stateful processing, where it is essential, for either performance or consistency reasons, to ensure that all related data is processed together. For example, in the time-windowed average calculation example, it is important that all measurements from any given sensor are processed by the same application instance.
![]() | Note |
---|---|
To set up a partitioned processing scenario, you must configure both the data-producing and the data-consuming ends. |
This section describes Spring Cloud Stream’s programming model. Spring Cloud Stream provides a number of predefined annotations for declaring bound input and output channels as well as how to listen to channels.
You can turn a Spring application into a Spring Cloud Stream application by applying the @EnableBinding
annotation to one of the application’s configuration classes.
The @EnableBinding
annotation itself is meta-annotated with @Configuration
and triggers the configuration of Spring Cloud Stream infrastructure:
... @Import(...) @Configuration @EnableIntegration public @interface EnableBinding { ... Class<?>[] value() default {}; }
The @EnableBinding
annotation can take as parameters one or more interface classes that contain methods which represent bindable components (typically message channels).
![]() | Note |
---|---|
In Spring Cloud Stream 1.0, the only supported bindable components are the Spring Messaging MessageChannel and its extensions SubscribableChannel and PollableChannel. |
A Spring Cloud Stream application can have an arbitrary number of input and output channels defined in an interface as @Input
and @Output
methods:
public interface Barista { @Input SubscribableChannel orders(); @Output MessageChannel hotDrinks(); @Output MessageChannel coldDrinks(); }
Using this interface as a parameter to @EnableBinding
will trigger the creation of three bound channels named orders
, hotDrinks
, and coldDrinks
, respectively.
@EnableBinding(Barista.class) public class CafeConfiguration { ... }
Using the @Input
and @Output
annotations, you can specify a customized channel name for the channel, as shown in the following example:
public interface Barista { ... @Input("inboundOrders") SubscribableChannel orders(); }
In this example, the created bound channel will be named inboundOrders
.
For easy addressing of the most common use cases, which involve either an input channel, an output channel, or both, Spring Cloud Stream provides three predefined interfaces out of the box.
Source
can be used for an application which has a single outbound channel.
public interface Source { String OUTPUT = "output"; @Output(Source.OUTPUT) MessageChannel output(); }
Sink
can be used for an application which has a single inbound channel.
public interface Sink { String INPUT = "input"; @Input(Sink.INPUT) SubscribableChannel input(); }
Processor
can be used for an application which has both an inbound channel and an outbound channel.
public interface Processor extends Source, Sink { }
Spring Cloud Stream provides no special handling for any of these interfaces; they are only provided out of the box.
For each bound interface, Spring Cloud Stream will generate a bean that implements the interface.
Invoking a @Input
-annotated or @Output
-annotated method of one of these beans will return the relevant bound channel.
The bean in the following example sends a message on the output channel when its hello
method is invoked.
It invokes output()
on the injected Source
bean to retrieve the target channel.
@Component public class SendingBean { private Source source; @Autowired public SendingBean(Source source) { this.source = source; } public void sayHello(String name) { source.output().send(MessageBuilder.withPayload(name).build()); } }
Bound channels can be also injected directly:
@Component public class SendingBean { private MessageChannel output; @Autowired public SendingBean(MessageChannel output) { this.output = output; } public void sayHello(String name) { output.send(MessageBuilder.withPayload(name).build()); } }
If the name of the channel is customized on the declaring annotation, that name should be used instead of the method name. Given the following declaration:
public interface CustomSource { ... @Output("customOutput") MessageChannel output(); }
The channel will be injected as shown in the following example:
@Component public class SendingBean { private MessageChannel output; @Autowired public SendingBean(@Qualifier("customOutput") MessageChannel output) { this.output = output; } public void sayHello(String name) { this.output.send(MessageBuilder.withPayload(name).build()); } }
You can write a Spring Cloud Stream application using either Spring Integration annotations or Spring Cloud Stream’s @StreamListener
annotation.
The @StreamListener
annotation is modeled after other Spring Messaging annotations (such as @MessageMapping
, @JmsListener
, @RabbitListener
, etc.) but adds content type management and type coercion features.
Because Spring Cloud Stream is based on Spring Integration, it completely inherits Spring Integration's foundation, infrastructure, and components.
For example, you can attach the output channel of a Source
to a MessageSource
:
@EnableBinding(Source.class) public class TimerSource { @Value("${format}") private String format; @Bean @InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "${fixedDelay}", maxMessagesPerPoll = "1")) public MessageSource<String> timerMessageSource() { return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date())); } }
Or you can use a processor’s channels in a transformer:
@EnableBinding(Processor.class) public class TransformProcessor { @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT) public Object transform(String message) { return message.toUpperCase(); } }
Spring Cloud Stream supports publishing error messages received by the Spring Integration global
error channel. Error messages sent to the errorChannel
can be published to a specific destination
at the broker by configuring a binding for the outbound target named error
. For example, to
publish error messages to a broker destination named "myErrors", provide the following property:
spring.cloud.stream.bindings.error.destination=myErrors
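Independently of publishing errors to a broker destination, an application can also observe the global error channel directly. The following is a minimal sketch (not taken from the reference documentation) that logs failed messages by attaching a Spring Integration @ServiceActivator to the default errorChannel bean:

@EnableBinding(Sink.class)
public class ErrorChannelLogger {

    private static final Logger logger = LoggerFactory.getLogger(ErrorChannelLogger.class);

    // Listens on Spring Integration's global error channel ("errorChannel")
    @ServiceActivator(inputChannel = "errorChannel")
    public void handleError(ErrorMessage errorMessage) {
        // the payload of an ErrorMessage is the Throwable that caused the failure
        logger.error("Message handling failed", errorMessage.getPayload());
    }
}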
Complementary to its Spring Integration support, Spring Cloud Stream provides its own @StreamListener
annotation, modeled after other Spring Messaging annotations (e.g. @MessageMapping
, @JmsListener
, @RabbitListener
, etc.).
The @StreamListener
annotation provides a simpler model for handling inbound messages, especially when dealing with use cases that involve content type management and type coercion.
Spring Cloud Stream provides an extensible MessageConverter
mechanism for handling data conversion by bound channels and for, in this case, dispatching to methods annotated with @StreamListener
.
The following is an example of an application which processes external Vote
events:
@EnableBinding(Sink.class) public class VoteHandler { @Autowired VotingService votingService; @StreamListener(Sink.INPUT) public void handle(Vote vote) { votingService.record(vote); } }
The distinction between @StreamListener
and a Spring Integration @ServiceActivator
is seen when considering an inbound Message
that has a String
payload and a contentType
header of application/json
.
In the case of @StreamListener
, the MessageConverter
mechanism will use the contentType
header to parse the String
payload into a Vote
object.
As with other Spring Messaging methods, method arguments can be annotated with @Payload
, @Headers
and @Header
.
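For illustration, here is a hedged variant of the VoteHandler above that uses annotated arguments; the voteRegion header name is purely hypothetical:

@EnableBinding(Sink.class)
public class AnnotatedVoteHandler {

    @Autowired
    VotingService votingService;

    @StreamListener(Sink.INPUT)
    public void handle(@Payload Vote vote,
                       @Header(value = "voteRegion", required = false) String region) {
        // the payload is converted to a Vote as before; the header is bound separately
        votingService.record(vote);
    }
}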
![]() | Note |
---|---|
For methods which return data, you must use the @SendTo annotation to specify the output binding destination for the data returned by the method, as in the following example: @EnableBinding(Processor.class) public class TransformProcessor { @Autowired VotingService votingService; @StreamListener(Processor.INPUT) @SendTo(Processor.OUTPUT) public VoteResult handle(Vote vote) { return votingService.record(vote); } } |
Since version 1.2, Spring Cloud Stream supports dispatching messages to multiple @StreamListener
methods registered on an input channel, based on a condition.
In order to be eligible to support conditional dispatching, a method must satisfy the following conditions: it must not return a value, and it must be an individual message handling method (reactive API methods are not supported).
The condition is specified via a SpEL expression in the condition
attribute of the annotation and is evaluated for each message.
All the handlers that match the condition are invoked in the same thread, and no assumption should be made about the order in which the invocations take place.
An example of using @StreamListener
with dispatching conditions can be seen below.
In this example, all the messages bearing a header type
with the value foo
will be dispatched to the receiveFoo
method, and all the messages bearing a header type
with the value bar
will be dispatched to the receiveBar
method.
@EnableBinding(Sink.class) @EnableAutoConfiguration public static class TestPojoWithAnnotatedArguments { @StreamListener(target = Sink.INPUT, condition = "headers['type']=='foo'") public void receiveFoo(@Payload FooPojo fooPojo) { // handle the message } @StreamListener(target = Sink.INPUT, condition = "headers['type']=='bar'") public void receiveBar(@Payload BarPojo barPojo) { // handle the message } }
![]() | Note |
---|---|
Dispatching via @StreamListener conditions is only supported for handlers of individual messages, and not for reactive programming support (described below). |
Spring Cloud Stream also supports the use of reactive APIs where incoming and outgoing data is handled as continuous data flows.
Support for reactive APIs is available via the spring-cloud-stream-reactive
module, which needs to be added explicitly to your project.
The programming model with reactive APIs is declarative, where instead of specifying how each individual message should be handled, you can use operators that describe functional transformations from inbound to outbound data flows.
Spring Cloud Stream supports the following reactive APIs: Reactor and RxJava 1.x (described in the sections that follow).
In the future, it is intended to support a more generic model based on Reactive Streams.
The reactive programming model also uses the @StreamListener annotation for setting up reactive handlers. The differences are that:
the @StreamListener annotation must not specify an input or output, as they are provided as arguments and return values of the method;
the arguments of the method must be annotated with @Input and @Output, indicating which input and output the incoming and outgoing data flows connect to, respectively;
the return value of the method, if any, is annotated with @Output, indicating the output where data shall be sent.
![]() | Note |
---|---|
Reactive programming support requires Java 1.8. |
![]() | Note |
---|---|
As of Spring Cloud Stream 1.1.1 and later (starting with release train Brooklyn.SR2), reactive programming support requires the use of Reactor 3.0.4.RELEASE and higher.
Earlier Reactor versions (including 3.0.1.RELEASE, 3.0.2.RELEASE and 3.0.3.RELEASE) are not supported.
|
![]() | Note |
---|---|
The use of the term reactive currently refers to the reactive APIs being used and not to the execution model being reactive (i.e., the bound endpoints still use a 'push' rather than a 'pull' model). |
A Reactor-based handler can have the following argument types:

For arguments annotated with @Input, it supports the Reactor type Flux. The parameterization of the inbound Flux follows the same rules as in the case of individual message handling: it can be the entire Message, a POJO which can be the Message payload, or a POJO which is the result of a transformation based on the Message content-type header. Multiple inputs may be provided.

For arguments annotated with @Output, it supports the type FluxSender, which connects a Flux produced by the method with an output. Generally speaking, specifying outputs as arguments is only recommended when the method can have multiple outputs.

A Reactor-based handler also supports a return type of Flux, in which case it must be annotated with @Output. We recommend using the return value of the method when a single output flux is available.
Here is an example of a simple Reactor-based Processor.
@EnableBinding(Processor.class) @EnableAutoConfiguration public static class UppercaseTransformer { @StreamListener @Output(Processor.OUTPUT) public Flux<String> receive(@Input(Processor.INPUT) Flux<String> input) { return input.map(s -> s.toUpperCase()); } }
The same processor using output arguments looks like this:
@EnableBinding(Processor.class) @EnableAutoConfiguration public static class UppercaseTransformer { @StreamListener public void receive(@Input(Processor.INPUT) Flux<String> input, @Output(Processor.OUTPUT) FluxSender output) { output.send(input.map(s -> s.toUpperCase())); } }
RxJava 1.x handlers follow the same rules as Reactor-based ones, but use Observable
and ObservableSender
arguments and return types.
So the first example above will become:
@EnableBinding(Processor.class) @EnableAutoConfiguration public static class UppercaseTransformer { @StreamListener @Output(Processor.OUTPUT) public Observable<String> receive(@Input(Processor.INPUT) Observable<String> input) { return input.map(s -> s.toUpperCase()); } }
The second example above will become:
@EnableBinding(Processor.class) @EnableAutoConfiguration public static class UppercaseTransformer { @StreamListener public void receive(@Input(Processor.INPUT) Observable<String> input, @Output(Processor.OUTPUT) ObservableSender output) { output.send(input.map(s -> s.toUpperCase())); } }
Spring Cloud Stream provides support for aggregating multiple applications together, connecting their input and output channels directly and avoiding the additional cost of exchanging messages via a broker. As of version 1.0 of Spring Cloud Stream, aggregation is supported only for the following types of applications:
sources - applications with a single output channel named output, typically having a single binding of the type org.springframework.cloud.stream.messaging.Source;

sinks - applications with a single input channel named input, typically having a single binding of the type org.springframework.cloud.stream.messaging.Sink;

processors - applications with a single input channel named input and a single output channel named output, typically having a single binding of the type org.springframework.cloud.stream.messaging.Processor.

They can be aggregated together by creating a sequence of interconnected applications, in which the output channel of an element in the sequence is connected to the input channel of the next element, if it exists. A sequence can start with either a source or a processor, can contain an arbitrary number of processors, and must end with either a processor or a sink.

Depending on the nature of the starting and ending element, the sequence may have one or more bindable channels: if the sequence starts with a processor, its input channel becomes the input channel of the aggregate and will be bound accordingly; if the sequence ends with a processor, its output channel becomes the output channel of the aggregate and will be bound accordingly.

Aggregation is performed using the AggregateApplicationBuilder utility class, as in the following example.
Let’s consider a project in which we have a source, a processor, and a sink, which may be defined in the project or may be contained in one of the project’s dependencies.
![]() | Note |
---|---|
Each component (source, sink or processor) in an aggregate application must be provided in a separate package if the configuration classes use @SpringBootApplication. This is required to avoid cross-talk between the applications, due to the classpath scanning performed by @SpringBootApplication on configuration classes inside the same package. |
package com.app.mysink; @SpringBootApplication @EnableBinding(Sink.class) public class SinkApplication { private static Logger logger = LoggerFactory.getLogger(SinkApplication.class); @ServiceActivator(inputChannel=Sink.INPUT) public void loggerSink(Object payload) { logger.info("Received: " + payload); } }
package com.app.myprocessor; @SpringBootApplication @EnableBinding(Processor.class) public class ProcessorApplication { @Transformer public String loggerSink(String payload) { return payload.toUpperCase(); } }
package com.app.mysource; @SpringBootApplication @EnableBinding(Source.class) public class SourceApplication { @Bean @InboundChannelAdapter(value = Source.OUTPUT) public String timerMessageSource() { return new SimpleDateFormat().format(new Date()); } }
Each configuration can be used for running a separate component, but in this case they can be aggregated together as follows:
package com.app; @SpringBootApplication public class SampleAggregateApplication { public static void main(String[] args) { new AggregateApplicationBuilder() .from(SourceApplication.class).args("--fixedDelay=5000") .via(ProcessorApplication.class) .to(SinkApplication.class).args("--debug=true").run(args); } }
The starting component of the sequence is provided as argument to the from()
method.
The ending component of the sequence is provided as argument to the to()
method.
Intermediate processors are provided as argument to the via()
method.
Multiple processors of the same type can be chained together (e.g. for pipelining transformations with different configurations).
For each component, the builder can provide runtime arguments for Spring Boot configuration.
Spring Cloud Stream supports passing properties for the individual applications inside the aggregate application using 'namespace' as prefix.
The namespace can be set for applications as follows:
@SpringBootApplication public class SampleAggregateApplication { public static void main(String[] args) { new AggregateApplicationBuilder() .from(SourceApplication.class).namespace("source").args("--fixedDelay=5000") .via(ProcessorApplication.class).namespace("processor1") .to(SinkApplication.class).namespace("sink").args("--debug=true").run(args); } }
Once the 'namespace' is set for the individual applications, the application properties with the namespace
as prefix can be passed to the aggregate application using any supported property source (command line arguments, environment properties, etc.).
For instance, to override the default fixedDelay
and debug
properties of 'source' and 'sink' applications:
java -jar target/MyAggregateApplication-0.0.1-SNAPSHOT.jar --source.fixedDelay=10000 --sink.debug=false
A non-self-contained aggregate application is bound to the external broker via either or both of its inbound and outbound components (typically, message channels), while the applications inside the aggregate are bound to each other directly. For example, a source application’s output and a processor application’s input can be directly bound, while the processor’s output channel is bound to an external destination at the broker. When passing binding service properties for a non-self-contained aggregate application, you must pass them to the aggregate application itself instead of setting them as 'args' on the individual child applications. For instance,
@SpringBootApplication public class SampleAggregateApplication { public static void main(String[] args) { new AggregateApplicationBuilder() .from(SourceApplication.class).namespace("source").args("--fixedDelay=5000") .via(ProcessorApplication.class).namespace("processor1").args("--debug=true").run(args); } }
The binding properties like --spring.cloud.stream.bindings.output.destination=processor-output
need to be specified as one of the external configuration properties of the aggregate application (command line arguments, environment properties, etc.).
Spring Cloud Stream provides a Binder abstraction for use in connecting to physical destinations at the external middleware. This section provides information about the main concepts behind the Binder SPI, its main components, and implementation-specific details.
A producer is any component that sends messages to a channel.
The channel can be bound to an external message broker via a Binder implementation for that broker.
When invoking the bindProducer()
method, the first parameter is the name of the destination within the broker, the second parameter is the local channel instance to which the producer will send messages, and the third parameter contains properties (such as a partition key expression) to be used within the adapter that is created for that channel.
A consumer is any component that receives messages from a channel.
As with a producer, the consumer’s channel can be bound to an external message broker.
When invoking the bindConsumer()
method, the first parameter is the destination name, and a second parameter provides the name of a logical group of consumers.
Each group that is represented by consumer bindings for a given destination receives a copy of each message that a producer sends to that destination (i.e., publish-subscribe semantics).
If there are multiple consumer instances bound using the same group name, then messages will be load-balanced across those consumer instances so that each message sent by a producer is consumed by only a single consumer instance within each group (i.e., queueing semantics).
The Binder SPI consists of a number of interfaces, out-of-the-box utility classes, and discovery strategies that provide a pluggable mechanism for connecting to external middleware.
The key point of the SPI is the Binder
interface which is a strategy for connecting inputs and outputs to external middleware.
public interface Binder<T, C extends ConsumerProperties, P extends ProducerProperties> { Binding<T> bindConsumer(String name, String group, T inboundBindTarget, C consumerProperties); Binding<T> bindProducer(String name, T outboundBindTarget, P producerProperties); }
The interface is parameterized, offering a number of extension points. For the input and output bind targets, as of version 1.0 only MessageChannel is supported, but this is intended to be used as an extension point in the future.

A typical binder implementation consists of the following: a class that implements the Binder interface; a Spring @Configuration class that creates a bean of the type above along with the middleware connection infrastructure; and a META-INF/spring.binders file found on the classpath containing one or more binder definitions, e.g.:

kafka:\ org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration
Spring Cloud Stream relies on implementations of the Binder SPI to perform the task of connecting channels to message brokers. Each Binder implementation typically connects to one type of messaging system.
By default, Spring Cloud Stream relies on Spring Boot’s auto-configuration to configure the binding process. If a single Binder implementation is found on the classpath, Spring Cloud Stream will use it automatically. For example, a Spring Cloud Stream project that aims to bind only to RabbitMQ can simply add the following dependency:
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-stream-binder-rabbit</artifactId> </dependency>
For the specific maven coordinates of other binder dependencies, please refer to the documentation of that binder implementation.
When multiple binders are present on the classpath, the application must indicate which binder is to be used for each channel binding.
Each binder configuration contains a META-INF/spring.binders
, which is a simple properties file:
rabbit:\ org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration
Similar files exist for the other provided binder implementations (e.g., Kafka), and custom binder implementations are expected to provide them, as well.
The key represents an identifying name for the binder implementation, whereas the value is a comma-separated list of configuration classes that each contain one and only one bean definition of type org.springframework.cloud.stream.binder.Binder
.
Binder selection can either be performed globally, using the spring.cloud.stream.defaultBinder
property (e.g., spring.cloud.stream.defaultBinder=rabbit
) or individually, by configuring the binder on each channel binding.
For instance, a processor application (that has channels with the names input
and output
for read/write respectively) which reads from Kafka and writes to RabbitMQ can specify the following configuration:
spring.cloud.stream.bindings.input.binder=kafka spring.cloud.stream.bindings.output.binder=rabbit
By default, binders share the application’s Spring Boot auto-configuration, so that one instance of each binder found on the classpath will be created. If your application should connect to more than one broker of the same type, you can specify multiple binder configurations, each with different environment settings.
![]() | Note |
---|---|
Turning on explicit binder configuration will disable the default binder configuration process altogether.
If you do this, all binders in use must be included in the configuration.
Frameworks that intend to use Spring Cloud Stream transparently may create binder configurations that can be referenced by name, but will not affect the default binder configuration.
In order to do so, a binder configuration may have its |
For example, this is the typical configuration for a processor application which connects to two RabbitMQ broker instances:
spring: cloud: stream: bindings: input: destination: foo binder: rabbit1 output: destination: bar binder: rabbit2 binders: rabbit1: type: rabbit environment: spring: rabbitmq: host: <host1> rabbit2: type: rabbit environment: spring: rabbitmq: host: <host2>
The following properties are available when creating custom binder configurations.
They must be prefixed with spring.cloud.stream.binders.<configurationName>
.
The binder type.
It typically references one of the binders found on the classpath, in particular a key in a META-INF/spring.binders
file.
By default, it has the same value as the configuration name.
Whether the configuration will inherit the environment of the application itself.
Default true
.
Root for a set of properties that can be used to customize the environment of the binder. When this is configured, the context in which the binder is being created is not a child of the application context. This allows for complete separation between the binder components and the application components.
Default empty
.
Whether the binder configuration is a candidate for being considered a default binder, or can be used only when explicitly referenced. This allows adding binder configurations without interfering with the default processing.
Default true
.
Spring Cloud Stream supports general configuration options as well as configuration for bindings and binders. Some binders allow additional binding properties to support middleware-specific features.
Configuration options can be provided to Spring Cloud Stream applications via any mechanism supported by Spring Boot. This includes application arguments, environment variables, and YAML or .properties files.
The number of deployed instances of an application. Must be set for partitioning and if using Kafka.
Default: 1
.
A number from 0 to instanceCount - 1, identifying the instance index of the application. Used for partitioning and with Kafka. Automatically set in Cloud Foundry to match the application’s instance index.

A list of destinations that can be bound dynamically (for example, in a dynamic routing scenario). If set, only listed destinations can be bound.
Default: empty (allowing any destination to be bound).
The default binder to use, if multiple binders are configured. See Multiple Binders on the Classpath.
Default: empty.
This property is only applicable when the cloud
profile is active and Spring Cloud Connectors are provided with the application.
If the property is false (the default), the binder will detect a suitable bound service (e.g. a RabbitMQ service bound in Cloud Foundry for the RabbitMQ binder) and will use it for creating connections (usually via Spring Cloud Connectors).
When set to true, this property instructs binders to completely ignore the bound services and rely on Spring Boot properties (e.g. relying on the spring.rabbitmq.*
properties provided in the environment for the RabbitMQ binder).
The typical usage of this property is to be nested in a customized environment when connecting to multiple systems.
Default: false.
Binding properties are supplied using the format spring.cloud.stream.bindings.<channelName>.<property>=<value>
.
The <channelName>
represents the name of the channel being configured (e.g., output
for a Source
).
To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format spring.cloud.stream.default.<property>=<value>
.
In what follows, we indicate where we have omitted the spring.cloud.stream.bindings.<channelName>.
prefix and focus just on the property name, with the understanding that the prefix will be included at runtime.
The following binding properties are available for both input and output bindings and must be prefixed with spring.cloud.stream.bindings.<channelName>.
, e.g. spring.cloud.stream.bindings.input.destination=ticktock
.
Default values can be set by using the prefix spring.cloud.stream.default
, e.g. spring.cloud.stream.default.contentType=application/json
.
The consumer group of the channel. Applies only to inbound bindings. See Consumer Groups.
Default: null (indicating an anonymous consumer).
The content type of the channel.
Default: null (so that no type coercion is performed).
The binder used by this binding. See Section 26.4, “Multiple Binders on the Classpath” for details.
Default: null (the default binder will be used, if one exists).
The following binding properties are available for input bindings only and must be prefixed with spring.cloud.stream.bindings.<channelName>.consumer.
, e.g. spring.cloud.stream.bindings.input.consumer.concurrency=3
.
Default values can be set by using the prefix spring.cloud.stream.default.consumer
, e.g. spring.cloud.stream.default.consumer.headerMode=raw
.
The concurrency of the inbound consumer.
Default: 1
.
Whether the consumer receives data from a partitioned producer.
Default: false
.
When set to raw
, disables header parsing on input.
Effective only for messaging middleware that does not support message headers natively and requires header embedding.
Useful when inbound data is coming from outside Spring Cloud Stream applications.
Default: embeddedHeaders
.
If processing fails, the number of attempts to process the message (including the first). Set to 1 to disable retry.
Default: 3
.
The backoff initial interval on retry.
Default: 1000
.
The maximum backoff interval.
Default: 10000
.
The backoff multiplier.
Default: 2.0
.
When set to a value greater than or equal to zero, it allows customizing the instance index of this consumer (if different from spring.cloud.stream.instanceIndex
).
When set to a negative value, it will default to spring.cloud.stream.instanceIndex
.
Default: -1
.
When set to a value greater than or equal to zero, it allows customizing the instance count of this consumer (if different from spring.cloud.stream.instanceCount
).
When set to a negative value, it will default to spring.cloud.stream.instanceCount
.
Default: -1
.
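As an illustration, the following sketch combines two of the consumer properties described above for an input channel named input; the values are arbitrary:

spring.cloud.stream.bindings.input.consumer.concurrency=3
spring.cloud.stream.bindings.input.consumer.headerMode=raw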
The following binding properties are available for output bindings only and must be prefixed with spring.cloud.stream.bindings.<channelName>.producer.
, e.g. spring.cloud.stream.bindings.input.producer.partitionKeyExpression=payload.id
.
Default values can be set by using the prefix spring.cloud.stream.default.producer
, e.g. spring.cloud.stream.default.producer.partitionKeyExpression=payload.id
.
A SpEL expression that determines how to partition outbound data.
If set, or if partitionKeyExtractorClass
is set, outbound data on this channel will be partitioned, and partitionCount
must be set to a value greater than 1 to be effective.
The two options are mutually exclusive.
See Section 24.5, “Partitioning Support”.
Default: null.
A PartitionKeyExtractorStrategy
implementation.
If set, or if partitionKeyExpression
is set, outbound data on this channel will be partitioned, and partitionCount
must be set to a value greater than 1 to be effective.
The two options are mutually exclusive.
See Section 24.5, “Partitioning Support”.
Default: null.
A PartitionSelectorStrategy
implementation.
Mutually exclusive with partitionSelectorExpression
.
If neither is set, the partition will be selected as the hashCode(key) % partitionCount
, where key
is computed via either partitionKeyExpression
or partitionKeyExtractorClass
.
Default: null.
A SpEL expression for customizing partition selection.
Mutually exclusive with partitionSelectorClass
.
If neither is set, the partition will be selected as the hashCode(key) % partitionCount
, where key
is computed via either partitionKeyExpression
or partitionKeyExtractorClass
.
Default: null.
The number of target partitions for the data, if partitioning is enabled. Must be set to a value greater than 1 if the producer is partitioned. On Kafka, interpreted as a hint; the larger of this and the partition count of the target topic is used instead.
Default: 1
.
When set to raw
, disables header embedding on output.
Effective only for messaging middleware that does not support message headers natively and requires header embedding.
Useful when producing data for non-Spring Cloud Stream applications.
Default: embeddedHeaders
.
When set to true
, the outbound message is serialized directly by the client library, which must be configured correspondingly (e.g. by setting an appropriate Kafka producer value serializer).
When this configuration is being used, the outbound message marshalling is not based on the contentType
of the binding.
When native encoding is used, it is the responsibility of the consumer to use an appropriate decoder (for example, a Kafka consumer value de-serializer) to deserialize the inbound message.
Also, when native encoding/decoding is used the headerMode
property is ignored and headers will not be embedded into the message.
Default: false
.
Besides the channels defined via @EnableBinding
, Spring Cloud Stream allows applications to send messages to dynamically bound destinations.
This is useful, for example, when the target destination needs to be determined at runtime.
Applications can do so by using the BinderAwareChannelResolver
bean, registered automatically by the @EnableBinding
annotation.
The property 'spring.cloud.stream.dynamicDestinations' can be used for restricting the dynamic destination names to a set known beforehand (whitelisting). If the property is not set, any destination can be bound dynamically.
The BinderAwareChannelResolver
can be used directly as in the following example, in which a REST controller uses a path variable to decide the target channel.
@EnableBinding @Controller public class SourceWithDynamicDestination { @Autowired private BinderAwareChannelResolver resolver; @RequestMapping(path = "/{target}", method = POST, consumes = "*/*") @ResponseStatus(HttpStatus.ACCEPTED) public void handleRequest(@RequestBody String body, @PathVariable("target") String target, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) { sendMessage(body, target, contentType); } private void sendMessage(String body, String target, Object contentType) { resolver.resolveDestination(target).send(MessageBuilder.createMessage(body, new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType)))); } }
After starting the application on the default port 8080, when sending the following data:
curl -H "Content-Type: application/json" -X POST -d "customer-1" http://localhost:8080/customers curl -H "Content-Type: application/json" -X POST -d "order-1" http://localhost:8080/orders
The destinations 'customers' and 'orders' are created in the broker (for example, as an exchange in the case of Rabbit or a topic in the case of Kafka), and the data is published to the appropriate destination.
The BinderAwareChannelResolver
is a general purpose Spring Integration DestinationResolver
and can be injected in other components.
For example, it can be used in a router that routes based on a SpEL expression evaluated against the target
field of an incoming JSON message.
@EnableBinding @Controller public class SourceWithDynamicDestination { @Autowired private BinderAwareChannelResolver resolver; @RequestMapping(path = "/", method = POST, consumes = "application/json") @ResponseStatus(HttpStatus.ACCEPTED) public void handleRequest(@RequestBody String body, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) { sendMessage(body, contentType); } private void sendMessage(Object body, Object contentType) { routerChannel().send(MessageBuilder.createMessage(body, new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType)))); } @Bean(name = "routerChannel") public MessageChannel routerChannel() { return new DirectChannel(); } @Bean @ServiceActivator(inputChannel = "routerChannel") public ExpressionEvaluatingRouter router() { ExpressionEvaluatingRouter router = new ExpressionEvaluatingRouter(new SpelExpressionParser().parseExpression("payload.target")); router.setDefaultOutputChannelName("default-output"); router.setChannelResolver(resolver); return router; } }
To allow you to propagate information about the content type of produced messages, Spring Cloud Stream attaches, by default, a contentType
header to outbound messages.
For middleware that does not directly support headers, Spring Cloud Stream provides a mechanism for automatically wrapping outbound messages in an envelope of its own.
For middleware that does support headers, Spring Cloud Stream applications may receive messages with a given content type from non-Spring Cloud Stream applications.
Spring Cloud Stream can handle messages based on this information in two ways:
contentType
settings on inbound and outbound channels@StreamListener
Spring Cloud Stream allows you to declaratively configure type conversion for inputs and outputs using the spring.cloud.stream.bindings.<channelName>.content-type
property of a binding.
Note that general type conversion may also be accomplished easily by using a transformer inside your application.
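For example, to have the payloads of an input channel converted based on a declared content type, one might set the following; the property name is taken from above and the value is illustrative:

spring.cloud.stream.bindings.input.content-type=application/json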
Currently, Spring Cloud Stream natively supports a number of type conversions commonly used in streams; they are summarized in the table later in this section. In that table, JSON represents either a byte array or String payload containing JSON. Currently, Objects may be converted from a JSON byte array or String. Converting to JSON always produces a String.
If no content-type
property is set on an outbound channel, Spring Cloud Stream will serialize the payload using a serializer based on the Kryo serialization framework.
Deserializing messages at the destination requires the payload class to be present on the receiver’s classpath.
content-type
values are parsed as media types, e.g., application/json
or text/plain;charset=UTF-8
.
MIME types are especially useful for indicating how to convert to String or byte[] content.
Spring Cloud Stream also uses MIME type format to represent Java types, using the general type application/x-java-object
with a type
parameter.
For example, application/x-java-object;type=java.util.Map
or application/x-java-object;type=com.bar.Foo
can be set as the content-type
property of an input binding.
In addition, Spring Cloud Stream provides custom MIME types, notably, application/x-spring-tuple
to specify a Tuple.
The type conversions Spring Cloud Stream provides out of the box are summarized in the following table: 'Source Payload' means the payload before conversion and 'Target Payload' means the payload after conversion. The type conversion can occur either on the 'producer' side (output) or on the 'consumer' side (input).
Source Payload | Target Payload | content-type header (source message) | content-type header (after conversion) | Comments |
---|---|---|---|---|
POJO | JSON String | ignored | application/json | |
Tuple | JSON String | ignored | application/json | JSON is tailored for Tuple |
POJO | String (toString()) | ignored | text/plain, java.lang.String | |
POJO | byte[] (java.io serialized) | ignored | application/x-java-serialized-object | |
JSON byte[] or String | POJO | application/json (or none) | application/x-java-object | |
byte[] or String | Serializable | application/x-java-serialized-object | application/x-java-object | |
JSON byte[] or String | Tuple | application/json (or none) | application/x-spring-tuple | |
byte[] | String | any | text/plain, java.lang.String | will apply any Charset specified in the content-type header |
String | byte[] | any | application/octet-stream | will apply any Charset specified in the content-type header |
![]() | Note |
---|---|
Conversion applies to payloads that require type conversion. For example, if an application produces an XML string with outputType=application/json, the payload will not be converted from XML to JSON. This is because the payload sent to the outbound channel is already a String, so no conversion will be applied at runtime. It is also important to note that when using the default serialization mechanism, the payload class must be shared between the sending and receiving application, and compatible with the binary content. This can create issues when application code changes independently in the two applications, as the binary format and code may become incompatible. |
![]() | Tip |
---|---|
While conversion is supported for both inbound and outbound channels, it is especially recommended to be used for the conversion of outbound messages.
For the conversion of inbound messages, especially when the target is a POJO, the @StreamListener support will perform the conversion automatically. |
Besides the conversions that it supports out of the box, Spring Cloud Stream also supports registering your own message conversion implementations.
This allows you to send and receive data in a variety of custom formats, including binary, and associate them with specific contentTypes
.
Spring Cloud Stream registers all the beans of type org.springframework.messaging.converter.MessageConverter
as custom message converters along with the out of the box message converters.
If your message converter needs to work with a specific content-type
and target class (for both input and output), then the message converter needs to extend org.springframework.messaging.converter.AbstractMessageConverter
.
For conversion when using @StreamListener
, a message converter that implements org.springframework.messaging.converter.MessageConverter
would suffice.
Here is an example of creating a message converter bean (with the content-type application/bar
) inside a Spring Cloud Stream application:
@EnableBinding(Sink.class) @SpringBootApplication public static class SinkApplication { ... @Bean public MessageConverter customMessageConverter() { return new MyCustomMessageConverter(); }
public class MyCustomMessageConverter extends AbstractMessageConverter { public MyCustomMessageConverter() { super(new MimeType("application", "bar")); } @Override protected boolean supports(Class<?> clazz) { return (Bar.class == clazz); } @Override protected Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint) { Object payload = message.getPayload(); return (payload instanceof Bar ? payload : new Bar((byte[]) payload)); } }
Spring Cloud Stream also provides support for Avro-based converters and schema evolution. See the specific section for details.
The @StreamListener
annotation provides a convenient way for converting incoming messages without the need to specify the content type of an input channel.
During the dispatching process to methods annotated with @StreamListener
, a conversion will be applied automatically if the argument requires it.
For example, suppose that a message with the String content {"greeting":"Hello, world"}
and a content-type
header of application/json
is received on the input channel.
The following application receives it:
public class GreetingMessage { String greeting; public String getGreeting() { return greeting; } public void setGreeting(String greeting) { this.greeting = greeting; } } @EnableBinding(Sink.class) @EnableAutoConfiguration public static class GreetingSink { @StreamListener(Sink.INPUT) public void receive(GreetingMessage greeting) { // handle GreetingMessage } }
The argument of the method will be populated automatically with the POJO containing the unmarshalled form of the JSON String.
Spring Cloud Stream provides support for schema-based message converters through its spring-cloud-stream-schema
module.
Currently, the only serialization format supported out of the box for schema-based message converters is Apache Avro, with more formats to be added in future versions.
The spring-cloud-stream-schema
module contains two types of message converters that can be used for Apache Avro serialization:
The AvroSchemaMessageConverter
supports serializing and deserializing messages either using a predefined schema or by using the schema information available in the class (either reflectively, or contained in the SpecificRecord
).
If the target type of the conversion is a GenericRecord
, then a schema must be set.
To use it, you can simply add it to the application context, optionally specifying one or more MimeTypes
to associate it with.
The default MimeType
is application/avro
.
Here is an example of configuring it in a sink application registering the Apache Avro MessageConverter
, without a predefined schema:
@EnableBinding(Sink.class) @SpringBootApplication public static class SinkApplication { ... @Bean public MessageConverter userMessageConverter() { return new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes")); } }
Conversely, here is an application that registers a converter with a predefined schema, to be found on the classpath:
@EnableBinding(Sink.class) @SpringBootApplication public static class SinkApplication { ... @Bean public MessageConverter userMessageConverter() { AvroSchemaMessageConverter converter = new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes")); converter.setSchemaLocation(new ClassPathResource("schemas/User.avro")); return converter; } }
In order to understand the schema registry client converter, we will describe the schema registry support first.
Most serialization models, especially the ones that aim for portability across different platforms and languages, rely on a schema that describes how the data is serialized in the binary payload. In order to serialize the data and then to interpret it, both the sending and receiving sides must have access to a schema that describes the binary format. In certain cases, the schema can be inferred from the payload type on serialization, or from the target type on deserialization, but in many cases applications benefit from having access to an explicit schema that describes the binary data format. A schema registry allows you to store schema information in a textual format (typically JSON) and makes that information accessible to various applications that need it to receive and send data in binary format. A schema is referenceable as a tuple consisting of: a subject, which is the logical name of the schema; the schema format, which describes the binary format of the data; and the schema version.
Spring Cloud Stream provides a schema registry server implementation.
In order to use it, you can simply add the spring-cloud-stream-schema-server
artifact to your project and use the @EnableSchemaRegistryServer
annotation, adding the schema registry server REST controller to your application.
This annotation is intended to be used with Spring Boot web applications, and the listening port of the server is controlled by the server.port
setting.
The spring.cloud.stream.schema.server.path
setting can be used to control the root path of the schema server (especially when it is embedded in other applications).
The spring.cloud.stream.schema.server.allowSchemaDeletion
boolean setting enables the deletion of schemas. By default, this is disabled.
The schema registry server uses a relational database to store the schemas. By default, it uses an embedded database. You can customize the schema storage using the Spring Boot SQL database and JDBC configuration options.
A Spring Boot application enabling the schema registry looks as follows:
@SpringBootApplication @EnableSchemaRegistryServer public class SchemaRegistryServerApplication { public static void main(String[] args) { SpringApplication.run(SchemaRegistryServerApplication.class, args); } }
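A sketch of the corresponding application.properties, using the settings described above; the values shown here are illustrative, not defaults taken from the reference documentation:

server.port=8990
spring.cloud.stream.schema.server.path=/schema-registry
spring.cloud.stream.schema.server.allowSchemaDeletion=false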
The Schema Registry Server API consists of the following operations:
Register a new schema. Accepts a JSON payload with the following fields: subject (the schema subject), format (the schema format), and definition (the schema definition). The response is a schema object in JSON format with the fields id (the schema id), subject, format, version, and definition.

Retrieve an existing schema by its subject, format and version. The response is a schema object in JSON format with the fields id, subject, format, version, and definition.

Retrieve a list of existing schemas by subject and format. The response is a list of schemas, each represented as a schema object in JSON format with the fields id, subject, format, version, and definition.

Retrieve an existing schema by its id. The response is a schema object in JSON format with the fields id, subject, format, version, and definition.

Delete existing schemas by their subject.
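As an illustration only, a registration request might look like the following; the endpoint path (the server root), the subject, and the Avro definition shown here are assumptions for a server running with default settings:

curl -X POST http://localhost:8990/ -H "Content-Type: application/json" -d '{"subject": "user", "format": "avro", "definition": "{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"}]}"}'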
![]() | Note |
---|---|
This note applies to users of Spring Cloud Stream 1.1.0.RELEASE only.
Spring Cloud Stream 1.1.0.RELEASE used the table name schema for storing Schema objects, which is a keyword in a number of database implementations. To avoid any conflicts, later releases use the name SCHEMA_REPOSITORY for the storage table. |
The client-side abstraction for interacting with schema registry servers is the SchemaRegistryClient
interface, with the following structure:
public interface SchemaRegistryClient { SchemaRegistrationResponse register(String subject, String format, String schema); String fetch(SchemaReference schemaReference); String fetch(Integer id); }
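As a usage sketch, assuming a SchemaRegistryClient bean named schemaRegistryClient is available (for example via @EnableSchemaRegistryClient) and that SchemaRegistrationResponse exposes the generated id:

// register an Avro schema definition under the "user" subject and fetch it back by id
SchemaRegistrationResponse response = schemaRegistryClient.register("user", "avro", schemaDefinition);
String storedDefinition = schemaRegistryClient.fetch(response.getId());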
Spring Cloud Stream provides out of the box implementations for interacting with its own schema server, as well as for interacting with the Confluent Schema Registry.
A client for the Spring Cloud Stream schema registry can be configured using the @EnableSchemaRegistryClient
as follows:
@EnableBinding(Sink.class) @SpringBootApplication @EnableSchemaRegistryClient public static class AvroSinkApplication { ... }
![]() | Note |
---|---|
The default converter is optimized to cache not only the schemas from the remote server but also the |
The Schema Registry Client supports the following properties:
The location of the schema server. When setting this, use a full URL, including protocol (http or https), port and context path. Default: http://localhost:8990/

Whether the client should cache schema server responses. Normally set to false, as the caching happens in the message converter. Clients using the schema registry client should set this to true. Default: true
For Spring Boot applications that have a SchemaRegistryClient
bean registered with the application context, Spring Cloud Stream will auto-configure an Apache Avro message converter that uses the schema registry client for schema management.
This eases schema evolution, as applications that receive messages can get easy access to a writer schema that can be reconciled with their own reader schema.
For outbound messages, the MessageConverter
will be activated if the content type of the channel is set to application/*+avro
, e.g.:
spring.cloud.stream.bindings.output.contentType=application/*+avro
During the outbound conversion, the message converter will try to infer the schemas of the outbound messages based on their type and register them to a subject based on the payload type using the SchemaRegistryClient
.
If an identical schema is already found, then a reference to it will be retrieved.
If not, the schema will be registered and a new version number will be provided.
The message will be sent with a contentType
header using the scheme application/[prefix].[subject].v[version]+avro
, where prefix
is configurable and subject
is deduced from the payload type.
For example, a message of the type User
may be sent as a binary payload with a content type of application/vnd.user.v2+avro
, where user
is the subject and 2
is the version number.
When receiving messages, the converter will infer the schema reference from the header of the incoming message and will try to retrieve it. The schema will be used as the writer schema in the deserialization process.
If you have enabled Avro based schema registry client by setting spring.cloud.stream.bindings.output.contentType=application/*+avro
you can customize the behavior of the registration with the following properties.
dynamicSchemaGenerationEnabled: enables dynamic schema generation (inferring a schema from the payload class). Default: false

readerSchema: the reader schema to be used when reconciling incoming messages. Default: null

schema-locations: registers any .avsc files listed in this property with the Schema Server. Default: empty

prefix: the prefix used in the contentType header of outbound messages. Default: vnd
The first part of the registration process is extracting a schema from the payload that is being sent over a channel.
Avro types such as SpecificRecord
or GenericRecord
already contain a schema, which can be retrieved immediately from the instance.
In the case of POJOs a schema will be inferred if the property spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled
is set to true
(the default).
Once a schema is obtained, the converter will then load its metadata (version) from the remote server. First it queries a local cache, and if not found it then submits the data to the server that will reply with versioning information. The converter will always cache the results to avoid the overhead of querying the Schema Server for every new message that needs to be serialized.
With the schema version information, the converter sets the contentType
header of the message to carry the version information such as application/vnd.user.v1+avro
When reading messages that contain version information (i.e. a contentType
header with a scheme like above), the converter will query the Schema server to fetch the writer schema of the message.
Once it has found the correct schema of the incoming message, it then retrieves the reader schema and, using Avro's schema resolution support, reads it into the reader definition (setting defaults and missing properties).
![]() | Note |
---|---|
It’s important to understand the difference between a writer schema (the application that wrote the message) and a reader schema (the receiving application). Please take a moment to read the Avro terminology and understand the process. Spring Cloud Stream will always fetch the writer schema to determine how to read a message. If you want to get Avro’s schema evolution support working you need to make sure that a readerSchema was properly set for your application. |
While Spring Cloud Stream makes it easy for individual Spring Boot applications to connect to messaging systems, the typical scenario for Spring Cloud Stream is the creation of multi-application pipelines, where microservice applications send data to each other. You can achieve this scenario by correlating the input and output destinations of adjacent applications.
Supposing that a design calls for the Time Source application to send data to the Log Sink application, you can use a common destination named ticktock
for bindings within both applications.
Time Source (that has the channel name output
) will set the following property:
spring.cloud.stream.bindings.output.destination=ticktock
Log Sink (that has the channel name input
) will set the following property:
spring.cloud.stream.bindings.input.destination=ticktock
When scaling up Spring Cloud Stream applications, each instance can receive information about how many other instances of the same application exist and what its own instance index is.
Spring Cloud Stream does this through the spring.cloud.stream.instanceCount
and spring.cloud.stream.instanceIndex
properties.
For example, if there are three instances of a HDFS sink application, all three instances will have spring.cloud.stream.instanceCount
set to 3
, and the individual applications will have spring.cloud.stream.instanceIndex
set to 0
, 1
, and 2
, respectively.
When Spring Cloud Stream applications are deployed via Spring Cloud Data Flow, these properties are configured automatically; when Spring Cloud Stream applications are launched independently, these properties must be set correctly.
By default, spring.cloud.stream.instanceCount
is 1
, and spring.cloud.stream.instanceIndex
is 0
.
In a scaled-up scenario, correct configuration of these two properties is important for addressing partitioning behavior (see below) in general, and the two properties are always required by certain binders (e.g., the Kafka binder) in order to ensure that data are split correctly across multiple consumer instances.
An output binding is configured to send partitioned data by setting one and only one of its partitionKeyExpression
or partitionKeyExtractorClass
properties, as well as its partitionCount
property.
For example, the following is a valid and typical configuration:
spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id
spring.cloud.stream.bindings.output.producer.partitionCount=5
Based on the above example configuration, data will be sent to the target partition using the following logic.
A partition key’s value is calculated for each message sent to a partitioned output channel based on the partitionKeyExpression
.
The partitionKeyExpression
is a SpEL expression which is evaluated against the outbound message for extracting the partitioning key.
If a SpEL expression is not sufficient for your needs, you can instead calculate the partition key value by setting the property partitionKeyExtractorClass
to a class which implements the org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy
interface.
While the SpEL expression should usually suffice, more complex cases may use the custom implementation strategy.
In that case, the property 'partitionKeyExtractorClass' can be set as follows:
spring.cloud.stream.bindings.output.producer.partitionKeyExtractorClass=com.example.MyKeyExtractor
spring.cloud.stream.bindings.output.producer.partitionCount=5
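For illustration, here is a minimal sketch of such an extractor. The com.example package matches the configuration above, but the header name used for the key is an assumption you would adapt to your own message format.

package com.example;

import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.messaging.Message;

public class MyKeyExtractor implements PartitionKeyExtractorStrategy {

    @Override
    public Object extractKey(Message<?> message) {
        // Derive the partition key from a header if present, otherwise fall back to the payload.
        Object key = message.getHeaders().get("partitionKey");
        return key != null ? key : message.getPayload();
    }
}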
Once the message key is calculated, the partition selection process will determine the target partition as a value between 0
and partitionCount - 1
.
The default calculation, applicable in most scenarios, is based on the formula key.hashCode() % partitionCount
.
This can be customized on the binding, either by setting a SpEL expression to be evaluated against the 'key' (via the partitionSelectorExpression
property) or by setting a org.springframework.cloud.stream.binder.PartitionSelectorStrategy
implementation (via the partitionSelectorClass
property).
The binding-level properties 'partitionSelectorExpression' and 'partitionSelectorClass' can be specified in the same way as the 'partitionKeyExpression' and 'partitionKeyExtractorClass' properties in the examples above. Additional properties can be configured for more advanced scenarios, as described in the following section.
In the example above, a custom strategy such as MyKeyExtractor
is instantiated by Spring Cloud Stream directly.
In some cases, such a custom strategy implementation needs to be created as a Spring bean, so that it can be managed by Spring and perform dependency injection, property binding, and so on.
This can be done by configuring it as a @Bean in the application context and using the fully qualified class name as the bean’s name, as in the following example.
@Bean(name="com.example.MyKeyExtractor") public MyKeyExtractor extractor() { return new MyKeyExtractor(); }
As a Spring bean, the custom strategy benefits from the full lifecycle of a Spring bean. For example, if the implementation needs access to the application context directly, it can implement 'ApplicationContextAware'.
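A custom partition selector can be provided in the same manner. The following is a minimal sketch; the class name is illustrative and the hash-based selection simply mirrors the default behavior described above.

package com.example;

import org.springframework.cloud.stream.binder.PartitionSelectorStrategy;

public class MyPartitionSelector implements PartitionSelectorStrategy {

    @Override
    public int selectPartition(Object key, int partitionCount) {
        // Map the key to a partition between 0 and partitionCount - 1.
        return Math.abs(key.hashCode()) % partitionCount;
    }
}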
An input binding (with the channel name input
) is configured to receive partitioned data by setting its partitioned
property, as well as the instanceIndex
and instanceCount
properties on the application itself, as in the following example:
spring.cloud.stream.bindings.input.consumer.partitioned=true
spring.cloud.stream.instanceIndex=3
spring.cloud.stream.instanceCount=5
The instanceCount
value represents the total number of application instances between which the data need to be partitioned, and the instanceIndex
must be a unique value across the multiple instances, between 0
and instanceCount - 1
.
The instance index helps each application instance to identify the unique partition (or, in the case of Kafka, the partition set) from which it receives data.
It is important to set both values correctly in order to ensure that all of the data is consumed and that the application instances receive mutually exclusive datasets.
While a scenario in which multiple instances are used for partitioned data processing may be complex to set up in a standalone case, Spring Cloud Data Flow can simplify the process significantly by populating both the input and output values correctly, as well as by relying on the runtime infrastructure to provide information about the instance index and instance count.
Spring Cloud Stream provides support for testing your microservice applications without connecting to a messaging system.
You can do that by using the TestSupportBinder
provided by the spring-cloud-stream-test-support
library, which can be added as a test dependency to the application:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-test-support</artifactId>
  <scope>test</scope>
</dependency>
The TestSupportBinder
allows users to interact with the bound channels and inspect what messages are sent and received by the application.
For outbound message channels, the TestSupportBinder
registers a single subscriber and retains the messages emitted by the application in a MessageCollector
.
They can be retrieved during tests and have assertions made against them.
The user can also send messages to inbound message channels, so that the consumer application can consume the messages. The following example shows how to test both input and output channels on a processor.
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class ExampleTest {

    @Autowired
    private Processor processor;

    @Autowired
    private MessageCollector messageCollector;

    @Test
    @SuppressWarnings("unchecked")
    public void testWiring() {
        Message<String> message = new GenericMessage<>("hello");
        processor.input().send(message);
        Message<String> received = (Message<String>) messageCollector.forChannel(processor.output()).poll();
        assertThat(received.getPayload(), equalTo("hello world"));
    }

    @SpringBootApplication
    @EnableBinding(Processor.class)
    public static class MyProcessor {

        @Autowired
        private Processor channels;

        @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
        public String transform(String in) {
            return in + " world";
        }
    }
}
In the example above, we are creating an application that has an input and an output channel, bound through the Processor
interface.
The bound interface is injected into the test so we can have access to both channels.
We are sending a message on the input channel and we are using the MessageCollector
provided by Spring Cloud Stream’s test support to capture the message that has been sent to the output channel as a result.
Once we have received the message, we can validate that the component functions correctly.
Spring Cloud Stream provides a health indicator for binders.
It is registered under the name of binders
and can be enabled or disabled by setting the management.health.binders.enabled
property.
Spring Cloud Stream provides a module called spring-cloud-stream-metrics
that can be used to emit any available metric from Spring Boot metrics endpoint to a named channel.
This module allows operators to collect metrics from stream applications without relying on polling their endpoints.
The module is activated when you set the destination name for metrics binding, e.g. spring.cloud.stream.bindings.applicationMetrics.destination=<DESTINATION_NAME>
.
applicationMetrics
can be configured in a similar fashion to any other producer binding.
The default contentType
setting of applicationMetrics
is application/json
.
The following properties can be used for customizing the emission of metrics:
spring.cloud.stream.metrics.key

The name of the metric being emitted. Should be a unique value per application.

Default: ${spring.application.name:${vcap.application.name:${spring.config.name:application}}}

spring.cloud.stream.metrics.prefix

Prefix string to be prepended to the metrics key.

Default: ``

spring.cloud.stream.metrics.properties

Just like the includes option, it allows white listing application properties that will be added to the metrics payload.

Default: null.
A detailed overview of the metrics export process can be found in the Spring Boot reference documentation.
Spring Cloud Stream provides a metric exporter named application
that can be configured via regular Spring Boot metrics configuration properties.
The exporter can be configured either by using the global Spring Boot configuration settings for exporters, or by using exporter-specific properties.
For using the global configuration settings, the properties should be prefixed by spring.metric.export
(e.g. spring.metric.export.includes=integration**
).
These configuration options will apply to all exporters (unless they have been configured differently).
Alternatively, if it is intended to use configuration settings that are different from the other exporters (e.g. for restricting the number of metrics published), the Spring Cloud Stream provided metrics exporter can be configured using the prefix spring.metrics.export.triggers.application
(e.g. spring.metrics.export.triggers.application.includes=integration**
).
Note: Due to Spring Boot’s relaxed binding, the value of a property being included can be slightly different than the original value. As a rule of thumb, the metric exporter will attempt to normalize all the properties in a consistent format using the dot notation (for example, values supplied through environment variables or relaxed names are reported in their canonical dotted form). The goal of normalization is to make downstream consumers of those metrics capable of receiving property names consistently, regardless of how they are set on the monitored application.
Below is a sample of the data published to the channel in JSON format by the following command:
java -jar time-source.jar \
    --spring.cloud.stream.bindings.applicationMetrics.destination=someMetrics \
    --spring.cloud.stream.metrics.properties=spring.application** \
    --spring.metrics.export.includes=integration.channel.input**,integration.channel.output**
The resulting JSON is:
{ "name":"time-source", "metrics":[ { "name":"integration.channel.output.errorRate.mean", "value":0.0, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.errorRate.max", "value":0.0, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.errorRate.min", "value":0.0, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.errorRate.stdev", "value":0.0, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.errorRate.count", "value":0.0, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.sendCount", "value":6.0, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.sendRate.mean", "value":0.994885872292989, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.sendRate.max", "value":1.006247080013156, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.sendRate.min", "value":1.0012035220116378, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.sendRate.stdev", "value":6.505181111084848E-4, "timestamp":"2017-04-11T16:56:35.790Z" }, { "name":"integration.channel.output.sendRate.count", "value":6.0, "timestamp":"2017-04-11T16:56:35.790Z" } ], "createdTime":"2017-04-11T20:56:35.790Z", "properties":{ "spring.application.name":"time-source", "spring.application.index":"0" } }
For Spring Cloud Stream samples, please refer to the spring-cloud-stream-samples repository on GitHub.
To get started with creating Spring Cloud Stream applications, visit the Spring Initializr and create a new Maven project named "GreetingSource".
Select Spring Boot {supported-spring-boot-version} in the dropdown.
In the Search for dependencies text box type Stream Rabbit
or Stream Kafka
depending on what binder you want to use.
Next, create a new class, GreetingSource
, in the same package as the GreetingSourceApplication
class.
Give it the following code:
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.integration.annotation.InboundChannelAdapter;

@EnableBinding(Source.class)
public class GreetingSource {

    @InboundChannelAdapter(Source.OUTPUT)
    public String greet() {
        return "hello world " + System.currentTimeMillis();
    }
}
The @EnableBinding
annotation is what triggers the creation of Spring Integration infrastructure components.
Specifically, it will create a Kafka connection factory, a Kafka outbound channel adapter, and the message channel defined inside the Source interface:
public interface Source {

    String OUTPUT = "output";

    @Output(Source.OUTPUT)
    MessageChannel output();
}
The auto-configuration also creates a default poller, so that the greet()
method will be invoked once per second.
The standard Spring Integration @InboundChannelAdapter
annotation sends a message to the source’s output channel, using the return value as the payload of the message.
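If the default one-second rate is not what you want, here is a hedged sketch of overriding it with the standard Spring Integration @Poller attribute; the five-second delay is just an example.

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;

@EnableBinding(Source.class)
public class SlowGreetingSource {

    // Emit a greeting every five seconds instead of relying on the default poller.
    @InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "5000"))
    public String greet() {
        return "hello world " + System.currentTimeMillis();
    }
}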
To test-drive this setup, run a Kafka message broker. An easy way to do this is to use a Docker image:
# On OS X
$ docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`docker-machine ip \`docker-machine active\`` --env ADVERTISED_PORT=9092 spotify/kafka

# On Linux
$ docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 spotify/kafka
Build the application:
./mvnw clean package
The consumer application is coded in a similar manner.
Go back to Initializr and create another project, named LoggingSink.
Then create a new class, LoggingSink
, in the same package as the class LoggingSinkApplication
and with the following code:
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@EnableBinding(Sink.class)
public class LoggingSink {

    @StreamListener(Sink.INPUT)
    public void log(String message) {
        System.out.println(message);
    }
}
Build the application:
./mvnw clean package
To connect the GreetingSource application to the LoggingSink application, each application must share the same destination name. Starting up both applications as shown below, you will see the consumer application printing "hello world" and a timestamp to the console:
cd GreetingSource
java -jar target/GreetingSource-0.0.1-SNAPSHOT.jar --spring.cloud.stream.bindings.output.destination=mydest

cd LoggingSink
java -jar target/LoggingSink-0.0.1-SNAPSHOT.jar --server.port=8090 --spring.cloud.stream.bindings.input.destination=mydest
(The different server port prevents collisions of the HTTP port used to service the Spring Boot Actuator endpoints in the two applications.)
The output of the LoggingSink application will look something like the following:
[ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8090 (http)
[ main] com.example.LoggingSinkApplication       : Started LoggingSinkApplication in 6.828 seconds (JVM running for 7.371)
hello world 1458595076731
hello world 1458595077732
hello world 1458595078733
hello world 1458595079734
hello world 1458595080735
For using the Apache Kafka binder, you just need to add it to your Spring Cloud Stream application, using the following Maven coordinates:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
Alternatively, you can also use the Spring Cloud Stream Kafka Starter.
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
A simplified diagram of how the Apache Kafka binder operates can be seen below.
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic. The consumer group maps directly to the same Apache Kafka concept. Partitioning maps directly to Apache Kafka partitions as well.
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, refer to the core documentation.
A list of brokers to which the Kafka binder will connect.
Default: localhost
.
brokers
allows hosts specified with or without port information (e.g., host1,host2:port2
).
This sets the default port when no port is configured in the broker list.
Default: 9092
.
A list of ZooKeeper nodes to which the Kafka binder can connect.
Default: localhost
.
zkNodes
allows hosts specified with or without port information (e.g., host1,host2:port2
).
This sets the default port when no port is configured in the node list.
Default: 2181
.
Key/Value map of client properties (both producers and consumers) passed to all clients created by the binder. Because these properties will be used by both producers and consumers, usage should be restricted to common properties, especially security settings.
Default: Empty map.
The list of custom headers that will be transported by the binder.
Default: empty.
The frequency, in milliseconds, with which offsets are saved.
Ignored if 0
.
Default: 10000
.
The frequency, in number of updates, with which consumed offsets are persisted.
Ignored if 0
.
Mutually exclusive with offsetUpdateTimeWindow
.
Default: 0
.
The number of required acks on the broker.
Default: 1
.
Effective only if autoCreateTopics
or autoAddPartitions
is set.
The global minimum number of partitions that the binder will configure on topics on which it produces/consumes data.
It can be superseded by the partitionCount
setting of the producer or by the value of instanceCount
* concurrency
settings of the producer (if either is larger).
Default: 1
.
The replication factor of auto-created topics if autoCreateTopics
is active.
Default: 1
.
If set to true
, the binder will create new topics automatically.
If set to false
, the binder will rely on the topics being already configured.
In the latter case, if the topics do not exist, the binder will fail to start.
Of note, this setting is independent of the auto.topic.create.enable
setting of the broker and it does not influence it: if the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
Default: true
.
If set to true
, the binder will add new partitions if required.
If set to false
, the binder will rely on the partition size of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder will fail to start.
Default: false
.
Size (in bytes) of the socket buffer to be used by the Kafka consumers.
Default: 2097152
.
The following properties are available for Kafka consumers only and
must be prefixed with spring.cloud.stream.kafka.bindings.<channelName>.consumer.
.
When true
, topic partitions will be automatically rebalanced between the members of a consumer group.
When false
, each consumer will be assigned a fixed set of partitions based on spring.cloud.stream.instanceCount
and spring.cloud.stream.instanceIndex
.
This requires both spring.cloud.stream.instanceCount
and spring.cloud.stream.instanceIndex
properties to be set appropriately on each launched instance.
The property spring.cloud.stream.instanceCount
must typically be greater than 1 in this case.
Default: true
.
Whether to autocommit offsets when a message has been processed.
If set to false
, a header with the key kafka_acknowledgment
of the type org.springframework.kafka.support.Acknowledgment
header will be present in the inbound message.
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to false
, Kafka binder will set the ack mode to org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL
.
Default: true
.
Effective only if autoCommitOffset
is set to true
.
If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages. This allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
If set to true
, it will always auto-commit (if auto-commit is enabled).
If not set (default), it effectively has the same value as enableDlq
, auto-committing erroneous messages if they are sent to a DLQ, and not committing them otherwise.
Default: not set.
The interval between connection recovery attempts, in milliseconds.
Default: 5000
.
Whether to reset offsets on the consumer to the value provided by startOffset
.
Default: false
.
The starting offset for new groups, or when resetOffsets
is true
.
Allowed values: earliest
, latest
.
If the consumer group is set explicitly for the consumer 'binding' (via spring.cloud.stream.bindings.<channelName>.group
), then 'startOffset' is set to earliest
; otherwise it is set to latest
for the anonymous
consumer group.
Default: null (equivalent to earliest
).
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors will be forwarded to a topic named error.<destination>.<group>
.
The DLQ topic name can be configured via the property dlqName
.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
Default: false
.
Map with a key/value pair containing generic Kafka consumer properties.
Default: Empty map.
The name of the DLQ topic to receive the error messages.
Default: null (If not specified, messages that result in errors will be forwarded to a topic named error.<destination>.<group>
).
The following properties are available for Kafka producers only and
must be prefixed with spring.cloud.stream.kafka.bindings.<channelName>.producer.
.
Upper limit, in bytes, of how much data the Kafka producer will attempt to batch before sending.
Default: 16384
.
Whether the producer is synchronous.
Default: false
.
How long the producer will wait before sending in order to allow more messages to accumulate in the same batch. (Normally the producer does not wait at all, and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
Default: 0
.
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
For example headers.key
or payload.myKey
.
Default: none
.
Map with a key/value pair containing generic Kafka producer properties.
Default: Empty map.
Note: The Kafka binder will use the partitionCount setting of the producer as a hint to create a topic with the given partition count (in conjunction with minPartitionCount, the maximum of the two being the value used).
In this section, we illustrate the use of the above properties for specific scenarios.
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset
is set to false.
Use the corresponding input channel name for your example.
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowdledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowdledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }
}
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the Apache Kafka Documentation as well as the Kafka 0.9 security guidelines from the Confluent documentation.
Use the spring.cloud.stream.kafka.binder.configuration
option to set security properties for all clients created by the binder.
For example, for setting security.protocol
to SASL_SSL
, set:
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the reference documentation for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application either by using a JAAS configuration file or by using Spring Boot properties.
The JAAS, and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties. Here is an example of launching a Spring Cloud Stream application with SASL and Kerberos using a JAAS configuration file:
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
   --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
   --spring.cloud.stream.kafka.binder.zkNodes=secure.zookeeper:2181 \
   --spring.cloud.stream.bindings.input.destination=stream.ticktock \
   --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications using Spring Boot properties.
The following properties can be used for configuring the login context of the Kafka client.
The login module name. It does not need to be set in normal cases.
Default: com.sun.security.auth.module.Krb5LoginModule
.
The control flag of the login module.
Default: required
.
Map with a key/value pair containing the login module options.
Default: Empty map.
Here is an example of launching a Spring Cloud Stream application with SASL and Kerberos using Spring Boot configuration properties:
java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
   --spring.cloud.stream.kafka.binder.zkNodes=secure.zookeeper:2181 \
   --spring.cloud.stream.bindings.input.destination=stream.ticktock \
   --spring.cloud.stream.kafka.binder.autoCreateTopics=false \
   --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
   --spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
   --spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
   --spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
   --spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
This represents the equivalent of the following JAAS file:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client-1@EXAMPLE.COM";
};
If the topics required already exist on the broker, or will be created by an administrator, autocreation can be turned off and only client JAAS properties need to be sent. As an alternative to setting spring.cloud.stream.kafka.binder.autoCreateTopics
you can simply remove the broker dependency from the application. See the section called “Excluding Kafka broker jar from the classpath of the binder based application” for details.
Note: Do not mix JAAS configuration files and Spring Boot properties in the same application. If the -Djava.security.auth.login.config system property is already present, Spring Cloud Stream will ignore the Spring Boot properties.
Note: Exercise caution when using the autoCreateTopics and autoAddPartitions options together with Kerberos. Applications often use principals that do not have administrative rights in Kafka and Zookeeper, so relying on Spring Cloud Stream to create or modify topics may fail. In secure environments, we strongly recommend creating topics and managing ACLs administratively using Kafka tooling.
The default Kafka support in Spring Cloud Stream Kafka binder is for Kafka version 0.10.1.1. The binder also supports connecting to other 0.10 based versions and 0.9 clients.
In order to do this, when you create the project that contains your application, include spring-cloud-starter-stream-kafka
as you normally would do for the default binder.
Then add these dependencies at the top of the <dependencies>
section in the pom.xml file to override the dependencies.
Here is an example for downgrading your application to 0.10.0.1. Since it is still on the 0.10 line, the default spring-kafka
and spring-integration-kafka
versions can be retained.
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.11</artifactId>
  <version>0.10.0.1</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.10.0.1</version>
</dependency>
Here is another example, using version 0.9.0.1.
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <version>1.0.5.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.integration</groupId>
  <artifactId>spring-integration-kafka</artifactId>
  <version>2.0.1.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.11</artifactId>
  <version>0.9.0.1</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.9.0.1</version>
</dependency>
Note: The versions above are provided only for the sake of the example. For best results, we recommend using the most recent 0.10-compatible versions of the projects.
The Apache Kafka Binder uses the administrative utilities which are part of the Apache Kafka server library to create and reconfigure topics. If the inclusion of the Apache Kafka server library and its dependencies is not necessary at runtime because the application will rely on the topics being configured administratively, the Kafka binder allows for Apache Kafka server dependency to be excluded from the application.
If you use a non-default Kafka version as advised above, all you need to do is omit the Kafka broker dependency.
If you use the default Kafka version, then ensure that you exclude the Kafka broker jar from the spring-cloud-starter-stream-kafka
dependency, as follows.
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-kafka</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
    </exclusion>
  </exclusions>
</dependency>
If you exclude the Apache Kafka server dependency and the topic is not present on the server, then the Apache Kafka broker will create the topic if auto topic creation is enabled on the server. Please keep in mind that if you are relying on this, then the Kafka server will use the default number of partitions and replication factors. On the other hand, if auto topic creation is disabled on the server, then care must be taken before running the application to create the topic with the desired number of partitions.
If you want to have full control over how partitions are allocated, then leave the default settings as they are, i.e. do not exclude the kafka broker jar and ensure that spring.cloud.stream.kafka.binder.autoCreateTopics
is set to true
, which is the default.
Because it can’t be anticipated how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is a permanent issue, that could cause an infinite loop.
The following spring-boot
application is an example of how to route those messages back to the original topic, but moves them to a third "parking lot" topic after three attempts.
The application is simply another spring-cloud-stream application that reads from the dead-letter topic.
It terminates when no messages are received for 5 seconds.
The examples assume the original destination is so8400out
and the consumer group is so8400
.
There are several considerations.

- Since this technique uses a message header to keep track of retries, it will not work with headerMode=raw. In that case, consider adding some data to the payload (that can be ignored by the main application).
- x-retries has to be added to the headers property spring.cloud.stream.kafka.binder.headers=x-retries on both this and the main application, so that the header is transported between the applications.

application.properties.
spring.cloud.stream.bindings.input.group=so8400replay
spring.cloud.stream.bindings.input.destination=error.so8400out.so8400
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.bindings.output.producer.partitioned=true
spring.cloud.stream.bindings.parkingLot.destination=so8400in.parkingLot
spring.cloud.stream.bindings.parkingLot.producer.partitioned=true
spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest
spring.cloud.stream.kafka.binder.headers=x-retries
Application.
@SpringBootApplication @EnableBinding(TwoOutputProcessor.class) public class ReRouteDlqKApplication implements CommandLineRunner { private static final String X_RETRIES_HEADER = "x-retries"; public static void main(String[] args) { SpringApplication.run(ReRouteDlqKApplication.class, args).close(); } private final AtomicInteger processed = new AtomicInteger(); @Autowired private MessageChannel parkingLot; @StreamListener(Processor.INPUT) @SendTo(Processor.OUTPUT) public Message<?> reRoute(Message<?> failed) { processed.incrementAndGet(); Integer retries = failed.getHeaders().get(X_RETRIES_HEADER, Integer.class); if (retries == null) { System.out.println("First retry for " + failed); return MessageBuilder.fromMessage(failed) .setHeader(X_RETRIES_HEADER, new Integer(1)) .setHeader(BinderHeaders.PARTITION_OVERRIDE, failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID)) .build(); } else if (retries.intValue() < 3) { System.out.println("Another retry for " + failed); return MessageBuilder.fromMessage(failed) .setHeader(X_RETRIES_HEADER, new Integer(retries.intValue() + 1)) .setHeader(BinderHeaders.PARTITION_OVERRIDE, failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID)) .build(); } else { System.out.println("Retries exhausted for " + failed); parkingLot.send(MessageBuilder.fromMessage(failed) .setHeader(BinderHeaders.PARTITION_OVERRIDE, failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID)) .build()); } return null; } @Override public void run(String... args) throws Exception { while (true) { int count = this.processed.get(); Thread.sleep(5000); if (count == this.processed.get()) { System.out.println("Idle, terminating"); return; } } } public interface TwoOutputProcessor extends Processor { @Output("parkingLot") MessageChannel parkingLot(); } }
For using the RabbitMQ binder, you just need to add it to your Spring Cloud Stream application, using the following Maven coordinates:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>
Alternatively, you can also use the Spring Cloud Stream RabbitMQ Starter.
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
A simplified diagram of how the RabbitMQ binder operates can be seen below.
The RabbitMQ Binder implementation maps each destination to a TopicExchange
.
For each consumer group, a Queue
will be bound to that TopicExchange
.
Each consumer instance has a corresponding RabbitMQ Consumer
instance for its group’s Queue
.
For partitioned producers/consumers the queues are suffixed with the partition index and use the partition index as routing key.
Using the autoBindDlq
option, you can optionally configure the binder to create and configure dead-letter queues (DLQs) (and a dead-letter exchange DLX
).
The dead letter queue has the name of the destination, appended with .dlq
.
If retry is enabled (maxAttempts > 1
) failed messages will be delivered to the DLQ.
If retry is disabled (maxAttempts = 1
), you should set requeueRejected
to false
(default) so that a failed message will be routed to the DLQ, instead of being requeued.
In addition, republishToDlq
causes the binder to publish a failed message to the DLQ (instead of rejecting it); this enables additional information to be added to the message in headers, such as the stack trace in the x-exception-stacktrace
header.
This option does not need retry enabled; you can republish a failed message after just one attempt.
Starting with version 1.2, you can configure the delivery mode of republished messages; see property republishDeliveryMode
.
Important: Setting requeueRejected to true will cause the message to be requeued and redelivered continuously, which is likely not what you want unless the reason for the failure is transient. In general, it is better to enable retry within the binder by setting maxAttempts to greater than one, or to set republishToDlq to true.
See Section 37.3.1, “RabbitMQ Binder Properties” for more information about these properties.
The framework does not provide any standard mechanism to consume dead-letter messages (or to re-route them back to the primary queue). Some options are described in Section 37.5, “Dead-Letter Queue Processing”.
Note: When multiple RabbitMQ binders are used in a Spring Cloud Stream application, it is important to disable 'RabbitAutoConfiguration' to avoid the same configuration being applied to both binders.
This section contains settings specific to the RabbitMQ Binder and bound channels.
For general binding configuration options and properties, please refer to the Spring Cloud Stream core documentation.
By default, the RabbitMQ binder uses Spring Boot’s ConnectionFactory
, and it therefore supports all Spring Boot configuration options for RabbitMQ.
(For reference, consult the Spring Boot documentation.)
RabbitMQ configuration options use the spring.rabbitmq
prefix.
In addition to Spring Boot options, the RabbitMQ binder supports the following properties:
A comma-separated list of RabbitMQ management plugin URLs.
Only used when nodes
contains more than one entry.
Each entry in this list must have a corresponding entry in spring.rabbitmq.addresses
.
Default: empty.
A comma-separated list of RabbitMQ node names.
When there is more than one entry, this is used to locate the server address where a queue is located.
Each entry in this list must have a corresponding entry in spring.rabbitmq.addresses
.
Default: empty.
Compression level for compressed bindings.
See java.util.zip.Deflater
.
Default: 1
(BEST_SPEED).
The following properties are available for Rabbit consumers only and
must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer.
.
The acknowledge mode.
Default: AUTO
.
Whether to automatically declare the DLQ and bind it to the binder DLX.
Default: false
.
The routing key with which to bind the queue to the exchange (if bindQueue
is true
).
For partitioned destinations, -<instanceIndex> will be appended.
Default: #
.
Whether to bind the queue to the destination exchange; set to false
if you have set up your own infrastructure and have previously created/bound the queue.
Default: true
.
name of the DLQ
Default: prefix+destination.dlq
a DLX to assign to the queue; if autoBindDlq is true
Default: 'prefix+DLX'
a dead letter routing key to assign to the queue; if autoBindDlq is true
Default: destination
Whether to declare the exchange for the destination.
Default: true
.
Whether to declare the exchange as a Delayed Message Exchange
- requires the delayed message exchange plugin on the broker.
The x-delayed-type
argument is set to the exchangeType
.
Default: false
.
if a DLQ is declared, a DLX to assign to that queue
Default: none
if a DLQ is declared, a dead letter routing key to assign to that queue; default none
Default: none
how long before an unused dead letter queue is deleted (ms)
Default: no expiration
maximum number of messages in the dead letter queue
Default: no limit
maximum number of total bytes in the dead letter queue from all messages
Default: no limit
maximum priority of messages in the dead letter queue (0-255)
Default: none
default time to live to apply to the dead letter queue when declared (ms)
Default: no limit
Whether subscription should be durable.
Only effective if group
is also set.
Default: true
.
If declareExchange
is true, whether the exchange should be auto-delete (removed after the last queue is removed).
Default: true
.
If declareExchange
is true, whether the exchange should be durable (survives broker restart).
Default: true
.
The exchange type; direct
, fanout
or topic
for non-partitioned destinations; direct
or topic
for partitioned destinations.
Default: topic
.
how long before an unused queue is deleted (ms)
Default: no expiration
Patterns for headers to be mapped from inbound messages.
Default: ['*']
(all headers).
the maximum number of consumers
Default: 1
.
maximum number of messages in the queue
Default: no limit
maximum number of total bytes in the queue from all messages
Default: no limit
maximum priority of messages in the queue (0-255)
Default: none
Prefetch count.
Default: 1
.
A prefix to be added to the name of the destination
and queues.
Default: "".
The interval between connection recovery attempts, in milliseconds.
Default: 5000
.
Whether delivery failures should be requeued when retry is disabled or republishToDlq is false.
Default: false
.
When republishToDlq
is true
, specify the delivery mode of the republished message.
Default: DeliveryMode.PERSISTENT
By default, messages which fail after retries are exhausted are rejected.
If a dead-letter queue (DLQ) is configured, RabbitMQ will route the failed message (unchanged) to the DLQ.
If set to true
, the binder will republish failed messages to the DLQ with additional headers, including the exception message and stack trace from the cause of the final failure.
Default: false
Whether to use transacted channels.
Default: false
.
default time to live to apply to the queue when declared (ms)
Default: no limit
The number of deliveries between acks.
Default: 1
.
The following properties are available for Rabbit producers only and
must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.producer.
.
Whether to automatically declare the DLQ and bind it to the binder DLX.
Default: false
.
Whether to enable message batching by producers.
Default: false
.
The number of messages to buffer when batching is enabled.
Default: 100
.
The maximum buffer size when batching is enabled.
Default: 10000.
The batch timeout when batching is enabled.
Default: 5000.
The routing key with which to bind the queue to the exchange (if bindQueue
is true
).
Only applies to non-partitioned destinations.
Only applies if requiredGroups
are provided and then only to those groups.
Default: #
.
Whether to bind the queue to the destination exchange; set to false
if you have set up your own infrastructure and have previously created/bound the queue.
Only applies if requiredGroups
are provided and then only to those groups.
Default: true
.
Whether data should be compressed when sent.
Default: false
.
name of the DLQ
Only applies if requiredGroups
are provided and then only to those groups.
Default: prefix+destination.dlq
a DLX to assign to the queue; if autoBindDlq is true
Only applies if requiredGroups
are provided and then only to those groups.
Default: 'prefix+DLX'
a dead letter routing key to assign to the queue; if autoBindDlq is true
Only applies if requiredGroups
are provided and then only to those groups.
Default: destination
Whether to declare the exchange for the destination.
Default: true
.
A SpEL expression to evaluate the delay to apply to the message (x-delay
header) - has no effect if the exchange is not a delayed message exchange.
Default: No x-delay
header is set.
Whether to declare the exchange as a Delayed Message Exchange
- requires the delayed message exchange plugin on the broker.
The x-delayed-type
argument is set to the exchangeType
.
Default: false
.
Delivery mode.
Default: PERSISTENT
.
if a DLQ is declared, a DLX to assign to that queue
Only applies if requiredGroups
are provided and then only to those groups.
Default: none
if a DLQ is declared, a dead letter routing key to assign to that queue; default none
Only applies if requiredGroups
are provided and then only to those groups.
Default: none
how long before an unused dead letter queue is deleted (ms)
Only applies if requiredGroups
are provided and then only to those groups.
Default: no expiration
maximum number of messages in the dead letter queue
Only applies if requiredGroups
are provided and then only to those groups.
Default: no limit
maximum number of total bytes in the dead letter queue from all messages
Only applies if requiredGroups
are provided and then only to those groups.
Default: no limit
maximum priority of messages in the dead letter queue (0-255)
Only applies if requiredGroups
are provided and then only to those groups.
Default: none
default time to live to apply to the dead letter queue when declared (ms)
Only applies if requiredGroups
are provided and then only to those groups.
Default: no limit
If declareExchange
is true, whether the exchange should be auto-delete (removed after the last queue is removed).
Default: true
.
If declareExchange
is true, whether the exchange should be durable (survives broker restart).
Default: true
.
The exchange type; direct
, fanout
or topic
for non-partitioned destinations; direct
or topic
for partitioned destinations.
Default: topic
.
how long before an unused queue is deleted (ms)
Only applies if requiredGroups
are provided and then only to those groups.
Default: no expiration
Patterns for headers to be mapped to outbound messages.
Default: ['*']
(all headers).
maximum number of messages in the queue
Only applies if requiredGroups
are provided and then only to those groups.
Default: no limit
maximum number of total bytes in the queue from all messages
Only applies if requiredGroups
are provided and then only to those groups.
Default: no limit
maximum priority of messages in the queue (0-255)
Only applies if requiredGroups
are provided and then only to those groups.
Default: none
A prefix to be added to the name of the destination
exchange.
Default: "".
A SpEL expression to determine the routing key to use when publishing messages.
Default: destination
or destination-<partition>
for partitioned destinations.
Whether to use transacted channels.
Default: false
.
default time to live to apply to the queue when declared (ms)
Only applies if requiredGroups
are provided and then only to those groups.
Default: no limit
Note: In the case of RabbitMQ, content type headers can be set by external applications. Spring Cloud Stream supports them as part of an extended internal protocol used for any type of transport (including transports, such as Kafka, that do not normally support headers).
When retry is enabled within the binder, the listener container thread is suspended for any back off periods that are configured. This might be important when strict ordering is required with a single consumer but for other use cases it prevents other messages from being processed on that thread. An alternative to using binder retry is to set up dead lettering with time to live on the dead-letter queue (DLQ), as well as dead-letter configuration on the DLQ itself. See Section 37.3.1, “RabbitMQ Binder Properties” for more information about the properties discussed here. Example configuration to enable this feature:
- Set autoBindDlq to true - the binder will create a DLQ; you can optionally specify a name in deadLetterQueueName.
- Set dlqTtl to the back off time you want to wait between redeliveries.
- Set dlqDeadLetterExchange to the default exchange - expired messages from the DLQ will be routed to the original queue, since the default deadLetterRoutingKey is the queue name (destination.group).

To force a message to be dead-lettered, either throw an AmqpRejectAndDontRequeueException
, or set requeueRejected
to true
and throw any exception.
The loop will continue without end, which is fine for transient problems but you may want to give up after some number of attempts.
Fortunately, RabbitMQ provides the x-death
header which allows you to determine how many cycles have occurred.
To acknowledge a message after giving up, throw an ImmediateAcknowledgeAmqpException
.
---
spring.cloud.stream.bindings.input.destination=myDestination
spring.cloud.stream.bindings.input.group=consumerGroup
#disable binder retries
spring.cloud.stream.bindings.input.consumer.max-attempts=1
#dlx/dlq setup
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-ttl=5000
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-dead-letter-exchange=
---
This configuration creates an exchange myDestination
with queue myDestination.consumerGroup
bound to a topic exchange with a wildcard routing key #
.
It creates a DLQ bound to a direct exchange DLX
with routing key myDestination.consumerGroup
.
When messages are rejected, they are routed to the DLQ.
After 5 seconds, the message expires and is routed to the original queue using the queue name as the routing key.
Spring Boot application.
@SpringBootApplication
@EnableBinding(Sink.class)
public class XDeathApplication {

    public static void main(String[] args) {
        SpringApplication.run(XDeathApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in, @Header(name = "x-death", required = false) Map<?,?> death) {
        if (death != null && death.get("count").equals(3L)) {
            // giving up - don't send to DLX
            throw new ImmediateAcknowledgeAmqpException("Failed after 4 attempts");
        }
        throw new AmqpRejectAndDontRequeueException("failed");
    }
}
Notice that the count property in the x-death
header is a Long
.
Because it can’t be anticipated how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original queue.
However, if the problem is a permanent issue, that could cause an infinite loop.
The following spring-boot
application is an example of how to route those messages back to the original queue, but moves them to a third "parking lot" queue after three attempts.
The second example utilizes the RabbitMQ Delayed Message Exchange to introduce a delay to the requeued message.
In this example, the delay increases for each attempt.
These examples use a @RabbitListener
to receive messages from the DLQ, you could also use RabbitTemplate.receive()
in a batch process.
The examples assume the original destination is so8400in
and the consumer group is so8400
.
The first two examples are when the destination is not partitioned.
@SpringBootApplication public class ReRouteDlqApplication { private static final String ORIGINAL_QUEUE = "so8400in.so8400"; private static final String DLQ = ORIGINAL_QUEUE + ".dlq"; private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot"; private static final String X_RETRIES_HEADER = "x-retries"; public static void main(String[] args) throws Exception { ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args); System.out.println("Hit enter to terminate"); System.in.read(); context.close(); } @Autowired private RabbitTemplate rabbitTemplate; @RabbitListener(queues = DLQ) public void rePublish(Message failedMessage) { Integer retriesHeader = (Integer) failedMessage.getMessageProperties().getHeaders().get(X_RETRIES_HEADER); if (retriesHeader == null) { retriesHeader = Integer.valueOf(0); } if (retriesHeader < 3) { failedMessage.getMessageProperties().getHeaders().put(X_RETRIES_HEADER, retriesHeader + 1); this.rabbitTemplate.send(ORIGINAL_QUEUE, failedMessage); } else { this.rabbitTemplate.send(PARKING_LOT, failedMessage); } } @Bean public Queue parkingLot() { return new Queue(PARKING_LOT); } }
@SpringBootApplication public class ReRouteDlqApplication { private static final String ORIGINAL_QUEUE = "so8400in.so8400"; private static final String DLQ = ORIGINAL_QUEUE + ".dlq"; private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot"; private static final String X_RETRIES_HEADER = "x-retries"; private static final String DELAY_EXCHANGE = "dlqReRouter"; public static void main(String[] args) throws Exception { ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args); System.out.println("Hit enter to terminate"); System.in.read(); context.close(); } @Autowired private RabbitTemplate rabbitTemplate; @RabbitListener(queues = DLQ) public void rePublish(Message failedMessage) { Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders(); Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER); if (retriesHeader == null) { retriesHeader = Integer.valueOf(0); } if (retriesHeader < 3) { headers.put(X_RETRIES_HEADER, retriesHeader + 1); headers.put("x-delay", 5000 * retriesHeader); this.rabbitTemplate.send(DELAY_EXCHANGE, ORIGINAL_QUEUE, failedMessage); } else { this.rabbitTemplate.send(PARKING_LOT, failedMessage); } } @Bean public DirectExchange delayExchange() { DirectExchange exchange = new DirectExchange(DELAY_EXCHANGE); exchange.setDelayed(true); return exchange; } @Bean public Binding bindOriginalToDelay() { return BindingBuilder.bind(new Queue(ORIGINAL_QUEUE)).to(delayExchange()).with(ORIGINAL_QUEUE); } @Bean public Queue parkingLot() { return new Queue(PARKING_LOT); } }
With partitioned destinations, there is one DLQ for all partitions and we determine the original queue from the headers.
When republishToDlq
is false
, RabbitMQ publishes the message to the DLX/DLQ with an x-death
header containing information about the original destination.
@SpringBootApplication public class ReRouteDlqApplication { private static final String ORIGINAL_QUEUE = "so8400in.so8400"; private static final String DLQ = ORIGINAL_QUEUE + ".dlq"; private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot"; private static final String X_DEATH_HEADER = "x-death"; private static final String X_RETRIES_HEADER = "x-retries"; public static void main(String[] args) throws Exception { ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args); System.out.println("Hit enter to terminate"); System.in.read(); context.close(); } @Autowired private RabbitTemplate rabbitTemplate; @SuppressWarnings("unchecked") @RabbitListener(queues = DLQ) public void rePublish(Message failedMessage) { Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders(); Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER); if (retriesHeader == null) { retriesHeader = Integer.valueOf(0); } if (retriesHeader < 3) { headers.put(X_RETRIES_HEADER, retriesHeader + 1); List<Map<String, ?>> xDeath = (List<Map<String, ?>>) headers.get(X_DEATH_HEADER); String exchange = (String) xDeath.get(0).get("exchange"); List<String> routingKeys = (List<String>) xDeath.get(0).get("routing-keys"); this.rabbitTemplate.send(exchange, routingKeys.get(0), failedMessage); } else { this.rabbitTemplate.send(PARKING_LOT, failedMessage); } } @Bean public Queue parkingLot() { return new Queue(PARKING_LOT); } }
When republishToDlq
is true
, the republishing recoverer adds the original exchange and routing key to headers.
@SpringBootApplication public class ReRouteDlqApplication { private static final String ORIGINAL_QUEUE = "so8400in.so8400"; private static final String DLQ = ORIGINAL_QUEUE + ".dlq"; private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot"; private static final String X_RETRIES_HEADER = "x-retries"; private static final String X_ORIGINAL_EXCHANGE_HEADER = RepublishMessageRecoverer.X_ORIGINAL_EXCHANGE; private static final String X_ORIGINAL_ROUTING_KEY_HEADER = RepublishMessageRecoverer.X_ORIGINAL_ROUTING_KEY; public static void main(String[] args) throws Exception { ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args); System.out.println("Hit enter to terminate"); System.in.read(); context.close(); } @Autowired private RabbitTemplate rabbitTemplate; @RabbitListener(queues = DLQ) public void rePublish(Message failedMessage) { Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders(); Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER); if (retriesHeader == null) { retriesHeader = Integer.valueOf(0); } if (retriesHeader < 3) { headers.put(X_RETRIES_HEADER, retriesHeader + 1); String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER); String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER); this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage); } else { this.rabbitTemplate.send(PARKING_LOT, failedMessage); } } @Bean public Queue parkingLot() { return new Queue(PARKING_LOT); } }
Spring Cloud Bus links nodes of a distributed system with a lightweight message broker. This can then be used to broadcast state changes (e.g. configuration changes) or other management instructions. A key idea is that the Bus is like a distributed Actuator for a Spring Boot application that is scaled out, but it can also be used as a communication channel between apps. Starters are provided for AMQP (RabbitMQ) and Kafka as transports, and the same basic feature set (and some more, depending on the transport) is on the roadmap for other transports.
![]() | Note |
---|---|
Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, please find the source code and issue trackers in the project at github. |
Spring Cloud Bus works by adding Spring Boot autoconfiguration if it detects itself on the classpath. All you need to do to enable the bus is to add spring-cloud-starter-bus-amqp
or spring-cloud-starter-bus-kafka
to your dependency management and Spring Cloud takes care of the rest. Make sure the broker (RabbitMQ or Kafka) is available and configured. When running on localhost you shouldn’t have to do anything, but if you are running remotely, use Spring Cloud Connectors or Spring Boot conventions to define the broker credentials, as in the following example for Rabbit:
application.yml.
spring: rabbitmq: host: mybroker.com port: 5672 username: user password: secret
The bus currently supports sending messages to all nodes listening or to all nodes for a particular service (as defined by Eureka). More selector criteria may be added in the future (e.g. only service X nodes in data center Y). There are also some HTTP endpoints under the /bus/*
actuator namespace. There are currently two implemented. The first, /bus/env
, sends key/value pairs to update each node’s Spring Environment. The second, /bus/refresh
, will reload each application’s configuration, just as if they had all been pinged on their /refresh
endpoint.
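For example, a minimal sketch of triggering a bus-wide refresh programmatically (the host and port are placeholders for one of your own instances; a plain HTTP POST from any other client works equally well):

import org.springframework.web.client.RestTemplate;

public class BusRefreshTrigger {
    public static void main(String[] args) {
        // POST to the /bus/refresh endpoint of any instance on the bus;
        // that instance then broadcasts the refresh to all other listening nodes
        new RestTemplate().postForLocation("http://localhost:8080/bus/refresh", null);
    }
}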
![]() | Note |
---|---|
The Bus starters cover Rabbit and Kafka, because those are the two most common implementations, but Spring Cloud Stream is quite flexible and the binder of your choice will work when combined with spring-cloud-bus. |
The HTTP endpoints accept a "destination" parameter, e.g. "/bus/refresh?destination=customers:9000", where the destination is an ApplicationContext
ID. If the ID is owned by an instance on the Bus then it will process the message and all other instances will ignore it. Spring Boot sets the ID for you in the ContextIdApplicationContextInitializer
to a combination of the spring.application.name
, active profiles and server.port
by default.
The "destination" parameter is used in a Spring PathMatcher
(with the path separator as a colon :
) to determine if an instance will process the message. Using the example from above, "/bus/refresh?destination=customers:**" will target all instances of the "customers" service regardless of the profiles and ports set as the ApplicationContext
ID.
The bus tries to eliminate processing an event twice, once from the original ApplicationEvent
and once from the queue. To do this, it checks the sending application context id against the current application context id. If multiple instances of a service have the same application context id, events will not be processed. Running on a local machine, each service will be on a different port, and that will be part of the application context id. Cloud Foundry supplies an index to differentiate. To ensure that the application context id is unique, set spring.application.index
to something unique for each instance of a service. For example, in Lattice, set spring.application.index=${INSTANCE_INDEX}
in application.properties (or bootstrap.properties if using configserver).
Spring Cloud Bus uses
Spring Cloud Stream to
broadcast the messages so to get messages to flow you only need to
include the binder implementation of your choice in the
classpath. There are convenient starters specifically for the bus with
AMQP (RabbitMQ) and Kafka
(spring-cloud-starter-bus-[amqp,kafka]
). Generally speaking
Spring Cloud Stream relies on Spring Boot autoconfiguration
conventions for configuring middleware, so for instance the AMQP
broker address can be changed with spring.rabbitmq.*
configuration properties. Spring Cloud Bus has a handful of native
configuration properties in spring.cloud.bus.*
(e.g. spring.cloud.bus.destination
is the name of the topic to use as the external middleware). Normally the defaults will suffice.
To learn more about how to customize the message broker settings, consult the Spring Cloud Stream documentation.
Bus events (subclasses of RemoteApplicationEvent
) can be traced by
setting spring.cloud.bus.trace.enabled=true
. If you do this then the
Spring Boot TraceRepository
(if it is present) will show each event
sent and all the acks from each service instance. Example (from the
/trace
endpoint):
{ "timestamp": "2015-11-26T10:24:44.411+0000", "info": { "signal": "spring.cloud.bus.ack", "type": "RefreshRemoteApplicationEvent", "id": "c4d374b7-58ea-4928-a312-31984def293b", "origin": "stores:8081", "destination": "*:**" } }, { "timestamp": "2015-11-26T10:24:41.864+0000", "info": { "signal": "spring.cloud.bus.sent", "type": "RefreshRemoteApplicationEvent", "id": "c4d374b7-58ea-4928-a312-31984def293b", "origin": "customers:9000", "destination": "*:**" } }, { "timestamp": "2015-11-26T10:24:41.862+0000", "info": { "signal": "spring.cloud.bus.ack", "type": "RefreshRemoteApplicationEvent", "id": "c4d374b7-58ea-4928-a312-31984def293b", "origin": "customers:9000", "destination": "*:**" } }
This trace shows that a RefreshRemoteApplicationEvent
was sent from
customers:9000
, broadcast to all services, and it was received
(acked) by customers:9000
and stores:8081
.
To handle the ack signals yourself you could add an @EventListener
for the AckRemoteApplicationEvent
and SentApplicationEvent
types
to your app (and enable tracing). Or you could tap into the
TraceRepository
and mine the data from there.
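For example, a minimal sketch of such a listener (AckRemoteApplicationEvent and SentApplicationEvent live in the org.springframework.cloud.bus.event package; the logging is only illustrative):

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.cloud.bus.event.AckRemoteApplicationEvent;
import org.springframework.cloud.bus.event.SentApplicationEvent;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;

@Configuration
public class BusAckConfig {

    private static final Log log = LogFactory.getLog(BusAckConfig.class);

    // Called for every ack broadcast on the bus (tracing must be enabled)
    @EventListener
    public void onAck(AckRemoteApplicationEvent event) {
        log.info("Ack received: " + event);
    }

    // Called when this instance records that an event was sent
    @EventListener
    public void onSent(SentApplicationEvent event) {
        log.info("Event sent: " + event);
    }
}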
![]() | Note |
---|---|
Any Bus application can trace acks, but sometimes it will be useful to do this in a central service that can do more complex queries on the data, or forward it to a specialized tracing service. |
The Bus can carry any event of type RemoteApplicationEvent
, but the
default transport is JSON and the deserializer needs to know which
types are going to be used ahead of time. To register a new type it
needs to be in a subpackage of org.springframework.cloud.bus.event
.
To customise the event name you can use @JsonTypeName
on your custom class
or rely on the default strategy which is to use the simple name of the class.
Note that both the producer and the consumer will need access to the class
definition.
If you cannot or don’t want to use a subpackage of org.springframework.cloud.bus.event
for your custom events, you must specify which packages to scan for events of
type RemoteApplicationEvent
using @RemoteApplicationEventScan
. Packages
specified with @RemoteApplicationEventScan
include subpackages.
For example, if you have a custom event called FooEvent
:
package com.acme; public class FooEvent extends RemoteApplicationEvent { ... }
you can register this event with the deserializer in the following way:
package com.acme; @Configuration @RemoteApplicationEventScan public class BusConfiguration { ... }
Without specifying a value, the package of the class where @RemoteApplicationEventScan
is used will be registered. In this example com.acme
will be registered using the
package of BusConfiguration
.
You can also explicitly specify the packages to scan using the value
, basePackages
or
basePackageClasses
properties on @RemoteApplicationEventScan
. For example:
package com.acme; @Configuration //@RemoteApplicationEventScan({"com.acme", "foo.bar"}) //@RemoteApplicationEventScan(basePackages = {"com.acme", "foo.bar", "fizz.buzz"}) @RemoteApplicationEventScan(basePackageClasses = BusConfiguration.class) public class BusConfiguration { ... }
All examples of @RemoteApplicationEventScan
above are equivalent,
in that the com.acme
package will be registered by explicitly specifying the
packages on @RemoteApplicationEventScan
. Note, you can specify multiple base
packages to scan.
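As a rough sketch of how such an event might be defined and broadcast (this assumes the RemoteApplicationEvent constructor that takes a source, an origin service and a destination service; "customers:9000" and "**" are placeholder origin and destination values, and a no-arg constructor may additionally be required for JSON deserialization):

package com.acme;

import org.springframework.cloud.bus.event.RemoteApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;

public class FooEvent extends RemoteApplicationEvent {

    public FooEvent(Object source, String originService, String destinationService) {
        super(source, originService, destinationService);
    }
}

@Component
class FooEventPublisher {

    private final ApplicationEventPublisher publisher;

    FooEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void publishFoo() {
        // "customers:9000" stands in for this instance's bus origin (its application
        // context id); "**" targets every instance listening on the bus
        this.publisher.publishEvent(new FooEvent(this, "customers:9000", "**"));
    }
}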
Adrian Cole, Spencer Gibb, Marcin Grzejszczak, Dave Syer
Dalston.SR5
Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud.
Spring Cloud Sleuth borrows Dapper’s terminology.
Span: The basic unit of work. For example, sending an RPC is a new span, as is sending a response to an RPC. Spans are identified by a unique 64-bit ID for the span and another 64-bit ID for the trace the span is a part of. Spans also have other data, such as descriptions, timestamped events, key-value annotations (tags), the ID of the span that caused them, and process IDs (normally IP addresses).
Spans are started and stopped, and they keep track of their timing information. Once you create a span, you must stop it at some point in the future.
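For example, using the Tracer interface that is described later in this document:

// Create a span, do some work inside it, and always close it when done
Span span = this.tracer.createSpan("myOperation");
try {
    // ... the work covered by this span
} finally {
    this.tracer.close(span);
}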
![]() | Tip |
---|---|
The initial span that starts a trace is called a |
Trace: A set of spans forming a tree-like structure. For example, if you are running a distributed big-data store, a trace might be formed by a put request.
Annotation: used to record the existence of an event in time. Some of the core annotations used to define the start and stop of a request are cs (Client Sent), sr (Server Received), ss (Server Sent) and cr (Client Received).
The following visualization shows how a Span and Trace look in a system, together with the Zipkin annotations:
Each color of a note signifies a span (7 spans, from A to G). If a note contains the following information:
Trace Id = X Span Id = D Client Sent
That means that the current span has Trace-Id set to X and Span-Id set to D, and that it emitted a Client Sent event.
This is how the visualization of the parent/child relationship of the spans looks:
The following sections use the example from the image above.
Altogether there are 7 spans. If you go to traces in Zipkin you will see this number in the second trace:
However if you pick a particular trace then you will see 4 spans:
![]() | Note |
---|---|
When picking a particular trace you will see merged spans. That means that if there were 2 spans sent to Zipkin with Server Received and Server Sent / Client Received and Client Sent annotations, then they will be presented as a single span. |
Why is there a difference between the 7 and 4 spans in this case?
- 1 span comes from the http:/start span. It has the Server Received (SR) and Server Sent (SS) annotations.
- 2 spans come from the RPC call from service1 to service2 to the http:/foo endpoint. It has the Client Sent (CS) and Client Received (CR) annotations on the service1 side, and the Server Received (SR) and Server Sent (SS) annotations on the service2 side. Physically there are 2 spans, but they form 1 logical span related to an RPC call.
- 2 spans come from the RPC call from service2 to service3 to the http:/bar endpoint. It has the Client Sent (CS) and Client Received (CR) annotations on the service2 side, and the Server Received (SR) and Server Sent (SS) annotations on the service3 side. Physically there are 2 spans, but they form 1 logical span related to an RPC call.
- 2 spans come from the RPC call from service2 to service4 to the http:/baz endpoint. It has the Client Sent (CS) and Client Received (CR) annotations on the service2 side, and the Server Received (SR) and Server Sent (SS) annotations on the service4 side. Physically there are 2 spans, but they form 1 logical span related to an RPC call.

So if we count the physical spans we have 1 from http:/start, 2 from service1 calling service2, 2 from service2 calling service3 and 2 from service2 calling service4. Altogether 7 spans.
Logically we see the information of Total Spans: 4 because we have 1 span related to the incoming request
to service1
and 3 spans related to RPC calls.
Zipkin allows you to visualize errors in your trace. When an exception is thrown and not caught, we set proper tags on the span, which Zipkin can colorize accordingly. In the list of traces you could see one trace shown in red; that is because an exception was thrown.
If you click that trace you’ll see a similar picture:
If you then click on one of the spans you’ll see the following:
As you can see, the reason for the error and the whole related stacktrace are easy to find.
The dependency graph in Zipkin would look like this:
When grepping the logs of those four applications by trace id equal to e.g. 2485ec27856c56f4
one would get the following:
service1.log:2016-02-26 11:15:47.561 INFO [service1,2485ec27856c56f4,2485ec27856c56f4,true] 68058 --- [nio-8081-exec-1] i.s.c.sleuth.docs.service1.Application : Hello from service1. Calling service2 service2.log:2016-02-26 11:15:47.710 INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application : Hello from service2. Calling service3 and then service4 service3.log:2016-02-26 11:15:47.895 INFO [service3,2485ec27856c56f4,1210be13194bfe5,true] 68060 --- [nio-8083-exec-1] i.s.c.sleuth.docs.service3.Application : Hello from service3 service2.log:2016-02-26 11:15:47.924 INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application : Got response from service3 [Hello from service3] service4.log:2016-02-26 11:15:48.134 INFO [service4,2485ec27856c56f4,1b1845262ffba49d,true] 68061 --- [nio-8084-exec-1] i.s.c.sleuth.docs.service4.Application : Hello from service4 service2.log:2016-02-26 11:15:48.156 INFO [service2,2485ec27856c56f4,9aa10ee6fbde75fa,true] 68059 --- [nio-8082-exec-1] i.s.c.sleuth.docs.service2.Application : Got response from service4 [Hello from service4] service1.log:2016-02-26 11:15:48.182 INFO [service1,2485ec27856c56f4,2485ec27856c56f4,true] 68058 --- [nio-8081-exec-1] i.s.c.sleuth.docs.service1.Application : Got response from service2 [Hello from service2, response from service3 [Hello from service3] and from service4 [Hello from service4]]
If you’re using a log aggregating tool like Kibana, Splunk, etc., you can order the events that took place. An example from Kibana would look like this:
If you want to use Logstash here is the Grok pattern for Logstash:
filter { # pattern matching logback pattern grok { match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}\s+---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" } } }
![]() | Note |
---|---|
If you want to use Grok together with the logs from Cloud Foundry you have to use this pattern: |
filter { # pattern matching logback pattern grok { match => { "message" => "(?m)OUT\s+%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}\s+---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" } } }
Often you do not want to store your logs in a text file but in a JSON file that Logstash can pick up directly. To do that you have to do the following (for readability
we’re passing the dependencies in the groupId:artifactId:version
notation).
Dependencies setup
- Ensure that Logback is on the classpath (ch.qos.logback:logback-core)
- Add the Logstash Logback encoder, e.g. version 4.6: net.logstash.logback:logstash-logback-encoder:4.6
Logback setup
Below you can find an example of a Logback configuration (file named logback-spring.xml) that logs information in JSON format to a build/${spring.application.name}.json file:
<?xml version="1.0" encoding="UTF-8"?> <configuration> <include resource="org/springframework/boot/logging/logback/defaults.xml"/> <springProperty scope="context" name="springAppName" source="spring.application.name"/> <!-- Example for logging into the build folder of your project --> <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}"/> <!-- You can override this to have a custom pattern --> <property name="CONSOLE_LOG_PATTERN" value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/> <!-- Appender to log to console --> <appender name="console" class="ch.qos.logback.core.ConsoleAppender"> <filter class="ch.qos.logback.classic.filter.ThresholdFilter"> <!-- Minimum logging level to be presented in the console logs--> <level>DEBUG</level> </filter> <encoder> <pattern>${CONSOLE_LOG_PATTERN}</pattern> <charset>utf8</charset> </encoder> </appender> <!-- Appender to log to file --> <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_FILE}</file> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern> <maxHistory>7</maxHistory> </rollingPolicy> <encoder> <pattern>${CONSOLE_LOG_PATTERN}</pattern> <charset>utf8</charset> </encoder> </appender> <!-- Appender to log to file in a JSON format --> <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_FILE}.json</file> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern> <maxHistory>7</maxHistory> </rollingPolicy> <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder"> <providers> <timestamp> <timeZone>UTC</timeZone> </timestamp> <pattern> <pattern> { "severity": "%level", "service": "${springAppName:-}", "trace": "%X{X-B3-TraceId:-}", "span": "%X{X-B3-SpanId:-}", "parent": "%X{X-B3-ParentSpanId:-}", "exportable": "%X{X-Span-Export:-}", "pid": "${PID:-}", "thread": "%thread", "class": "%logger{40}", "rest": "%message" } </pattern> </pattern> </providers> </encoder> </appender> <root level="INFO"> <appender-ref ref="console"/> <!-- uncomment this to have also JSON logs --> <!--<appender-ref ref="logstash"/>--> <!--<appender-ref ref="flatfile"/>--> </root> </configuration>
![]() | Note |
---|---|
If you’re using a custom |
The span context is the state that must get propagated to any child Spans across process boundaries. Part of the Span Context is the Baggage. The trace and span IDs are a required part of the span context. Baggage is an optional part.
Baggage is a set of key:value pairs stored in the span context. Baggage travels together with the trace
and is attached to every span. Spring Cloud Sleuth will understand that a header is baggage related if the HTTP
header is prefixed with baggage-
and for messaging it starts with baggage_
.
![]() | Important |
---|---|
There’s currently no limitation of the count or size of baggage items. However, keep in mind that too many can decrease system throughput or increase RPC latency. In extreme cases, it could crash the app due to exceeding transport-level message or header capacity. |
Example of setting baggage on a span:
Span initialSpan = this.tracer.createSpan("span"); initialSpan.setBaggageItem("foo", "bar"); initialSpan.setBaggageItem("UPPER_CASE", "someValue");
Baggage travels with the trace (i.e. every child span contains the baggage of its parent). Zipkin has no knowledge of baggage and will not even receive that information.
Tags are attached to a specific span; they are presented for that particular span only. However, you can search by tag to find the trace, provided there exists a span having the searched tag value.
If you want to be able to look up a span based on baggage, you should add a corresponding entry as a tag in the root span.
@Autowired Tracer tracer; Span span = tracer.getCurrentSpan(); String baggageKey = "key"; String baggageValue = "foo"; span.setBaggageItem(baggageKey, baggageValue); tracer.addTag(baggageKey, baggageValue);
If you want to profit only from Spring Cloud Sleuth without the Zipkin integration just add
the spring-cloud-starter-sleuth
module to your project.
Maven.
<dependencyManagement><dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${release.train.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependency>
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-sleuth</artifactId> </dependency>
In order not to pick versions by yourself it’s much better if you add the dependency management via the Spring BOM | |
Add the dependency to |
Gradle.
dependencyManagement {imports { mavenBom "org.springframework.cloud:spring-cloud-dependencies:${releaseTrainVersion}" } } dependencies {
compile "org.springframework.cloud:spring-cloud-starter-sleuth" }
If you want both Sleuth and Zipkin just add the spring-cloud-starter-zipkin
dependency.
Maven.
<dependencyManagement><dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${release.train.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependency>
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-zipkin</artifactId> </dependency>
In order not to pick versions by yourself it’s much better if you add the dependency management via the Spring BOM | |
Add the dependency to |
Gradle.
dependencyManagement {imports { mavenBom "org.springframework.cloud:spring-cloud-dependencies:${releaseTrainVersion}" } } dependencies {
compile "org.springframework.cloud:spring-cloud-starter-zipkin" }
If you want Sleuth with Zipkin over Spring Cloud Stream, just add the spring-cloud-sleuth-stream
dependency.
Maven.
<dependencyManagement><dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${release.train.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependency>
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-sleuth-stream</artifactId> </dependency> <dependency>
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-sleuth</artifactId> </dependency> <!-- EXAMPLE FOR RABBIT BINDING --> <dependency>
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-stream-binder-rabbit</artifactId> </dependency>
In order not to pick versions by yourself it’s much better if you add the dependency management via the Spring BOM | |
Add the dependency to | |
Add the dependency to | |
Add a binder (e.g. Rabbit binder) to tell Spring Cloud Stream what it should bind to |
Gradle.
dependencyManagement {imports { mavenBom "org.springframework.cloud:spring-cloud-dependencies:${releaseTrainVersion}" } } dependencies { compile "org.springframework.cloud:spring-cloud-sleuth-stream"
compile "org.springframework.cloud:spring-cloud-starter-sleuth"
// Example for Rabbit binding compile "org.springframework.cloud:spring-cloud-stream-binder-rabbit"
}
In order not to pick versions by yourself it’s much better if you add the dependency management via the Spring BOM | |
Add the dependency to | |
Add the dependency to | |
Add a binder (e.g. Rabbit binder) to tell Spring Cloud Stream what it should bind to |
If you want to start a Spring Cloud Sleuth Stream Zipkin collector just add the spring-cloud-sleuth-zipkin-stream
dependency
Maven.
<dependencyManagement><dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${release.train.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependency>
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId> </dependency> <dependency>
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-sleuth</artifactId> </dependency> <!-- EXAMPLE FOR RABBIT BINDING --> <dependency>
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-stream-binder-rabbit</artifactId> </dependency>
In order not to pick versions by yourself it’s much better if you add the dependency management via the Spring BOM | |
Add the dependency to | |
Add the dependency to | |
Add a binder (e.g. Rabbit binder) to tell Spring Cloud Stream what it should bind to |
Gradle.
dependencyManagement {imports { mavenBom "org.springframework.cloud:spring-cloud-dependencies:${releaseTrainVersion}" } } dependencies { compile "org.springframework.cloud:spring-cloud-sleuth-zipkin-stream"
compile "org.springframework.cloud:spring-cloud-starter-sleuth"
// Example for Rabbit binding compile "org.springframework.cloud:spring-cloud-stream-binder-rabbit"
}
In order not to pick versions by yourself it’s much better if you add the dependency management via the Spring BOM | |
Add the dependency to | |
Add the dependency to | |
Add a binder (e.g. Rabbit binder) to tell Spring Cloud Stream what it should bind to |
and then just annotate your main class with @EnableZipkinStreamServer
annotation:
package example; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.sleuth.zipkin.stream.EnableZipkinStreamServer; @SpringBootApplication @EnableZipkinStreamServer public class ZipkinStreamServerApplication { public static void main(String[] args) throws Exception { SpringApplication.run(ZipkinStreamServerApplication.class, args); } }
Marcin Grzejszczak talking about Spring Cloud Sleuth and Zipkin
Adds trace and span ids to the Slf4J MDC, so you can extract all the logs from a given trace or span in a log aggregator. Example logs:
2016-02-02 15:30:57.902 INFO [bar,6bfd228dc00d216b,6bfd228dc00d216b,false] 23030 --- [nio-8081-exec-3] ... 2016-02-02 15:30:58.372 ERROR [bar,6bfd228dc00d216b,6bfd228dc00d216b,false] 23030 --- [nio-8081-exec-3] ... 2016-02-02 15:31:01.936 INFO [bar,46ab0d418373cbc9,46ab0d418373cbc9,false] 23030 --- [nio-8081-exec-4] ...
Notice the [appname,traceId,spanId,exportable]
entries from the MDC.
Sleuth records timing information to aid in latency analysis. Using sleuth, you can pinpoint causes of latency in your applications. Sleuth is written to not log too much, and to not cause your production application to crash.
Provides SpanInjector and SpanExtractor implementations.
If you add spring-cloud-sleuth-zipkin then the app will generate and collect Zipkin-compatible traces. By default it sends them via HTTP to a Zipkin server on localhost (port 9411). Configure the location of the service using spring.zipkin.baseUrl.
If you add spring-cloud-sleuth-stream then the app will generate and collect traces via Spring Cloud Stream. Your app automatically becomes a producer of tracer messages that are sent over your broker of choice (e.g. RabbitMQ, Apache Kafka, Redis).
![]() | Important |
---|---|
If using Zipkin or Stream, configure the percentage of spans exported using |
![]() | Note |
---|---|
the SLF4J MDC is always set and logback users will immediately see the trace and span ids in logs per the example
above. Other logging systems have to configure their own formatter to get the same result. The default is
|
In distributed tracing the data volumes can be very high so sampling
can be important (you usually don’t need to export all spans to get a
good picture of what is happening). Spring Cloud Sleuth has a
Sampler
strategy that you can implement to take control of the
sampling algorithm. Samplers do not stop span (correlation) ids from
being generated, but they do prevent the tags and events being
attached and exported. By default you get a strategy that continues to
trace if a span is already active, but new ones are always marked as
non-exportable. If all your apps run with this sampler you will see
traces in logs, but not in any remote store. For testing the default
is often enough, and it probably is all you need if you are only using
the logs (e.g. with an ELK aggregator). If you are exporting span data
to Zipkin or Spring Cloud Stream, there is also an AlwaysSampler
that exports everything and a PercentageBasedSampler
that samples a
fixed fraction of spans.
![]() | Note |
---|---|
the |
A sampler can be installed just by creating a bean definition, e.g:
@Bean public Sampler defaultSampler() { return new AlwaysSampler(); }
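If you want to export only a fraction of spans instead, a sketch along the following lines should work (it assumes the PercentageBasedSampler constructor takes a SamplerProperties object; the 10% figure is only an example):

@Bean
public Sampler defaultSampler() {
    SamplerProperties properties = new SamplerProperties();
    // export roughly 10% of spans
    properties.setPercentage(0.1f);
    return new PercentageBasedSampler(properties);
}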
![]() | Tip |
---|---|
You can set the HTTP header |
Spring Cloud Sleuth instruments your Spring application
automatically, so you shouldn’t have to do anything to activate
it. The instrumentation is added using a variety of technologies
according to the stack that is available, e.g. for a servlet web
application we use a Filter
, and for Spring Integration we use
ChannelInterceptors
.
You can customize the keys used in span tags. To limit the volume of
span data, by default an HTTP request will be tagged only with a
handful of metadata like the status code, host and URL. You can add
request headers by configuring spring.sleuth.keys.http.headers
(a
list of header names).
![]() | Note |
---|---|
Remember that tags are only collected and exported if there is a
|
![]() | Note |
---|---|
Currently the instrumentation in Spring Cloud Sleuth is eager, which means that we’re actively trying to pass the tracing context between threads. Timing events are also captured even when Sleuth isn’t exporting data to a tracing system. This approach may change in the future towards being lazy in this matter. |
You can do the following operations on the Span by means of the org.springframework.cloud.sleuth.Tracer interface, as described in the sections below.
![]() | Tip |
---|---|
Spring creates the instance of |
You can manually create spans by using the Tracer interface.
// Start a span. If there was a span present in this thread it will become // the `newSpan`'s parent. Span newSpan = this.tracer.createSpan("calculateTax"); try { // ... // You can tag a span this.tracer.addTag("taxValue", taxValue); // ... // You can log an event on a span newSpan.logEvent("taxCalculated"); } finally { // Once done remember to close the span. This will allow collecting // the span to send it to Zipkin this.tracer.close(newSpan); }
In this example we can see how to create a new instance of a span. If there was already a span present in this thread, it would become the parent of the new span.
![]() | Important |
---|---|
Always clean after you create a span! Don’t forget to close a span if you want to send it to Zipkin. |
![]() | Important |
---|---|
If your span name is longer than 50 chars, it will be truncated to 50 chars. Your names have to be explicit and concrete. Long names can lead to latency issues and sometimes even to exceptions being thrown. |
Sometimes you don’t want to create a new span but you want to continue one instead (whether this makes sense depends on the use case).
The continued instance of span is equal to the one that it continues:
Span continuedSpan = this.tracer.continueSpan(spanToContinue);
assertThat(continuedSpan).isEqualTo(spanToContinue);
To continue a span you can use the Tracer interface.
// let's assume that we're in a thread Y and we've received // the `initialSpan` from thread X Span continuedSpan = this.tracer.continueSpan(initialSpan); try { // ... // You can tag a span this.tracer.addTag("taxValue", taxValue); // ... // You can log an event on a span continuedSpan.logEvent("taxCalculated"); } finally { // Once done remember to detach the span. That way you'll // safely remove it from the current thread without closing it this.tracer.detach(continuedSpan); }
![]() | Important |
---|---|
Always clean after you create a span! Don’t forget to detach a span if some work was started in one thread (e.g. thread X) and it’s waiting for other threads (e.g. Y, Z) to finish. Then the spans in the threads Y, Z should be detached at the end of their work. When the results are collected, the span in thread X should be closed. |
There is a possibility that you want to start a new span and provide an explicit parent of that span.
Let’s assume that the parent of a span is in one thread and you want to start a new span in another thread. The
createSpan
method (the variant that takes an explicit parent) of the Tracer
interface is the method you are looking for.
// let's assume that we're in a thread Y and we've received // the `initialSpan` from thread X. `initialSpan` will be the parent // of the `newSpan` Span newSpan = this.tracer.createSpan("calculateCommission", initialSpan); try { // ... // You can tag a span this.tracer.addTag("commissionValue", commissionValue); // ... // You can log an event on a span newSpan.logEvent("commissionCalculated"); } finally { // Once done remember to close the span. This will allow collecting // the span to send it to Zipkin. The tags and events set on the // newSpan will not be present on the parent this.tracer.close(newSpan); }
![]() | Important |
---|---|
After having created such a span remember to close it. Otherwise you will see a lot of warnings in your logs related to the fact that a span other than the one you’re trying to close is present in the current thread. What’s worse, your spans won’t get closed properly and thus will not get collected and sent to Zipkin. |
Picking a span name is not a trivial task. A span name should depict an operation name. The name should be low cardinality (e.g. not include identifiers).
Since there is a lot of instrumentation going on, some of the span names will be artificial, for example:

- controller-method-name when received by a Controller with a method name controllerMethodName
- async for asynchronous operations done via wrapped Callable and Runnable
- @Scheduled annotated methods will return the simple name of the class

Fortunately, for asynchronous processing you can provide explicit naming.
You can name the span explicitly via the @SpanName
annotation.
@SpanName("calculateTax") class TaxCountingRunnable implements Runnable { @Override public void run() { // perform logic } }
In this case, when processed in the following manner:
Runnable runnable = new TraceRunnable(tracer, spanNamer, new TaxCountingRunnable()); Future<?> future = executorService.submit(runnable); // ... some additional logic ... future.get();
The span will be named calculateTax
.
It’s pretty rare to create separate classes for Runnable or Callable. Typically one creates an anonymous
instance of those classes. You can’t annotate such classes, so to work around that, if there is no @SpanName annotation present,
we check whether the class has a custom implementation of the toString() method.
So executing such code:
Runnable runnable = new TraceRunnable(tracer, spanNamer, new Runnable() { @Override public void run() { // perform logic } @Override public String toString() { return "calculateTax"; } }); Future<?> future = executorService.submit(runnable); // ... some additional logic ... future.get();
will lead to the creation of a span named calculateTax.
The main arguments for this feature are:

- api-agnostic means to collaborate with a span
- reduced surface area for basic span operations
- collaboration with runtime-generated code
If you really don’t want to take care of creating local spans manually you can profit from the
@NewSpan
annotation. Also we give you the @SpanTag
annotation to add tags in an automated
fashion.
Let’s look at some examples of usage.
@NewSpan void testMethod();
Annotating the method without any parameter will lead to the creation of a new span whose name will be equal to the annotated method name.
@NewSpan("customNameOnTestMethod4") void testMethod4();
If you provide the value in the annotation (either directly or via the name
parameter) then
the created span will have the name as the provided value.
// method declaration @NewSpan(name = "customNameOnTestMethod5") void testMethod5(@SpanTag("testTag") String param); // and method execution this.testBean.testMethod5("test");
You can combine both the name and a tag. Let’s focus on the latter. In this case, whatever the runtime value of
the annotated method’s parameter is will become the value of the tag. In our sample
the tag key will be testTag
and the tag value will be test
.
@NewSpan(name = "customNameOnTestMethod3") @Override public void testMethod3() { }
You can place the @NewSpan
annotation on both the class and an interface. If you override the
interface’s method and provide a different value of the @NewSpan
annotation then the most
concrete one wins (in this case customNameOnTestMethod3
will be set).
If you want to just add tags and annotations to an existing span it’s enough
to use the @ContinueSpan
annotation as presented below. Note that in contrast
with the @NewSpan
annotation you can also add logs via the log
parameter:
// method declaration @ContinueSpan(log = "testMethod11") void testMethod11(@SpanTag("testTag11") String param); // method execution this.testBean.testMethod11("test");
That way the span will get continued and:

- log entries named testMethod11.before and testMethod11.after will be created
- if an exception is thrown, a log entry named testMethod11.afterFailure will also be created
- a tag with key testTag11 and value test will be created

There are 3 different ways to add tags to a span. All of them are controlled by the SpanTag
annotation.
Precedence is:

- a bean of TagValueResolver type with the provided name
- an expression evaluated by a TagValueExpressionResolver bean. The default implementation uses SPEL expression resolution.
- the toString() value of the parameter

The value of the tag for the following method will be computed by an implementation of the TagValueResolver
interface.
Its class name has to be passed as the value of the resolver
attribute.
Having such an annotated method:
@NewSpan public void getAnnotationForTagValueResolver(@SpanTag(key = "test", resolver = TagValueResolver.class) String test) { }
and such a TagValueResolver
bean implementation
@Bean(name = "myCustomTagValueResolver") public TagValueResolver tagValueResolver() { return parameter -> "Value from myCustomTagValueResolver"; }
will lead to setting a tag value equal to Value from myCustomTagValueResolver.
Having such an annotated method:
@NewSpan public void getAnnotationForTagValueExpression(@SpanTag(key = "test", expression = "length() + ' characters'") String test) { }
and no custom implementation of a TagValueExpressionResolver
will lead to evaluation of the SPEL expression and a tag with value 4 characters
will be set on the span.
If you want to use some other expression resolution mechanism you can create your own implementation
of the bean.
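For example, a minimal sketch of such a bean might look like the following (this assumes TagValueExpressionResolver is a single-method interface of the form resolve(expression, parameter), as the default SPEL-based resolver suggests; verify the signature against your Sleuth version):

@Bean
public TagValueExpressionResolver tagValueExpressionResolver() {
    // Ignore the SPEL expression and tag with the parameter's simple class name instead
    return (expression, parameter) -> parameter.getClass().getSimpleName();
}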
Thanks to the SpanInjector
and SpanExtractor
you can customize the way spans
are created and propagated.
There are currently two built-in ways to pass tracing information between processes:
Span ids are extracted from Zipkin-compatible (B3) headers (either Message
or HTTP headers), to start or join an existing trace. Trace information is
injected into any outbound requests so the next hop can extract them.
The key change in comparison to the previous versions of Sleuth is that Sleuth now implements
the OpenTracing TextMap
notion. In Sleuth it’s called SpanTextMap
. Basically the idea
is that any means of communication (e.g. message, http request, etc.) can be abstracted via
a SpanTextMap
. This abstraction defines how one can insert data into the carrier and
how to retrieve it from there. Thanks to this if you want to instrument a new HTTP library
that uses a FooRequest
as a means of sending HTTP requests then you have to create an
implementation of a SpanTextMap
that delegates calls to FooRequest
in terms of retrieval
and insertion of HTTP headers.
For Spring Integration there are 2 interfaces responsible for creation of a Span from a Message
.
These are:
MessagingSpanTextMapExtractor
MessagingSpanTextMapInjector
You can override them by providing your own implementation.
For HTTP there are 2 interfaces responsible for creation of a Span from an HTTP request.
These are:
HttpSpanExtractor
HttpSpanInjector
You can override them by providing your own implementation.
Let’s assume that instead of the standard Zipkin-compatible tracing HTTP header names you have:

- correlationId
- mySpanId

This is an example of a SpanExtractor and a SpanInjector for those headers:
static class CustomHttpSpanExtractor implements HttpSpanExtractor { @Override public Span joinTrace(SpanTextMap carrier) { Map<String, String> map = TextMapUtil.asMap(carrier); long traceId = Span.hexToId(map.get("correlationid")); long spanId = Span.hexToId(map.get("myspanid")); // extract all necessary headers Span.SpanBuilder builder = Span.builder().traceId(traceId).spanId(spanId); // build rest of the Span return builder.build(); } } static class CustomHttpSpanInjector implements HttpSpanInjector { @Override public void inject(Span span, SpanTextMap carrier) { carrier.put("correlationId", span.traceIdString()); carrier.put("mySpanId", Span.idToHex(span.getSpanId())); } }
And you could register it like this:
@Bean HttpSpanInjector customHttpSpanInjector() { return new CustomHttpSpanInjector(); } @Bean HttpSpanExtractor customHttpSpanExtractor() { return new CustomHttpSpanExtractor(); }
Spring Cloud Sleuth does not add trace/span related headers to the Http Response for security reasons. If you need the headers then a custom SpanInjector
that injects the headers into the Http Response, and a Servlet filter which makes use of it, can be added in the following way:
static class CustomHttpServletResponseSpanInjector extends ZipkinHttpSpanInjector { @Override public void inject(Span span, SpanTextMap carrier) { super.inject(span, carrier); carrier.put(Span.TRACE_ID_NAME, span.traceIdString()); carrier.put(Span.SPAN_ID_NAME, Span.idToHex(span.getSpanId())); } } static class HttpResponseInjectingTraceFilter extends GenericFilterBean { private final Tracer tracer; private final HttpSpanInjector spanInjector; public HttpResponseInjectingTraceFilter(Tracer tracer, HttpSpanInjector spanInjector) { this.tracer = tracer; this.spanInjector = spanInjector; } @Override public void doFilter(ServletRequest request, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException { HttpServletResponse response = (HttpServletResponse) servletResponse; Span currentSpan = this.tracer.getCurrentSpan(); this.spanInjector.inject(currentSpan, new HttpServletResponseTextMap(response)); filterChain.doFilter(request, response); } class HttpServletResponseTextMap implements SpanTextMap { private final HttpServletResponse delegate; HttpServletResponseTextMap(HttpServletResponse delegate) { this.delegate = delegate; } @Override public Iterator<Map.Entry<String, String>> iterator() { Map<String, String> map = new HashMap<>(); for (String header : this.delegate.getHeaderNames()) { map.put(header, this.delegate.getHeader(header)); } return map.entrySet().iterator(); } @Override public void put(String key, String value) { this.delegate.addHeader(key, value); } } }
And you could register them like this:
@Bean HttpSpanInjector customHttpServletResponseSpanInjector() { return new CustomHttpServletResponseSpanInjector(); } @Bean HttpResponseInjectingTraceFilter responseInjectingTraceFilter(Tracer tracer) { return new HttpResponseInjectingTraceFilter(tracer, customHttpServletResponseSpanInjector()); }
You can also modify the behaviour of the TraceFilter
- the component that is responsible
for processing the input HTTP request and adding tags based on the HTTP response. You can customize
the tags, or modify the response headers by registering your own instance of the TraceFilter
bean.
In the following example we will register the TraceFilter
bean and we will add the
ZIPKIN-TRACE-ID
response header containing the current Span’s trace id. We will also add to the Span a tag with key custom and value tag.
@Bean TraceFilter myTraceFilter(BeanFactory beanFactory, final Tracer tracer) { return new TraceFilter(beanFactory) { @Override protected void addResponseTags(HttpServletResponse response, Throwable e) { // execute the default behaviour super.addResponseTags(response, e); // for readability we're returning trace id in a hex form response.addHeader("ZIPKIN-TRACE-ID", Span.idToHex(tracer.getCurrentSpan().getTraceId())); // we can also add some custom tags tracer.addTag("custom", "tag"); } }; }
Sometimes you want to create a manual Span that will wrap a call to an external service which is not instrumented.
What you can do is to create a span with the peer.service
tag that will contain a value of the service that you want to call.
Below you can see an example of a call to Redis that is wrapped in such a span.
org.springframework.cloud.sleuth.Span newSpan = tracer.createSpan("redis"); try { newSpan.tag("redis.op", "get"); newSpan.tag("lc", "redis"); newSpan.logEvent(org.springframework.cloud.sleuth.Span.CLIENT_SEND); // call redis service e.g // return (SomeObj) redisTemplate.opsForHash().get("MYHASH", someObjKey); } finally { newSpan.tag("peer.service", "redisService"); newSpan.tag("peer.ipv4", "1.2.3.4"); newSpan.tag("peer.port", "1234"); newSpan.logEvent(org.springframework.cloud.sleuth.Span.CLIENT_RECV); tracer.close(newSpan); }
![]() | Important |
---|---|
Remember not to add both |
By default Sleuth assumes that when you send a span to Zipkin, you want the span’s service name
to be equal to spring.application.name
value. That’s not always the case though. There
are situations in which you want to explicitly provide a different service name for all spans coming
from your application. To achieve that it’s enough to just pass the following property
to your application to override that value (example for foo
service name):
spring.zipkin.service.name: foo
Before reporting spans to e.g. Zipkin, you may want to modify the span in some way.
You can achieve that by using the SpanAdjuster
interface.
Example of usage:
In Sleuth we’re generating spans with a fixed name. Some users want to modify the name depending on values
of tags. An implementation of the SpanAdjuster
interface can be used to alter that name. Example:
@Bean SpanAdjuster customSpanAdjuster() { return span -> span.toBuilder().name(scrub(span.getName())).build(); }
This will change the name of the reported span just before it gets sent to Zipkin.
![]() | Important |
---|---|
Your |
In order to define the host that corresponds to a particular span, we need to resolve the host name and port. The default approach is to take them from server properties. If those are for some reason not set, we try to retrieve the host name from the network interfaces.
If you have the discovery client enabled and prefer to retrieve the host address from the registered instance in a service registry, then you have to set the following property (it is applicable for both HTTP and Stream based span reporting):
spring.zipkin.locator.discovery.enabled: true
You can accumulate and send span data over
Spring Cloud Stream by
including the spring-cloud-sleuth-stream
jar as a dependency, and
adding a Channel Binder implementation
(e.g. spring-cloud-starter-stream-rabbit
for RabbitMQ or
spring-cloud-starter-stream-kafka
for Kafka). This will
automatically turn your app into a producer of messages with payload
type Spans
. The channel name to which the spans will be sent
is called sleuth
.
There is a special convenience annotation for setting up a message consumer
for the Span data and pushing it into a Zipkin SpanStore
. This application
@SpringBootApplication @EnableZipkinStreamServer public class Consumer { public static void main(String[] args) { SpringApplication.run(Consumer.class, args); } }
will listen for the Span data on whatever transport you provide via a
Spring Cloud Stream Binder
(e.g. include
spring-cloud-starter-stream-rabbit
for RabbitMQ, and similar
starters exist for Redis and Kafka). If you add the following UI dependency
<groupId>io.zipkin.java</groupId> <artifactId>zipkin-autoconfigure-ui</artifactId>
Then your app will act as a Zipkin server, which hosts the UI and API on port 9411.
The default SpanStore
is in-memory (good for demos and getting
started quickly). For a more robust solution you can add MySQL and
spring-boot-starter-jdbc
to your classpath and enable the JDBC
SpanStore
via configuration, e.g.:
spring: rabbitmq: host: ${RABBIT_HOST:localhost} datasource: schema: classpath:/mysql.sql url: jdbc:mysql://${MYSQL_HOST:localhost}/test username: root password: root # Switch this on to create the schema on startup: initialize: true continueOnError: true sleuth: enabled: false zipkin: storage: type: mysql
![]() | Note |
---|---|
The |
A custom consumer can also easily be implemented using
spring-cloud-sleuth-stream
and binding to the SleuthSink
. Example:
@EnableBinding(SleuthSink.class) @SpringBootApplication(exclude = SleuthStreamAutoConfiguration.class) @MessageEndpoint public class Consumer { @ServiceActivator(inputChannel = SleuthSink.INPUT) public void sink(Spans input) throws Exception { // ... process spans } }
![]() | Note |
---|---|
the sample consumer application above explicitly excludes
|
In order to customize the polling mechanism you can create a bean of PollerMetadata
type
with name equal to StreamSpanReporter.POLLER
. Here you can find an example of such a configuration.
@Configuration public static class CustomPollerConfiguration { @Bean(name = StreamSpanReporter.POLLER) PollerMetadata customPoller() { PollerMetadata poller = new PollerMetadata(); poller.setMaxMessagesPerPoll(500); poller.setTrigger(new PeriodicTrigger(5000L)); return poller; } }
Currently Spring Cloud Sleuth registers very simple metrics related to spans. It uses Spring Boot’s metrics support to calculate the number of accepted and dropped spans. Each time a span gets sent to Zipkin, the number of accepted spans increases. If there’s an error, the number of dropped spans increases.
If you’re wrapping your logic in Runnable
or Callable
it’s enough to wrap those classes in their Sleuth representative.
Example for Runnable
:
Runnable runnable = new Runnable() { @Override public void run() { // do some work } @Override public String toString() { return "spanNameFromToStringMethod"; } }; // Manual `TraceRunnable` creation with explicit "calculateTax" Span name Runnable traceRunnable = new TraceRunnable(tracer, spanNamer, runnable, "calculateTax"); // Wrapping `Runnable` with `Tracer`. The Span name will be taken either from the // `@SpanName` annotation or from `toString` method Runnable traceRunnableFromTracer = tracer.wrap(runnable);
Example for Callable
:
Callable<String> callable = new Callable<String>() { @Override public String call() throws Exception { return someLogic(); } @Override public String toString() { return "spanNameFromToStringMethod"; } }; // Manual `TraceCallable` creation with explicit "calculateTax" Span name Callable<String> traceCallable = new TraceCallable<>(tracer, spanNamer, callable, "calculateTax"); // Wrapping `Callable` with `Tracer`. The Span name will be taken either from the // `@SpanName` annotation or from `toString` method Callable<String> traceCallableFromTracer = tracer.wrap(callable);
That way you will ensure that a new Span is created and closed for each execution.
We’re registering a custom HystrixConcurrencyStrategy
that wraps all Callable
instances into their Sleuth representative -
the TraceCallable
. The strategy either starts or continues a span depending on the fact whether tracing was already going
on before the Hystrix command was called. To disable the custom Hystrix Concurrency Strategy set the spring.sleuth.hystrix.strategy.enabled
to false
.
Assuming that you have the following HystrixCommand
:
HystrixCommand<String> hystrixCommand = new HystrixCommand<String>(setter) { @Override protected String run() throws Exception { return someLogic(); } };
In order to pass the tracing information you have to wrap the same logic in the Sleuth version of the HystrixCommand
which is the
TraceCommand
:
TraceCommand<String> traceCommand = new TraceCommand<String>(tracer, traceKeys, setter) { @Override public String doRun() throws Exception { return someLogic(); } };
We’re registering a custom RxJavaSchedulersHook
that wraps all Action0
instances into their Sleuth representative -
the TraceAction
. The hook either starts or continues a span depending on whether tracing was already going
on before the Action was scheduled. To disable the custom RxJavaSchedulersHook set the spring.sleuth.rxjava.schedulers.hook.enabled
to false
.
You can define a list of regular expressions for thread names, for which you don’t want a Span to be created. Just provide a comma separated list
of regular expressions in the spring.sleuth.rxjava.schedulers.ignoredthreads
property.
Features from this section can be disabled by providing the spring.sleuth.web.enabled
property with value equal to false
.
Via the TraceFilter
all sampled incoming requests result in creation of a Span. That Span’s name is http:
+ the path to which
the request was sent. E.g. if the request was sent to /foo/bar
then the name will be http:/foo/bar
. You can configure which URIs you would
like to skip via the spring.sleuth.web.skipPattern
property. If you have ManagementServerProperties
on classpath then
its value of contextPath
gets appended to the provided skip pattern.
Since we want the span names to be precise we’re using a TraceHandlerInterceptor
that either wraps an
existing HandlerInterceptor
or is added directly to the list of existing HandlerInterceptors
. The
TraceHandlerInterceptor
adds a special request attribute to the given HttpServletRequest
. If the TraceFilter doesn’t see this attribute set, it will create a "fallback" span, which is an additional span created on the server side so that the trace is presented properly in the UI. Seeing a fallback span most likely signifies that there is missing instrumentation. In that case, please file an issue in Spring Cloud Sleuth.
We’re injecting a RestTemplate
interceptor that ensures that all the tracing information is passed to the requests. Each time a
call is made a new Span is created. It gets closed upon receiving the response. In order to block the synchronous RestTemplate
features
just set spring.sleuth.web.client.enabled
to false
.
![]() | Important |
---|---|
You have to register |
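In other words, the RestTemplate you use has to be registered as a Spring bean so that the tracing interceptor can be injected into it; a RestTemplate created with the new keyword inside your own code is not instrumented. A minimal sketch:

@Bean
public RestTemplate restTemplate() {
    // Because this is a bean, Sleuth can post-process it and register its tracing interceptor
    return new RestTemplate();
}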
![]() | Important |
---|---|
A traced version of an |
Custom instrumentation is set to create and close Spans upon sending and receiving requests. You can customize the ClientHttpRequestFactory
and the AsyncClientHttpRequestFactory
by registering your beans. Remember to use tracing compatible implementations (e.g. don’t forget to
wrap ThreadPoolTaskScheduler
in a TraceAsyncListenableTaskExecutor
). Example of custom request factories:
@EnableAutoConfiguration @Configuration public static class TestConfiguration { @Bean ClientHttpRequestFactory mySyncClientFactory() { return new MySyncClientHttpRequestFactory(); } @Bean AsyncClientHttpRequestFactory myAsyncClientFactory() { return new MyAsyncClientHttpRequestFactory(); } }
To block the AsyncRestTemplate
features set spring.sleuth.web.async.client.enabled
to false
.
To disable creation of the default TraceAsyncClientHttpRequestFactoryWrapper
set spring.sleuth.web.async.client.factory.enabled
to false
. If you don’t want to create an AsyncRestTemplate
at all set spring.sleuth.web.async.client.template.enabled
to false
.
Sometimes you need to use multiple implementations of Asynchronous Rest Template. In the following snippet you
can see an example of how to set up such a custom AsyncRestTemplate
.
@Configuration @EnableAutoConfiguration static class Config { @Autowired Tracer tracer; @Autowired HttpTraceKeysInjector httpTraceKeysInjector; @Autowired HttpSpanInjector spanInjector; @Bean(name = "customAsyncRestTemplate") public AsyncRestTemplate traceAsyncRestTemplate(@Qualifier("customHttpRequestFactoryWrapper") TraceAsyncClientHttpRequestFactoryWrapper wrapper) { return new TraceAsyncRestTemplate(wrapper, this.tracer); } @Bean(name = "customHttpRequestFactoryWrapper") public TraceAsyncClientHttpRequestFactoryWrapper traceAsyncClientHttpRequestFactory() { return new TraceAsyncClientHttpRequestFactoryWrapper(this.tracer, this.spanInjector, asyncClientFactory(), clientHttpRequestFactory(), this.httpTraceKeysInjector); } private ClientHttpRequestFactory clientHttpRequestFactory() { ClientHttpRequestFactory clientHttpRequestFactory = new CustomClientHttpRequestFactory(); //CUSTOMIZE HERE return clientHttpRequestFactory; } private AsyncClientHttpRequestFactory asyncClientFactory() { AsyncClientHttpRequestFactory factory = new CustomAsyncClientHttpRequestFactory(); //CUSTOMIZE HERE return factory; } }
If you’re using the Traverson library
it’s enough for you to inject a RestTemplate
as a bean into your Traverson object. Since RestTemplate
is already intercepted, you will get full support of tracing in your client. Below you can find a pseudo code
of how to do that:
@Autowired RestTemplate restTemplate; Traverson traverson = new Traverson(URI.create("http://some/address"), MediaType.APPLICATION_JSON, MediaType.APPLICATION_JSON_UTF8).setRestOperations(restTemplate); // use Traverson
By default Spring Cloud Sleuth provides integration with Feign via the TraceFeignClientAutoConfiguration
. You can disable it entirely
by setting spring.sleuth.feign.enabled
to false. If you do so, no Feign-related instrumentation will take place.
Part of the Feign instrumentation is done via a FeignBeanPostProcessor
. You can disable it by setting spring.sleuth.feign.processor.enabled
to false
.
If you do so, Spring Cloud Sleuth will not instrument any of your custom Feign components; all the default instrumentation,
however, will still be in place.
In Spring Cloud Sleuth we’re instrumenting async related components so that the tracing information is passed between threads.
You can disable this behaviour by setting the value of spring.sleuth.async.enabled
to false
.
If you annotate your method with @Async
then we’ll automatically create a new Span with the following characteristics:
In Spring Cloud Sleuth we’re instrumenting scheduled method execution so that the tracing information is passed between threads. You can disable this behaviour
by setting the value of spring.sleuth.scheduled.enabled
to false
.
If you annotate your method with @Scheduled
then we’ll automatically create a new Span with the following characteristics:
If you want to skip Span creation for some @Scheduled
annotated classes you can set the
spring.sleuth.scheduled.skipPattern
with a regular expression that will match the fully qualified name of the
@Scheduled
annotated class.
![]() | Tip |
---|---|
If you are using |
We’re providing LazyTraceExecutor
, TraceableExecutorService
and TraceableScheduledExecutorService
. Those implementations
are creating Spans each time a new task is submitted, invoked or scheduled.
Here you can see an example of how to pass tracing information with TraceableExecutorService
when working with CompletableFuture
:
CompletableFuture<Long> completableFuture = CompletableFuture.supplyAsync(() -> {
    // perform some logic
    return 1_000_000L;
}, new TraceableExecutorService(executorService,
    // 'calculateTax' explicitly names the span - this param is optional
    tracer, traceKeys, spanNamer, "calculateTax"));
![]() | Important |
---|---|
Sleuth doesn’t work with |
Sometimes you need to set up a custom instance of the AsyncExecutor
. In the following snippet you
can see an example of how to set up such a custom Executor
.
@Configuration
@EnableAutoConfiguration
@EnableAsync
static class CustomExecutorConfig extends AsyncConfigurerSupport {

    @Autowired BeanFactory beanFactory;

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // CUSTOMIZE HERE
        executor.setCorePoolSize(7);
        executor.setMaxPoolSize(42);
        executor.setQueueCapacity(11);
        executor.setThreadNamePrefix("MyExecutor-");
        // DON'T FORGET TO INITIALIZE
        executor.initialize();
        return new LazyTraceExecutor(this.beanFactory, executor);
    }
}
Spring Cloud Sleuth integrates with Spring Integration. It creates spans for publish and
subscribe events. To disable Spring Integration instrumentation, set spring.sleuth.integration.enabled
to false.
You can provide the spring.sleuth.integration.patterns
pattern to explicitly
provide the names of channels that you want to include for tracing. By default all channels
are included.
![]() | Important |
---|---|
When using the |
You can find the running examples deployed in the Pivotal Web Services. Check them out in the following links:
Dalston.SR5
This project provides Consul integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with Consul based components. The patterns provided include Service Discovery, Control Bus and Configuration. Intelligent Routing (Zuul) and Client Side Load Balancing (Ribbon), Circuit Breaker (Hystrix) are provided by integration with Spring Cloud Netflix.
Please see the installation documentation for instructions on how to install Consul.
A Consul Agent client must be available to all Spring Cloud Consul applications. By default, the Agent client is expected to be at localhost:8500
. See the Agent documentation for specifics on how to start an Agent client and how to connect to a cluster of Consul Agent Servers. For development, after you have installed consul, you may start a Consul Agent using the following command:
./src/main/bash/local_run_consul.sh
This will start an agent in server mode on port 8500, with the UI available at http://localhost:8500.
Service Discovery is one of the key tenets of a microservice based architecture. Trying to hand configure each client or some form of convention can be very difficult to do and can be very brittle. Consul provides Service Discovery services via an HTTP API and DNS. Spring Cloud Consul leverages the HTTP API for service registration and discovery. This does not prevent non-Spring Cloud applications from leveraging the DNS interface. Consul agent servers are run in a cluster that communicates via a gossip protocol and uses the Raft consensus protocol.
To activate Consul Service Discovery use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-consul-discovery
. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.
When a client registers with Consul, it provides meta-data about itself such as host and port, id, name and tags. By default, an HTTP Check is created that hits the /health
endpoint every 10 seconds. If the health check fails, the service instance is marked as critical.
Example Consul client:
@SpringBootApplication
@EnableDiscoveryClient
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }
}
(i.e. an utterly normal Spring Boot app). If the Consul client is located somewhere other than localhost:8500
, configuration is required to locate the client. Example:
application.yml.
spring:
  cloud:
    consul:
      host: localhost
      port: 8500
![]() | Caution |
---|---|
If you use Spring Cloud Consul Config, the above values will need to be placed in |
The default service name, instance id and port, taken from the Environment
, are ${spring.application.name}
, the Spring Context ID and ${server.port}
respectively.
@EnableDiscoveryClient
makes the app into both a Consul "service" (i.e. it registers itself) and a "client" (i.e. it can query Consul to locate other services).
The health check for a Consul instance defaults to "/health", which is the default location of a useful endpoint in a Spring Boot Actuator application. You need to change this, even for an Actuator application, if you use a non-default context path or servlet path (e.g. server.servletPath=/foo
) or management endpoint path (e.g. management.context-path=/admin
). The interval that Consul uses to check the health endpoint may also be configured. "10s" and "1m" represent 10 seconds and 1 minute respectively. Example:
application.yml.
spring:
  cloud:
    consul:
      discovery:
        healthCheckPath: ${management.context-path}/health
        healthCheckInterval: 15s
Consul does not yet support metadata on services. Spring Cloud’s ServiceInstance
has a Map<String, String> metadata
field. Spring Cloud Consul uses Consul tags to approximate metadata until Consul officially supports metadata. Tags with the form key=value
will be split and used as a Map
key and value respectively. Tags without the equal (=) sign are used as both the key and the value.
application.yml.
spring:
  cloud:
    consul:
      discovery:
        tags: foo=bar, baz
The above configuration will result in a map with foo→bar
and baz→baz
.
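For illustration only, here is a sketch of how a consumer could read that metadata back through the DiscoveryClient; the service name "stores" and the surrounding method are hypothetical, not part of the configuration above:

@Autowired
private DiscoveryClient discoveryClient;

public String fooTagOfFirstStoresInstance() {
    // With tags foo=bar and baz, each instance's metadata map contains {foo=bar, baz=baz}
    return discoveryClient.getInstances("stores").stream()
            .findFirst()
            .map(instance -> instance.getMetadata().get("foo"))
            .orElse(null);
}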
By default a Consul instance is registered with an ID that is equal to its Spring Application Context ID. By default, the Spring Application Context ID is ${spring.application.name}:comma,separated,profiles:${server.port}
. For most cases, this will allow multiple instances of one service to run on one machine. If further uniqueness is required, you can override this using Spring Cloud by providing a unique identifier in spring.cloud.consul.discovery.instanceId
. For example:
application.yml.
spring:
  cloud:
    consul:
      discovery:
        instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
With this metadata, and multiple service instances deployed on localhost, the random value will kick in there to make the instance unique. In Cloudfoundry the vcap.application.instance_id
will be populated automatically in a Spring Boot application, so the random value will not be needed.
Spring Cloud has support for Feign (a REST client builder) and also Spring RestTemplate
using the logical service names instead of physical URLs.
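As a quick sketch of the RestTemplate variant, the bean below (declared in any @Configuration class) follows the standard Spring Cloud Commons pattern; the "stores" service name in the usage note is hypothetical:

@LoadBalanced
@Bean
public RestTemplate loadBalancedRestTemplate() {
    // Logical service names in URLs (e.g. http://stores/...) are resolved via service discovery
    return new RestTemplate();
}

With such a bean you could call restTemplate.getForObject("http://stores/stores", String.class) and have "stores" resolved to a registered instance.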
You can also use the org.springframework.cloud.client.discovery.DiscoveryClient
which provides a simple API for discovery clients that is not specific to Netflix, e.g.
@Autowired
private DiscoveryClient discoveryClient;

public String serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && list.size() > 0) {
        return list.get(0).getUri().toString();
    }
    return null;
}
Consul provides a Key/Value Store for storing configuration and other metadata. Spring Cloud Consul Config is an alternative to the Config Server and Client. Configuration is loaded into the Spring Environment during the special "bootstrap" phase. Configuration is stored in the /config
folder by default. Multiple PropertySource
instances are created based on the application’s name and the active profiles, mimicking the Spring Cloud Config order of resolving properties. For example, an application with the name "testApp" and with the "dev" profile will have the following property sources created:
config/testApp,dev/ config/testApp/ config/application,dev/ config/application/
The most specific property source is at the top, with the least specific at the bottom. Properties in the config/application
folder are applicable to all applications using Consul for configuration. Properties in the config/testApp
folder are only available to the instances of the service named "testApp".
Configuration is currently read on startup of the application. Sending an HTTP POST to /refresh
will cause the configuration to be reloaded. Watching the key value store (which Consul supports) is not currently possible, but will be a future addition to this project.
To get started with Consul Configuration use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-consul-config
. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.
This will enable auto-configuration that will set up Spring Cloud Consul Config.
Consul Config may be customized using the following properties:
bootstrap.yml.
spring:
  cloud:
    consul:
      config:
        enabled: true
        prefix: configuration
        defaultContext: apps
        profileSeparator: '::'
- enabled: setting this value to "false" disables Consul Config
- prefix: sets the base folder for configuration values
- defaultContext: sets the folder name used by all applications
- profileSeparator: sets the value of the separator used to separate the profile name in property sources with profiles

The Consul Config Watch takes advantage of the ability of Consul to watch a key prefix. The Config Watch makes a blocking Consul HTTP API call to determine if any relevant configuration data has changed for the current application. If there is new configuration data a Refresh Event is published. This is equivalent to calling the /refresh
actuator endpoint.
To change the frequency with which the Config Watch is called, change spring.cloud.consul.config.watch.delay
. The default value is 1000 milliseconds.
To disable the Config Watch set spring.cloud.consul.config.watch.enabled=false
.
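To sketch how a published Refresh Event is typically consumed, a bean in refresh scope rebinds its properties when the event fires; the controller and the my.message property below are purely illustrative and not part of Consul Config itself:

@RefreshScope
@RestController
class MessageRestController {

    // Re-resolved from the Consul KV store after a Refresh Event
    @Value("${my.message:Hello default}")
    private String message;

    @RequestMapping("/message")
    String getMessage() {
        return this.message;
    }
}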
It may be more convenient to store a blob of properties in YAML or Properties format as opposed to individual key/value pairs. Set the spring.cloud.consul.config.format
property to YAML
or PROPERTIES
. For example to use YAML:
bootstrap.yml.
spring:
  cloud:
    consul:
      config:
        format: YAML
YAML must be set in the appropriate data
key in consul. Using the defaults above the keys would look like:
config/testApp,dev/data config/testApp/data config/application,dev/data config/application/data
You could store a YAML document in any of the keys listed above.
You can change the data key using spring.cloud.consul.config.data-key
.
git2consul is a Consul community project that loads files from a git repository into individual keys in Consul. By default the names of the keys are the names of the files. YAML and Properties files are supported with file extensions of .yml
and .properties
respectively. Set the spring.cloud.consul.config.format
property to FILES
. For example:
bootstrap.yml.
spring:
  cloud:
    consul:
      config:
        format: FILES
Given the following keys in /config
, the development
profile and an application name of foo
:
.gitignore application.yml bar.properties foo-development.properties foo-production.yml foo.properties master.ref
the following property sources would be created:
config/foo-development.properties config/foo.properties config/application.yml
The value of each key needs to be a properly formatted YAML or Properties file.
It may be convenient in certain circumstances (like local development or certain test scenarios) to not fail if consul isn’t available for configuration. Setting spring.cloud.consul.config.failFast=false
in bootstrap.yml
will cause the configuration module to log a warning rather than throw an exception. This will allow the application to continue startup normally.
If you expect that the consul agent may occasionally be unavailable when
your app starts, you can ask it to keep trying after a failure. You need to add
spring-retry
and spring-boot-starter-aop
to your classpath. The default
behaviour is to retry 6 times with an initial backoff interval of 1000ms and an
exponential multiplier of 1.1 for subsequent backoffs. You can configure these
properties (and others) using spring.cloud.consul.retry.*
configuration properties.
This works with both Spring Cloud Consul Config and Discovery registration.
![]() | Tip |
---|---|
To take full control of the retry add a |
To get started with the Consul Bus use the starter with group org.springframework.cloud
and artifact id spring-cloud-starter-consul-bus
. See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train.
See the Spring Cloud Bus documentation for the available actuator endpoints and for how to send custom messages.
Applications can use the Hystrix Circuit Breaker provided by the Spring Cloud Netflix project by including this starter in the project’s pom.xml: spring-cloud-starter-hystrix
. Hystrix doesn’t depend on the Netflix Discovery Client. The @EnableHystrix
annotation should be placed on a configuration class (usually the main class). Then methods can be annotated with @HystrixCommand
to be protected by a circuit breaker. See the documentation for more details.
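A minimal sketch of that wiring might look as follows; the StoreClient bean, its fallback method and the target URL are made-up examples, not part of the Consul documentation:

@EnableHystrix
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Service
class StoreClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Wraps the remote call in a circuit breaker and falls back when it fails
    @HystrixCommand(fallbackMethod = "defaultStores")
    public String stores() {
        return restTemplate.getForObject("http://localhost:8081/stores", String.class);
    }

    String defaultStores() {
        return "fallback-stores";
    }
}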
Turbine (provided by the Spring Cloud Netflix project) aggregates the Hystrix metrics streams of multiple instances, so that the dashboard can display an aggregate view. Turbine uses the DiscoveryClient
interface to look up relevant instances. To use Turbine with Spring Cloud Consul, configure the Turbine application in a manner similar to the following examples:
pom.xml.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-netflix-turbine</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>
Notice that the Turbine dependency is not a starter. The turbine starter includes support for Netflix Eureka.
application.yml.
spring.application.name: turbine
applications: consulhystrixclient
turbine:
  aggregator:
    clusterConfig: ${applications}
  appConfig: ${applications}
The clusterConfig
and appConfig
sections must match, so it’s useful to put the comma-separated list of service IDs into a separate configuration property.
Turbine.java.
@EnableTurbine
@EnableDiscoveryClient
@SpringBootApplication
public class Turbine {

    public static void main(String[] args) {
        SpringApplication.run(Turbine.class, args);
    }
}
This project provides Zookeeper integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper based components. The patterns provided include Service Discovery and Configuration. Intelligent Routing (Zuul) and Client Side Load Balancing (Ribbon), Circuit Breaker (Hystrix) are provided by integration with Spring Cloud Netflix.
Please see the installation documentation for instructions on how to install Zookeeper.
Service Discovery is one of the key tenets of a microservice based architecture. Trying to hand configure each client or some form of convention can be very difficult to do and can be very brittle. Curator (a Java library for Zookeeper) provides Service Discovery services via its Service Discovery Extension. Spring Cloud Zookeeper leverages this extension for service registration and discovery.
Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-discovery
will enable auto-configuration that will setup Spring Cloud Zookeeper Discovery.
![]() | Note |
---|---|
You still need to include |
When a client registers with Zookeeper, it provides meta-data about itself such as host and port, id and name.
Example Zookeeper client:
@SpringBootApplication
@EnableDiscoveryClient
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello world";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).web(true).run(args);
    }
}
(i.e. an utterly normal Spring Boot app). If Zookeeper is located somewhere other than localhost:2181
, configuration is required to locate the server. Example:
application.yml.
spring:
  cloud:
    zookeeper:
      connect-string: localhost:2181
![]() | Caution |
---|---|
If you use Spring Cloud Zookeeper Config, the above values will need to be placed in |
The default service name, instance id and port, taken from the Environment
, are ${spring.application.name}
, the Spring Context ID and ${server.port}
respectively.
@EnableDiscoveryClient
makes the app into both a Zookeeper "service" (i.e. it registers itself) and a "client" (i.e. it can query Zookeeper to locate other services).
Spring Cloud has support for Feign (a REST client builder) and also Spring RestTemplate
using the logical service names instead of physical URLs.
You can also use the org.springframework.cloud.client.discovery.DiscoveryClient
which provides a simple API for discovery clients that is not specific to Netflix, e.g.
@Autowired
private DiscoveryClient discoveryClient;

public String serviceUrl() {
    List<ServiceInstance> list = discoveryClient.getInstances("STORES");
    if (list != null && list.size() > 0) {
        return list.get(0).getUri().toString();
    }
    return null;
}
Spring Cloud Netflix supplies useful tools that work regardless of which DiscoveryClient
implementation is used. Feign, Turbine, Ribbon and Zuul all work with Spring Cloud Zookeeper.
Spring Cloud Zookeeper implements the ServiceRegistry
interface, allowing developers to register arbitrary services in a programmatic way.
The ServiceInstanceRegistration
class offers a builder()
method to create a Registration
object that can be used by the ServiceRegistry
.
@Autowired
private ZookeeperServiceRegistry serviceRegistry;

public void registerThings() {
    ZookeeperRegistration registration = ServiceInstanceRegistration.builder()
            .defaultUriSpec()
            .address("anyUrl")
            .port(10)
            .name("/a/b/c/d/anotherservice")
            .build();
    this.serviceRegistry.register(registration);
}
Netflix Eureka supports having instances registered with the server that are OUT_OF_SERVICE
and not returned as active service instances. This is very useful for behaviors such as blue/green deployments. The Curator Service Discovery recipe does not support this behavior. Taking advantage of the flexible payload has let Spring Cloud Zookeeper implement OUT_OF_SERVICE
by updating some specific metadata and then filtering on that metadata in the Ribbon ZookeeperServerList
. The ZookeeperServerList
filters out all non-null instance statuses that do not equal UP
. If the instance status field is empty, it is considered UP
for backwards compatibility. To change the status of an instance POST OUT_OF_SERVICE
to the ServiceRegistry
instance status actuator endpoint.
$ echo -n OUT_OF_SERVICE | http POST http://localhost:8081/service-registry/instance-status

Note that the above example uses the `http` command from https://httpie.org.
Spring Cloud Zookeeper lets you provide your application’s dependencies as properties. Dependencies are other applications that are registered
in Zookeeper and which you would like to call via Feign (a REST client builder)
or Spring RestTemplate
.
You can also benefit from the Zookeeper Dependency Watchers functionality, which lets you control and monitor the state of your dependencies and decide how to react to changes.
- Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-discovery will enable auto-configuration that will set up Spring Cloud Zookeeper Dependencies.
- You also need the spring.cloud.zookeeper.dependencies section properly set up (check the subsequent section for more details); then the feature is active.
- You can disable the feature by setting spring.cloud.zookeeper.dependency.enabled to false (defaults to true).

Let’s take a closer look at an example of dependencies representation:
application.yml.
spring.application.name: yourServiceName
spring.cloud.zookeeper:
  dependencies:
    newsletter:
      path: /path/where/newsletter/has/registered/in/zookeeper
      loadBalancerType: ROUND_ROBIN
      contentTypeTemplate: application/vnd.newsletter.$version+json
      version: v1
      headers:
        header1:
          - value1
        header2:
          - value2
      required: false
      stubs: org.springframework:foo:stubs
    mailing:
      path: /path/where/mailing/has/registered/in/zookeeper
      loadBalancerType: ROUND_ROBIN
      contentTypeTemplate: application/vnd.mailing.$version+json
      version: v1
      required: true
Let’s now go through each part of the dependency one by one. The root property name is spring.cloud.zookeeper.dependencies
.
Below the root property you have to represent each dependency by an alias, due to the constraints of Ribbon (the application id has to be placed in the URL,
so you can’t pass any complex path like /foo/bar/name). The alias is the name you will use, instead of the serviceId, for DiscoveryClient
, Feign
or RestTemplate
.
In the aforementioned examples the aliases are newsletter
and mailing
. Example of Feign usage with newsletter
would be:
@FeignClient("newsletter") public interface NewsletterService { @RequestMapping(method = RequestMethod.GET, value = "/newsletter") String getNewsletters(); }
Represented by path
yaml property.
Path is the path under which the dependency is registered in Zookeeper. As presented before, Ribbon operates on URLs, so this path does not comply with its requirements. That is why Spring Cloud Zookeeper maps the alias to the proper path.
Represented by loadBalancerType
yaml property.
If you know what kind of load balancing strategy has to be applied when calling this particular dependency then you can provide it in the yaml file and it will be automatically applied. You can choose one of the following load balancing strategies
Represented by contentTypeTemplate
and version
yaml property.
If you version your api via the Content-Type
header then you don’t want to add this header to each of your requests. Also if you want to call a new version of the API you don’t want to
roam around your code to bump up the API version. That’s why you can provide a contentTypeTemplate
with a special $version
placeholder. That placeholder will be filled by the value of the
version
yaml property. Let’s take a look at an example.
Having the following contentTypeTemplate
:
application/vnd.newsletter.$version+json
and the following version
:
v1
Will result in setting up of a Content-Type
header for each request:
application/vnd.newsletter.v1+json
Represented by headers
map in yaml
Sometimes each call to a dependency requires setting up of some default headers. In order not to do that in code you can set them up in the yaml file.
Having the following headers
section:
headers:
  Accept:
    - text/html
    - application/xhtml+xml
  Cache-Control:
    - no-cache
Results in adding the Accept
and Cache-Control
headers with appropriate list of values in your HTTP request.
Represented by required
property in yaml
If one of your dependencies is required to be up and running when your application is booting then it’s enough to set up the required: true
property in the yaml file.
If your application can’t locate the required dependency during boot time, it will throw an exception and the Spring Context will fail to set up. In other words, your application won’t be able to start if the required dependency is not registered in Zookeeper.
You can read more about Spring Cloud Zookeeper Presence Checker in the following sections.
You can provide a colon separated path to the JAR containing stubs of the dependency. Example
stubs: org.springframework:foo:stubs
means that the stubs for a particular dependency can be found under:

- group id: org.springframework
- artifact id: foo
- classifier: stubs (this is the default value)

This is actually equal to
stubs: org.springframework:foo
since stubs
is the default classifier.
There are a number of properties you can set to enable or disable parts of the Zookeeper Dependencies functionality:

- spring.cloud.zookeeper.dependencies - if you don’t set this property you won’t benefit from Zookeeper Dependencies.
- spring.cloud.zookeeper.dependency.ribbon.enabled (enabled by default) - Ribbon requires either explicit global configuration or a particular one for a dependency. By turning on this property,
runtime load balancing strategy resolution is possible and you can profit from the loadBalancerType
section of the Zookeeper Dependencies. The configuration that needs this property
has an implementation of LoadBalancerClient
that delegates to the ILoadBalancer
presented in the next bullet.
- spring.cloud.zookeeper.dependency.ribbon.loadbalancer (enabled by default) - thanks to this property the custom ILoadBalancer
knows that the part of the URI passed to Ribbon might
actually be an alias that has to be resolved to a proper path in Zookeeper. Without this property you won’t be able to register applications under nested paths.
- spring.cloud.zookeeper.dependency.headers.enabled (enabled by default) - this property registers a RibbonClient
that automatically appends appropriate headers and content
types with the version, as presented in the Dependency configuration. Without this, those two parameters will not be operational.
- spring.cloud.zookeeper.dependency.resttemplate.enabled (enabled by default) - when enabled, this modifies the request headers of a @LoadBalanced
-annotated RestTemplate
so that it passes
the headers and content type with the version set in the Dependency configuration. Without this, those two parameters will not be operational.

The Dependency Watcher mechanism allows you to register listeners to your dependencies. The functionality is in fact an implementation of the Observer
pattern. When a dependency changes
its state (UP or DOWN), some custom logic can be applied.
Spring Cloud Zookeeper Dependencies functionality needs to be enabled to profit from Dependency Watcher mechanism.
In order to register a listener you have to implement an interface org.springframework.cloud.zookeeper.discovery.watcher.DependencyWatcherListener
and register it as a bean.
The interface gives you one method:
void stateChanged(String dependencyName, DependencyState newState);
If you want to register a listener for a particular dependency then the dependencyName
would be the discriminator for your concrete implementation. newState
will provide you with information
whether your dependency has changed to CONNECTED
or DISCONNECTED
.
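As a sketch, a listener that only reacts to a hypothetical newsletter dependency could look like the following; the bean name and log message are made up:

@Component
class NewsletterDependencyListener implements DependencyWatcherListener {

    private static final Logger log = LoggerFactory.getLogger(NewsletterDependencyListener.class);

    @Override
    public void stateChanged(String dependencyName, DependencyState newState) {
        // React only to the dependency we care about; newState is CONNECTED or DISCONNECTED
        if ("newsletter".equals(dependencyName)) {
            log.info("Dependency [{}] changed state to [{}]", dependencyName, newState);
        }
    }
}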
Bound with the Dependency Watcher is the functionality called Presence Checker. It allows you to provide custom behaviour upon booting of your application, reacting according to the state of your dependencies.
The default implementation of the abstract org.springframework.cloud.zookeeper.discovery.watcher.presence.DependencyPresenceOnStartupVerifier
class is the
org.springframework.cloud.zookeeper.discovery.watcher.presence.DefaultDependencyPresenceOnStartupVerifier
which works in the following way.
- If your dependency is required and it’s not in Zookeeper, then upon booting your application will throw an exception and shut down.
- If your dependency is not required, the org.springframework.cloud.zookeeper.discovery.watcher.presence.LogMissingDependencyChecker
will log that the application is missing at WARN level.

The functionality can be overridden, since the DefaultDependencyPresenceOnStartupVerifier
is registered only when there is no bean of type DependencyPresenceOnStartupVerifier
.
Zookeeper provides a hierarchical namespace that allows clients to store arbitrary data, such as configuration data. Spring Cloud Zookeeper Config is an alternative to the Config Server and Client. Configuration is loaded into the Spring Environment during the special "bootstrap" phase. Configuration is stored in the /config
namespace by default. Multiple PropertySource
instances are created based on the application’s name and the active profiles, mimicking the Spring Cloud Config order of resolving properties. For example, an application with the name "testApp" and with the "dev" profile will have the following property sources created:
config/testApp,dev config/testApp config/application,dev config/application
The most specific property source is at the top, with the least specific at the bottom. Properties in the config/application
namespace are applicable to all applications using Zookeeper for configuration. Properties in the config/testApp
namespace are only available to the instances of the service named "testApp".
Configuration is currently read on startup of the application. Sending an HTTP POST to /refresh
will cause the configuration to be reloaded. Watching the configuration namespace (which Zookeeper supports) is not currently implemented, but will be a future addition to this project.
Including a dependency on org.springframework.cloud:spring-cloud-starter-zookeeper-config
will enable auto-configuration that will setup Spring Cloud Zookeeper Config.
Zookeeper Config may be customized using the following properties:
bootstrap.yml.
spring:
  cloud:
    zookeeper:
      config:
        enabled: true
        root: configuration
        defaultContext: apps
        profileSeparator: '::'
- enabled: setting this value to "false" disables Zookeeper Config
- root: sets the base namespace for configuration values
- defaultContext: sets the name used by all applications
- profileSeparator: sets the value of the separator used to separate the profile name in property sources with profiles

You can add authentication information for Zookeeper ACLs by calling the addAuthInfo method of a CuratorFramework bean. One way to accomplish this is by providing your own CuratorFramework bean:
@BootstrapConfiguration
public class CustomCuratorFrameworkConfig {

    @Bean
    public CuratorFramework curatorFramework() {
        // CuratorFramework is an interface, so it is built via CuratorFrameworkFactory;
        // the connect string and retry policy below are illustrative values.
        CuratorFramework curator = CuratorFrameworkFactory.builder()
                .connectString("localhost:2181")
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .build();
        curator.addAuthInfo("digest", "user:password".getBytes());
        return curator;
    }
}
Consult the ZookeeperAutoConfiguration class to see how the CuratorFramework bean is configured by default.
Alternatively, you can add your credentials from a class that depends on the existing CuratorFramework bean:
@BootstrapConfiguration
public class DefaultCuratorFrameworkConfig {

    public DefaultCuratorFrameworkConfig(CuratorFramework curator) {
        curator.addAuthInfo("digest", "user:password".getBytes());
    }
}
This must occur during the bootstrapping phase. You can register configuration classes to run
during this phase by annotating them with @BootstrapConfiguration
and including them in a
comma-separated list set as the value of the property
org.springframework.cloud.bootstrap.BootstrapConfiguration
in the file
resources/META-INF/spring.factories
:
resources/META-INF/spring.factories.
org.springframework.cloud.bootstrap.BootstrapConfiguration=\
my.project.CustomCuratorFrameworkConfig,\
my.project.DefaultCuratorFrameworkConfig
Spring Cloud Security offers a set of primitives for building secure applications and services with minimum fuss. A declarative model which can be heavily configured externally (or centrally) lends itself to the implementation of large systems of co-operating, remote components, usually with a central identity management service. It is also extremely easy to use in a service platform like Cloud Foundry. Building on Spring Boot and Spring Security OAuth2 we can quickly create systems that implement common patterns like single sign on, token relay and token exchange.
![]() | Note |
---|---|
Spring Cloud is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation or if you find an error, please find the source code and issue trackers in the project at github. |
Here’s a Spring Cloud "Hello World" app with HTTP Basic authentication and a single user account:
app.groovy.
@Grab('spring-boot-starter-security')
@Controller
class Application {

  @RequestMapping('/')
  String home() {
    'Hello World'
  }

}
You can run it with spring run app.groovy
and watch the logs for the password (username is "user"). So far this is just the default for a Spring Boot app.
Here’s a Spring Cloud app with OAuth2 SSO:
app.groovy.
@Controller
@EnableOAuth2Sso
class Application {

  @RequestMapping('/')
  String home() {
    'Hello World'
  }

}
Spot the difference? This app will actually behave exactly the same as the previous one, because it doesn’t know its OAuth2 credentials yet.
You can register an app in github quite easily, so try that if you want a production app on your own domain. If you are happy to test on localhost:8080, then set up these properties in your application configuration:
application.yml.
security:
  oauth2:
    client:
      clientId: bd1c0a783ccdd1c9b9e4
      clientSecret: 1a9030fbca47a5b2c28e92f19050bb77824b5ad1
      accessTokenUri: https://github.com/login/oauth/access_token
      userAuthorizationUri: https://github.com/login/oauth/authorize
      clientAuthenticationScheme: form
    resource:
      userInfoUri: https://api.github.com/user
      preferTokenInfo: false
Run the app above and it will redirect to github for authorization. If you are already signed into github you won’t even notice that it has authenticated. These credentials will only work if your app is running on port 8080.
To limit the scope that the client asks for when it obtains an access token
you can set security.oauth2.client.scope
(comma separated or an array in YAML). By
default the scope is empty and it is up to the Authorization Server to
decide what the defaults should be, usually depending on the settings in
the client registration that it holds.
![]() | Note |
---|---|
The examples above are all Groovy scripts. If you want to write the same code in Java (or Groovy) you need to add Spring Security OAuth2 to the classpath (e.g. see the sample here). |
You want to protect an API resource with an OAuth2 token? Here’s a simple example (paired with the client above):
app.groovy.
@Grab('spring-cloud-starter-security')
@RestController
@EnableResourceServer
class Application {

  @RequestMapping('/')
  def home() {
    [message: 'Hello World']
  }

}
and
application.yml.
security:
  oauth2:
    resource:
      userInfoUri: https://api.github.com/user
      preferTokenInfo: false
![]() | Note |
---|---|
All of the OAuth2 SSO and resource server features moved to Spring Boot in version 1.3. You can find documentation in the Spring Boot user guide. |
A Token Relay is where an OAuth2 consumer acts as a Client and forwards the incoming token to outgoing resource requests. The consumer can be a pure Client (like an SSO application) or a Resource Server.
If your app is a user facing OAuth2 client (i.e. has declared
@EnableOAuth2Sso
or @EnableOAuth2Client
) then it has an
OAuth2ClientContext
in request scope from Spring Boot. You can
create your own OAuth2RestTemplate
from this context and an
autowired OAuth2ProtectedResourceDetails
, and then the context will
always forward the access token downstream, also refreshing the access
token automatically if it expires. (These are features of Spring
Security and Spring Boot.)
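A minimal sketch of such a bean, assuming both beans mentioned above are available in your client application (the configuration class and bean names are illustrative):

@Configuration
public class DownstreamClientConfiguration {

    // The request-scoped OAuth2ClientContext and the resource details come from the OAuth2
    // client auto-configuration; the resulting template relays the current access token.
    @Bean
    public OAuth2RestTemplate downstreamRestTemplate(OAuth2ProtectedResourceDetails details,
            OAuth2ClientContext context) {
        return new OAuth2RestTemplate(details, context);
    }
}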
![]() | Note |
---|---|
Spring Boot (1.4.1) does not create an
|
If your app also has a
Spring
Cloud Zuul embedded reverse proxy (using @EnableZuulProxy
) then you
can ask it to forward OAuth2 access tokens downstream to the services
it is proxying. Thus the SSO app above can be enhanced simply like
this:
app.groovy.
@Controller @EnableOAuth2Sso @EnableZuulProxy class Application { }
and it will (in addition to logging the user in and grabbing a token)
pass the authentication token downstream to the /proxy/*
services. If those services are implemented with
@EnableResourceServer
then they will get a valid token in the
correct header.
How does it work? The @EnableOAuth2Sso
annotation pulls in
spring-cloud-starter-security
(which you could do manually in a
traditional app), and that in turn triggers some autoconfiguration for
a ZuulFilter
, which itself is activated because Zuul is on the
classpath (via @EnableZuulProxy
). The
filter
just extracts an access token from the currently authenticated user,
and puts it in a request header for the downstream requests.
If your app has @EnableResourceServer
you might want to relay the
incoming token downstream to other services. If you use a
RestTemplate
to contact the downstream services then this is just a
matter of how to create the template with the right context.
If your service uses UserInfoTokenServices
to authenticate incoming
tokens (i.e. it is using the security.oauth2.user-info-uri
configuration), then you can simply create an OAuth2RestTemplate
using an autowired OAuth2ClientContext
(it will be populated by the
authentication process before it hits the backend code). Equivalently
(with Spring Boot 1.4), you could inject a
UserInfoRestTemplateFactory
and grab its OAuth2RestTemplate
in
your configuration. For example:
MyConfiguration.java.
@Bean
public OAuth2RestTemplate restTemplate(UserInfoRestTemplateFactory factory) {
    return factory.getUserInfoRestTemplate();
}
This rest template will then have the same OAuth2ClientContext
(request-scoped) that is used by the authentication filter, so you can
use it to send requests with the same access token.
If your app is not using UserInfoTokenServices
but is still a client
(i.e. it declares @EnableOAuth2Client
or @EnableOAuth2Sso
), then
with Spring Security Cloud any OAuth2RestOperations
that the user
creates from an @Autowired
@OAuth2Context
will also forward
tokens. This feature is implemented by default as an MVC handler
interceptor, so it only works in Spring MVC. If you are not using MVC
you could use a custom filter or AOP interceptor wrapping an
AccessTokenContextRelay
to provide the same feature.
Here’s a basic example showing the use of an autowired rest template created elsewhere ("foo.com" is a Resource Server accepting the same tokens as the surrounding app):
MyController.java.
@Autowired
private OAuth2RestOperations restTemplate;

@RequestMapping("/relay")
public String relay() {
    ResponseEntity<String> response =
        restTemplate.getForEntity("https://foo.com/bar", String.class);
    return "Success! (" + response.getBody() + ")";
}
If you don’t want to forward tokens (and that is a valid
choice, since you might want to act as yourself, rather than the
client that sent you the token), then you only need to create your own
OAuth2Context
instead of autowiring the default one.
Feign clients will also pick up an interceptor that uses the
OAuth2ClientContext
if it is available, so they should also do a
token relay anywhere where a RestTemplate
would.
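For illustration, a hypothetical Feign client that would relay the incoming token without any extra configuration (the service name and path are made up):

@FeignClient("downstream")
public interface DownstreamClient {

    // The OAuth2 interceptor adds the Authorization header before this call goes out
    @RequestMapping(method = RequestMethod.GET, value = "/bar")
    String bar();
}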
You can control the authorization behaviour downstream of an
@EnableZuulProxy
through the proxy.auth.*
settings. Example:
application.yml.
proxy:
  auth:
    routes:
      customers: oauth2
      stores: passthru
      recommendations: none
In this example the "customers" service gets an OAuth2 token relay, the "stores" service gets a passthrough (the authorization header is just passed downstream), and the "recommendations" service has its authorization header removed. The default behaviour is to do a token relay if there is a token available, and passthru otherwise.
See ProxyAuthenticationProperties for full details.
Spring Cloud for Cloudfoundry makes it easy to run Spring Cloud apps in Cloud Foundry (the Platform as a Service). Cloud Foundry has the notion of a "service", which is middleware that you "bind" to an app, essentially providing it with an environment variable containing credentials (e.g. the location and username to use for the service).
The spring-cloud-cloudfoundry-web
project provides basic support for
some enhanced features of webapps in Cloud Foundry: binding
automatically to single-sign-on services and optionally enabling
sticky routing for discovery.
The spring-cloud-cloudfoundry-discovery
project provides an
implementation of Spring Cloud Commons DiscoveryClient
so you can
@EnableDiscoveryClient
and provide your credentials as
spring.cloud.cloudfoundry.discovery.[email,password]
and then you
can use the DiscoveryClient
directly or via a LoadBalancerClient
(also *.url
if you are not connecting to
Pivotal Web Services).
The first time you use it the discovery client might be slow owing to the fact that it has to get an access token from Cloud Foundry.
Here’s a Spring Cloud app with Cloud Foundry discovery:
app.groovy.
@Grab('org.springframework.cloud:spring-cloud-cloudfoundry')
@RestController
@EnableDiscoveryClient
class Application {

  @Autowired
  DiscoveryClient client

  @RequestMapping('/')
  String home() {
    'Hello from ' + client.getLocalServiceInstance()
  }

}
If you run it without any service bindings:
$ spring jar app.jar app.groovy
$ cf push -p app.jar
It will show its app name in the home page.
The DiscoveryClient
can list all the apps in a space, according to
the credentials it is authenticated with, where the space defaults to
the one the client is running in (if any). If neither org nor space
are configured, they default per the user’s profile in Cloud Foundry.
![]() | Note |
---|---|
All of the OAuth2 SSO and resource server features moved to Spring Boot in version 1.3. You can find documentation in the Spring Boot user guide. |
This project provides automatic binding from CloudFoundry service
credentials to the Spring Boot features. If you have a CloudFoundry
service called "sso", for instance, with credentials containing
"client_id", "client_secret" and "auth_domain", it will bind
automatically to the Spring OAuth2 client that you enable with
@EnableOAuth2Sso
(from Spring Boot). The name of the service can be
parameterized using spring.oauth2.sso.serviceId
.
Documentation Authors: Adam Dudczak, Mathias Düsterhöft, Marcin Grzejszczak, Dennis Kieselhorst, Jakub Kubryński, Karol Lassak, Olga Maciaszek-Sharma, Mariusz Smykuła, Dave Syer
Dalston.SR5
What you always need is confidence in pushing new features into a new application or service in a distributed system. This project provides support for Consumer Driven Contracts and service schemas in Spring applications, covering a range of options for writing tests, publishing them as assets, asserting that a contract is kept by producers and consumers, for HTTP and message-based interactions.
![]() | Tip |
---|---|
The Accurest project was initially started by Marcin Grzejszczak and Jakub Kubrynski (codearte.io) |
To make a long story short, Spring Cloud Contract Verifier is a tool that enables Consumer Driven Contract (CDC) development of JVM-based applications. It is shipped with a Contract Definition Language (DSL). Contract definitions are used to produce the following resources:
Spring Cloud Contract Verifier moves TDD to the level of software architecture.
Let us assume that we have a system comprising multiple microservices:
If we wanted to test whether the application in the top left corner can communicate with other services, we could do one of two things:
Both have their advantages but also a lot of disadvantages. Let’s focus on the latter.
Deploy all microservices and perform end to end tests
Advantages:
Disadvantages:
Mock other microservices in unit / integration tests
Advantages:
Disadvantages:
To solve the aforementioned issues Spring Cloud Contract Verifier with Stub Runner were created. Their main idea is to give you very fast feedback, without the need to set up the whole world of microservices. If you work on stubs then the only applications you need are those that your application is using directly.
Spring Cloud Contract Verifier gives you the certainty that the stubs that you’re using were created by the service that you’re calling. Also if you can use them it means that they were tested against the producer’s side. In other words - you can trust those stubs.
The main purposes of Spring Cloud Contract Verifier with Stub Runner are:
![]() | Important |
---|---|
Spring Cloud Contract Verifier’s purpose is NOT to start writing business features in the contracts. Let’s assume that we have a business use case of fraud check. If a user can be a fraud for 100 different reasons, we would assume that you would create 2 contracts. One for the positive and one for the negative fraud case. Contract tests are used to test contracts between applications and not to simulate full behaviour. |
As consumers we need to define what exactly we want to achieve. We need to formulate our expectations. That’s why we write the following contract.
Let’s assume that we’d like to send the request containing the id of the client and the amount he wants to borrow from us. We’d like to send it to the /fraudcheck url via the PUT method.
package contracts

org.springframework.cloud.contract.spec.Contract.make {
	request { // (1)
		method 'PUT' // (2)
		url '/fraudcheck' // (3)
		body([ // (4)
			   "client.id": $(regex('[0-9]{10}')),
			   loanAmount: 99999
		])
		headers { // (5)
			contentType('application/json')
		}
	}
	response { // (6)
		status 200 // (7)
		body([ // (8)
			   fraudCheckStatus: "FRAUD",
			   "rejection.reason": "Amount too high"
		])
		headers { // (9)
			contentType('application/json')
		}
	}
}

/*
From the Consumer perspective, when shooting a request in the integration test:

(1) - If the consumer sends a request
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `clientId` that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/json`
(6) - then the response will be sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` equal to `application/json`

From the Producer perspective, in the autogenerated producer-side test:

(1) - A request will be sent to the producer
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `clientId` that will have a generated value that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/json`
(6) - then the test will assert if the response has been sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` matching `application/json.*`
*/
Spring Cloud Contract will generate stubs, which you can use during client side testing. You will have a WireMock instance / Messaging route up and running that simulates the service Y. You would like to feed that instance with a proper stub definition.
At some point in time you need to send a request to the Fraud Detection service.
ResponseEntity<FraudServiceResponse> response = restTemplate.exchange(
        "http://localhost:" + port + "/fraudcheck", HttpMethod.PUT,
        new HttpEntity<>(request, httpHeaders), FraudServiceResponse.class);
Annotate your test class with @AutoConfigureStubRunner
. In the annotation provide the group id and artifact id for the Stub Runner to download stubs of your collaborators.
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"com.example:http-server-dsl:+:stubs:6565"}, workOffline = true)
@DirtiesContext
public class LoanApplicationServiceTests {
After that, during the tests Spring Cloud Contract will automatically find the stubs (simulating the real service) in Maven repository and expose them on configured (or random) port.
Being service Y, since you are developing your stub, you need to be sure that it actually resembles your concrete implementation. You can’t have a situation where your stub acts in one way and your application on production behaves in a different way.
That’s why acceptance tests are generated from the provided stub, ensuring that your application behaves in the same way as you define in your stub.
The autogenerated test would look like this:
@Test
public void validate_shouldMarkClientAsFraud() throws Exception {
    // given:
    MockMvcRequestSpecification request = given()
            .header("Content-Type", "application/vnd.fraud.v1+json")
            .body("{\"client.id\":\"1234567890\",\"loanAmount\":99999}");

    // when:
    ResponseOptions response = given().spec(request)
            .put("/fraudcheck");

    // then:
    assertThat(response.statusCode()).isEqualTo(200);
    assertThat(response.header("Content-Type")).matches("application/vnd.fraud.v1.json.*");
    // and:
    DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
    assertThatJson(parsedJson).field("['fraudCheckStatus']").matches("[A-Z]{5}");
    assertThatJson(parsedJson).field("['rejection.reason']").isEqualTo("Amount too high");
}
Let’s take an example of Fraud Detection and Loan Issuance process. The business scenario is such that we want to issue loans to people but don’t want them to steal the money from us. The current implementation of our system grants loans to everybody.
Let’s assume that the Loan Issuance
is a client to the
Fraud Detection
server. In the current sprint we are required to develop a new feature - if a client wants to borrow too much money then we mark him as fraud.
Technical remark - Fraud Detection will have artifact id http-server
, Loan Issuance http-client
and both have group id com.example
.
Social remark - both client and server development teams need to communicate directly and discuss changes while going through the process. CDC is all about communication.
The server side code is available here and the client side code here.
![]() | Tip |
---|---|
In this case the ownership of the contracts lies with the producer. It means that physically all the contracts are present in the producer’s repository |
If using the SNAPSHOT / Milestone / Release Candidate versions please add the following section to your
Maven.
<repositories>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
    <repository>
        <id>spring-releases</id>
        <name>Spring Releases</name>
        <url>https://repo.spring.io/release</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>
<pluginRepositories>
    <pluginRepository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </pluginRepository>
    <pluginRepository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </pluginRepository>
    <pluginRepository>
        <id>spring-releases</id>
        <name>Spring Releases</name>
        <url>https://repo.spring.io/release</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </pluginRepository>
</pluginRepositories>
Gradle.
repositories {
    mavenCentral()
    mavenLocal()
    maven { url "http://repo.spring.io/snapshot" }
    maven { url "http://repo.spring.io/milestone" }
    maven { url "http://repo.spring.io/release" }
}
As a developer of the Loan Issuance service (a consumer of the Fraud Detection server):
start doing TDD by writing a test to your feature
@Test
public void shouldBeRejectedDueToAbnormalLoanAmount() {
    // given:
    LoanApplication application = new LoanApplication(new Client("1234567890"), 99999);
    // when:
    LoanApplicationResult loanApplication = service.loanApplication(application);
    // then:
    assertThat(loanApplication.getLoanApplicationStatus())
            .isEqualTo(LoanApplicationStatus.LOAN_APPLICATION_REJECTED);
    assertThat(loanApplication.getRejectionReason()).isEqualTo("Amount too high");
}
We’ve just written a test of our new feature. If a loan application for a big amount is received we should reject that loan application with some description.
write the missing implementation
At some point in time you need to send a request to the Fraud Detection service. Let’s assume that we’d like to send the request containing the id of the client and the amount he wants to borrow from us. We’d like to send it to the /fraudcheck
url via the PUT
method.
ResponseEntity<FraudServiceResponse> response = restTemplate.exchange(
        "http://localhost:" + port + "/fraudcheck", HttpMethod.PUT,
        new HttpEntity<>(request, httpHeaders), FraudServiceResponse.class);
For simplicity we’ve hardcoded the port of the Fraud Detection service at 8080
and our application is running on 8090
.
If we’d start the written test it would obviously break since we have no service running on port 8080
.
clone the Fraud Detection service repository locally
We’ll start playing around with the server side contract. That’s why we need to first clone it.
git clone https://your-git-server.com/server-side.git local-http-server-repo
define the contract locally in the repo of Fraud Detection service
As consumers we need to define what exactly we want to achieve. We need to formulate our expectations. That’s why we write the following contract.
![]() | Important |
---|---|
We’re placing the contract under |
package contracts

org.springframework.cloud.contract.spec.Contract.make {
	request { // (1)
		method 'PUT' // (2)
		url '/fraudcheck' // (3)
		body([ // (4)
			   "client.id": $(regex('[0-9]{10}')),
			   loanAmount: 99999
		])
		headers { // (5)
			contentType('application/json')
		}
	}
	response { // (6)
		status 200 // (7)
		body([ // (8)
			   fraudCheckStatus: "FRAUD",
			   "rejection.reason": "Amount too high"
		])
		headers { // (9)
			contentType('application/json')
		}
	}
}

/*
From the Consumer perspective, when shooting a request in the integration test:

(1) - If the consumer sends a request
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `clientId` that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/json`
(6) - then the response will be sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` equal to `application/json`

From the Producer perspective, in the autogenerated producer-side test:

(1) - A request will be sent to the producer
(2) - With the "PUT" method
(3) - to the URL "/fraudcheck"
(4) - with the JSON body that
 * has a field `clientId` that will have a generated value that matches a regular expression `[0-9]{10}`
 * has a field `loanAmount` that is equal to `99999`
(5) - with header `Content-Type` equal to `application/json`
(6) - then the test will assert if the response has been sent with
(7) - status equal `200`
(8) - and JSON body equal to
 { "fraudCheckStatus": "FRAUD", "rejectionReason": "Amount too high" }
(9) - with header `Content-Type` matching `application/json.*`
*/
The Contract is written using a statically typed Groovy DSL. You might be wondering what those
value(client(…), server(…))
parts are. By using this notation Spring Cloud Contract allows you to
define parts of a JSON / URL / etc. which are dynamic. In case of an identifier or a timestamp you
don’t want to hardcode a value. You want to allow some different ranges of values. That’s why for
the consumer side you can set regular expressions matching those values. You can provide the body
either by means of a map notation or String with interpolations.
Consult the docs
for more information. We highly recommend using the map notation!
![]() | Tip |
---|---|
It’s really important that you understand the map notation to set up contracts. Please read the Groovy docs regarding JSON to learn how to structure the body properly. |
The aforementioned contract is an agreement between the two sides:

- if an HTTP request is sent with
  - the PUT method on the /fraudcheck endpoint,
  - a JSON body in which client.id matches the regular expression [0-9]{10} and loanAmount is equal to 99999,
  - and a Content-Type header equal to application/json,
- then an HTTP response is sent back to the consumer with
  - status 200,
  - a JSON body with the fraudCheckStatus field containing the value FRAUD and the rejectionReason field having the value Amount too high,
  - and a Content-Type header with the value application/json.
Once we’re ready to check the API in practice in the integration tests, we just need to install the stubs locally.
add the Spring Cloud Contract Verifier plugin
We can add either the Maven or the Gradle plugin; in this example we’ll show how to add the Maven one. First we need to add the Spring Cloud Contract BOM:
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>${spring-cloud-dependencies.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>
Next, add the Spring Cloud Contract Verifier Maven plugin:
<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>${spring-cloud-contract.version}</version>
	<extensions>true</extensions>
	<configuration>
		<packageWithBaseClasses>com.example.fraud</packageWithBaseClasses>
	</configuration>
</plugin>
Since the plugin has been added, we get the Spring Cloud Contract Verifier features which, from the provided contracts, generate and run tests and produce and install stubs.
We don’t want to generate tests, since we, as consumers, only want to play with the stubs. That’s why we need to skip test generation and execution. When we execute:
cd local-http-server-repo
./mvnw clean install -DskipTests
In the logs we’ll see something like this:
[INFO] --- spring-cloud-contract-maven-plugin:1.0.0.BUILD-SNAPSHOT:generateStubs (default-generateStubs) @ http-server ---
[INFO] Building jar: /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar
[INFO]
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ http-server ---
[INFO] Building jar: /some/path/http-server/target/http-server-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.5.4.BUILD-SNAPSHOT:repackage (default) @ http-server ---
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ http-server ---
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT.jar
[INFO] Installing /some/path/http-server/pom.xml to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT.pom
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar
This line is extremely important
[INFO] Installing /some/path/http-server/target/http-server-0.0.1-SNAPSHOT-stubs.jar to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar
It confirms that the stubs of the http-server have been installed in the local Maven repository.
run the integration tests
To benefit from the Spring Cloud Contract Stub Runner feature of automatic stub downloading, you have to do the following in the consumer-side project (the Loan Application service).
Add the Spring Cloud Contract BOM:
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>${spring-cloud-dependencies.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>
Add the dependency on Spring Cloud Contract Stub Runner:
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>
Annotate your test class with @AutoConfigureStubRunner. In the annotation, provide the group id and artifact id for Stub Runner to download the stubs of your collaborators. Also provide the workOffline switch, since you’re playing with the collaborators offline (optional step).
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"com.example:http-server:+:stubs:8080"}, workOffline = true)
@DirtiesContext
public class LoanApplicationServiceTests {
Now, if you run your tests, you’ll see something like this:
2016-07-19 14:22:25.403 INFO 41050 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Desired version is + - will try to resolve the latest version
2016-07-19 14:22:25.438 INFO 41050 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Resolved version is 0.0.1-SNAPSHOT
2016-07-19 14:22:25.439 INFO 41050 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Resolving artifact com.example:http-server:jar:stubs:0.0.1-SNAPSHOT using remote repositories []
2016-07-19 14:22:25.451 INFO 41050 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Resolved artifact com.example:http-server:jar:stubs:0.0.1-SNAPSHOT to /path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar
2016-07-19 14:22:25.465 INFO 41050 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Unpacking stub from JAR [URI: file:/path/to/your/.m2/repository/com/example/http-server/0.0.1-SNAPSHOT/http-server-0.0.1-SNAPSHOT-stubs.jar]
2016-07-19 14:22:25.475 INFO 41050 --- [ main] o.s.c.c.stubrunner.AetherStubDownloader : Unpacked file to [/var/folders/0p/xwq47sq106x1_g3dtv6qfm940000gq/T/contracts100276532569594265]
2016-07-19 14:22:27.737 INFO 41050 --- [ main] o.s.c.c.stubrunner.StubRunnerExecutor : All stubs are now running RunningStubs [namesAndPorts={com.example:http-server:0.0.1-SNAPSHOT:stubs=8080}]
This means that Stub Runner has found your stubs and started a server for the application with group id com.example and artifact id http-server, using version 0.0.1-SNAPSHOT of the stubs with the stubs classifier, on port 8080.
file a PR
What we have done so far is an iterative process. We can play around with the contract, install it locally, and work on the consumer side until we’re happy with the contract. Once we’re satisfied with the results and the test passes, we publish a pull request to the server side. For now, the consumer-side work is done.
As a developer of the Fraud Detection server (the server side of the Loan Application service):
initial implementation
As a reminder, here you can see the initial implementation:
@RequestMapping(value = "/fraudcheck", method = PUT)
public FraudCheckResult fraudCheck(@RequestBody FraudCheck fraudCheck) {
	return new FraudCheckResult(FraudCheckStatus.OK, NO_REASON);
}
take over the PR
git checkout -b contract-change-pr master
git pull https://your-git-server.com/server-side-fork.git contract-change-pr
You have to add the dependencies needed by the autogenerated tests:
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>
In the configuration of the Maven plugin we passed the packageWithBaseClasses property:
<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>${spring-cloud-contract.version}</version>
	<extensions>true</extensions>
	<configuration>
		<packageWithBaseClasses>com.example.fraud</packageWithBaseClasses>
	</configuration>
</plugin>
![]() | Important |
---|---|
We’ve decided to use the "convention based" naming by setting the packageWithBaseClasses property. With that convention the last folder names of a contract’s location are combined and suffixed with Base to form the name of the base test class, and the class is expected in the com.example.fraud package. Since our contract lives under src/test/resources/contracts/fraud, the generated tests will expect a FraudBase class. |
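For illustration, a minimal sketch of such a base class is shown below. The FraudDetectionController name is a hypothetical stand-in for whatever controller serves the /fraudcheck endpoint, and the exact RestAssuredMockMvc import depends on the Rest Assured version bundled with your verifier version (older versions use the com.jayway.restassured package).

package com.example.fraud;

import io.restassured.module.mockmvc.RestAssuredMockMvc;
import org.junit.Before;

// A sketch only: FraudDetectionController is a hypothetical name for the
// controller that handles /fraudcheck in your producer application.
public abstract class FraudBase {

	@Before
	public void setup() {
		// Register the controller under test so the generated tests can fire
		// the requests described in the contracts against it through MockMvc.
		RestAssuredMockMvc.standaloneSetup(new FraudDetectionController());
	}
}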
That’s because all the generated tests will extend that class. Over there you can set up your Spring Context or wha