1. Introduction

This section describes the rationale behind the opinionated pipeline. We go through each deployment step and describe it in detail.


You do not need to use all the pieces of Spring Cloud Pipelines. You can (and should) gradually migrate your applications to use those pieces of Spring Cloud Pipelines that you think best suit your needs.

1.1 Five-second Introduction

Spring Cloud Pipelines provides scripts, configuration, and convention for automated deployment pipeline creation for Jenkins and Concourse with Cloud Foundry or Kubernetes. We support JVM languages, PHP, and NodeJS. Since SC-Pipelines uses bash scripts, you can use it with whatever automation server you have.

1.2 Five-minute Introduction

Spring Cloud Pipelines comes with bash scripts (available under common/src/main/bash) that represent the logic of all steps in our opinionated deployment pipeline. Since we believe in convention over configuration, for the supported frameworks and languages, we assume that projects follow certain conventions of task naming, profile setting, and so on. That way, if you create a new application that follows those conventions, the deployment pipeline works out of the box. Since no single pipeline can serve the purposes of all teams in a company, we believe that minor deployment pipeline tweaking should be possible. That is why we allow the use of the sc-pipelines.yml descriptor, which provides some customization.

From the pipeline visualization perspective, we have prepared templates for Concourse and Jenkins (through the Jenkins Job DSL and Jenkinsfile). That means you can reuse them immediately to visualize a deployment pipeline. If you use some other tool for continuous delivery, you can set the visualization yourself and reference the bash scripts for each step. In other words, Spring Cloud Pipelines can be reused with any continuous delivery tool.

1.2.1 How to Use It

This repository can be treated as a template for your pipeline. We provide some opinionated implementation that you can alter to suit your needs. To use it, we recommend downloading the Spring Cloud Pipelines repository as a zip file, unzipping it in a directory, initializing a Git project in that directory, and then modifying the project to suit your needs. The following bash script shows how to do so:

$ # pass the branch (e.g. master) or a particular tag (e.g. v1.0.0.RELEASE)
$ SC_PIPELINES_RELEASE=master
$ curl -LOk https://github.com/spring-cloud/spring-cloud-pipelines/archive/${SC_PIPELINES_RELEASE}.zip
$ unzip ${SC_PIPELINES_RELEASE}.zip
$ cd spring-cloud-pipelines-${SC_PIPELINES_RELEASE}
$ git init
$ # modify the pipelines to suit your needs
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin ${YOUR_REPOSITORY_URL}
$ git push origin master

To keep your repository aligned with the changes in the upstream repository, you can also clone the repository. To minimize merge conflicts, we recommend using the custom folder hooks to override functions instead of modifying the scripts themselves.
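The clone-based approach boils down to standard git remote handling. The following is an illustrative sketch only: two throwaway local repositories stand in for the real GitHub remotes, and the branch name and paths are assumptions for the demonstration.

```shell
#!/usr/bin/env bash
# Illustrative sketch of keeping a clone aligned with an upstream repository.
# Local throwaway repositories stand in for the real GitHub remotes.
set -e
work="$(mktemp -d)"

# Stand-in for the upstream Spring Cloud Pipelines repository
git -c init.defaultBranch=master init -q "${work}/upstream"
(
    cd "${work}/upstream"
    echo "echo building..." > pipeline.sh
    git add pipeline.sh
    git -c user.name=demo -c user.email=demo@example.com commit -qm "upstream work"
)

# Your copy: clone it, then periodically pull in upstream changes
git clone -q "${work}/upstream" "${work}/mine"
(
    cd "${work}/mine"
    # in real life, the GitHub URL of spring-cloud/spring-cloud-pipelines
    git remote add upstream "${work}/upstream"
    git fetch -q upstream
    git merge -q upstream/master   # a no-op here, since nothing diverged yet
)
```

With this layout, your own changes live on your fork while `git fetch upstream && git merge` brings in upstream fixes; keeping overrides in the custom folder hooks keeps those merges small.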

1.2.2 How It Works

As the following image shows, Spring Cloud Pipelines contains logic to generate a pipeline and the runtime to execute pipeline steps.

Figure 1.1. How Spring Cloud Pipelines works


Once a pipeline is created (for example, by using the Jenkins Job DSL or from a Concourse templated pipeline), when the jobs are run, they clone or download the Spring Cloud Pipelines code to run each step. Those steps run functions that are defined in the commons module of Spring Cloud Pipelines.

Spring Cloud Pipelines performs steps to guess what kind of project your repository contains (for example, JVM or PHP) and what framework it uses (Maven or Gradle), and it can deploy your application to a cloud (Cloud Foundry or Kubernetes). You can read about how this works in the Chapter 2, How the Scripts Work section.
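As a rough illustration of that guessing step, a detection function can branch on the build files present in the repository. This is a simplified sketch, not the actual Spring Cloud Pipelines code (the real logic lives in the scripts under common/src/main/bash), and the returned labels are made up for the example:

```shell
#!/usr/bin/env bash
# Simplified illustration of project-type detection. The marker files match
# the conventions described in the text (Maven/Gradle wrappers, Composer, NPM).
detect_project_type() {
    local project_dir="${1}"
    if [[ -f "${project_dir}/mvnw" ]]; then
        echo "jvm-maven"
    elif [[ -f "${project_dir}/gradlew" ]]; then
        echo "jvm-gradle"
    elif [[ -f "${project_dir}/composer.json" ]]; then
        echo "php"
    elif [[ -f "${project_dir}/package.json" ]]; then
        echo "npm"
    else
        echo "unknown"
    fi
}
```

Each pipeline step can then source such a function and pick the language-specific build, test, and deploy commands based on its result.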

All of that happens automatically if your application follows the conventions. You can read about them in the Chapter 4, Project Opinions section.

1.2.3 Supported Languages

Currently, we support the following languages:

  • JVM

    • Maven wrapper-based project
    • Gradle wrapper-based project
  • PHP

    • Composer-based project
  • NPM

1.2.4 Centralized Pipeline Creation

You can use Spring Cloud Pipelines to generate pipelines for all the projects in your system. You can scan all your repositories (for example, you can call the Stash or Github API to retrieve the list of repositories) and then:

  • For Jenkins, call the seed job and pass the REPOS parameter, which contains the list of repositories.
  • For Concourse, call fly and set the pipeline for every repository.

We recommend using Spring Cloud Pipelines this way.
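As an illustration, the centralized approach can be scripted as follows. This is a hedged sketch: the repository names are examples, and the commands are only printed rather than executed (in a real setup you would trigger the Jenkins seed job through its API and run fly for real):

```shell
#!/usr/bin/env bash
# Sketch of centralized pipeline creation for a list of repositories.
set -e

# Pretend this list came from a call to the GitHub / Stash API
repos=("github-webhook" "github-analytics" "github-eureka")

# Jenkins: the seed job takes a comma-separated REPOS parameter
REPOS="$(IFS=','; echo "${repos[*]}")"
echo "jenkins: trigger the seed job with REPOS=${REPOS}"

# Concourse: one "fly set-pipeline" per repository (dry run -- just print)
for repo in "${repos[@]}"; do
    echo "fly -t ci set-pipeline -p ${repo} -c pipeline.yml"
done
```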

1.2.5 A Pipeline for Each Repository

You can use Spring Cloud Pipelines in such a way that each project contains its own pipeline definition in its code. Spring Cloud Pipelines clones the code with the pipeline definitions (the bash scripts), so the only piece of logic that needs to be in your application’s repository is the pipeline definition.

For Jenkins, you need to either set up the Jenkinsfile or the jobs by using the Jenkins Job DSL plugin in your repo. Then, in Jenkins, whenever you set up a new pipeline for a repository, you can reference the pipeline definition in that repo. For Concourse, each project contains its own pipeline steps, and it is up to the project to set up the pipeline.

1.3 The Flow

The following images show the flow of the opinionated pipeline:

Figure 1.2. Flow in Concourse


Figure 1.3. Flow in Jenkins


We first describe the overall concept behind the flow and then split it into pieces and describe each piece independently.


This section defines some common vocabulary. We describe four typical environments in terms of running the pipeline.

1.3.1 Environments

We typically encounter the following environments:

  • build environment is a machine where the building of the application takes place. It is a continuous integration or continuous delivery tool worker.
  • test is an environment where you can deploy an application to test it. It does not resemble production, because we cannot be sure of its state (which application is deployed there and in which version). It can be used by multiple teams at the same time.
  • stage is an environment that does resemble production. Most likely, applications are deployed there in versions that correspond to those deployed to production. Typically, staging databases hold (often obfuscated) production data. Most often, this environment is a single environment shared between many teams. In other words, in order to run some performance and user acceptance tests, you have to block and wait until the environment is free.
  • prod is the production environment where we want our tested applications to be deployed for our customers.

1.3.2 Tests

We typically encounter the following kinds of tests:

  • Unit tests: Tests that run on the application during the build phase. No integrations with databases or HTTP server stubs or other resources take place. Generally speaking, your application should have plenty of these tests to provide fast feedback about whether your features work.
  • Integration tests: Tests that run on the built application during the build phase. Integrations with in-memory databases and HTTP server stubs take place. According to the test pyramid, in most cases, you should not have many of these kinds of tests.
  • Smoke tests: Tests that run on a deployed application. The concept of these tests is to check that the crucial parts of your application are working properly. If you have 100 features in your application but you gain the most money from five features, you could write smoke tests for those five features. We are talking about smoke tests of an application, not of the whole system. In our understanding inside the opinionated pipeline, these tests are executed against an application that is surrounded with stubs.
  • End-to-end tests: Tests that run on a system composed of multiple applications. These tests ensure that the tested feature works when the whole system is set up. Due to the fact that it takes a lot of time, effort, and resources to maintain such an environment and that these tests are often unreliable (due to many different moving pieces, such as network, database, and others), you should have a handful of those tests. They should be only for critical parts of your business. Since only production is the key verifier of whether your feature works, some companies do not even want to have these tests and move directly to deployment to production. When your system contains KPI monitoring and alerting, you can quickly react when your deployed application does not behave properly.
  • Performance testing: Tests run on an application or set of applications to check if your system can handle a big load. In the case of our opinionated pipeline, these tests can run either on test (against a stubbed environment) or on staging (against the whole system).

1.3.3 Testing against Stubs

Before we go into the details of the flow, consider the example described by the following image:

Figure 1.4. Two monolithic applications deployed for end to end testing


When you have only a handful of applications, end-to-end testing is beneficial. From the operations perspective, it is maintainable for a finite number of deployed instances. From the developer's perspective, it is nice to verify the whole flow in the system for a feature.

In the case of microservices, the scale starts to be a problem, as the following image shows:

Figure 1.5. Many microservices deployed in different versions


The following questions arise:

  • Should I queue deployments of microservices on one testing environment or should I have an environment per microservice?

    • If I queue deployments, people have to wait for hours to have their tests run. That can be a problem.
  • To remove that issue, I can have an environment for each microservice.

    • Who will pay the bills? (Imagine 100 microservices, each having its own environment.)
    • Who will support each of those environments?
    • Should we spawn a new environment each time we execute a new pipeline and then wrap it up or should we have them up and running for the whole day?
  • In which versions should I deploy the dependent microservices - development or production versions?

    • If I have development versions, I can test my application against a feature that is not yet on production. That can lead to exceptions in production.
    • If I test against production versions, I can never test against a feature under development anytime before deployment to production.

One possibility for tackling these problems is to not do end-to-end tests.

The following image shows one solution to the problem, in the form of stubbed dependencies:

Figure 1.6. Execute tests on a deployed microservice on stubbed dependencies


If we stub out all the dependencies of our application, most of the problems presented earlier disappear. There is no need to start and set up the infrastructure required by the dependent microservices. That way, the testing setup looks like the following image:

Figure 1.7. We’re testing microservices in isolation


Such an approach to testing and deployment gives the following benefits (thanks to the usage of Spring Cloud Contract):

  • No need to deploy dependent services.
  • The stubs used for the tests run on a deployed microservice are the same as those used during integration tests.
  • Those stubs have been tested against the application that produces them (see Spring Cloud Contract for more information).
  • We do not have many slow tests running on a deployed application, so the pipeline gets executed much faster.
  • We do not have to queue deployments. We test in isolation so that pipelines do not interfere with each other.
  • We do not have to spawn virtual machines each time for deployment purposes.

However, this approach brings the following challenges:

  • No end-to-end tests before production. You do not have full certainty that a feature is working.
  • The first time the applications interact in a real way is on production.

As with every solution, it has its benefits and drawbacks. The opinionated pipeline lets you configure whether you want to follow this flow or not.

1.3.4 General View

The general view behind this deployment pipeline is to:

  • Test the application in isolation.
  • Test the backwards compatibility of the application, in order to roll it back if necessary.
  • Allow testing of the packaged application in a deployed environment.
  • Allow user acceptance tests and performance tests in a deployed environment.
  • Allow deployment to production.

The pipeline could have been split into more steps, but it seems that all of the aforementioned actions fit nicely in our opinionated proposal.

1.4 Pipeline Descriptor

Each application can contain a file (called sc-pipelines.yml) with the following structure:

language_type: jvm
# used for multi module projects
main_module: things/thing
# used for multi projects
project_names:
  - monoRepoA
  - monoRepoB
# should deploy to stage automatically and run e2e tests
auto_stage: true
# should deploy to production automatically
auto_prod: true
# should the api compatibility check be there
api_compatibility_step: true
# should the test rollback step be there
rollback_step: true
# should the stage step be there
stage_step: true
# should the test step (including rollback) be there
test_step: true
test:
  # used by spinnaker
  deployment_strategy: HIGHLANDER
  # list of services to be deployed
  services:
    - type: service1Type
      name: service1Name
      coordinates: value
    - type: service2Type
      name: service2Name
      key: value
stage:
  # used by spinnaker
  deployment_strategy: HIGHLANDER
  # list of services to be deployed
  services:
    - type: service3Type
      name: service3Name
      coordinates: value
    - type: service4Type
      name: service4Name
      key: value

If you have a multi-module project, you should point to the folder that contains the module that produces the fat jar. In the preceding example, that module would be present under the things/thing folder. If you have a single module project, you need not create this section.

For a given environment, we declare a list of infrastructure services that we want to have deployed. Services have:

  • type (examples: eureka, mysql, rabbitmq, and stubrunner): This value is then passed to the deployService Bash function.
  • [KUBERNETES]: For mysql, you can pass the database name in the database property.
  • name: The name of the service to get deployed.
  • coordinates: The coordinates that let you fetch the binary of the service. It can be a Maven coordinate (groupId:artifactId:version), a Docker image (organization/nameOfImage), and so on.
  • Arbitrary key value pairs, which let you customize the services as you wish.
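To illustrate how the type value could drive deployment, the following sketch shows a dispatch in the style of the deployService Bash function. It is not the actual implementation; the echo statements stand in for the real cloud-specific commands:

```shell
#!/usr/bin/env bash
# Simplified sketch of dispatching on the service "type" from sc-pipelines.yml.
# Not the real deployService implementation -- echo lines stand in for the
# actual Cloud Foundry / Kubernetes deployment commands.
deployService() {
    local serviceType="${1}" serviceName="${2}" serviceCoordinates="${3:-}"
    case "${serviceType}" in
        eureka)
            echo "deploying eureka [${serviceName}] from [${serviceCoordinates}]" ;;
        mysql)
            echo "provisioning mysql database [${serviceName}]" ;;
        rabbitmq)
            echo "provisioning rabbitmq broker [${serviceName}]" ;;
        stubrunner)
            echo "deploying stubrunner [${serviceName}] from [${serviceCoordinates}]" ;;
        *)
            echo "unknown service type [${serviceType}]" >&2
            return 1 ;;
    esac
}
```

The arbitrary key-value pairs from the descriptor would be read inside the matching branch, which is what makes the services customizable per type.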

1.4.1 Pipeline Descriptor for Cloud Foundry

When deploying to Cloud Foundry, you can provide services of the following types:

  • type: broker

    • broker: The name of the CF broker
    • plan: The name of the plan
    • params: Additional parameters are converted to JSON
    • useExisting: Whether to use an existing one or create a new one (defaults to false)
  • type: app

    • coordinates: The Maven coordinates of the stub runner jar
    • manifestPath: The path to the manifest for the stub runner jar
  • type: cups

    • params: Additional parameters are converted to JSON
  • type: cupsSyslog

    • url: The URL to the syslog drain
  • type: cupsRoute

    • url: The URL to the route service
  • type: stubrunner

    • coordinates: The Maven coordinates of the stub runner jar
    • manifestPath: The path to the manifest for the stub runner jar

The following example shows the contents of a YAML file that defines the preceding values:

# This file describes which services are required by this application
# in order for the smoke tests on the TEST environment and end to end tests
# on the STAGE environment to pass

# lowercase name of the environment
test:
  # list of required services
  services:
    - name: config-server
      type: broker
      broker: p-config-server
      plan: standard
      params:
        uri: https://github.com/ciberkleid/app-config
      useExisting: true
    - name: cloud-bus
      type: broker
      broker: cloudamqp
      plan: lemur
      useExisting: true
    - name: service-registry
      type: broker
      broker: p-service-registry
      plan: standard
      useExisting: true
    - name: circuit-breaker-dashboard
      type: broker
      broker: p-circuit-breaker-dashboard
      plan: standard
      useExisting: true
    - name: stubrunner
      type: stubrunner
      coordinates: io.pivotal:cloudfoundry-stub-runner-boot:0.0.1.M1
      manifestPath: sc-pipelines/manifest-stubrunner.yml

# lowercase name of the environment
stage:
  # list of required services
  services:
    - name: config-server
      type: broker
      broker: p-config-server
      plan: standard
      params:
        uri: https://github.com/ciberkleid/app-config
    - name: cloud-bus
      type: broker
      broker: cloudamqp
      plan: lemur
    - name: service-registry
      type: broker
      broker: p-service-registry
      plan: standard
    - name: circuit-breaker-dashboard
      type: broker
      broker: p-circuit-breaker-dashboard
      plan: standard

Another CF-specific property is artifact_type. Its value can be either binary or source. Certain languages (such as Java) require a binary to be uploaded, but others (such as PHP) require you to push the sources. The default value is binary.

1.5 Project Setup

Spring Cloud Pipelines supports three main types of project setup:

  • Single Project
  • Multi Module
  • Multi Project (also known as mono repo)

A Single Project is a project that contains a single module that gets built and packaged into a single, executable artifact.

A Multi Module project is a project that contains multiple modules. After building all modules, one gets packaged into a single, executable artifact. You have to point to that module in your pipeline descriptor.

A Multi Project is a project that contains multiple projects. Each of those projects can in turn be a Single Project or a Multi Module project. Spring Cloud Pipelines assumes that, if a PROJECT_NAME environment variable corresponds to a folder with the same name in the root of the repository, that is the project it should build. For example, for PROJECT_NAME=something, if there is a folder named something, Spring Cloud Pipelines treats the something directory as the root of the something project.
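The mono repo convention described above can be sketched as follows. This is an illustrative helper, not the actual Spring Cloud Pipelines code: it picks the project folder when one matching PROJECT_NAME exists and falls back to the repository root otherwise.

```shell
#!/usr/bin/env bash
# Sketch of the mono repo convention: if a folder matching PROJECT_NAME
# exists at the repository root, treat it as the project to build.
resolve_project_root() {
    local repo_root="${1}" project_name="${2}"
    if [[ -d "${repo_root}/${project_name}" ]]; then
        # Multi Project layout: build the matching subfolder
        echo "${repo_root}/${project_name}"
    else
        # Single Project / Multi Module layout: build from the root
        echo "${repo_root}"
    fi
}
```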