Documentation Authors: Marcin Grzejszczak, Cora Iberkleid
Spring, Spring Boot and Spring Cloud are tools that allow developers to speed up the creation of new business features. It’s common knowledge, however, that a feature is only valuable once it’s in production. That’s why companies spend a lot of time and resources on building their own deployment pipelines.
This project tries to solve the following problems:
A common way of running, configuring and deploying applications lowers support costs and time needed by new developers to blend in when they change projects.
In the following section we will describe in more depth the rationale behind the presented opinionated pipeline. We will go through each deployment step and describe it in detail.
Important: You don’t need to use all pieces of Spring Cloud Pipelines. You can (and should) gradually migrate your applications to use those pieces of Spring Cloud Pipelines that you think best suit your needs.
.
├── common
├── concourse
├── dist
├── docs
├── docs-sources
└── jenkins
In the common
folder you can find all the Bash scripts containing the pipeline logic. These
scripts are reused by both Concourse and Jenkins pipelines.
In the concourse
folder you can find all the necessary scripts and setup to run Concourse demo.
In the dist
folder you can find the packaged sources of the project. Since the package
contains no tests or documentation it’s extremely small and can be used in the pipelines.
In the docs
folder you have the whole generated documentation of the project.
In the docs-sources
folder you have the sources required to generate the documentation.
In the jenkins
folder you can find all the necessary scripts and setup to run Jenkins demo.
This repository can be treated as a template for your pipeline. We provide some opinionated implementation that you can alter to suit your needs. The best approach to use it for building your production projects is to download the Spring Cloud Pipelines repository as a ZIP file, then initialize a Git project there and modify it as you wish.
$ # pass the branch (e.g. master) or a particular tag (e.g. v1.0.0.RELEASE)
$ SC_PIPELINES_RELEASE=...
$ curl -LOk https://github.com/spring-cloud/spring-cloud-pipelines/archive/${SC_PIPELINES_RELEASE}.zip
$ unzip ${SC_PIPELINES_RELEASE}.zip
$ cd spring-cloud-pipelines-${SC_PIPELINES_RELEASE}
$ git init
$ # modify the pipelines to suit your needs
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin ${YOUR_REPOSITORY_URL}
$ git push origin master
You can also clone the repository if you would like to stay aligned with the changes in the upstream repository. In order not to have many merge conflicts, it’s encouraged to use the custom folder hooks to override functions.
You can use Spring Cloud Pipelines to generate pipelines for all projects in your system. You can scan all your repositories (e.g. call the Stash / Github API and retrieve the list of repos) and then…

- pass the REPOS parameter that would contain the list of repositories
- call fly and set a pipeline for every single repo

You can use Spring Cloud Pipelines in such a way that each project contains its own pipeline definition in its code. Spring Cloud Pipelines clones the code with the pipeline definitions (the Bash scripts), so the only piece of logic that needs to be in your application’s repository is a definition of what the pipeline should look like.
For example, you can keep a Jenkinsfile or the jobs using the Jenkins Job DSL plugin in your repo. Then, in Jenkins, whenever you set up a new pipeline for a repo, you reference the pipeline definition in that repo.

Let’s take a look at the flow of the opinionated pipeline.
We’ll first describe the overall concept behind the flow and then we’ll split it into pieces and describe every piece independently.
So that we’re on the same page, let’s define some common vocabulary. We distinguish four typical environments in terms of running the pipeline.
Unit tests - tests that are executed on the application during the build phase. No integrations with databases / HTTP server stubs etc. take place. Generally speaking your application should have plenty of these to have fast feedback if your features are working fine.
Integration tests - tests that are executed on the built application during the build phase. Integrations with in-memory databases / HTTP server stubs take place. According to the test pyramid, in most cases you should not have too many tests of this kind.
Smoke tests - tests that are executed on a deployed application. The concept of these tests is to check that the crucial parts of your application are working properly. If you have 100 features in your application but you gain most money from, say, 5 features, then you could write smoke tests for those 5 features. As you can see, we’re talking about smoke tests of an application, not of the whole system. In our understanding inside the opinionated pipeline, these tests are executed against an application that is surrounded by stubs.
End to end tests - tests that are executed on a system composed of multiple applications. The idea of these tests is to check if the tested feature works when the whole system is set up. Due to the fact that it takes a lot of time, effort and resources to maintain such an environment, and that often those tests are unreliable (due to many different moving pieces like network, database, etc.), you should have only a handful of those tests, covering only the critical parts of your business. Since only production is the key verifier of whether your feature works, some companies don’t even want to do those tests and move directly to deployment to production. When your system contains KPI monitoring and alerting you can quickly react when your deployed application is not behaving properly.
Performance testing - tests executed on an application or set of applications to check if your system can handle a big load of input. In the case of our opinionated pipeline these tests could be executed either on test (against the stubbed environment) or stage (against the whole system).
Before we go into details of the flow let’s take a look at the following example.
When having only a handful of applications, performing end to end testing is beneficial. From the operations perspective it’s maintainable for a finite number of deployed instances. From the developers perspective it’s nice to verify the whole flow in the system for a feature.
In case of microservices the scale starts to be a problem:
The questions arise:

- Should I queue deployments of microservices on one testing environment or should I have an environment per microservice?
  - To remove that issue I can have an environment per microservice.
- In which versions should I deploy the dependent microservices - development or production versions?

One of the possibilities of tackling these problems is to… not do end to end tests.
If we stub out all the dependencies of our application then most of the problems presented above disappear. There is no need to start and set up the infrastructure required by the dependent microservices. That way the testing setup looks like this:
Such an approach to testing and deployment gives the following benefits (thanks to the usage of Spring Cloud Contract):
It brings however the following challenges:
Like every solution it has its benefits and drawbacks. The opinionated pipeline allows you to configure whether you want to follow this flow or not.
The general view behind this deployment pipeline is to:
Obviously the pipeline could have been split into more steps, but it seems that all of the aforementioned actions fit nicely into our opinionated proposal.
Spring Cloud Pipelines uses Bash scripts extensively. Below you can find the list of software that needs to be installed on a CI server worker for the build to pass.
Tip: In the demo setup all of these libraries are already installed.
apt-get -y install \
  bash \
  git \
  tar \
  zip \
  curl \
  ruby \
  wget \
  unzip \
  python \
  jq
Important: In the Jenkins case you will also need
Each application can contain a file called sc-pipelines.yml
with the following structure:
build:
  main_module: foo/bar
lowercaseEnvironmentName1:
  services:
    - type: service1Type
      name: service1Name
      coordinates: value
    - type: service2Type
      name: service2Name
      key: value
lowercaseEnvironmentName2:
  services:
    - type: service3Type
      name: service3Name
      coordinates: value
    - type: service4Type
      name: service4Name
      key: value
If you have a multi-module project, you should point to the folder where the module that produces the fat jar lies. In the aforementioned example that module would be present under the foo/bar folder. If you have a single module project, then you don’t have to create this section.
For a given environment we declare a list of infrastructure services that we want to have deployed. Services have:

- type (example: eureka, mysql, rabbitmq, stubrunner) - this value gets then applied to the deployService Bash function
  - for mysql you can pass the database name via the database property
- name - name of the service to get deployed
- coordinates - coordinate that allows you to fetch the binary of the service. Examples: it can be a Maven coordinate groupid:artifactid:version, a Docker image organization/nameOfImage, etc.

When deploying to Cloud Foundry you can provide services of the following types:
- type: broker
  - broker - name of the CF broker
  - plan - name of the plan
  - params - additional parameters that will be converted to JSON
  - useExisting - should use an existing one or create a new one (defaults to false)
- type: app
  - coordinates - Maven coordinates of the stub runner jar
  - manifestPath - path to the manifest for the stub runner jar
- type: cups
  - params - additional parameters that will be converted to JSON
- type: cupsSyslog
  - url - URL to the syslog drain
- type: cupsRoute
  - url - URL to the route service
- type: stubrunner
  - coordinates - Maven coordinates of the stub runner jar
  - manifestPath - path to the manifest for the stub runner jar

# This file describes which services are required by this application
# in order for the smoke tests on the TEST environment and end to end tests
# on the STAGE environment to pass

# lowercase name of the environment
test:
  # list of required services
  services:
    - name: config-server
      type: broker
      broker: p-config-server
      plan: standard
      params:
        git:
          uri: https://github.com/ciberkleid/app-config
      useExisting: true
    - name: cloud-bus
      type: broker
      broker: cloudamqp
      plan: lemur
      useExisting: true
    - name: service-registry
      type: broker
      broker: p-service-registry
      plan: standard
      useExisting: true
    - name: circuit-breaker-dashboard
      type: broker
      broker: p-circuit-breaker-dashboard
      plan: standard
      useExisting: true
    - name: stubrunner
      type: stubrunner
      coordinates: io.pivotal:cloudfoundry-stub-runner-boot:0.0.1.M1
      manifestPath: sc-pipelines/manifest-stubrunner.yml

stage:
  services:
    - name: config-server
      type: broker
      broker: p-config-server
      plan: standard
      params:
        git:
          uri: https://github.com/ciberkleid/app-config
    - name: cloud-bus
      type: broker
      broker: cloudamqp
      plan: lemur
    - name: service-registry
      type: broker
      broker: p-service-registry
      plan: standard
    - name: circuit-breaker-dashboard
      type: broker
      broker: p-circuit-breaker-dashboard
      plan: standard
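The descriptor is consumed by the Bash scripts. Below is a minimal sketch (not the project’s actual parser) of how such a file could be turned into JSON and queried with the Ruby and jq tools listed in the prerequisites:

# convert sc-pipelines.yml to JSON and list the services required on the test environment
ruby -ryaml -rjson -e 'puts JSON.dump(YAML.load(ARGF.read))' sc-pipelines.yml \
  | jq -r '.test.services[] | "\(.type) \(.name)"'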
Spring Cloud Pipelines supports three main types of project setup:

- Single Project
- Multi Module
- Multi Project (aka mono repo)
A Single Project is a project that contains a single module that gets built and packaged into a single, executable artifact.
A Multi Module project is a project that contains multiple modules. After building all modules, one gets packaged into a single, executable artifact. You have to point to that module in your pipeline descriptor.
A Multi Project
is a project that contains multiple projects. Each of those
projects can be in turn a Single Project
or a Multi Module
project. Spring
Cloud Pipelines will assume that if there’s a PROJECT_NAME
environment
variable that corresponds to a folder with the same name in the root of the
repository, that means that this is the project it should build. E.g. for
PROJECT_NAME=foo
, if there’s a folder foo
, then Spring Cloud Pipelines
will treat the foo
directory as the root of the foo
project.
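A minimal sketch (illustrative only, not the exact implementation) of the convention described above:

# if PROJECT_NAME points at a top-level folder, treat that folder as the project root
ROOT_FOLDER="."
if [[ -n "${PROJECT_NAME}" && -d "${PROJECT_NAME}" ]]; then
    ROOT_FOLDER="${PROJECT_NAME}"
fi
cd "${ROOT_FOLDER}" && ./mvnw clean verify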
For the demo purposes we’re providing a Docker Compose setup with Artifactory and Concourse / Jenkins tools. Regardless of the picked CD application, for the pipeline to pass one needs:

- Eureka for Service Discovery
- Stub Runner Boot for running Spring Cloud Contract stubs

Tip: In the demos we’re showing you how to first build the
In this step we’re generating a version of the pipeline, next we’re running unit, integration and contract tests. Finally we’re:
During this phase we’re executing a Maven build using the Maven Wrapper or a Gradle build using the Gradle Wrapper, with unit and integration tests. We’re also tagging the repository with the dev/${version} format. That way in each subsequent step of the pipeline we’re able to retrieve the tagged version. Also we know exactly which version of the pipeline corresponds to which Git hash.
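A minimal sketch (illustrative, not the exact script) of the tagging described above:

# tag the repo so later pipeline steps can check out exactly this build
PIPELINE_VERSION="1.0.0.M1-$(date +%y%m%d_%H%M%S)-VERSION"   # hypothetical version scheme
git tag "dev/${PIPELINE_VERSION}"
git push origin "dev/${PIPELINE_VERSION}"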
Once the artifact is built, we’re running the API compatibility check.
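As a rough illustration (the profile and property names follow the project opinions listed later in this document; the exact invocation in the real scripts may differ), the check for a Maven project could be triggered like this:

# run the API compatibility check against the latest production version
LATEST_PROD_VERSION="$(git tag -l 'prod/*' | sort | tail -1 | cut -d'/' -f2)"
./mvnw clean verify -Papicompatibility -Dlatest.production.version="${LATEST_PROD_VERSION}"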
Here we’re:

- deploying the Eureka infrastructure application to PaaS

Tip: Currently due to port constraints in Cloud Foundry we cannot run multiple stubbed HTTP services in the cloud, so to fix this issue we’re running the application with

- reading the stubrunner.ids property that contains all the groupId:artifactId:version:classifier notation of dependant projects for which the stubs should be downloaded
- deploying Stub Runner Boot and passing the extracted stubrunner.ids to it. That way we’ll have a running application in Cloud Foundry that will download all the necessary stubs of our application
- running the tests from the smoke profile. In the case of the GitHub Analytics application we’re triggering a message from the GitHub Webhook application’s stub, that is sent via RabbitMQ to GitHub Analytics. Then we’re checking if the message count has increased
- searching for the latest production release via the prod/${version} tag. If there is no such tag (there was no production release) there will be no rollback tests executed. If there was a production release the tests will get executed
- running the smoke tests against the freshly deployed application surrounded by stubs. If those tests pass then we have a high probability that the application is backwards compatible

Here we’re:
- deploying the Eureka infrastructure application to PaaS

Next we have a manual step in which:

- we’re running the end to end tests from the e2e profile. In the case of the GitHub Analytics application we’re sending a HTTP message to GitHub Analytics' endpoint. Then we’re checking if the received message count has increased.

The step is manual by default due to the fact that the stage environment is often shared between teams and some preparations on databases / infrastructure have to take place before running the tests. Ideally this step should be fully automatic.
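A rough sketch of how the e2e step might invoke the tests for a Maven project (the application.url property name below is an assumption; use whatever your e2e profile actually expects):

# run end to end tests against the application deployed on stage
./mvnw clean verify -Pe2e -Dapplication.url="https://github-analytics-stage.example.com"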
The step to deploy to production is manual but ideally it should be automatic.
Important: This step does deployment to production. On production you would assume that you have the infrastructure running. That’s why before you run this step you must execute a script that will provision the services on the production environment. For
Here we’re:

- tagging the Git repo with the prod/${version} tag
- doing the blue-green deployment:
  - for Cloud Foundry
    - we’re renaming the current instance of the app, e.g. fooService, to fooService-venerable
    - we’re deploying the new version of the app under the fooService name
  - for Kubernetes
    - we’re deploying the service under the name of the app, e.g. fooService
    - we’re deploying a new deployment under the name of the app suffixed with its version, e.g. fooService-1-0-0-M1-123-456-VERSION
    - the service routes traffic via a name label selector equal to the app name, e.g. fooService
- in the Complete switch over, which is a manual step, we’re removing the old instance of the application
- in the Rollback, which is a manual step, we’re routing all the traffic to the old instance
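A minimal sketch of the Cloud Foundry blue-green steps described above (it assumes the CF CLI and an app named fooService; this is not the project’s exact script):

# keep the old version serving traffic under a "venerable" name
cf rename fooService fooService-venerable
# push the new version under the original name
cf push fooService -p target/fooService.jar
# complete switch over (manual step): remove the old instance once the new one is verified
cf delete fooService-venerable -f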
In this section we will go through the assumptions we’ve made in the project structure and project properties.
We’ve taken the following opinionated decisions for a Cloud Foundry based project:

- usage of the manifest.yml Cloud Foundry descriptor

For Maven (example project):

- settings.xml is parametrized to pass the credentials to push code to Artifactory:
  - M2_SETTINGS_REPO_ID - server id for Artifactory / Nexus deployment
  - M2_SETTINGS_REPO_USERNAME - username for Artifactory / Nexus deployment
  - M2_SETTINGS_REPO_PASSWORD - password for Artifactory / Nexus deployment
- artifacts are deployed with ./mvnw clean deploy
- stubrunner.ids property to retrieve the list of collaborators for which stubs should be downloaded
- repo.with.binaries property - (Injected by the pipeline) will contain the URL to the repo containing binaries (e.g. Artifactory)
- distribution.management.release.id property - (Injected by the pipeline) ID of the distribution management. Corresponds to the server id in settings.xml
- distribution.management.release.url property - (Injected by the pipeline) will contain the URL to the repo containing binaries (e.g. Artifactory)
- running API compatibility checks via the apicompatibility Maven profile
- latest.production.version property - (Injected by the pipeline) will contain the latest production version for the repo (retrieved from Git tags)
- running smoke tests on a deployed app via the smoke Maven profile
- running end to end tests on a deployed app via the e2e Maven profile
For Gradle (example project - check the gradle/pipeline.gradle file):

- deploy task for artifacts deployment
- REPO_WITH_BINARIES_FOR_UPLOAD env var - (Injected by the pipeline) will contain the URL to the repo containing binaries (e.g. Artifactory)
- M2_SETTINGS_REPO_USERNAME env var - username used to send the binary to the repo containing binaries (e.g. Artifactory)
- M2_SETTINGS_REPO_PASSWORD env var - password used to send the binary to the repo containing binaries (e.g. Artifactory)
- running API compatibility checks via the apiCompatibility task
- latestProductionVersion property - (Injected by the pipeline) will contain the latest production version for the repo (retrieved from Git tags)
- running smoke tests on a deployed app via the smoke task
- running end to end tests on a deployed app via the e2e task
- groupId task to retrieve the group id
- artifactId task to retrieve the artifact id
- currentVersion task to retrieve the current version
- stubIds task to retrieve the list of collaborators for which stubs should be downloaded
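For illustration only (not the project’s exact scripts), a pipeline step could query those Gradle tasks like this:

# ask the build for its coordinates and current version
GROUP_ID="$(./gradlew groupId -q | tail -1)"
ARTIFACT_ID="$(./gradlew artifactId -q | tail -1)"
CURRENT_VERSION="$(./gradlew currentVersion -q | tail -1)"
echo "Processing ${GROUP_ID}:${ARTIFACT_ID}:${CURRENT_VERSION}"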
We’ve taken the following opinionated decisions for a Kubernetes based project:

- system properties are passed via the SYSTEM_PROPS env variable

For Maven (example project):

- settings.xml is parametrized to pass the credentials to push code to Artifactory and the Docker repository:
  - M2_SETTINGS_REPO_ID - server id for Artifactory / Nexus deployment
  - M2_SETTINGS_REPO_USERNAME - username for Artifactory / Nexus deployment
  - M2_SETTINGS_REPO_PASSWORD - password for Artifactory / Nexus deployment
  - DOCKER_SERVER_ID - server id for Docker image pushing
  - DOCKER_USERNAME - username for Docker image pushing
  - DOCKER_PASSWORD - password for Docker image pushing
  - DOCKER_EMAIL - email for Artifactory / Nexus deployment
- DOCKER_REGISTRY_URL env var - (Overridable - defaults to DockerHub) URL of the Docker registry
- DOCKER_REGISTRY_ORGANIZATION - env var containing the organization where your Docker repo lies
- artifacts are deployed with ./mvnw clean deploy
- stubrunner.ids property to retrieve the list of collaborators for which stubs should be downloaded
- repo.with.binaries property - (Injected by the pipeline) will contain the URL to the repo containing binaries (e.g. Artifactory)
- distribution.management.release.id property - (Injected by the pipeline) ID of the distribution management. Corresponds to the server id in settings.xml
- distribution.management.release.url property - (Injected by the pipeline) will contain the URL to the repo containing binaries (e.g. Artifactory)
- deployment.yml contains the Kubernetes deployment descriptor
- service.yml contains the Kubernetes service descriptor
- running API compatibility checks via the apicompatibility Maven profile
- latest.production.version property - (Injected by the pipeline) will contain the latest production version for the repo (retrieved from Git tags)
- running smoke tests on a deployed app via the smoke Maven profile
- running end to end tests on a deployed app via the e2e Maven profile
For Gradle (example project - check the gradle/pipeline.gradle file):

- deploy task for artifacts deployment
- REPO_WITH_BINARIES_FOR_UPLOAD env var - (Injected by the pipeline) will contain the URL to the repo containing binaries (e.g. Artifactory)
- M2_SETTINGS_REPO_USERNAME env var - username used to send the binary to the repo containing binaries (e.g. Artifactory)
- M2_SETTINGS_REPO_PASSWORD env var - password used to send the binary to the repo containing binaries (e.g. Artifactory)
- DOCKER_REGISTRY_URL env var - (Overridable - defaults to DockerHub) URL of the Docker registry
- DOCKER_USERNAME env var - username used to send the Docker image
- DOCKER_PASSWORD env var - password used to send the Docker image
- DOCKER_EMAIL env var - email used to send the Docker image
- DOCKER_REGISTRY_ORGANIZATION - env var containing the organization where your Docker repo lies
- deployment.yml contains the Kubernetes deployment descriptor
- service.yml contains the Kubernetes service descriptor
- running API compatibility checks via the apiCompatibility task
- latestProductionVersion property - (Injected by the pipeline) will contain the latest production version for the repo (retrieved from Git tags)
- running smoke tests on a deployed app via the smoke task
- running end to end tests on a deployed app via the e2e task
- groupId task to retrieve the group id
- artifactId task to retrieve the artifact id
- currentVersion task to retrieve the current version
- stubIds task to retrieve the list of collaborators for which stubs should be downloaded

Important: In this chapter we assume that you perform deployment of your application to Cloud Foundry PaaS
The Spring Cloud Pipelines repository contains opinionated Concourse pipeline definition. Those jobs will form an empty pipeline and a sample, opinionated one that you can use in your company.
All in all there are the following projects taking part in the whole microservice setup
for this demo.
If you want to just run the demo as far as possible using PCF Dev and Docker Compose
There are 4 apps that compose the pipeline.
You need to fork only these. That’s because only then will your user be able to tag and push the tag to the repo.
Concourse + Artifactory can be run locally. To do that just execute the
start.sh
script from this repo.
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/concourse
./setup_docker_compose.sh
./start.sh 192.168.99.100
The setup_docker_compose.sh
script should be executed once only to allow
generation of keys.
The 192.168.99.100
param is an example of an external URL of Concourse
(equal to Docker-Machine ip in this example).
Then Concourse will be running on port 8080
and Artifactory 8081
.
When Artifactory is running, just execute the tools/deploy-infra.sh
script from this repo.
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
./tools/deploy-infra.sh
As a result both eureka
and stub runner
repos will be cloned, built
and uploaded to Artifactory.
Tip: You can skip this step if you have CF installed and don’t want to use PCF Dev. The only thing you have to do is to set up spaces.
Warning: It’s more than likely that you’ll run out of resources when you reach the stage step. Don’t worry! Keep calm and clear some apps from PCF Dev and continue.
You have to download and start PCF Dev. A link describing how to do it is available here.
The default credentials when using PCF Dev are:
username: user
password: pass
email: user
org: pcfdev-org
space: pcfdev-space
api: api.local.pcfdev.io
You can start the PCF Dev like this:
cf dev start
You’ll have to create 3 separate spaces (email admin, pass admin)
cf login -a https://api.local.pcfdev.io --skip-ssl-validation -u admin -p admin -o pcfdev-org
cf create-space pcfdev-test
cf set-space-role user pcfdev-org pcfdev-test SpaceDeveloper
cf create-space pcfdev-stage
cf set-space-role user pcfdev-org pcfdev-stage SpaceDeveloper
cf create-space pcfdev-prod
cf set-space-role user pcfdev-org pcfdev-prod SpaceDeveloper
You can also execute the ./tools/cf-helper.sh setup-spaces
to do this.
If you go to the Concourse website you should see something like this:
You can click one of the icons (depending on your OS) to download fly
, which is the Concourse CLI. Once you’ve downloaded that (and maybe added to your PATH) you can run:
fly --version
If fly
is properly installed then it should print out the version.
The repo comes with credentials-sample-cf.yml which is set up with sample data (most credentials are set to be applicable for PCF Dev). Copy this file to a new file credentials.yml (the file is added to .gitignore so don’t worry that you’ll push it with your passwords) and edit it as you wish. For our demo just set up:
- app-url - URL pointing to your forked github-webhook repo
- github-private-key - your private key to clone / tag GitHub repos
- repo-with-binaries - the IP is set to the defaults for Docker Machine. You should update it to point to your setup

If you don’t have a Docker Machine just execute the ./whats_my_ip.sh script to get an external IP that you can pass to your repo-with-binaries instead of the default Docker Machine IP.
Below you can see what environment variables are required by the scripts. To the right hand side you can see the default values for PCF Dev that we set in the credentials-sample-cf.yml
.
Property Name | Property Description | Default value |
---|---|---|
BUILD_OPTIONS | Additional options you would like to pass to the Maven / Gradle build | |
PAAS_TEST_API_URL | The URL to the CF Api for TEST env | api.local.pcfdev.io |
PAAS_STAGE_API_URL | The URL to the CF Api for STAGE env | api.local.pcfdev.io |
PAAS_PROD_API_URL | The URL to the CF Api for PROD env | api.local.pcfdev.io |
PAAS_TEST_ORG | Name of the org for the test env | pcfdev-org |
PAAS_TEST_SPACE_PREFIX | Prefix of the name of the CF space for the test env to which the app name will be appended | sc-pipelines-test |
PAAS_STAGE_ORG | Name of the org for the stage env | pcfdev-org |
PAAS_STAGE_SPACE | Name of the space for the stage env | sc-pipelines-stage |
PAAS_PROD_ORG | Name of the org for the prod env | pcfdev-org |
PAAS_PROD_SPACE | Name of the space for the prod env | sc-pipelines-prod |
REPO_WITH_BINARIES_FOR_UPLOAD | URL to repo with the deployed jars | |
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |
PAAS_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | |
JAVA_BUILDPACK_URL | The URL to the Java buildpack to be used by CF |
Log in (e.g. for Concourse running at 192.168.99.100
- if you don’t provide any value then localhost
is assumed). If you execute this script (it assumes that either fly
is on your PATH
or it’s in the same folder as the script is):
./login.sh 192.168.99.100
Next run the command to create the pipeline.
./set_pipeline.sh
Then you’ll create a github-webhook
pipeline under the docker
alias, using the provided credentials.yml
file.
You can override these values in exactly that order (e.g. ./set_pipeline.sh some-project another-target some-other-credentials.yml)
)
Important: In this chapter we assume that you perform deployment of your application to Kubernetes PaaS
The Spring Cloud Pipelines repository contains opinionated Concourse pipeline definition. Those jobs will form an empty pipeline and a sample, opinionated one that you can use in your company.
All in all there are the following projects taking part in the whole microservice setup
for this demo.
This is a guide for Concourse pipeline.
If you want to just run the demo as far as possible using PCF Dev and Docker Compose
The simplest way to deploy Concourse to K8S is to use Helm.
Once you have Helm installed and your kubectl
is pointing to the
cluster, just type this command to install the Concourse cluster in your K8S cluster.
$ helm install stable/concourse --name concourse
Once it’s done you’ll see the following output
1. Concourse can be accessed:

   * Within your cluster, at the following DNS name at port 8080:

     concourse-web.default.svc.cluster.local

   * From outside the cluster, run these commands in the same shell:

     export POD_NAME=$(kubectl get pods --namespace default -l "app=concourse-web" -o jsonpath="{.items[0].metadata.name}")
     echo "Visit http://127.0.0.1:8080 to use Concourse"
     kubectl port-forward --namespace default $POD_NAME 8080:8080

2. Login with the following credentials

   Username: concourse
   Password: concourse
Just follow these steps and log in to Concourse under http://127.0.0.1:8080.
We can also use Helm to deploy Artifactory to K8S:
$ helm install --name artifactory --set artifactory.image.repository=docker.bintray.io/jfrog/artifactory-oss stable/artifactory
After executing this you’ll see the following output
NOTES:
Congratulations. You have just deployed JFrog Artifactory Pro!

1. Get the Artifactory URL by running these commands:

   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
   You can watch the status of the service by running 'kubectl get svc -w nginx'

   export SERVICE_IP=$(kubectl get svc --namespace default nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
   echo http://$SERVICE_IP/

2. Open Artifactory in your browser

   Default credential for Artifactory:
   user: admin
   password: password
Next, we need to set up the repositories.
First, access the Artifactory URL and log in with user admin and password password.
Then, click on the Maven setup and click Create
.
If you go to the Concourse website you should see something like this:
You can click one of the icons (depending on your OS) to download fly
, which is the Concourse CLI. Once you’ve downloaded that (and maybe added to your PATH) you can run:
fly --version
If fly
is properly installed then it should print out the version.
There is a sample credentials file called credentials-sample-k8s.yml
prepared for k8s
. You can use it as a base for your credentials.yml
.
To allow the Concourse worker’s spawned container to connect to Kubernetes cluster you will need to pass the CA contents and the auth token.
To get the contents of CA for GCE just execute
$ kubectl get secret $(kubectl get secret | grep default-token | awk '{print $1}') -o jsonpath='{.data.ca\.crt}' | base64 --decode
To get the token just type:
$ kubectl get secret $(kubectl get secret | grep default-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode
Set that value under paas-test-client-token, paas-stage-client-token and paas-prod-client-token.
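If you prefer to capture both values in shell variables before pasting them into credentials.yml, a small convenience sketch:

# grab the default token secret once and extract both values from it
TOKEN_SECRET="$(kubectl get secret | grep default-token | awk '{print $1}')"
CA_CERT="$(kubectl get secret "${TOKEN_SECRET}" -o jsonpath='{.data.ca\.crt}' | base64 --decode)"
CLIENT_TOKEN="$(kubectl get secret "${TOKEN_SECRET}" -o jsonpath='{.data.token}' | base64 --decode)"
echo "${CLIENT_TOKEN}"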
After running Concourse you should get the following output in your terminal
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=concourse-web" -o jsonpath="{.items[0].metadata.name}")
$ echo "Visit http://127.0.0.1:8080 to use Concourse"
$ kubectl port-forward --namespace default $POD_NAME 8080:8080
Visit http://127.0.0.1:8080 to use Concourse
Log in (e.g. for Concourse running at 127.0.0.1
- if you don’t provide any value then localhost
is assumed). If you execute this script (it assumes that either fly
is on your PATH
or it’s in the same folder as the script is):
$ fly -t k8s login -c http://localhost:8080 -u concourse -p concourse
Next run the command to create the pipeline.
$ ./set_pipeline.sh github-webhook k8s credentials-k8s.yml
Not really. This is an opinionated pipeline; that’s why we took some opinionated decisions. Check out the documentation to see what those decisions are.
Sure! It’s open-source! The important thing is that the core part of the logic is written in Bash scripts. That way, in the majority of cases, you could change only the bash scripts without changing the whole pipeline. You can check out the scripts here.
Furthermore, if you only want to customize a particular function under common/src/main/bash
, you can provide your own
function under common/src/main/bash/<some custom identifier>
where <some custom identifier>
is equal to the value of
the CUSTOM_SCRIPT_IDENTIFIER
environment variable. It defaults to custom
.
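For example, assuming the default CUSTOM_SCRIPT_IDENTIFIER (custom), an override could look like the sketch below (the file and function names are hypothetical; use the names of the functions you actually want to replace):

# common/src/main/bash/custom/pipeline-custom.sh (hypothetical file name)
#!/bin/bash

# Redefine a function provided by the default scripts; the custom version wins.
function build() {
    echo "Running my company's custom build"
    ./mvnw clean deploy -DskipTests=false
}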
When deploying the app to stage or prod you can get an exception Insufficient resources
. The way to
solve it is to kill some apps from test / stage env. To achieve that just call
cf target -o pcfdev-org -s pcfdev-test
cf stop github-webhook
cf stop github-eureka
cf stop stubrunner
You can also execute ./tools/cf-helper.sh kill-all-apps
that will remove
all demo-related apps deployed to PCF Dev.
You must have pushed some tags and have removed the Artifactory volume that contained them. To fix this, just remove the tags
git tag -l | xargs -n 1 git push --delete origin
Yes! Assuming that the pipeline name is github-webhook and the job name is build-and-upload, you can run:
fly watch --job github-webhook/build-and-upload -t docker
Don’t worry… most likely you’ve just forgotten to click the play
button to
unpause the pipeline. Click to the top left, expand the list of pipelines and click
the play
button next to github-webhook
.
Another problem that might occur is that you need to have the version
branch.
Concourse will wait for the version
branch to appear in your repo. So in order for
the pipeline to start ensure that when doing some git operations you haven’t
forgotten to create / copy the version
branch too.
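One way to create such a branch is sketched below (the initial version value and the file name are assumptions that depend on how your semver resource is configured):

# create an empty `version` branch with an initial version file and push it
git checkout --orphan version
git rm -rf .
echo "1.0.0.M1" > version        # hypothetical file name / initial value
git add version
git commit -m "Initial version"
git push origin version
git checkout master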
If you play around with Jenkins / Concourse you might end up with the routes occupied
Using route github-webhook-test.local.pcfdev.io
Binding github-webhook-test.local.pcfdev.io to github-webhook...
FAILED
The route github-webhook-test.local.pcfdev.io is already in use.
Just delete the routes
yes | cf delete-route local.pcfdev.io -n github-webhook-test
yes | cf delete-route local.pcfdev.io -n github-eureka-test
yes | cf delete-route local.pcfdev.io -n stubrunner-test
yes | cf delete-route local.pcfdev.io -n github-webhook-stage
yes | cf delete-route local.pcfdev.io -n github-eureka-stage
yes | cf delete-route local.pcfdev.io -n github-webhook-prod
yes | cf delete-route local.pcfdev.io -n github-eureka-prod
You can also execute the ./tools/cf-helper.sh delete-routes
Most likely you’ve forgotten to update your local settings.xml
with the Artifactory’s
setup. Check out this section of the docs and update your settings.xml
.
When I click on it it looks like this:
resource script '/opt/resource/check []' failed: exit status 128

stderr:
Identity added: /tmp/git-resource-private-key (/tmp/git-resource-private-key)
Cloning into '/tmp/git-resource-repo-cache'...
warning: Could not find remote branch version to clone.
fatal: Remote branch version not found in upstream origin
That means that your repo doesn’t have the version
branch. Please
set it up.
In this section we will present the common setup of Jenkins for any platform. We will also provide answers to most frequently asked questions.
.
├── declarative-pipeline
│ └── Jenkinsfile-sample.groovy
├── jobs
│ ├── jenkins_pipeline_empty.groovy
│ ├── jenkins_pipeline_jenkinsfile_empty.groovy
│ ├── jenkins_pipeline_sample.groovy
│ └── jenkins_pipeline_sample_view.groovy
├── seed
│ ├── init.groovy
│ ├── jenkins_pipeline.groovy
│ ├── k8s
│ └── settings.xml
└── src
├── main
└── test
In the declarative-pipeline folder you can find a definition of a Jenkinsfile-sample.groovy declarative pipeline. It’s used together with the Blue Ocean UI.
In the jobs
folder you have all the seed jobs that will generate pipelines.
- jenkins_pipeline_empty.groovy - is a template of a pipeline with empty steps using the Jenkins Job DSL plugin
- jenkins_pipeline_jenkinsfile_empty.groovy - is a template of a pipeline with empty steps using the Pipeline plugin
- jenkins_pipeline_sample.groovy - is an opinionated implementation using the Jenkins Job DSL plugin
- jenkins_pipeline_sample_view.groovy - builds the views for the pipelines

In the seed
folder you have the init.groovy
file which is executed when Jenkins starts.
That way we can configure most of Jenkins options for you (adding credentials, JDK etc.).
jenkins_pipeline.groovy
contains logic to build a seed job (that way you don’t have to even click that
job - we generate it for you). Under the k8s
folder there are all the configuration
files required for deployment to a Kubernetes cluster.
In the src
folder you have production and test classes needed for you to build your own pipeline.
Currently we have tests only because the whole logic resides in the jenkins_pipeline_sample file.
All the steps below are not necessary to run the demo. They are needed only when you want to do some custom changes.
It’s enough to set the ARTIFACTORY_URL
environmental variable before
executing tools/deploy-infra.sh
. Example for deploying to Artifactory at IP 192.168.99.100
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
ARTIFACTORY_URL="http://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh
Tip: If you want to use the default connection to the Docker version of Artifactory you can skip this step
So that ./mvnw deploy
works with Artifactory from Docker we’re
already copying the missing settings.xml
file for you. It looks more or less like this:
<?xml version="1.0" encoding="UTF-8"?>
<settings>
  <servers>
    <server>
      <id>${M2_SETTINGS_REPO_ID}</id>
      <username>${M2_SETTINGS_REPO_USERNAME}</username>
      <password>${M2_SETTINGS_REPO_PASSWORD}</password>
    </server>
    <server>
      <id>${DOCKER_SERVER_ID}</id>
      <username>${DOCKER_USERNAME}</username>
      <password>${DOCKER_PASSWORD}</password>
      <configuration>
        <email>${DOCKER_EMAIL}</email>
      </configuration>
    </server>
  </servers>
</settings>
As you can see the file is parameterized. In Maven it’s enough to pass the proper system property to the ./mvnw command to override a given value. For example, to pass a different Docker email you’d call ./mvnw -DDOCKER_EMAIL=<your email> and the value gets updated.
If you want to use your own version of Artifactory / Nexus you have to update
the file (it’s in seed/settings.xml
).
If you want to only play around with the demo that we’ve prepared, you have to set ONE variable, which is the REPOS variable. That variable needs to consist of a comma separated list of URLs to repositories containing business apps. So you should pass the URLs of your forked repos.
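For example (the organization name below is hypothetical):

REPOS=https://github.com/your-org/github-webhook,https://github.com/your-org/github-analytics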
You can do it in the following ways:

- globally, via Jenkins global env vars
- via the seed job parameters (the REPOS property)
- by providing the REPOS parameter when running the seed job

For the sake of simplicity let’s go with the last option.
Important: If you’re choosing the global envs, you HAVE to remove the other approach (e.g. if you set the global env for
Click on the seed job and pick Build with parameters
. Then as presented in the screen below (you’ll have far more properties to set) just modify the REPOS
property by providing the comma separated list of URLs to your forks. Whatever you set will be parsed by the seed job and passed to the generated Jenkins jobs.
Tip: This is very useful when the repos you want to build differ. E.g. use different JDK. Then some seeds can set the
Example screen:
In the screenshot we could parametrize the REPOS
and REPO_WITH_BINARIES
params.
Since our pipeline is setting the git user / name explicitly for the build step
you’d have to go to Configure
of the build step and modify the Git name / email.
If you want to set it globally you’ll have to remove the section from the build
step and follow these steps to set it globally.
You can set Git email / user globally like this:
The scripts will need to access the credential in order to tag the repo.
You have to set credentials with id: git
.
Below you can find instructions on how to set a credential (e.g. for Cloud Foundry cf-test
credential but
remember to provide the one with id git
).
Figure 7.7. Fill out the user / password and provide the git
credential ID (in this example cf-test
)
./gradlew clean build
Warning: The test that was run only checks whether your scripts compile.
Check out the tutorial. Provide the link to this repository in your Jenkins installation.
Warning: Remember that views can be overridden; that’s why the suggestion is to contain in one script all the logic needed to build a view for a single project (check out that
If you would like to run the pre-configured Jenkins image somewhere other than your local machine, we
have an image you can pull and use on DockerHub.
The latest
tag corresponds to the latest snapshot build. You can also find tags
corresponding to stable releases that you can use as well.
Important: In this chapter we assume that you perform deployment of your application to Cloud Foundry PaaS
The Spring Cloud Pipelines repository contains job definitions and the opinionated setup pipeline using Jenkins Job DSL plugin. Those jobs will form an empty pipeline and a sample, opinionated one that you can use in your company.
All in all there are the following projects taking part in the whole microservice setup
for this demo.
This is a guide for Jenkins Job DSL based pipeline.
If you want to just run the demo as far as possible using PCF Dev and Docker Compose
There are 4 apps that compose the pipeline.
You need to fork only these. That’s because only then will your user be able to tag and push the tag to the repo.
Jenkins + Artifactory can be run locally. To do that just execute the
start.sh
script from this repo.
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/jenkins
./start.sh yourGitUsername yourGitPassword yourForkedGithubOrg
Then Jenkins will be running on port 8080
and Artifactory 8081
.
The provided parameters will be passed as env variables to the Jenkins VM and the credentials will be set in your Jenkins instance. That way you don’t have to do any manual work on the Jenkins side. In the above parameters, the third parameter could be yourForkedGithubOrg or yourGithubUsername. Also the REPOS env variable will contain your GitHub org in which you have the forked repos.
Instead of the Git username and password parameters you could pass -key <path_to_private_key>
if you prefer to use the key-based authentication with your Git repositories.
When Artifactory is running, just execute the tools/deploy-infra.sh
script from this repo.
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
./tools/deploy-infra.sh
As a result both eureka
and stub runner
repos will be cloned, built
and uploaded to Artifactory.
Tip: You can skip this step if you have CF installed and don’t want to use PCF Dev. The only thing you have to do is to set up spaces.
Warning: It’s more than likely that you’ll run out of resources when you reach the stage step. Don’t worry! Keep calm and clear some apps from PCF Dev and continue.
You have to download and start PCF Dev. A link describing how to do it is available here.
The default credentials when using PCF Dev are:
username: user
password: pass
email: user
org: pcfdev-org
space: pcfdev-space
api: api.local.pcfdev.io
You can start the PCF Dev like this:
cf dev start
You’ll have to create 3 separate spaces (email admin, pass admin)
cf login -a https://api.local.pcfdev.io --skip-ssl-validation -u admin -p admin -o pcfdev-org
cf create-space pcfdev-test
cf set-space-role user pcfdev-org pcfdev-test SpaceDeveloper
cf create-space pcfdev-stage
cf set-space-role user pcfdev-org pcfdev-stage SpaceDeveloper
cf create-space pcfdev-prod
cf set-space-role user pcfdev-org pcfdev-prod SpaceDeveloper
You can also execute the ./tools/cf-helper.sh setup-spaces
to do this.
We already create the seed job for you but you’ll have to run it. When you do run it you have to provide some properties. By default we create a seed that has all the properties options, but you can delete most of it. If you set the properties as global env variables you have to remove them from the seed.
Anyways, to run the demo just provide in the REPOS var the comma separated list of URLs of the 2 aforementioned forks of github-webhook and github-analytics.
Figure 8.1. Click the 'jenkins-pipeline-seed-cf' job for Cloud Foundry and jenkins-pipeline-seed-k8s
for Kubernetes
Figure 8.3. The REPOS
parameter should already contain your forked repos (you’ll have more properties than the ones in the screenshot)
Important: If your build fails on the deploy previous version to stage due to missing jar, that means that you’ve forgotten to clear the tags in your repo. Typically that’s due to the fact that you’ve removed the Artifactory volume with deployed JAR whereas a tag in the repo is still pointing there. Check out this section on how to remove the tag.
Figure 8.7. Click the manual step to go to stage (remember about killing the apps on test env). To do this click the ARROW next to the job name
Important: Most likely you will run out of memory so when reaching the stage environment it’s good to kill all apps on test. Check out the FAQ section for more details!
You can also use the declarative pipeline approach with the Blue Ocean UI. Here is a step by step guide to run a pipeline via this approach.
The Blue Ocean UI is available under the blue/
URL. E.g. for Docker Machine based setup http://192.168.99.100:8080/blue
.
Important: There is no possibility of restarting the pipeline from a specific stage after a failure. Please check out this issue for more information
Warning: Currently there is no way to introduce manual steps in a performant way. Jenkins is blocking an executor when a manual step is required. That means that you’ll run out of executors pretty fast. You can check out this issue and this StackOverflow question for more information.
All the steps below are not necessary to run the demo. They are needed only when you want to do some custom changes.
The env vars that are used in all of the jobs are as follows:
Property Name | Property Description | Default value |
---|---|---|
BINARY_EXTENSION | Extension of the binary uploaded to Artifactory / Nexus. Example: change this to | jar |
PAAS_TEST_API_URL | The URL to the CF Api for TEST env | api.local.pcfdev.io |
PAAS_STAGE_API_URL | The URL to the CF Api for STAGE env | api.local.pcfdev.io |
PAAS_PROD_API_URL | The URL to the CF Api for PROD env | api.local.pcfdev.io |
PAAS_TEST_ORG | Name of the org for the test env | pcfdev-org |
PAAS_TEST_SPACE_PREFIX | Prefix of the name of the CF space for the test env to which the app name will be appended | sc-pipelines-test |
PAAS_STAGE_ORG | Name of the org for the stage env | pcfdev-org |
PAAS_STAGE_SPACE | Name of the space for the stage env | sc-pipelines-stage |
PAAS_PROD_ORG | Name of the org for the prod env | pcfdev-org |
PAAS_PROD_SPACE | Name of the space for the prod env | sc-pipelines-prod |
REPO_WITH_BINARIES_FOR_UPLOAD | URL to repo with the deployed jars | |
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |
JDK_VERSION | The name of the JDK installation | jdk8 |
PIPELINE_VERSION | What should be the version of the pipeline (ultimately also version of the jar) | 1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION |
GIT_EMAIL | The email used by Git to tag repo | |
GIT_NAME | The name used by Git to tag repo | Pivo Tal |
PAAS_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | |
AUTO_DEPLOY_TO_STAGE | Should deployment to stage be automatic | false |
AUTO_DEPLOY_TO_PROD | Should deployment to prod be automatic | false |
API_COMPATIBILITY_STEP_REQUIRED | Should api compatibility step be required | true |
DB_ROLLBACK_STEP_REQUIRED | Should DB rollback step be present | true |
DEPLOY_TO_STAGE_STEP_REQUIRED | Should deploy to stage step be present | true |
JAVA_BUILDPACK_URL | The URL to the Java buildpack to be used by CF | |
BUILD_OPTIONS | Additional options you would like to pass to the Maven / Gradle build |
In your scripts we reference the credentials via IDs. These are the defaults for credentials
Property Name | Property Description | Default value |
---|---|---|
PAAS_PROD_CREDENTIAL_ID | Credential ID for CF Prod env access | cf-prod |
GIT_CREDENTIAL_ID | Credential ID used to tag a git repo | git |
GIT_SSH_CREDENTIAL_ID | SSH credential ID used to tag a git repo | gitSsh |
GIT_USE_SSH_KEY | if | false |
REPO_WITH_BINARIES_CREDENTIAL_ID | Credential ID used for the repo with jars | repo-with-binaries |
PAAS_TEST_CREDENTIAL_ID | Credential ID for CF Test env access | cf-test |
PAAS_STAGE_CREDENTIAL_ID | Credential ID for CF Stage env access | cf-stage |
If you already have in your system a credential to, for example, tag a repo, you can use it by passing the value of the property GIT_CREDENTIAL_ID.
Tip: Check out the
Important: In this chapter we assume that you perform deployment of your application to Kubernetes PaaS
The Spring Cloud Pipelines repository contains job definitions and the opinionated setup pipeline using Jenkins Job DSL plugin. Those jobs will form an empty pipeline and a sample, opinionated one that you can use in your company.
All in all there are the following projects taking part in the whole microservice setup
for this demo.
This is a guide for Jenkins Job DSL based pipeline.
If you want to just run the demo as far as possible using PCF Dev and Docker Compose
There are 4 apps that compose the pipeline.
You need to fork only these. That’s because only then will your user be able to tag and push the tag to the repo.
Jenkins + Artifactory can be run locally. To do that just execute the
start.sh
script from this repo.
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/jenkins
./start.sh yourGitUsername yourGitPassword yourForkedGithubOrg yourDockerRegistryOrganization yourDockerRegistryUsername yourDockerRegistryPassword yourDockerRegistryEmail
Then Jenkins will be running on port 8080
and Artifactory 8081
.
The provided parameters will be passed as env variables to the Jenkins VM and the credentials will be set in your Jenkins instance. That way you don’t have to do any manual work on the Jenkins side. In the above parameters, the third parameter could be yourForkedGithubOrg or yourGithubUsername. Also the REPOS env variable will contain your GitHub org in which you have the forked repos.
Instead of the Git username and password parameters you could pass -key <path_to_private_key>
if you prefer to use the key-based authentication with your Git repositories.
You need to pass the credentials for the Docker organization (by default we will search for the Docker images at Docker Hub) so that the pipeline will be able to push images to your org.
When Artifactory is running, just execute the tools/deploy-infra.sh
script from this repo.
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
./tools/deploy-infra-k8s.sh
As a result both eureka
and stub runner
repos will be cloned, built,
uploaded to Artifactory and their docker images will be built.
Important: Your local Docker process will be reused by the Jenkins instance running in Docker. That’s why you don’t have to push these images to Docker Hub. On the other hand if you run this sample in a remote Kubernetes cluster the driver will not be shared by the Jenkins workers so you can consider pushing these Docker images to Docker Hub too.
We already create the seed job for you but you’ll have to run it. When you do run it you have to provide some properties. By default we create a seed that has all the properties options, but you can delete most of it. If you set the properties as global env variables you have to remove them from the seed.
Anyways, to run the demo just provide in the REPOS
var the comma separated
list of URLs of the 2 aforementioned forks of github-webhook
and `github-analytics'.
Figure 9.1. Click the 'jenkins-pipeline-seed-cf' job for Cloud Foundry and jenkins-pipeline-seed-k8s
for Kubernetes
Figure 9.3. The REPOS
parameter should already contain your forked repos (you’ll have more properties than the ones in the screenshot)
Important: If your build fails on the deploy previous version to stage due to missing jar, that means that you’ve forgotten to clear the tags in your repo. Typically that’s due to the fact that you’ve removed the Artifactory volume with deployed JAR whereas a tag in the repo is still pointing there. Check out this section on how to remove the tag.
Figure 9.7. Click the manual step to go to stage (remember about killing the apps on test env). To do this click the ARROW next to the job name
Important: Most likely you will run out of memory so when reaching the stage environment it’s good to kill all apps on test. Check out the FAQ section for more details!
You can also use the declarative pipeline approach with the Blue Ocean UI. Here is a step by step guide to run a pipeline via this approach.
The Blue Ocean UI is available under the blue/
URL. E.g. for Docker Machine based setup http://192.168.99.100:8080/blue
.
Important: There is no possibility of restarting the pipeline from a specific stage after a failure. Please check out this issue for more information
Warning: Currently there is no way to introduce manual steps in a performant way. Jenkins is blocking an executor when a manual step is required. That means that you’ll run out of executors pretty fast. You can check out this issue and this StackOverflow question for more information.
Important: All the steps below are not necessary to run the demo. They are needed only when you want to do some custom changes.
The env vars that are used in all of the jobs are as follows:
Property Name | Property Description | Default value |
---|---|---|
BUILD_OPTIONS | Additional options you would like to pass to the Maven / Gradle build | |
DOCKER_REGISTRY_ORGANIZATION | Name of the docker organization to which Docker images should be deployed | scpipelines |
DOCKER_REGISTRY_CREDENTIAL_ID | Credential ID used to push Docker images | docker-registry |
DOCKER_SERVER_ID | Server ID in | docker-repo |
DOCKER_EMAIL | Email used to connect to the Docker registry and in Maven builds | 
DOCKER_REGISTRY_URL | URL to the docker registry | |
PAAS_TEST_API_URL | URL of the API of the Kubernetes cluster for test environment | 192.168.99.100:8443 |
PAAS_STAGE_API_URL | URL of the API of the Kubernetes cluster for stage environment | 192.168.99.100:8443 |
PAAS_PROD_API_URL | URL of the API of the Kubernetes cluster for prod environment | 192.168.99.100:8443 |
PAAS_TEST_CA_PATH | Path to the certificate authority for test environment | /usr/share/jenkins/cert/ca.crt |
PAAS_STAGE_CA_PATH | Path to the certificate authority for stage environment | /usr/share/jenkins/cert/ca.crt |
PAAS_PROD_CA_PATH | Path to the certificate authority for prod environment | /usr/share/jenkins/cert/ca.crt |
PAAS_TEST_CLIENT_CERT_PATH | Path to the client certificate for test environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_STAGE_CLIENT_CERT_PATH | Path to the client certificate for stage environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_PROD_CLIENT_CERT_PATH | Path to the client certificate for prod environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_TEST_CLIENT_KEY_PATH | Path to the client key for test environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_STAGE_CLIENT_KEY_PATH | Path to the client key for stage environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_PROD_CLIENT_KEY_PATH | Path to the client key for prod environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_TEST_CLIENT_TOKEN_PATH | Path to the file containing the token for test env | |
PAAS_STAGE_CLIENT_TOKEN_PATH | Path to the file containing the token for stage env | |
PAAS_PROD_CLIENT_TOKEN_PATH | Path to the file containing the token for prod env | |
PAAS_TEST_CLIENT_TOKEN_ID | ID of the credential containing access token for test environment | |
PAAS_STAGE_CLIENT_TOKEN_ID | ID of the credential containing access token for stage environment | |
PAAS_PROD_CLIENT_TOKEN_ID | ID of the credential containing access token for prod environment | |
PAAS_TEST_CLUSTER_NAME | Name of the cluster for test environment | minikube |
PAAS_STAGE_CLUSTER_NAME | Name of the cluster for stage environment | minikube |
PAAS_PROD_CLUSTER_NAME | Name of the cluster for prod environment | minikube |
PAAS_TEST_CLUSTER_USERNAME | Name of the user for test environment | minikube |
PAAS_STAGE_CLUSTER_USERNAME | Name of the user for stage environment | minikube |
PAAS_PROD_CLUSTER_USERNAME | Name of the user for prod environment | minikube |
PAAS_TEST_SYSTEM_NAME | Name of the system for test environment | minikube |
PAAS_STAGE_SYSTEM_NAME | Name of the system for stage environment | minikube |
PAAS_PROD_SYSTEM_NAME | Name of the system for prod environment | minikube |
PAAS_TEST_NAMESPACE | Namespace for test environment | sc-pipelines-test |
PAAS_STAGE_NAMESPACE | Namespace for stage environment | sc-pipelines-stage |
PAAS_PROD_NAMESPACE | Namespace for prod environment | sc-pipelines-prod |
KUBERNETES_MINIKUBE | Will you connect to Minikube? | true |
REPO_WITH_BINARIES_FOR_UPLOAD | URL to repo with the deployed jars | |
REPO_WITH_BINARIES_CREDENTIAL_ID | Credential ID used for the repo with jars | repo-with-binaries |
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |
JDK_VERSION | The name of the JDK installation | jdk8 |
PIPELINE_VERSION | What should be the version of the pipeline (ultimately also version of the jar) | 1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION |
GIT_EMAIL | The email used by Git to tag repo | |
GIT_NAME | The name used by Git to tag repo | Pivo Tal |
AUTO_DEPLOY_TO_STAGE | Should deployment to stage be automatic | false |
AUTO_DEPLOY_TO_PROD | Should deployment to prod be automatic | false |
API_COMPATIBILITY_STEP_REQUIRED | Should api compatibility step be required | true |
DB_ROLLBACK_STEP_REQUIRED | Should DB rollback step be present | true |
DEPLOY_TO_STAGE_STEP_REQUIRED | Should deploy to stage step be present | true |
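If you want to experiment with the pipeline Bash scripts from the common folder outside of Jenkins, you can export a handful of these variables in your shell before invoking the scripts. The values below are purely hypothetical placeholders, not defaults:
$ # hypothetical values - normally these are set by the seed job / job configuration
$ export BUILD_OPTIONS="-DskipTests=false"
$ export DOCKER_REGISTRY_ORGANIZATION="my-docker-org"
$ export PAAS_TEST_API_URL="192.168.99.100:8443"
$ export PAAS_TEST_NAMESPACE="sc-pipelines-test"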
![]() | Important |
---|---|
Skip this step if you’re not using GCE |
In order to use GCE we need to have the gcloud CLI available. If you already have the
CLI installed, skip this step. If not, just execute the following command to download
the CLI and start the installer:
$ ./tools/k8s-helper.sh download-gcloud
Next, configure gcloud. Execute gcloud init and log in: you will be redirected to a
login page, where you should pick the proper Google account and log in.
Pick an existing project or create a new one.
Go to your platform page (click on Container Engine
) in GCP and connect to your cluster
$ CLUSTER_NAME=... $ ZONE=us-east1-b $ PROJECT_NAME=... $ gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${ZONE} --project ${PROJECT_NAME} $ kubectl proxy
The Kubernetes dashboard will be running at http://localhost:8001/ui/
.
We’ll need a Persistent Disk for our Jenkins installation. Let’s create it
$ ZONE=us-east1-b
$ gcloud compute disks create --size=200GB --zone=${ZONE} sc-pipelines-jenkins-disk
Now that the disk has been created, we need to format it. You can check out the instructions on how to do that here: https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
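As a minimal sketch, assuming the freshly created disk has been attached to a GCE instance and shows up as /dev/sdb (the device name is an assumption; verify it with lsblk), the formatting step from the linked instructions boils down to:
$ DEVICE=/dev/sdb
$ # format with ext4 and no reserved blocks, as recommended by the GCE documentation
$ sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard ${DEVICE}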
![]() | Important |
---|---|
Skip this step if you’re not using Kubo or GCE |
This section describes the steps required to deploy Jenkins and Artifactory to a Kubernetes cluster deployed via Kubo.
![]() | Tip |
---|---|
To see the dashboard just do |
Deploy Jenkins and Artifactory to the cluster:
./tools/k8s-helper.sh setup-tools-infra-vsphere for a cluster deployed on VSphere
./tools/k8s-helper.sh setup-tools-infra-gce for a cluster deployed to GCE
Then forward the Jenkins port locally:
$ NAMESPACE=default $ JENKINS_POD=jenkins-1430785859-nfhx4 $ LOCAL_PORT=32044 $ CONTAINER_PORT=8080 $ kubectl port-forward --namespace=${NAMESPACE} ${JENKINS_POD} ${LOCAL_PORT}:${CONTAINER_PORT}
Go to Credentials, click System and Global credentials.
Update the git, repo-with-binaries and docker-registry credentials.
Run the jenkins-pipeline-k8s-seed seed job and fill it out with the following data:
Put kubernetes.default:443 here (or KUBERNETES_API:KUBERNETES_PORT):
PAAS_TEST_API_URL
PAAS_STAGE_API_URL
PAAS_PROD_API_URL
Put the /var/run/secrets/kubernetes.io/serviceaccount/ca.crt data here:
PAAS_TEST_CA_PATH
PAAS_STAGE_CA_PATH
PAAS_PROD_CA_PATH
Set the Kubernetes Minikube value to false (we are not deploying to Minikube here).
Clear the following vars:
PAAS_TEST_CLIENT_CERT_PATH
PAAS_STAGE_CLIENT_CERT_PATH
PAAS_PROD_CLIENT_CERT_PATH
PAAS_TEST_CLIENT_KEY_PATH
PAAS_STAGE_CLIENT_KEY_PATH
PAAS_PROD_CLIENT_KEY_PATH
Set the /var/run/secrets/kubernetes.io/serviceaccount/token value to these vars:
PAAS_TEST_CLIENT_TOKEN_PATH
PAAS_STAGE_CLIENT_TOKEN_PATH
PAAS_PROD_CLIENT_TOKEN_PATH
Set the cluster name to these vars (you can get it by calling kubectl config current-context):
PAAS_TEST_CLUSTER_NAME
PAAS_STAGE_CLUSTER_NAME
PAAS_PROD_CLUSTER_NAME
Set the system name to these vars (you can get it by calling kubectl config current-context):
PAAS_TEST_SYSTEM_NAME
PAAS_STAGE_SYSTEM_NAME
PAAS_PROD_SYSTEM_NAME
Update the DOCKER_EMAIL property with your email.
Update the DOCKER_REGISTRY_ORGANIZATION with your Docker organization name.
If you are not using the default Docker registry, update DOCKER_REGISTRY_URL accordingly.
Below you can find the answers to the most frequently asked questions.
You can check the Jenkins logs and you’ll see
WARNING: Skipped parameter `PIPELINE_VERSION` as it is undefined on `jenkins-pipeline-sample-build`. Set `-Dhudson.model.ParametersAction.keepUndefinedParameters`=true to allow undefined parameters to be injected as environment variables or `-Dhudson.model.ParametersAction.safeParameters=[comma-separated list]` to whitelist specific parameter names, even though it represents a security breach
To fix it you have to do exactly what the warning suggests… Also ensure that the Groovy token macro processing
checkbox is set.
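For example, assuming you start Jenkins via the official Docker image (which honours the JAVA_OPTS environment variable), the flag from the warning could be passed roughly like this; adjust it to however you actually run Jenkins:
$ docker run -p 8080:8080 -e JAVA_OPTS="-Dhudson.model.ParametersAction.keepUndefinedParameters=true" jenkins/jenkins:lts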
You can see that the version is properly set in Jenkins, but in the build the version is still a snapshot and echo "${PIPELINE_VERSION}" doesn’t print anything.
You can check the Jenkins logs and you’ll see
WARNING: Skipped parameter `PIPELINE_VERSION` as it is undefined on `jenkins-pipeline-sample-build`. Set `-Dhudson.model.ParametersAction.keepUndefinedParameters`=true to allow undefined parameters to be injected as environment variables or `-Dhudson.model.ParametersAction.safeParameters=[comma-separated list]` to whitelist specific parameter names, even though it represents a security breach
To fix it you have to do exactly what the warning suggests…
Docker Compose, Docker Compose, Docker Compose… The problem is that, for some reason, the execution of Java hangs, but only in Docker, and it hangs randomly and only the first time you try to execute the pipeline.
The solution is to run the pipeline again: once it passes, it will pass for any subsequent build.
Another thing you can try is to run it with plain Docker; maybe that will help.
Sure! You can pass the REPOS variable with a comma-separated list of entries in
project_name$project_url format. If you don’t provide the project name, the
repo name will be extracted and used as the name of the project.
E.g. for REPOS equal to:
https://github.com/spring-cloud-samples/github-analytics,https://github.com/spring-cloud-samples/github-webhook
will result in the creation of pipelines with root names github-analytics
and github-webhook
.
E.g. for REPOS
equal to:
foo$https://github.com/spring-cloud-samples/github-analytics,bar$https://github.com/spring-cloud-samples/atom-feed
will result in the creation of pipelines with root names foo (for github-analytics) and bar (for atom-feed).
Not really. This is an opinionated pipeline, which is why we made some
opinionated decisions, such as:
For Maven:
usage of ./mvnw clean deploy for artifact deployment
the stubrunner.ids property to retrieve the list of collaborators for which stubs should be downloaded
the smoke Maven profile for running smoke tests
the e2e Maven profile for running end-to-end tests
For Gradle (in the github-analytics application check the gradle/pipeline.gradle file):
the deploy task for artifacts deployment
the smoke task for running smoke tests
the e2e task for running end-to-end tests
the groupId task to retrieve the group id
the artifactId task to retrieve the artifact id
the currentVersion task to retrieve the current version
the stubIds task to retrieve the list of collaborators for which stubs should be downloaded
This is the initial approach that can easily be changed in the future.
Sure! It’s open-source! The important thing is that the core part of the logic is written in Bash scripts. That way, in the majority of cases, you could change only the bash scripts without changing the whole pipeline.
You must have pushed some tags and have removed the Artifactory volume that contained them. To fix this, just remove the tags
git tag -l | xargs -n 1 git push --delete origin
By default we assume that you have a JDK with the id jdk8 configured.
If you want a different one, just override the JDK_VERSION env var and point to the proper one.
![]() | Tip |
---|---|
The docker image comes in with Java installed at |
To change the default one just follow these steps:
And that’s it!
We have scripted that, but if you needed to do it manually, this is how to do it:
No problem, just set the property / env var to true:
AUTO_DEPLOY_TO_STAGE to automatically deploy to stage
AUTO_DEPLOY_TO_PROD to automatically deploy to prod
No problem, just set the API_COMPATIBILITY_STEP_REQUIRED
env variable
to false
and rerun the seed (you can pick it from the seed
job’s properties too).
When you get something like this:
19:01:44 stderr: remote: Invalid username or password. 19:01:44 fatal: Authentication failed for 'https://github.com/marcingrzejszczak/github-webhook/' 19:01:44 19:01:44 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1740) 19:01:44 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1476) 19:01:44 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:63) 19:01:44 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$8.execute(CliGitAPIImpl.java:1816) 19:01:44 at hudson.plugins.git.GitPublisher.perform(GitPublisher.java:295) 19:01:44 at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45) 19:01:44 at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779) 19:01:44 at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:720) 19:01:44 at hudson.model.Build$BuildExecution.post2(Build.java:185) 19:01:44 at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:665) 19:01:44 at hudson.model.Run.execute(Run.java:1745) 19:01:44 at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43) 19:01:44 at hudson.model.ResourceController.execute(ResourceController.java:98) 19:01:44 at hudson.model.Executor.run(Executor.java:404)
most likely you’ve passed a wrong password. Check the credentials section on how to update your credentials.
Most likely you’ve forgotten to update your local settings.xml
with the Artifactory’s
setup. Check out this section of the docs and update your settings.xml
.
In some cases, when performing a release, it may be required that the artifacts
be signed before they are pushed to the repository.
To do this you will need to import your GPG keys into the Docker image running Jenkins.
This can be done by placing a file called public.key
containing your public key
and a file called private.key
containing your private key in the seed
directory.
These keys will be imported by the init.groovy
script that is run when Jenkins starts.
The seed job checks if an env variable GIT_USE_SSH_KEY
is set to true
. If that’s the case, then the env variable GIT_SSH_CREDENTIAL_ID will be chosen as the one that contains the ID of the credential holding the SSH private key. By default, GIT_CREDENTIAL_ID will be picked as the one that contains the username and password used to connect to Git.
You can set these values in the seed job by filling out the form / toggling a checkbox.
There can be a number of reasons, but remember that for stage we assume that a sequence of manual steps needs to be performed. We don’t redeploy any existing services because most likely you have deliberately set them up one way or another. If in the logs of your application you can see that you can’t connect to a service, first ensure that the service is forwarding traffic to a pod. If that’s not the case, delete the service and re-run the step in the pipeline. That way Spring Cloud Pipelines will redeploy the service and the underlying pods.
When deploying the app to stage or prod you can get an Insufficient resources exception. The way to solve it is to kill some apps from the test / stage env. To achieve that just call:
cf target -o pcfdev-org -s pcfdev-test
cf stop github-webhook
cf stop github-eureka
cf stop stubrunner
You can also execute ./tools/cf-helper.sh kill-all-apps
that will remove all demo-related apps
deployed to PCF Dev.
If you receive a similar exception:
20:26:18 API endpoint: https://api.local.pcfdev.io (API version: 2.58.0) 20:26:18 User: user 20:26:18 Org: pcfdev-org 20:26:18 Space: No space targeted, use 'cf target -s SPACE' 20:26:18 FAILED 20:26:18 Error finding space pcfdev-test 20:26:18 Space pcfdev-test not found
It means that you’ve forgotten to create the spaces in your PCF Dev installation.
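A minimal sketch of creating the missing spaces, assuming the default pcfdev-org organization and the space names used throughout the demo:
$ cf create-space pcfdev-test -o pcfdev-org
$ cf create-space pcfdev-stage -o pcfdev-org
$ cf create-space pcfdev-prod -o pcfdev-org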
If you play around with Jenkins / Concourse you might end up with occupied routes:
Using route github-webhook-test.local.pcfdev.io Binding github-webhook-test.local.pcfdev.io to github-webhook... FAILED The route github-webhook-test.local.pcfdev.io is already in use.
Just delete the routes
yes | cf delete-route local.pcfdev.io -n github-webhook-test yes | cf delete-route local.pcfdev.io -n github-eureka-test yes | cf delete-route local.pcfdev.io -n stubrunner-test yes | cf delete-route local.pcfdev.io -n github-webhook-stage yes | cf delete-route local.pcfdev.io -n github-eureka-stage yes | cf delete-route local.pcfdev.io -n github-webhook-prod yes | cf delete-route local.pcfdev.io -n github-eureka-prod
You can also execute the ./tools/cf-helper.sh delete-routes script.
Assuming that you’re already logged into the cluster it’s enough to run the
helper script with the REUSE_CF_LOGIN=true
env variable. Example:
REUSE_CF_LOGIN=true ./tools/cf-helper.sh setup-prod-infra
This script will create the MySQL database and the RabbitMQ service, and will download and deploy Eureka to the space and organization you’re logged into.
First you’ll need to install the kubectl
CLI.
You can use the tools/k8s-helper.sh
script to install kubectl
. Just call
$ ./tools/k8s-helper.sh download-kubectl
and then the kubectl
will get downloaded
Example for OSX
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl $ chmod +x ./kubectl $ sudo mv ./kubectl /usr/local/bin/kubectl
Example for Linux
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl $ chmod +x ./kubectl $ sudo mv ./kubectl /usr/local/bin/kubectl
Check out this page for more information.
We need a cluster of Kubernetes. The best choice will be Minikube.
![]() | Tip |
---|---|
You can skip this step if you have Kubernetes cluster installed and don’t want to use Minikube The only thing you have to do is to set up spaces. |
![]() | Warning |
---|---|
It’s more than likely that you’ll run out of resources when you reach stage step. Don’t worry! Keep calm and clear some apps from Minikube and continue. |
You can use the tools/k8s-helper.sh
script to install Minikube
. Just call
$ ./tools/k8s-helper.sh download-minikube
and then the Minikube binary will get downloaded
Example for OSX
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.20.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Feel free to leave off the sudo mv minikube /usr/local/bin
if you would like to add minikube to your path manually.
Example for Linux
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.20.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Feel free to leave off the sudo mv minikube /usr/local/bin
if you would like to add minikube to your path manually.
Check out this page for more info on the installation.
Just type in minikube start
to start Kubernetes on your local box.
To add the dashboard just execute minikube dashboard
By default if you install Minikube all the certificates get installed in your
~/.minikube
folder. Your kubectl
configuration under ~/.kube/config
will also
get updated to use Minikube.
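To double-check which certificates and keys your current context points to (this will matter in the next section), you can inspect the resolved configuration, for example:
$ kubectl config current-context
$ # prints the configuration for the current context only, including certificate-authority and client key / cert paths
$ kubectl config view --minify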
![]() | Important |
---|---|
If you just want to run the default, demo setup you can skip this section |
To target a given Kubernetes instance, one needs to pass around the Certificate Authority key and the user keys.
You can read more about the instructions on how to generate those keys here.
Generally speaking if you have a Kubernetes installation (e.g. minikube
) this step
has already been done for you. Time to reuse those keys on the workers.
Extracted from the official docs.
Configure kubectl to connect to the target cluster using the following commands, replacing several values as indicated:
${MASTER_HOST}
with the master node address or name used in previous steps${CA_CERT}
with the absolute path to the ca.pem
created in previous steps${ADMIN_KEY}
with the absolute path to the admin-key.pem
created in previous steps${ADMIN_CERT}
with the absolute path to the admin.pem
created in previous steps$ kubectl config set-cluster default-cluster --server=https://${MASTER_HOST} --certificate-authority=${CA_CERT} $ kubectl config set-credentials default-admin --certificate-authority=${CA_CERT} --client-key=${ADMIN_KEY} --client-certificate=${ADMIN_CERT} $ kubectl config set-context default-system --cluster=default-cluster --user=default-admin $ kubectl config use-context default-system
The demo uses two applications: Github Webhook and Github Analytics. Below you can see an image of how these applications communicate with each other.
For the demo scenario we have two applications. Github Analytics
and Github Webhook
.
Let’s imagine a case where Github is emitting events via HTTP. Github Webhook
has
an API that could register to such hooks and receive those messages. Once this happens
Github Webhook
sends a message by RabbitMQ to a channel. Github Analytics
is
listening to those messages and stores them in a MySQL database.
Github Analytics
has its KPIs (Key Performance Indicators) monitored. In the case
of that application, the KPI is the number of issues.
Let’s assume that if we go below the threshold of X issues then an alert should be sent to Slack.
In the real world scenario we wouldn’t want to automatically provision services like
RabbitMQ, MySQL or Eureka each time we deploy a new application to production. Typically
production is provisioned manually (using automated solutions). In our case, before
you deploy to production you can provision the pcfdev-prod
space using the
cf-helper.sh
. Just call
$ ./cf-helper.sh setup-prod-infra
What will happen is that the CF CLI will login to PCF Dev, target pcfdev-prod
space,
setup RabbitMQ (under rabbitmq-github
name), MySQL (under mysql-github-analytics
name)
and Eureka (under github-eureka
name).
You can check out Toshiaki Maki’s code on how to automate Prometheus installation on CF.
Go to https://prometheus.io/download/ and download the Linux binary. Then call:
cf push sc-pipelines-prometheus -b binary_buildpack -c './prometheus -web.listen-address=:8080' -m 64m
Also, localhost:9090 in prometheus.yml should be changed to localhost:8080.
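For example, a one-liner sketch (run it from the directory containing the downloaded prometheus.yml):
$ sed -i.bak 's/localhost:9090/localhost:8080/g' prometheus.yml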
The file should look like this to work with the demo setup (change github-analytics-sc-pipelines.cfapps.io
to your github-analytics
installation).
# my global config global: scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. # scrape_timeout is set to the global default (10s). # Attach these labels to any time series or alerts when communicating with # external systems (federation, remote storage, Alertmanager). external_labels: monitor: 'codelab-monitor' # Load rules once and periodically evaluate them according to the global 'evaluation_interval'. rule_files: # - "first.rules" # - "second.rules" # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'prometheus' # metrics_path defaults to '/metrics' # scheme defaults to 'http'. static_configs: - targets: ['localhost:8080'] - job_name: 'demo-app' # Override the global default and scrape targets from this job every 5 seconds. scrape_interval: 5s metrics_path: '/prometheus' # scheme defaults to 'http'. static_configs: - targets: ['github-analytics-sc-pipelines.cfapps.io']
A deployed version for the Spring Cloud Pipelines demo is available here
You can check out Toshiaki Maki’s code on how to automate Prometheus installation on CF.
Download tarball from https://grafana.com/grafana/download?platform=linux
Next set http_port = 8080
in conf/default.ini
. Then call
cf push sc-pipelines-grafana -b binary_buildpack -c './bin/grafana-server web' -m 64m
The demo is using Grafana Dashboard with ID 2471
.
A deployed version for the Spring Cloud Pipelines demo is available here
The demo uses two applications: Github Webhook and Github Analytics. Below you can see an image of how these applications communicate with each other.
For the demo scenario we have two applications. Github Analytics
and Github Webhook
.
Let’s imagine a case where Github is emitting events via HTTP. Github Webhook
has
an API that could register to such hooks and receive those messages. Once this happens
Github Webhook
sends a message by RabbitMQ to a channel. Github Analytics
is
listening to those messages and stores them in a MySQL database.
Github Analytics
has its KPIs (Key Performance Indicators) monitored. In the case
of that application, the KPI is the number of issues.
Let’s assume that if we go below the threshold of X issues then an alert should be sent to Slack.
In the real world scenario we wouldn’t want to automatically provision services like
RabbitMQ, MySQL or Eureka each time we deploy a new application to production. Typically
production is provisioned manually (using automated solutions). In our case, before
you deploy to production you can provision the sc-pipelines-prod
namespace using the
k8s-helper.sh
. Just call
$ ./k8s-helper.sh setup-prod-infra
Use Helm to install Prometheus. We will point it to the services deployed to our cluster.
Create a file called values.yaml
.
values.yaml.
rbac: create: false alertmanager: ## If false, alertmanager will not be installed ## enabled: true # Defines the serviceAccountName to use when `rbac.create=false` serviceAccountName: default ## alertmanager container name ## name: alertmanager ## alertmanager container image ## image: repository: prom/alertmanager tag: v0.9.1 pullPolicy: IfNotPresent ## Additional alertmanager container arguments ## extraArgs: {} ## The URL prefix at which the container can be accessed. Useful in the case the '-web.external-url' includes a slug ## so that the various internal URLs are still able to access as they are in the default case. ## (Optional) baseURL: "" ## Additional alertmanager container environment variable ## For instance to add a http_proxy ## extraEnv: {} ## ConfigMap override where fullname is {{.Release.Name}}-{{.Values.alertmanager.configMapOverrideName}} ## Defining configMapOverrideName will cause templates/alertmanager-configmap.yaml ## to NOT generate a ConfigMap resource ## configMapOverrideName: "" ingress: ## If true, alertmanager Ingress will be created ## enabled: false ## alertmanager Ingress annotations ## annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: 'true' ## alertmanager Ingress hostnames ## Must be provided if Ingress is enabled ## hosts: [] # - alertmanager.domain.com ## alertmanager Ingress TLS configuration ## Secrets must be manually created in the namespace ## tls: [] # - secretName: prometheus-alerts-tls # hosts: # - alertmanager.domain.com ## Alertmanager Deployment Strategy type # strategy: # type: Recreate ## Node labels for alertmanager pod assignment ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: {} persistentVolume: ## If true, alertmanager will create/use a Persistent Volume Claim ## If false, use emptyDir ## enabled: true ## alertmanager data Persistent Volume access modes ## Must match those of existing PV or dynamic provisioner ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ ## accessModes: - ReadWriteOnce ## alertmanager data Persistent Volume Claim annotations ## annotations: {} ## alertmanager data Persistent Volume existing claim name ## Requires alertmanager.persistentVolume.enabled: true ## If defined, PVC must be created manually before volume will be bound existingClaim: "" ## alertmanager data Persistent Volume mount root path ## mountPath: /data ## alertmanager data Persistent Volume size ## size: 2Gi ## alertmanager data Persistent Volume Storage Class ## If defined, storageClassName: <storageClass> ## If set to "-", storageClassName: "", which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is ## set, choosing the default provisioner. 
(gp2 on AWS, standard on ## GKE, AWS & OpenStack) ## # storageClass: "-" ## Subdirectory of alertmanager data Persistent Volume to mount ## Useful if the volume's root directory is not empty ## subPath: "" ## Annotations to be added to alertmanager pods ## podAnnotations: {} replicaCount: 1 ## alertmanager resource requests and limits ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: # cpu: 10m # memory: 32Mi # requests: # cpu: 10m # memory: 32Mi service: annotations: {} labels: {} clusterIP: "" ## List of IP addresses at which the alertmanager service is available ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips ## externalIPs: [] loadBalancerIP: "" loadBalancerSourceRanges: [] servicePort: 80 # nodePort: 30000 type: ClusterIP ## Monitors ConfigMap changes and POSTs to a URL ## Ref: https://github.com/jimmidyson/configmap-reload ## configmapReload: ## configmap-reload container name ## name: configmap-reload ## configmap-reload container image ## image: repository: jimmidyson/configmap-reload tag: v0.1 pullPolicy: IfNotPresent ## configmap-reload resource requests and limits ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} kubeStateMetrics: ## If false, kube-state-metrics will not be installed ## enabled: true # Defines the serviceAccountName to use when `rbac.create=false` serviceAccountName: default ## kube-state-metrics container name ## name: kube-state-metrics ## kube-state-metrics container image ## image: repository: gcr.io/google_containers/kube-state-metrics tag: v1.1.0-rc.0 pullPolicy: IfNotPresent ## Node labels for kube-state-metrics pod assignment ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: {} ## Annotations to be added to kube-state-metrics pods ## podAnnotations: {} replicaCount: 1 ## kube-state-metrics resource requests and limits ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: # cpu: 10m # memory: 16Mi # requests: # cpu: 10m # memory: 16Mi service: annotations: prometheus.io/scrape: "true" labels: {} clusterIP: None ## List of IP addresses at which the kube-state-metrics service is available ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips ## externalIPs: [] loadBalancerIP: "" loadBalancerSourceRanges: [] servicePort: 80 type: ClusterIP nodeExporter: ## If false, node-exporter will not be installed ## enabled: true # Defines the serviceAccountName to use when `rbac.create=false` serviceAccountName: default ## node-exporter container name ## name: node-exporter ## node-exporter container image ## image: repository: prom/node-exporter tag: v0.15.0 pullPolicy: IfNotPresent ## Additional node-exporter container arguments ## extraArgs: {} ## Additional node-exporter hostPath mounts ## extraHostPathMounts: [] # - name: textfile-dir # mountPath: /srv/txt_collector # hostPath: /var/lib/node-exporter # readOnly: true ## Node tolerations for node-exporter scheduling to nodes with taints ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ ## tolerations: [] # - key: "key" # operator: "Equal|Exists" # value: "value" # effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)" ## Node labels for node-exporter pod assignment ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: {} ## Annotations to be added to node-exporter pods ## podAnnotations: {} ## node-exporter resource limits & requests ## Ref: 
https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: # cpu: 200m # memory: 50Mi # requests: # cpu: 100m # memory: 30Mi service: annotations: prometheus.io/scrape: "true" labels: {} clusterIP: None ## List of IP addresses at which the node-exporter service is available ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips ## externalIPs: [] hostPort: 9100 loadBalancerIP: "" loadBalancerSourceRanges: [] servicePort: 9100 type: ClusterIP server: ## Prometheus server container name ## name: server # Defines the serviceAccountName to use when `rbac.create=false` serviceAccountName: default ## Prometheus server container image ## image: repository: prom/prometheus tag: v1.8.0 pullPolicy: IfNotPresent ## (optional) alertmanager URL ## only used if alertmanager.enabled = false alertmanagerURL: "" ## The URL prefix at which the container can be accessed. Useful in the case the '-web.external-url' includes a slug ## so that the various internal URLs are still able to access as they are in the default case. ## (Optional) baseURL: "" ## Additional Prometheus server container arguments ## extraArgs: {} ## Additional Prometheus server hostPath mounts ## extraHostPathMounts: [] # - name: certs-dir # mountPath: /etc/kubernetes/certs # hostPath: /etc/kubernetes/certs # readOnly: true ## ConfigMap override where fullname is {{.Release.Name}}-{{.Values.server.configMapOverrideName}} ## Defining configMapOverrideName will cause templates/server-configmap.yaml ## to NOT generate a ConfigMap resource ## configMapOverrideName: "" ingress: ## If true, Prometheus server Ingress will be created ## enabled: false ## Prometheus server Ingress annotations ## annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: 'true' ## Prometheus server Ingress hostnames ## Must be provided if Ingress is enabled ## hosts: [] # - prometheus.domain.com ## Prometheus server Ingress TLS configuration ## Secrets must be manually created in the namespace ## tls: [] # - secretName: prometheus-server-tls # hosts: # - prometheus.domain.com ## Server Deployment Strategy type # strategy: # type: Recreate ## Node tolerations for server scheduling to nodes with taints ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ ## tolerations: [] # - key: "key" # operator: "Equal|Exists" # value: "value" # effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)" ## Node labels for Prometheus server pod assignment ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ nodeSelector: {} persistentVolume: ## If true, Prometheus server will create/use a Persistent Volume Claim ## If false, use emptyDir ## enabled: true ## Prometheus server data Persistent Volume access modes ## Must match those of existing PV or dynamic provisioner ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ ## accessModes: - ReadWriteOnce ## Prometheus server data Persistent Volume annotations ## annotations: {} ## Prometheus server data Persistent Volume existing claim name ## Requires server.persistentVolume.enabled: true ## If defined, PVC must be created manually before volume will be bound existingClaim: "" ## Prometheus server data Persistent Volume mount root path ## mountPath: /data ## Prometheus server data Persistent Volume size ## size: 8Gi ## Prometheus server data Persistent Volume Storage Class ## If defined, storageClassName: <storageClass> ## If set to "-", storageClassName: "", which disables dynamic provisioning ## If undefined (the default) or set to 
null, no storageClassName spec is ## set, choosing the default provisioner. (gp2 on AWS, standard on ## GKE, AWS & OpenStack) ## # storageClass: "-" ## Subdirectory of Prometheus server data Persistent Volume to mount ## Useful if the volume's root directory is not empty ## subPath: "" ## Annotations to be added to Prometheus server pods ## podAnnotations: {} # iam.amazonaws.com/role: prometheus replicaCount: 1 ## Prometheus server resource requests and limits ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: # cpu: 500m # memory: 512Mi # requests: # cpu: 500m # memory: 512Mi service: annotations: {} labels: {} clusterIP: "" ## List of IP addresses at which the Prometheus server service is available ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips ## externalIPs: [] loadBalancerIP: "" loadBalancerSourceRanges: [] servicePort: 80 type: ClusterIP ## Prometheus server pod termination grace period ## terminationGracePeriodSeconds: 300 ## Prometheus data retention period (i.e 360h) ## retention: "" pushgateway: ## If false, pushgateway will not be installed ## enabled: true ## pushgateway container name ## name: pushgateway ## pushgateway container image ## image: repository: prom/pushgateway tag: v0.4.0 pullPolicy: IfNotPresent ## Additional pushgateway container arguments ## extraArgs: {} ingress: ## If true, pushgateway Ingress will be created ## enabled: false ## pushgateway Ingress annotations ## annotations: # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: 'true' ## pushgateway Ingress hostnames ## Must be provided if Ingress is enabled ## hosts: [] # - pushgateway.domain.com ## pushgateway Ingress TLS configuration ## Secrets must be manually created in the namespace ## tls: [] # - secretName: prometheus-alerts-tls # hosts: # - pushgateway.domain.com ## Node labels for pushgateway pod assignment ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: {} ## Annotations to be added to pushgateway pods ## podAnnotations: {} replicaCount: 1 ## pushgateway resource requests and limits ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: # cpu: 10m # memory: 32Mi # requests: # cpu: 10m # memory: 32Mi service: annotations: prometheus.io/probe: pushgateway labels: {} clusterIP: "" ## List of IP addresses at which the pushgateway service is available ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips ## externalIPs: [] loadBalancerIP: "" loadBalancerSourceRanges: [] servicePort: 9091 type: ClusterIP ## alertmanager ConfigMap entries ## alertmanagerFiles: alertmanager.yml: |- global: # slack_api_url: '' receivers: - name: default-receiver # slack_configs: # - channel: '@you' # send_resolved: true route: group_wait: 10s group_interval: 5m receiver: default-receiver repeat_interval: 3h ## Prometheus server ConfigMap entries ## serverFiles: alerts: "" rules: "" prometheus.yml: |- rule_files: - /etc/config/rules - /etc/config/alerts scrape_configs: - job_name: 'demo-app' scrape_interval: 5s metrics_path: '/prometheus' static_configs: - targets: - github-analytics.sc-pipelines-prod.svc.cluster.local:8080 - job_name: prometheus static_configs: - targets: - localhost:9090 # A scrape configuration for running Prometheus on a Kubernetes cluster. # This uses separate scrape configs for cluster components (i.e. API server, node) # and services to allow each to use different authentication configs. 
# # Kubernetes labels will be added as Prometheus labels on metrics via the # `labelmap` relabeling action. # Scrape config for API servers. # # Kubernetes exposes API servers as endpoints to the default/kubernetes # service so this uses `endpoints` role and uses relabelling to only keep # the endpoints associated with the default/kubernetes service using the # default named port `https`. This works for single API server deployments as # well as HA API server deployments. - job_name: 'kubernetes-apiservers' kubernetes_sd_configs: - role: endpoints # Default to scraping over https. If required, just disable this or change to # `http`. scheme: https # This TLS & bearer token file config is used to connect to the actual scrape # endpoints for cluster components. This is separate to discovery auth # configuration because discovery & scraping are two separate concerns in # Prometheus. The discovery auth config is automatic if Prometheus runs inside # the cluster. Otherwise, more config options have to be provided within the # <kubernetes_sd_config>. tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt # If your node certificates are self-signed or use a different CA to the # master CA, then disable certificate verification below. Note that # certificate verification is an integral part of a secure infrastructure # so this should only be disabled in a controlled environment. You can # disable certificate verification by uncommenting the line below. # insecure_skip_verify: true bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token # Keep only the default/kubernetes service endpoints for the https port. This # will add targets for each API server which Kubernetes adds an endpoint to # the default/kubernetes service. relabel_configs: - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] action: keep regex: default;kubernetes;https - job_name: 'kubernetes-nodes' # Default to scraping over https. If required, just disable this or change to # `http`. scheme: https # This TLS & bearer token file config is used to connect to the actual scrape # endpoints for cluster components. This is separate to discovery auth # configuration because discovery & scraping are two separate concerns in # Prometheus. The discovery auth config is automatic if Prometheus runs inside # the cluster. Otherwise, more config options have to be provided within the # <kubernetes_sd_config>. tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt # If your node certificates are self-signed or use a different CA to the # master CA, then disable certificate verification below. Note that # certificate verification is an integral part of a secure infrastructure # so this should only be disabled in a controlled environment. You can # disable certificate verification by uncommenting the line below. # insecure_skip_verify: true bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token kubernetes_sd_configs: - role: node relabel_configs: - action: labelmap regex: __meta_kubernetes_node_label_(.+) - target_label: __address__ replacement: kubernetes.default.svc:443 - source_labels: [__meta_kubernetes_node_name] regex: (.+) target_label: __metrics_path__ replacement: /api/v1/nodes/${1}/proxy/metrics # Scrape config for service endpoints. 
# # The relabeling allows the actual service scrape endpoint to be configured # via the following annotations: # # * `prometheus.io/scrape`: Only scrape services that have a value of `true` # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need # to set this to `https` & most likely set the `tls_config` of the scrape config. # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. # * `prometheus.io/port`: If the metrics are exposed on a different port to the # service then set this appropriately. - job_name: 'kubernetes-service-endpoints' kubernetes_sd_configs: - role: endpoints relabel_configs: - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] action: replace target_label: __scheme__ regex: (https?) - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] action: replace target_label: __address__ regex: (.+)(?::\d+);(\d+) replacement: $1:$2 - action: labelmap regex: __meta_kubernetes_service_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_service_name] action: replace target_label: kubernetes_name - job_name: 'prometheus-pushgateway' honor_labels: true kubernetes_sd_configs: - role: service relabel_configs: - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe] action: keep regex: pushgateway # Example scrape config for probing services via the Blackbox Exporter. # # The relabeling allows the actual service scrape endpoint to be configured # via the following annotations: # # * `prometheus.io/probe`: Only probe services that have a value of `true` - job_name: 'kubernetes-services' metrics_path: /probe params: module: [http_2xx] kubernetes_sd_configs: - role: service relabel_configs: - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe] action: keep regex: true - source_labels: [__address__] target_label: __param_target - target_label: __address__ replacement: blackbox - source_labels: [__param_target] target_label: instance - action: labelmap regex: __meta_kubernetes_service_label_(.+) - source_labels: [__meta_kubernetes_namespace] target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_service_name] target_label: kubernetes_name # Example scrape config for pods # # The relabeling allows the actual pod scrape endpoint to be configured via the # following annotations: # # * `prometheus.io/scrape`: Only scrape pods that have a value of `true` # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`. 
- job_name: 'kubernetes-pods' kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace regex: (.+):(?:\d+);(\d+) replacement: ${1}:${2} target_label: __address__ - action: labelmap regex: __meta_kubernetes_pod_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: kubernetes_pod_name networkPolicy: ## Enable creation of NetworkPolicy resources. ## enabled: false
Next, let’s create the Prometheus installation with the predefined values.
$ helm install --name sc-pipelines-prometheus stable/prometheus -f values.yaml
Then you should see the following output
NOTES: The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster: sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local Get the Prometheus server URL by running these commands in the same shell: export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}") kubectl --namespace default port-forward $POD_NAME 9090 The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster: sc-pipelines-prometheus-prometheus-alertmanager.default.svc.cluster.local Get the Alertmanager URL by running these commands in the same shell: export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}") kubectl --namespace default port-forward $POD_NAME 9093 The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster: sc-pipelines-prometheus-prometheus-pushgateway.default.svc.cluster.local Get the PushGateway URL by running these commands in the same shell: export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}") kubectl --namespace default port-forward $POD_NAME 9093 For more information on running Prometheus, visit: https://prometheus.io/
Use Helm to install Grafana
$ helm install --name sc-pipelines-grafana stable/grafana
NOTES: 1. Get your 'admin' user password by running: kubectl get secret --namespace default sc-pipelines-grafana-grafana -o jsonpath="{.data.grafana-admin-password}" | base64 --decode ; echo 2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster: sc-pipelines-grafana-grafana.default.svc.cluster.local Get the Grafana URL to visit by running these commands in the same shell: export POD_NAME=$(kubectl get pods --namespace default -l "app=sc-pipelines-grafana-grafana,component=grafana" -o jsonpath="{.items[0].metadata.name}") kubectl --namespace default port-forward $POD_NAME 3000 3. Login with the password from step 1 and the username: admin
Perform the aforementioned steps and add a Grafana datasource of type Prometheus with the URL http://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local
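If you prefer scripting over clicking through the UI, the Grafana HTTP API can create the datasource as well. A sketch, assuming you have port-forwarded Grafana to localhost:3000 and fetched the admin password as shown in the Helm output above:
$ GRAFANA_PASSWORD=$(kubectl get secret --namespace default sc-pipelines-grafana-grafana -o jsonpath="{.data.grafana-admin-password}" | base64 --decode)
$ curl -X POST "http://admin:${GRAFANA_PASSWORD}@localhost:3000/api/datasources" -H "Content-Type: application/json" -d '{"name":"prometheus","type":"prometheus","access":"proxy","url":"http://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local"}'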
You can pick the dashboard via the Grafana ID (2471). This is the default dashboard for the Spring Cloud Pipelines demo apps.
If you have both apps (github-webhook
and github-analytics
) running on production
we can now trigger the messages. Download the JSON with a sample request
from the github-webhook repository.
Next, pick one of the github-webhook
pods and forward its port
locally to a port 9876
like this:
$ kubectl port-forward --namespace=sc-pipelines-prod $( kubectl get pods --namespace=sc-pipelines-prod | grep github-webhook | head -1 | awk '{print $1}' ) 9876:8080
Next, send a couple of requests (more than 4).
$ curl -X POST http://localhost:9876/ -d @path/to/issue-created.json \ --header "Content-Type: application/json"
Then if you check out Grafana you’ll see that you went above the threshold.
Click here to check out the slides by Cora Iberkleid where she migrates a setup of applications to be compliant with Spring Cloud Pipelines.
This tutorial covers refactoring applications to comply with, and take advantage of, Spring Cloud Pipelines.
We will use a simple 3-tier application as an example:
At the end of this tutorial, it will be possible to instantly create a Concourse pipeline for each app and run successfully through a full lifecycle, from source code commit to production deployment, following the lifecycle stages for testing and deployment recommended by Spring Cloud Pipelines. The app code bases will be improved with organized test coverage, a contract-based API, and a versioned database schema, enabling Spring Cloud Pipelines to carry out stubbed testing and to ensure backward compatibility for API and database schema changes.
The sample application is implemented using Spring Boot apps for the UI and service tiers, and MySQL for the database.
The apps are built using Maven and pushed manually to Cloud Foundry. They leverage the three Pivotal Spring Cloud Services: Config Server, Service Discovery, and Circuit Breaker Dashboard. Rabbit is used to propagate Config Server refresh triggers.
The source code for the two Spring Boot apps is stored on GitHub, as is the backing repo for Config Server.
Through this tutorial, we will be adding Concourse and JFrog Bintray to manage the application lifecycle.
We will also be refactoring the application to comply with Spring Cloud Pipelines requirements and recommendations, including adding/organizing tests and introducing database versioning using Flyway and API contracts using Spring Cloud Contract.
GitHub - sample app source code and config repositories, a sample stubrunner app repository, and the Spring Cloud Pipelines code base
The migration steps are broken down into three stages:
Scaffolding
Tests
Contracts
If you want to simply review the migration steps explained below, you can look at the various branches in the greeting-ui and fortune-service repositories - there is a branch representing the end-state of each stage:
If you want to use this tutorial as a hands-on lab, fork each of the following repositories:
Then, create a new directory on your local machine. You may name it anything you like; we will refer to it as $SCP_HOME
throughout this tutorial.
In $SCP_HOME
, clone your forks of greeting-ui
and fortune-service
, as well as the following two repositories:
Finally, create a directory called $SCP_HOME/credentials
. Leave it empty for now.
In this stage, we make minimal changes to satisfy basic Spring Cloud Pipelines requirements so that the apps can run through the entire pipeline without error. We make "scaffolding" changes only - no code changes.
The steps in this stage must be completed for both greeting-ui
and fortune-service
.
git branch version git checkout -b sc-pipelines
Branch version is required to exist, though it can be created as an empty branch. It is used by Spring Cloud Pipelines to generate a version number for each new pipeline execution.
Branch sc-pipelines is optional and can be named anything you wish. The intention is for you to use it as a working branch for the changes suggested in this tutorial (hence we create it and also check it out).
mvn -N io.takari:maven:wrapper
This command adds 4 files to the project:
. ├── mvnw ├── mvnw.cmd └── .mvn └── wrapper ├── maven-wrapper.jar └── maven-wrapper.properties
Make sure all four files are tracked by Git. For example, you can add the following to the .gitignore
file:
#Exceptions !/mvnw !/mvnw.cmd !/.mvn/wrapper/maven-wrapper.jar !/.mvn/wrapper/maven-wrapper.properties
We are using Bintray as the maven repository. Bintray requires that a package exist before any app artifacts can be uploaded.
Log into the Bintray UI and create the packages as follows. You can use the Import from GitHub
option to create these:
Edit the app pom.xml
files as follows. Make sure the Bintray URLs match the URLs of the corresponding packages created in the previous step. The values you use will be different from the example shown below.
<properties> ... <distribution.management.release.id>bintray</distribution.management.release.id> <distribution.management.release.url>https://api.bintray.com/maven/ciberkleid/maven-repo/fortune-service</distribution.management.release.url> </properties> ... <distributionManagement> <repository> <id>${distribution.management.release.id}</id> <url>${distribution.management.release.url}</url> </repository> </distributionManagement>
Though not required by Spring Cloud Pipelines, it makes sense to also configure your local maven settings with the credentials to your Bintray maven repo. To do so, edit your maven settings file, usually ~/.m2/settings.xml
. If the file does not exist, create it.
Note that the id
must match the id specified in the previous step. Also, make sure to use your username and API token (not account password) instead of the sample values shown below.
<?xml version="1.0" encoding="UTF-8"?> <settings> <servers> <server> <id>bintray</id> <username>ciberkleid</username> <password>my-super-secret-api-token</password> </server> </servers> </settings>
Push the above changes to GitHub. You should be pushing the following to each of the two app repos:
In $SCP_HOME/credentials
, make two copies of the file $SCP_HOME/spring-cloud-pipelines/concourse/credentials-sample-cf.yml
. Rename them as credentials-fortune-service.yml
and credentials-greeting-ui.yml
.
![]() | Caution |
---|---|
These files will contain credentials to your GitHub repo, your Bintray repo, and your Cloud Foundry foundation. Hence, we opt to put them in a separate directory. You may choose to store these files in a private git repo, but do not push them to a public repo. |
Edit the git properties of each credentials file. Make sure to replace the sample values shown below as appropriate. For tools-branch
, you may opt to use a fixed release (use v1.0.0.M8 or later for Cloud Foundry). Leave other values as they are, we will update those in later steps.
app-url: [email protected]:ciberkleid/fortune-service.git app-branch: sc-pipelines tools-scripts-url: https://github.com/spring-cloud/spring-cloud-pipelines.git tools-branch: master build-options: "" github-private-key: | -----BEGIN RSA PRIVATE KEY----- MIIJKQIBAAKCAgEAvwkL97vBllOSE39Wa5ppczT1cr5Blmkhadfoa1Va2/IBVyvk NJ9PqoTI+BahF2EgzweyiDSvKsstlTsG7QgiM9So8Voi2PlDOrXL6uOfCuAS/G8X ... -----END RSA PRIVATE KEY----- git-email: [email protected] git-name: Cora Iberkleid
Edit the maven repo properties of each credentials file. Make sure to replace the sample values shown below as appropriate. Bintray requires separate URLs for uploads and downloads. If you are using a different artifact repository, such as Artifactory or Nexus, and the repository URL is the same for uploads and downloads, then you do not need to set repo-with-binaries-for-upload
.
m2-settings-repo-id: bintray m2-settings-repo-username: ciberkleid m2-settings-repo-password: my-super-secret-api-token repo-with-binaries: https://ciberkleid:[email protected]/ciberkleid/maven-repo repo-with-binaries-for-upload: https://api.bintray.com/maven/ciberkleid/maven-repo/fortune-service
At this point, all of the build jobs, which run on Concourse workers, will succeed.
To verify this, log in to your Concourse target and set the Concourse pipelines. Update the target name in the example below as appropriate.
# Set greeting-ui pipeline fly -t myTarget set-pipeline -p greeting-ui -c "${SCP_HOME}/spring-cloud-pipelines/concourse/pipeline.yml" -l "${SCP_HOME}/credentials/credentials-greeting-ui.yml" -n # Set fortune-service pipeline fly -t myTarget set-pipeline -p fortune-service -c "${SCP_HOME}/spring-cloud-pipelines/concourse/pipeline.yml" -l "${SCP_HOME}/credentials/credentials-fortune-service.yml" -n
Log into the Concourse UI and unpause the pipelines. Start each. You should see that the build jobs all succeed.
In addition, you will see a new dev/<version_number> tag in each GitHub repo, as well as the app jars uploaded into Bintray.
The test, stage, and prod jobs will fail because we have not yet added scaffolding for deployment to Cloud Foundry. We will do that next.
If you are deploying to Cloud Foundry, you may already be routinely including manifest files with your apps. Our sample apps did not have manifest files, so we add them now.
In the greeting-ui
repo, create a manifest.yml
file as follows:
--- applications: - name: greeting-ui timeout: 120 services: - config-server - cloud-bus - service-registry - circuit-breaker-dashboard env: JAVA_OPTS: -Djava.security.egd=file:///dev/urandom TRUST_CERTS: api.run.pivotal.io
In the fortune-service
repo, create a manifest.yml
file as follows:
--- applications: - name: fortune-service timeout: 120 services: - fortune-db - config-server - cloud-bus - service-registry - circuit-breaker-dashboard env: JAVA_OPTS: -Djava.security.egd=file:///dev/urandom TRUST_CERTS: api.run.pivotal.io
The TRUST_CERTS
variable is used by the Pivotal Spring Cloud Services (Config Server, Service Registry, and Circuit Breaker Dashboard), which we are using in this example. The value specified above assumes deployment to Pivotal Web Services. Update it accordingly if you are deploying to a different Cloud Foundry foundation, or you can leave it out altogether if you are replacing the Pivotal Spring Cloud Services with alternative implementations (e.g. deploying the services as apps and exposing them as user-provided services).
You may add additional values to the manifest files if you wish, for example if additional values are useful for any manual deployment you may still want to do, or desirable in your Spring Cloud Pipelines deployment. For example, an alternative manifest.yml for fortune-service
could be as follows:
--- applications: - name: fortune-service timeout: 120 instances: 3 memory: 1024M buildpack: https://github.com/cloudfoundry/java-buildpack.git random-route: true path: ./target/fortune-service-0.0.1-SNAPSHOT.jar services: - fortune-db - config-server - cloud-bus - service-registry - circuit-breaker-dashboard env: SPRING_PROFILES_ACTIVE: someProfile JAVA_OPTS: -Djava.security.egd=file:///dev/urandom TRUST_CERTS: api.run.pivotal.io
Note that random-route
and path
are ignored by Spring Cloud Pipelines. instances
is honored in stage and prod, but overridden with a value of 1 for test.
The Cloud Foundry manifest created in the previous step includes the logical names of the services to which the apps should be bound, but it does not describe how the services can be provisioned. Hence, we add a second manifest file so that Spring Cloud Pipelines can provision the services.
Add a file called sc-pipelines.yml
to each app, and include the same list of services as in the corresponding manifest.yml
. Add the necessary details such that Spring Cloud Pipelines can construct a cf create-service
command.
![]() | Note |
---|---|
The type: broker parameter shown below instructs Spring Cloud Pipelines to provision a service using cf create-service. Other service types are also supported: cups, syslog, route, app, and stubrunner. |
More specifically, for greeting-ui
, create an sc-pipelines.yml
file with the following content:
test:
  services:
  - name: config-server
    type: broker
    broker: p-config-server
    plan: standard
    params:
      git:
        uri: https://github.com/ciberkleid/app-config
    useExisting: true
  - name: cloud-bus
    type: broker
    broker: cloudamqp
    plan: lemur
    useExisting: true
  - name: service-registry
    type: broker
    broker: p-service-registry
    plan: standard
    useExisting: true
  - name: circuit-breaker-dashboard
    type: broker
    broker: p-circuit-breaker-dashboard
    plan: standard
    useExisting: true
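For illustration only, the first entry above corresponds roughly to the cf CLI call sketched below. This is not a command you need to run yourself; the pipeline scripts construct and execute it, and the broker, plan, and params values simply mirror the YAML above.

# Roughly what the pipeline constructs from the config-server entry above
cf create-service p-config-server standard config-server \
  -c '{"git":{"uri":"https://github.com/ciberkleid/app-config"}}'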
The sc-pipelines.yml
file for fortune-service
is similar, with the addition of the fortune-db
service:
test:
  # list of required services
  services:
  - name: fortune-db
    type: broker
    broker: cleardb
    plan: spark
    useExisting: true
  - name: config-server
    type: broker
    broker: p-config-server
    plan: standard
    params:
      git:
        uri: https://github.com/ciberkleid/app-config
    useExisting: true
  - name: cloud-bus
    type: broker
    broker: cloudamqp
    plan: lemur
    useExisting: true
  - name: service-registry
    type: broker
    broker: p-service-registry
    plan: standard
    useExisting: true
  - name: circuit-breaker-dashboard
    type: broker
    broker: p-circuit-breaker-dashboard
    plan: standard
    useExisting: true
The values above assume deployment to Pivotal Web Services. If you are deploying to a different Cloud Foundry foundation, please update the values accordingly. Also, make sure to replace the config-server
uri with the address of your fork of the app-config repo.
![]() | Tip |
---|---|
Notice the |
Push the above changes to GitHub. You should be pushing the new manifest.yml and sc-pipelines.yml files to each of the two app repos.
Spring Cloud Pipelines requires that the Cloud Foundry test, stage, and prod spaces exist before a pipeline is run. If you wish, you can use different foundations, orgs, and users for each. For simplicity, in this example, we use a single foundation (PWS), a single org, and a single user.
You can name the org(s) and spaces anything you like. Each app requires its own test space. The stage and prod spaces are shared.
For this example, create the following spaces:
cf create-space scp-test-greeting-ui
cf create-space scp-test-fortune-service
cf create-space scp-stage
cf create-space scp-prod
Spring Cloud Pipelines will dynamically create the services in the test spaces as per the sc-pipelines.yml
file we created previously. Optionally, a second section can be added to the sc-pipelines.yml
file for the stage environment, and these will be created dynamically as well. Prod services, however, must always be created manually.
For this example, we will create the stage and prod services manually.
Create the services listed in the app manifest files in both scp-stage
and scp-prod
.
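As a sketch, the shared services could be created manually as shown below. The broker names and plans are taken from the sc-pipelines.yml files above and assume the PWS marketplace; adjust them for your own foundation.

# Target the shared stage space and create the services the apps bind to
cf target -s scp-stage
cf create-service p-config-server standard config-server -c '{"git":{"uri":"https://github.com/ciberkleid/app-config"}}'
cf create-service cloudamqp lemur cloud-bus
cf create-service p-service-registry standard service-registry
cf create-service p-circuit-breaker-dashboard standard circuit-breaker-dashboard
cf create-service cleardb spark fortune-db

# Repeat the same create-service commands after targeting the prod space
cf target -s scp-prod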
Update the greeting-ui
and fortune-service
credentials files with Cloud Foundry information. Replace values in the example below as appropriate for your Cloud Foundry environment.
Notice that the test space name specified is a prefix, unlike the stage and prod space names, which are literals. Spring Cloud Pipelines will append the app name to the test space name, thereby matching the test space names we created manually. The stage and prod space names are not prefixes and will not be altered by Spring Cloud Pipelines.
Note also the paas-hostname-uuid
. The value will be included in each route created. This value is optional, but it is useful in shared/multi-tenant environments such as PWS, as it helps ensure routes are unique. Change it to a unique uuid of your choosing.
pipeline-descriptor: sc-pipelines.yml
paas-type: cf
paas-hostname-uuid: cyi

# test values
paas-test-api-url: https://api.run.pivotal.io
paas-test-username: [email protected]
paas-test-password: secret
paas-test-org: S1Pdemo12
paas-test-space-prefix: scp-test

# stage values
paas-stage-api-url: https://api.run.pivotal.io
paas-stage-username: [email protected]
paas-stage-password: my-super-secret-password
paas-stage-org: S1Pdemo12
paas-stage-space: scp-stage

# prod values
paas-prod-api-url: https://api.run.pivotal.io
paas-prod-username: [email protected]
paas-prod-password: my-super-secret-password
paas-prod-org: S1Pdemo12
paas-prod-space: scp-prod
Set the Concourse pipelines again, as we did previously, to update them with the values added to the credentials files. The test, stage, and prod jobs will all now succeed.
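For convenience, these are the same fly commands used earlier (same target name and file paths):

fly -t myTarget set-pipeline -p greeting-ui -c "${SCP_HOME}/spring-cloud-pipelines/concourse/pipeline.yml" -l "${SCP_HOME}/credentials/credentials-greeting-ui.yml" -n
fly -t myTarget set-pipeline -p fortune-service -c "${SCP_HOME}/spring-cloud-pipelines/concourse/pipeline.yml" -l "${SCP_HOME}/credentials/credentials-fortune-service.yml" -n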
On Cloud Foundry, you will now see the apps deployed in the test, stage, and prod spaces. The image below shows the deployment of fortune-service
to its dedicated test space. Notice that the 5 services declared in its manifest files (sc-pipelines.yml
for provisioning, and manifest.yml
for binding) have also been automatically provisioned. The image also shows the deployment of the same app to the shared prod space. Notice that the instance of the previous version has been renamed as "venerable" and stopped. If a rollback were deemed necessary, the prod-rollback
job in the pipeline could be triggered to remove the currently running version, remove the prod/<version_number>
tag from GitHub, and re-start the former ("venerable") version.
What have we accomplished?
For greeting-ui and fortune-service, from source code commit to production deploy, we have made it possible for the app dev teams to instantly and easily create pipelines for each app using a common, standardized template.

We can count on the pipelines to:
tag each release in GitHub with dev/<version_number> and prod/<version_number>
support rollback through the prod-rollback job, if necessary

These accomplishments are extremely valuable, but in order to derive confidence and reliability from the pipelines, we need to incorporate testing. We do this in Stage 2 of the app migration.
In this stage, we enable Spring Cloud Pipelines to execute tests so that we can increase confidence in the code being deployed. We do so by adding test profiles to the pom.xml files, and then organizing and/or adding tests in a way that corresponds to the profiles. By doing so, we are establishing standards around testing across development teams in the enterprise.
We will also enable database schema versioning in this stage, thereby providing the foundation for rollback testing during schema changes.
For both greeting-ui
and fortune-service
, add a profiles
section to the pom.xml
file, as shown below. Note that we are adding four profiles:
default
apicompatibility
smoke
e2e
<profiles> <profile> <id>default</id> <activation> <activeByDefault>true</activeByDefault> </activation> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <includes> <include>**/*Tests.java</include> <include>**/*Test.java</include> </includes> <excludes> <exclude>**/smoke/**</exclude> <exclude>**/e2e/**</exclude> </excludes> </configuration> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </profile> <profile> <id>apicompatibility</id> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <includes> <include>**/contracttests/**/*Tests.java</include> <include>**/contracttests/**/*Test.java</include> </includes> </configuration> </plugin> </plugins> </build> </profile> <profile> <id>smoke</id> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <includes> <include>smoke/**/*Tests.java</include> <include>smoke/**/*Test.java</include> </includes> </configuration> </plugin> </plugins> </build> </profile> <profile> <id>e2e</id> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <includes> <include>e2e/**/*Tests.java</include> <include>e2e/**/*Test.java</include> </includes> </configuration> </plugin> </plugins> </build> </profile> </profiles>
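If you want to try the profiles locally before pushing, the following is a rough sketch. The application.url property is only needed by the smoke and e2e tests; the value shown here is a placeholder for the route of a deployed instance, and it assumes Surefire forwards the -D system property to the test JVM (the usual default).

# default profile (unit/integration tests)
./mvnw clean test

# smoke profile against a deployed instance (placeholder route)
./mvnw clean test -Psmoke -Dapplication.url=fortune-service.example.com

# e2e profile against a deployed instance (placeholder route)
./mvnw clean test -Pe2e -Dapplication.url=fortune-service.example.com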
Next, we ensure that we have a matching test package structure in our apps:
Note that we are creating matching packages for the default, smoke, and e2e profiles only. We will address the package for the apicompatibility profile in Stage 3.
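A minimal sketch of creating that structure from the project root; the io/pivotal package already exists for the default tests, while smoke and e2e are new top-level test packages matching the profile filters above and the package declarations in the test classes that follow:

mkdir -p src/test/java/smoke
mkdir -p src/test/java/e2e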
When working with your own apps, if you have existing tests, you would move the files into one of these packages now, and rename them so that they are included by the filters declared in the profiles (i.e. the file names end in Test.java
or Tests.java
)
In the case of our sample apps, there are no tests, so we add some now as follows.
fortune-service default tests
Add your unit and integration tests so that they match the default profile as defined in the fortune-service
pom.xml
file. These will be executed on Concourse against the fortune-service
application running on the Concourse worker in the build-and-upload
job.
As an example, we will add two tests, one that loads the context, and another that verifies the number of rows expected in the database:
package io.pivotal; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.test.context.junit4.SpringRunner; import org.springframework.jdbc.core.JdbcTemplate; import static org.assertj.core.api.Assertions.assertThat; import static org.junit.Assert.*; @RunWith(SpringRunner.class) @SpringBootTest(classes = FortuneServiceApplication.class) public class FortuneServiceApplicationTests { @Test public void contextLoads() throws Exception { } @Autowired private JdbcTemplate template; @Test public void testDefaultSettings() throws Exception { assertThat(this.template.queryForObject("SELECT COUNT(*) from FORTUNE", Integer.class)).isEqualTo(7); } }
fortune-service smoke tests
Add your smoke tests so that they match the smoke profile as defined in the fortune-service
pom.xml
file. These will be executed on Concourse against the fortune-service
application deployed in the Cloud Foundry scp-test-fortune-service
space. Two versions of these tests are executed against the app:
in the test-smoke job
in the test-rollback-smoke job

In the test environment, we choose to verify that fortune-service
is retrieving a fortune from fortune-db
, and not returning its Hystrix fallback response:
package smoke; import org.assertj.core.api.BDDAssertions; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.http.ResponseEntity; import org.springframework.test.context.junit4.SpringRunner; import org.springframework.web.client.RestTemplate; @RunWith(SpringRunner.class) @SpringBootTest(classes = SmokeTests.class, webEnvironment = SpringBootTest.WebEnvironment.NONE) @EnableAutoConfiguration public class SmokeTests { @Value("${application.url}") String applicationUrl; RestTemplate restTemplate = new RestTemplate(); @Test public void should_return_a_fortune() { ResponseEntity<String> response = this.restTemplate .getForEntity("http://" + this.applicationUrl + "/", String.class); BDDAssertions.then(response.getStatusCodeValue()).isEqualTo(200); // Filter out the known Hystrix fallback response BDDAssertions.then(response.getBody()).doesNotContain("The fortuneteller will be back soon."); } }
fortune-service e2e tests
Add your e2e tests so that they match the e2e profile as defined in the fortune-service
pom.xml
file. These will be executed on Concourse against the fortune-service
application deployed in the Cloud Foundry scp-stage
space. This space is shared, so we assume greeting-ui
is also present.
In the e2e environment, we choose to use a string replacement to obtain the URL for greeting-ui
. We also choose to verify that we are hitting fortune-db
and not receiving Hystrix fallback responses from either application:
package e2e; import org.assertj.core.api.BDDAssertions; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.http.ResponseEntity; import org.springframework.test.context.junit4.SpringRunner; import org.springframework.web.client.RestTemplate; @RunWith(SpringRunner.class) @SpringBootTest(classes = E2eTests.class, webEnvironment = SpringBootTest.WebEnvironment.NONE) @EnableAutoConfiguration public class E2eTests { // The app is running in CF but the tests are executed from Concourse worker, // so the test will deduce the url to greeting-ui: it will assume the same host // as fortune-service, and simply replace "fortune-service" with "greeting-ui" in the url @Value("${application.url}") String applicationUrl; RestTemplate restTemplate = new RestTemplate(); @Test public void should_return_a_fortune() { ResponseEntity<String> response = this.restTemplate .getForEntity("http://" + this.applicationUrl.replace("fortune-service", "greeting-ui") + "/", String.class); BDDAssertions.then(response.getStatusCodeValue()).isEqualTo(200); // Filter out the known Hystrix fallback responses from both fortune and greeting BDDAssertions.then(response.getBody()).doesNotContain("This fortune is no good. Try another.").doesNotContain("The fortuneteller will be back soon."); } }
greeting-ui default tests
Add your unit and integration tests so that they match the default profile as defined in the greeting-ui
pom.xml
file. These will be executed on Concourse against the greeting-ui
application running on the Concourse worker in the build-and-upload
job.
As an example, we will add one test that loads the context:
package io.pivotal; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.test.context.junit4.SpringRunner; @RunWith(SpringRunner.class) @SpringBootTest(classes = GreetingUIApplication.class) public class GreetingUIApplicationTests { @Test public void contextLoads() throws Exception { } }
greeting-ui smoke tests
Add your smoke tests so that they match the smoke profile as defined in the greeting-ui
pom.xml
file. These will be executed on Concourse against the greeting-ui
application deployed in the Cloud Foundry scp-test-greeting-ui
space. Two versions of these tests are executed against the app:
in the test-smoke job
in the test-rollback-smoke job

Since fortune-service
is not deployed to the scp-test-greeting-ui
space, we expect to receive the Hystrix fallback response defined in greeting-ui
. Hence, our smoke test validates that condition:
package smoke; import org.assertj.core.api.BDDAssertions; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.http.ResponseEntity; import org.springframework.test.context.junit4.SpringRunner; import org.springframework.web.client.RestTemplate; @RunWith(SpringRunner.class) @SpringBootTest(classes = SmokeTests.class, webEnvironment = SpringBootTest.WebEnvironment.NONE) @EnableAutoConfiguration public class SmokeTests { @Value("${application.url}") String applicationUrl; RestTemplate restTemplate = new RestTemplate(); @Test public void should_return_a_fallback_fortune() { ResponseEntity<String> response = this.restTemplate .getForEntity("http://" + this.applicationUrl + "/", String.class); BDDAssertions.then(response.getStatusCodeValue()).isEqualTo(200); // Expect the hystrix fallback response BDDAssertions.then(response.getBody()).contains("This fortune is no good. Try another."); } }
greeting-ui e2e tests
Add your e2e tests so that they match the e2e profile as defined in the greeting-ui
pom.xml
file. These will be executed on Concourse against the greeting-ui
application deployed in the Cloud Foundry scp-stage
space. This space is shared, so we assume fortune-service
is also present.
In the e2e environment, we choose to verify that we are hitting fortune-service
and not receiving the Hystrix fallback response from greeting-ui
:
package e2e; import org.assertj.core.api.BDDAssertions; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.http.ResponseEntity; import org.springframework.test.context.junit4.SpringRunner; import org.springframework.web.client.RestTemplate; @RunWith(SpringRunner.class) @SpringBootTest(classes = E2eTests.class, webEnvironment = SpringBootTest.WebEnvironment.NONE) @EnableAutoConfiguration public class E2eTests { @Value("${application.url}") String applicationUrl; RestTemplate restTemplate = new RestTemplate(); @Test public void should_return_a_fortune() { ResponseEntity<String> response = this.restTemplate .getForEntity("http://" + this.applicationUrl + "/", String.class); BDDAssertions.then(response.getStatusCodeValue()).isEqualTo(200); // Filter out the known Hystrix fallback response BDDAssertions.then(response.getBody()).doesNotContain("This fortune is no good. Try another."); } }
At this point we will also incorporate Flyway, an OSS database migration tool, to track database schema versions and handle schema changes and data loading.
This change only needs to be made to fortune-service
, since fortune-service
owns the interaction with fortune-db
.
Add Flyway dependency
We first add the Flyway dependency to the fortune-service
pom.xml
. We need not add a version as Spring Boot will take care of that for us.
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
</dependency>
Create Flyway migration
Next, we create a migration directory and our initial migration file following Flyway’s file naming convention:
Note the filename specifies the version (V1
), followed by two underscore characters.
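For example, assuming the standard Flyway location on the classpath (the path used in the next paragraph), the directory and file can be created like this:

mkdir -p src/main/resources/db/migration
touch src/main/resources/db/migration/V1__init.sql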
We place our CREATE TABLE and INSERT statements in our src/main/resources/db/migration/V1__init.sql
file:
CREATE TABLE fortune (
  id BIGINT PRIMARY KEY AUTO_INCREMENT,
  text varchar(255) not null
);

INSERT INTO fortune (text) VALUES ('Do what works.');
INSERT INTO fortune (text) VALUES ('Do the right thing.');
INSERT INTO fortune (text) VALUES ('Always be kind.');
INSERT INTO fortune (text) VALUES ('You learn from your mistakes... You will learn a lot today.');
INSERT INTO fortune (text) VALUES ('You can always find happiness at work on Friday.');
INSERT INTO fortune (text) VALUES ('You will be hungry again in one hour.');
INSERT INTO fortune (text) VALUES ('Today will be an awesome day!');
Disable JPA DDL initialization
Now that we are relying on Flyway to create and populate the schema, we need to disable JPA-based database initialization. We can set ddl-auto
to validate
, which will validate the schema against the application entities and throw an error in case of a mismatch, but not actually generate the schema:
spring:
  jpa:
    hibernate:
      ddl-auto: validate
There are a few options for where to store the ddl-auto
configuration, both in terms of location (in the fortune-service
app or on the app-config
GitHub repo) and in terms of file name. For this example, update the application.yml
in the fortune-service
app for local testing. Additionally, save these values in a new file called application-flyway.yml
on your fork of app-config.
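As a sketch, the application-flyway.yml you add to your fork of app-config could contain just the ddl-auto setting shown above, for example:

# Write application-flyway.yml in the root of your app-config fork
cat > application-flyway.yml <<'EOF'
spring:
  jpa:
    hibernate:
      ddl-auto: validate
EOF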
By convention, fortune-service
will pick up the configurations in application-flyway.yml
if the string flyway
is in the list of active Spring profiles. Thus, we add flyway
to the environment variable SPRING_PROFILES_ACTIVE
via the fortune-service
manifest.yml
:
---
applications:
- name: fortune-service
  timeout: 120
  services:
  - fortune-db
  - config-server
  - cloud-bus
  - service-registry
  - circuit-breaker-dashboard
  env:
    SPRING_PROFILES_ACTIVE: flyway
    JAVA_OPTS: -Djava.security.egd=file:///dev/urandom
    TRUST_CERTS: api.run.pivotal.io
Remove non-Flyway data loading
We can now remove the old code that populated the database. In our sample app, this was found in class io.pivotal.FortuneServiceApplication
. The following shows the code we now remove:
@Bean CommandLineRunner loadDatabase(FortuneRepository fortuneRepo) { return args -> { // logger.debug("loading database.."); // fortuneRepo.save(new Fortune(1L, "Do what works.")); // fortuneRepo.save(new Fortune(2L, "Do the right thing.")); // fortuneRepo.save(new Fortune(3L, "Always be kind.")); // fortuneRepo.save(new Fortune(4L, "You learn from your mistakes... You will learn a lot today.")); // fortuneRepo.save(new Fortune(5L, "You can always find happiness at work on Friday.")); // fortuneRepo.save(new Fortune(6L, "You will be hungry again in one hour.")); // fortuneRepo.save(new Fortune(7L, "Today will be an awesome day!")); logger.debug("record count: {}", fortuneRepo.count()); fortuneRepo.findAll().forEach(x -> logger.debug(x.toString())); }; }
We also no longer need the Fortune entity constructors, so we can comment these out in class io.pivotal.fortune.Fortune
as shown below:
// public Fortune() { // } // // public Fortune(Long id, String text) { // super(); // this.id = id; // this.text = text; // }
Flyway integration summary
With that, we have completed the setup for Flyway and our database schema is now versioned. From this point onward, Spring Boot will call Flyway.migrate()
to perform the database migration. As long as we follow Flyway conventions for future schema changes, Flyway will take care of tracking the schema version and migrating the database for us.
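For example, a later schema change would simply be a new, higher-versioned migration file next to the first one. The column added here is purely hypothetical; Flyway applies any new migration automatically on the next startup.

cat > src/main/resources/db/migration/V2__add_author_column.sql <<'EOF'
ALTER TABLE fortune ADD COLUMN author varchar(255);
EOF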
From a rollback perspective, Spring Cloud Pipelines includes two jobs in the test
phase - test-rollback-deploy
and test-rollback-smoke
- wherein it validates that the latest prod jar works against the newly updated database. The purpose is to ensure that we can roll back the application in prod if a problem is discovered after the prod database schema has been updated, and avoid the burden of rolling back the database.
Read more about Spring Boot database initialization with Flyway for further information, including Flyway configuration options.
For greeting-ui
, you should be pushing the following new or modified files: the updated pom.xml (test profiles) and the new GreetingUIApplicationTests.java, SmokeTests.java, and E2eTests.java test classes.
For fortune-service
, you should be pushing the following new or modified files: the updated pom.xml (test profiles and the Flyway dependency), the new FortuneServiceApplicationTests.java, SmokeTests.java, and E2eTests.java test classes, the new V1__init.sql migration, the updated application.yml and manifest.yml, and the cleaned-up FortuneServiceApplication.java and Fortune.java.
For app-config
, you should be pushing the following new or modified file: the new application-flyway.yml.
Run through the pipelines again and view the output for the jobs that run the default, smoke, and e2e tests. You will see that the tests we added in this stage were executed.
As you run through the pipelines a second time, you will see the smoke tests from the latest prod version run against the database in the test-rollback-smoke
job. In this case there is no schema upgrade, but nonetheless the tests confirm that the latest prod version of the app can be used with the current database schema.
You can see the database version information stored in the database by Flyway either by querying the database itself or by hitting the flyway endpoint on the fortune-service
URL. Here is an example from the scp-stage environment:
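For example, assuming Spring Boot 1.x actuator defaults and your own route (the route below is a placeholder), something like the following returns the applied migrations:

# Query the Flyway actuator endpoint on the deployed app
curl https://fortune-service-<your-route>.cfapps.io/flyway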
What have we accomplished?
We are now positioned to add any unit, integration, smoke, and end-to-end tests to our code base and extract a very high level of reliability and confidence from our pipelines. We are also better positioned to ensure that our dev teams conform to these practices, given the structure established by Spring Cloud Pipelines and the fast feedback and visibility we gain from the pipelines as they execute the tests.
However, we could benefit further by incorporating contracts to define and test the API integration points between applications. We do this in Stage 3 of the app migration.
In this stage, we introduce contract-based programming practices into our sample application. Doing so improves API management capabilities, including defining, communicating, and testing API semantics. It also enables us to catch breaking API changes (i.e. validate API backward compatibility) in the build phase. This will extend the effectiveness of the pipelines, encourage better communication and programming practices across development teams, and provide faster feedback to developers.
We will integrate Spring Cloud Contract and add contracts, stubs, and a stub runner. We will also now complete and make use of the apicompatibility profile defined in Stage 2.
Let’s start by creating the contract for the interaction between greeting-ui
and fortune-service
. The contract should describe the following expectation:
greeting-ui
makes a GET
request to the root URL of fortune-service
and expects a response with status 200 and a string ("foo fortune") in the body.

We codify this using Groovy syntax as follows:
import org.springframework.cloud.contract.spec.Contract

Contract.make {
    description("""
should return a fortune string
""")
    request {
        method GET()
        url "/"
    }
    response {
        status 200
        body "foo fortune"
    }
}
Save this contract in the fortune-service
code base in the following location, which is compliant with Spring Cloud Contract convention (src/test/resources/contracts/<service-name>/<contract-file>
):
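Assuming greeting-ui (the consumer) as the folder name under contracts, the location could look like this; the contract file name itself is your choice and the one shown here is hypothetical:

mkdir -p src/test/resources/contracts/greeting-ui
# save the contract as, for example:
# src/test/resources/contracts/greeting-ui/shouldReturnAFortune.groovy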
![]() | Note |
---|---|
You can optionally enable your IDE to assist with contract syntax by adding the Spring Cloud Contract Verifier to your |
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>
Now that we have a codified contract, we want to enable auto-generation of contract-based tests. The auto-generation, which we will configure in the next steps, requires a base class that stubs out the service that satisfies the API call, so that we can run the test without external dependencies (e.g. the DB). The objective is to focus on testing API semantics.
We create the base class in the fortune-service
test package as follows:
package io.pivotal.fortune; import io.restassured.module.mockmvc.RestAssuredMockMvc; import org.junit.Before; import org.mockito.BDDMockito; public class BaseClass { @Before public void setup() { FortuneService service = BDDMockito.mock(FortuneService.class); BDDMockito.given(service.getFortune()).willReturn("foo fortune"); RestAssuredMockMvc.standaloneSetup(new FortuneController(service)); } }
Now that we have a contract and a base class, we can use the Spring Cloud Contract maven plugin to auto-generate contract tests, stubs, and a stub jar.
First we add the Spring Cloud Contract version to the list of properties in the fortune-service
pom.xml
file, since we will reference it when we enable the Spring Cloud Contract maven plugin:
<properties>
    ...
    <spring-cloud-contract.version>1.2.1.RELEASE</spring-cloud-contract.version>
    ...
</properties>
Next, we edit the default
profile in the fortune-service
pom.xml
file as follows:
add the Spring Cloud Contract maven plugin and configure it to use the base class (io.pivotal.fortune.BaseClass) to generate tests
place the generated tests in the package io.pivotal.fortune.contracttests
Note that the package of the contracttests will be included by the include
filter in the default
profile, so these tests will be run against the app during the build-and-upload
job. For fortune-service
, this serves to validate that the app conforms to the contract.
Here is the complete profile:
<profile> <id>default</id> <activation> <activeByDefault>true</activeByDefault> </activation> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <includes> <include>**/*Tests.java</include> <include>**/*Test.java</include> </includes> <excludes> <exclude>**/smoke/**</exclude> <exclude>**/e2e/**</exclude> </excludes> </configuration> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> <!--Spring Cloud Contract maven plugin --> <plugin> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-contract-maven-plugin</artifactId> <version>${spring-cloud-contract.version}</version> <extensions>true</extensions> <configuration> <baseClassForTests>io.pivotal.fortune.BaseClass</baseClassForTests> <basePackageForTests>io.pivotal.fortune.contracttests</basePackageForTests> </configuration> </plugin> </plugins> </build> </profile>
When the app is built, the Spring Cloud Contract maven plugin will also now produce a stub and a stub jar containing the contract and stub. This stub jar will be uploaded to Bintray, along with the usual app jar. As we will see shortly, this stub jar can be used by the greeting-ui
dev team while they wait for fortune-service
to be completed. In other words, this gives the greeting-ui
dev team a producer to test against that is based on a mutually agreed-upon contract without the lead time of having to wait for fortune-service
to implement anything more than a base class, and without having to manually stub out calls to fortune-service
based on arbitrary or static responses.
![]() | Tip |
---|---|
Package the project locally (run |
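A sketch of what that local check could look like; the paths follow the Spring Cloud Contract maven plugin's usual output locations:

./mvnw clean package
# generated contract tests
ls target/generated-test-sources/contracts
# generated stubs jar (the pipeline uploads this to Bintray alongside the app jar)
ls target/*-stubs.jar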
To enable Spring Cloud Pipelines to catch any breaking API changes during the build-api-compatibility-check
job, we add the Spring Cloud Contract maven plugin to the apicompatibility
profile as well.
In this case, we want the plugin to generate tests based on contracts outside of the project (the ones from the latest prod version), so we configure the plugin to download the latest prod stub jar, which contains the old contract. The plugin will use the old contract and the specified base class, which in our example is the same as the one in the previous step, to generate contract tests. These tests are run against the new code to validate that it is still compatible with consumers complying with the prior contract. This ensures backward compatibility for the API.
In short, we edit the apicompatibility profile in the fortune-service
pom.xml
file as follows:
configure the Spring Cloud Contract maven plugin to use the base class (io.pivotal.fortune.BaseClass) to generate tests (we are using the same one as in the prior step)
place the generated tests in the package io.pivotal.fortune.contracttests
Note that the package of the contracttests matches the include
filter in the apicompatibility
profile, so these tests will be run against the app during the build-api-compatibility-check
job. For fortune-service
, this serves to validate that the app conforms to the old contract.
Here is the complete profile:
<profile> <id>apicompatibility</id> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <includes> <include>**/contracttests/**/*Tests.java</include> <include>**/contracttests/**/*Test.java</include> </includes> </configuration> </plugin> <!--Spring Cloud Contract maven plugin --> <plugin> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-contract-maven-plugin</artifactId> <version>${spring-cloud-contract.version}</version> <extensions>true</extensions> <configuration> <contractsRepositoryUrl>${repo.with.binaries}</contractsRepositoryUrl> <contractDependency> <groupId>${project.groupId}</groupId> <artifactId>${project.artifactId}</artifactId> <classifier>stubs</classifier> <version>${latest.production.version}</version> </contractDependency> <contractsPath>/</contractsPath> <baseClassForTests>io.pivotal.fortune.BaseClass</baseClassForTests> <basePackageForTests>io.pivotal.fortune.contracttests</basePackageForTests> </configuration> </plugin> </plugins> </build> </profile>
The values for ${repo.with.binaries}
and ${latest.production.version}
will be injected dynamically by Spring Cloud Pipelines. You can run this locally by providing these values manually as system properties in the maven command.
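A local run could therefore look roughly like this; both values are placeholders for what the pipeline injects:

./mvnw clean test -Papicompatibility \
  -Drepo.with.binaries=<your-maven-repo-url> \
  -Dlatest.production.version=<latest-prod-version>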
All changes in Stage 3 thus far are in fortune-service
. At this point, you should be pushing the following new or modified files: the new contract file, the new BaseClass.java, and the updated pom.xml.
Run through the fortune-service
pipeline to generate stubs. The following output from the build-and-upload
job shows the auto-generation of tests and stubs:
You will also see output in the build-and-upload
job showing the execution of these tests against the code.
Additionally, you will see the stub jar uploaded to Bintray along with the usual app jar.
Finally, as you run through the pipeline a second time, you will see the contract tests from the latest prod version run against the new code in the output of the build-api-compatibility-check
job. In this case there is no API change, but nonetheless the tests confirm that the latest prod version of the API can be used with the current code base.
We are in the home stretch! Let’s turn our attention to greeting-ui
.
The following image compares the path of a request through greeting-ui
in the build phase, both with and without stubs.
Without stubs, we expect the response to be the hystrix fallback response that is hard-coded in greeting-ui
. With stubs, however, we can expect the response that was declared in the contract. In this case, the stubs are loaded into the greeting-ui
process. This leads us to our next task: load the stubs produced by fortune-service
.
Enable in-process stub runner
To load the stubs into the greeting-ui
process, we must add the Spring Cloud Contract Stub Runner dependency. This dependency starts an in-process stub runner that automatically configures WireMock.
Add the following to the greeting-ui
pom.xml
file:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>
Add integration tests aligned with the contract
Next, we add integration tests to greeting-ui
that test for the expected response declared in the contract.
Add the following class to the test package in greeting-ui
:
package io.pivotal.fortune; import io.pivotal.GreetingUIApplication; import org.assertj.core.api.BDDAssertions; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner; import org.springframework.test.context.junit4.SpringRunner; @RunWith(SpringRunner.class) @SpringBootTest(classes = GreetingUIApplication.class, webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {"spring.application.name=greeting-ui", "spring.cloud.circuit.breaker.enabled=false", "hystrix.stream.queue.enabled=false"}) @AutoConfigureStubRunner(ids = {"io.pivotal:fortune-service:1.0.0.M1-20180102_203542-VERSION"}, repositoryRoot = "${REPO_WITH_BINARIES}" //workOffline = true ) public class FortuneServiceTests { @Autowired FortuneService fortuneService; @Test public void shouldSendRequestToFortune() { // when String fortune = fortuneService.getFortune(); // then BDDAssertions.then(fortune).isEqualTo("foo fortune"); } }
At this point, we can get through the build phase for greeting-ui
, and the integration tests will be executed against the fortune-service
stubs running in the greeting-ui
process on the Concourse worker.
![]() | Tip |
---|---|
Notice the configuration of |
![]() | Tip |
---|---|
Setting |
The following image compares the path of a request through greeting-ui
in the test phase, both with and without stubs. Note that in the build phase, where the app process is running on the Concourse worker, we ran the stubs in the same process. In the test environment (Cloud Foundry), we will run the stubs in a separate process using a standalone stub runner application.
As in the build phase, without stubs we expect the response to be the hystrix fallback response that is hard-coded in greeting-ui
. With stubs, however, we can expect the response that was declared in the contract.
We will rely on Spring Cloud Pipelines to download the stub runner app jar from Bintray and deploy it, with its own manifest, to the Cloud Foundry test space.
We will rely on the stub runner application to download the fortune-service stub from Bintray, serve it on a designated port, and register it with the service registry (Eureka).
The following steps describe how to configure this.
Provide standalone stub runner app jar
In the Prep step for this tutorial, you cloned the cloudfoundry-stub-runner-boot repo to your local machine. The next step is to build this app and upload it to Bintray to make the jar available to Spring Cloud Pipelines.
As mentioned in Stage 1 of this tutorial, Bintray requires that a package exist before any app artifacts can be uploaded. Log into the Bintray UI and create a package for cloudfoundry-stub-runner-boot
. If you forked this repo, you can use the Import from GitHub
option. Otherwise, create the package manually and choose any license (e.g. Apache 2.0).
Now you are ready to build and upload this app to Bintray. The following script shows cloning, building and uploading. Edit as appropriate to match your Bintray URL, the Bintray ID in your ~/.m2/settings.xml
file, and the cloudfoundry-stub-runner-boot
repo URL if you chose to fork it.
# Edit to match your Bintray URL and M2 repo ID setting (check your ~/.m2/settings.xml file)
MAVEN_REPO_URL=https://api.bintray.com/maven/ciberkleid/maven-repo/cloudfoundry-stub-runner-boot
MAVEN_REPO_ID=bintray

# Clone cloudfoundry-stub-runner-boot
git clone https://github.com/spring-cloud-samples/cloudfoundry-stub-runner-boot.git
cd cloudfoundry-stub-runner-boot

# Build and upload
./mvnw clean deploy -Ddistribution.management.release.url="${MAVEN_REPO_URL}" -Ddistribution.management.release.id="${MAVEN_REPO_ID}"
You should now see the cloudfoundry-stub-runner-boot
artifacts uploaded on Bintray.
Provide standalone stub runner app manifest
Next, we add a manifest file for the stub runner app for deployment to Cloud Foundry.
We will place this file in the greeting-ui
repo. The file name and location can be your choice. For this example, we will use sc-pipelines/manifest-stubrunner.yml
:
We populate this manifest-stubrunner.yml
with the content shown below so that the stub runner binds to service-registry
. The stub runner will register the fortune-service
stub there so that greeting-ui
can discover it in the same way it will discover the actual fortune-service
app endpoint in stage and prod. From the greeting-ui
perspective, there is no difference in how it interacts with Eureka and the stub runner app in test and the way it will interact with Eureka and the fortune-service
app in stage and prod.
---
applications:
- name: stubrunner
  timeout: 120
  services:
  - service-registry
  env:
    JAVA_OPTS: -Djava.security.egd=file:///dev/urandom
    TRUST_CERTS: api.run.pivotal.io
Provide stub runner jar and manifest info to the pipeline
Now that we have a jar file and manifest file for our stub runner app, we need to provide this information to our greeting-ui
pipeline so that the pipeline downloads the jar from Bintray and deploys it to Cloud Foundry. We do this through the greeting-ui
sc-pipelines.yml
file. We add an entry to the list of services in the test
section, as follows:
  - name: stubrunner
    type: stubrunner
    coordinates: io.pivotal:cloudfoundry-stub-runner-boot:0.0.1.M1
    pathToManifest: sc-pipelines/manifest-stubrunner.yml
Notice that name
matches the name of the app in manifest-stubrunner.yml
, coordinates
corresponds to the jar coordinates on the maven repo, and pathToManifest
matches our chosen file name for the stub runner app manifest.
Note also the type
is set to stubrunner
, which Spring Cloud Pipelines will recognize as a standalone stub runner app and treat accordingly.
Provide stub configuration for stub runner app
The final steps in the configuration of the standalone stub runner app are:
download the fortune-service stub from Bintray
serve the stub on a designated port

To accomplish this, we put stub and port configuration information into the properties section of the greeting-ui
pom.xml
file, using a property called stubrunner.ids
. This property can accept a list of stubrunner ids, but for this tutorial, we only have one:
<properties>
    ...
    <!--Tell stub runner app to start this stub-->
    <stubrunner.ids>io.pivotal:fortune-service:1.0.0.M1-20180102_203542-VERSION:stubs:10000</stubrunner.ids>
</properties>
Spring Cloud Pipelines will use this information in two ways:
It will provide this information to the stub runner app via the app's environment variables, including $REPO_WITH_BINARIES as an env var for the stub runner app.

It will open the additional port specified on the stub runner app and map a new route to it, following the pattern <stub-runner-app-name>-<hostname-uuid>-<env>-<app-name>-<port>.<domain> (for example, stubrunner-cyi-test-greeting-ui-10000.cfapps.io).
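Once the stub runner app is deployed, a quick sanity check could be to hit that route directly, assuming the example route above and that the route maps to the WireMock port serving the stub:

# Expect the canned contract response ("foo fortune") rather than a real fortune
curl https://stubrunner-cyi-test-greeting-ui-10000.cfapps.io/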
Since we bound our stub runner app to service-registry
(Eureka), the stub runner app will register the stub URL under the application name FORTUNE-SERVICE
on Eureka:
This completes the process of configuring the standalone stub runner application.
![]() | Note |
---|---|
The port configuration may be automated by Spring Cloud Pipelines in the future, such that it will not be necessary to include the port in the |
Edit smoke tests to align with the contract
Finally, we edit our smoke tests for greeting-ui
to ensure the response does not contain the hystrix fallback, since we are now expecting a response from the stub.
package smoke; import org.assertj.core.api.BDDAssertions; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.http.ResponseEntity; import org.springframework.test.context.junit4.SpringRunner; import org.springframework.web.client.RestTemplate; @RunWith(SpringRunner.class) @SpringBootTest(classes = SmokeTests.class, webEnvironment = SpringBootTest.WebEnvironment.NONE) @EnableAutoConfiguration public class SmokeTests { @Value("${application.url}") String applicationUrl; RestTemplate restTemplate = new RestTemplate(); @Test public void should_return_a_fortune() { ResponseEntity<String> response = this.restTemplate .getForEntity("http://" + this.applicationUrl + "/", String.class); BDDAssertions.then(response.getStatusCodeValue()).isEqualTo(200); // Filter out the known Hystrix fallback response BDDAssertions.then(response.getBody()).doesNotContain("This fortune is no good. Try another."); } }
In this case, in contrast to the integration test we created earlier for greeting-ui
, we do not include @AutoConfigureStubRunner
since we are using a standalone stub runner application.
Push contract-based changes for greeting-ui
. You should be pushing the following new or modified files: the updated pom.xml, the new FortuneServiceTests.java, the updated SmokeTests.java, the updated sc-pipelines.yml, and the new sc-pipelines/manifest-stubrunner.yml.
At this point, we can run through the full pipeline for greeting-ui
and leverage the contract-based stub in both the build and test environments.
What have we accomplished?
By implementing a contract-driven approach with auto-generation of tests and stubs, we have introduced a clean, structured, and reliable way to define, communicate, document, manage and test APIs
Inter-team communication will be simpler
Developer productivity will increase
This concludes the tutorial on migrating apps for Spring Cloud Pipelines for Cloud Foundry.
Moving forward, the refactoring work needed here can be incorporated into your and/or your team’s standard practices. In short:
Good:
include your manifest files (manifest.yml and sc-manifest.yml) in your app repo
include a version branch in your app repo

Better
Best
Implementing all the "good" practices above already positions you to instantly create pipelines for your apps using Spring Cloud Pipelines. This is a huge win in terms of consistency, productivity, and standardization across development teams. Of course, this is an open source project, so it can be modified to meet your needs.
Implementing the "better" practices will ensure the proper tests get run at the proper time. At that point you can add as much test coverage as you need to have high confidence in your pipelines.
Implementing the "best" practices will give you additional confidence in your pipelines and encourage better practices for database versioning and API management across development teams. It will also enable you to avoid the cumbersome business of rolling back a database.
Happy coding!
As prerequisites you need to have shellcheck,
bats, jq
and ruby installed. If you’re on a Linux
machine then bats
and shellcheck
will be installed for you.
To install the required software on Linux just type the following commands
$ sudo apt-get install -y ruby jq
If you’re on a Mac then just execute these commands to install the missing software
$ brew install jq
$ brew install ruby
$ brew install bats
$ brew install shellcheck
To make bats
work properly, we needed to attach Git submodules. To have them
initialized, either clone the repository with the appropriate command
$ git clone --recursive https://github.com/spring-cloud/spring-cloud-pipelines.git
or if you have already cloned the project and are just pulling changes
$ git submodule init
$ git submodule update
If you forget this step, Gradle will execute it for you.
Once you have installed all the prerequisites you can execute
$ ./gradlew clean build
to build and test the project.
Spring Cloud Pipelines has a lot of tests, including Git repositories. Those
and the documentation weigh a lot. That’s why under the dist
folder we
publish zip
and tar.gz
distributions of sources without tests and documentation.
Whenever we release a distribution we attach a VERSION
file to it that contains
build and SCM information (e.g. build time, revision). To skip the distribution generation
just pass the skipDist
property
$ ./gradlew build -PskipDist
It’s enough to execute the release
task that will automatically test the project,
build the distributions, change the versions, build the docs, upload them to Spring Cloud Static,
tag the repo and then revert the changed versions back to default.
$ ./gradlew release -PnewVersion=1.0.0.RELEASE
If you want to pick only certain pieces (e.g. you're interested only in the Cloud Foundry and Concourse combination), it's enough to execute this command:
$ ./gradlew customize
You’ll see a screen looking more or less like this:
:customize ___ _ ___ _ _ ___ _ _ _ / __|_ __ _ _(_)_ _ __ _ / __| |___ _ _ __| | | _ (_)_ __ ___| (_)_ _ ___ ___ \__ \ '_ \ '_| | ' \/ _` | | (__| / _ \ || / _` | | _/ | '_ \/ -_) | | ' \/ -_|_-< |___/ .__/_| |_|_||_\__, | \___|_\___/\_,_\__,_| |_| |_| .__/\___|_|_|_||_\___/__/ |_| |___/ |_| Follow the instructions presented in the console or terminate the process to quit (ctrl + c) === PAAS TYPE === Which PAAS type do you want to use? Options: [CF, K8S, BOTH] <-------------> 0% EXECUTING > :customize
Now you need to answer a couple of questions. That way, whole files and pieces of files
will get removed or updated accordingly. For example, if you choose the CF and Concourse
options, then the Kubernetes and Jenkins configuration, folders, and pieces of code
will get removed from the project.
When doing a release you also need to push a Docker image to Dockerhub.
From the project root, run the following commands replacing <version>
with the
version of the release.
$ docker login
$ docker build -t springcloud/spring-cloud-pipeline-jenkins:<version> ./jenkins
$ docker push springcloud/spring-cloud-pipeline-jenkins:<version>