11. Jenkins Pipeline (Kubernetes)

[Important]Important

In this chapter, we assume that you deploy your application to a Kubernetes PaaS.

The Spring Cloud Pipelines repository contains job definitions and the opinionated setup pipeline that uses Jenkins Job DSL plugin. Those jobs form an empty pipeline and an opinionated sample pipeline that you can use in your company.

The following projects take part in the microservice setup for this demo.

11.1 Step-by-step

This is a guide for a Jenkins Job DSL based pipeline.

If you want only to run the demo with the least effort by using Docker Compose, do the following:

11.1.1 Fork Repositories

Four applications compose the pipeline:

You need to fork only the following repositories, because only then can you tag and push the tag to your repository:

11.1.2 Start Jenkins and Artifactory

Jenkins and Artifactory can be run locally. To do so, run the start.sh script from this repo. The following listing shows how to do so:

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/jenkins
./start.sh yourGitUsername yourGitPassword yourForkedGithubOrg yourDockerRegistryOrganization yourDockerRegistryUsername yourDockerRegistryPassword yourDockerRegistryEmail

Then Jenkins runs on port 8080, and Artifactory runs on port 8081. The provided parameters are passed as environment variables to the Jenkins VM, and the credentials are set, so you need not do any manual work on the Jenkins side. In the preceding script, the third parameter could be yourForkedGithubOrg or yourGithubUsername. Also, the REPOS environment variable contains the GitHub org in which you have the forked repositories.

Instead of the Git username and password parameters, you could pass -key <path_to_private_key> if you prefer to use the key-based authentication with your Git repositories.

You need to pass the credentials for the Docker organization (by default, we search for the Docker images at Docker Hub) so that the pipeline can push images to your org.
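To illustrate where those credentials end up, the following sketch shows how the image coordinates that the pipeline pushes are composed. The organization, application name, and version below are placeholder assumptions for illustration, not values taken from this repo:

```shell
# Sketch (placeholder values): the pipeline pushes images named
# <docker org>/<application>:<pipeline version> to the registry.
DOCKER_REGISTRY_ORGANIZATION="yourDockerRegistryOrganization"  # assumption: your org
APP_NAME="github-webhook"
PIPELINE_VERSION="1.0.0.M1-170101_120000-VERSION"              # example version
IMAGE="${DOCKER_REGISTRY_ORGANIZATION}/${APP_NAME}:${PIPELINE_VERSION}"
echo "${IMAGE}"
# docker push "${IMAGE}"  # requires the registry credentials passed to start.sh
```

The push itself only succeeds when the credentials you passed to start.sh grant write access to that organization.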

Deploy the Infra JARs to Artifactory

When Artifactory is running, run the tools/deploy-infra-k8s.sh script from this repo. The following listing shows how to do so:

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
./tools/deploy-infra-k8s.sh

As a result, both the eureka and the stub runner repos are cloned, built, and uploaded to Artifactory, and their Docker images are built.

[Important]Important

Your local Docker process is reused by the Jenkins instance running in Docker. That is why you do not have to push these images to Docker Hub. On the other hand, if you run this sample in a remote Kubernetes cluster, the driver is not shared by the Jenkins workers, so you can consider pushing these Docker images to Docker Hub too.

11.1.3 Run the seed job

We created the seed job for you, but you have to run it. When you run it, you have to provide some properties. By default, we create a seed job that has all the property options, but you can delete most of them. If you set the properties as global environment variables, you have to remove them from the seed.

To run the demo, provide a comma-separated list of the URLs of the two aforementioned forks (github-webhook and github-analytics) in the REPOS variable.
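As an example, with a hypothetical fork organization (yourForkedGithubOrg is a placeholder, not a real org), the REPOS value could be built as follows; the seed job treats the value as a comma-separated list, one pipeline per repository:

```shell
# Hypothetical fork organization; substitute your own.
FORKED_ORG="yourForkedGithubOrg"
REPOS="https://github.com/${FORKED_ORG}/github-webhook,https://github.com/${FORKED_ORG}/github-analytics"
# Split the comma-separated list the way the seed job does:
IFS=',' read -ra ENTRIES <<< "${REPOS}"
for repo in "${ENTRIES[@]}"; do
  echo "${repo}"
done
```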

The following images show the steps involved:

Figure 11.1. Click the jenkins-pipeline-seed-cf job for Cloud Foundry or the jenkins-pipeline-seed-k8s job for Kubernetes

Figure 11.2. Click 'Build with parameters'

Figure 11.3. The REPOS parameter should already contain your forked repos (you will have more properties than the ones in the screenshot)

Figure 11.4. The results of the seed job should look like this

11.1.4 Run the github-webhook pipeline

The seed job has created the pipelines for the forked repositories you provided in the REPOS variable. Now you can run the github-webhook pipeline.

The following images show the steps involved:

Figure 11.5. Click the 'github-webhook' view

Figure 11.6. Run the pipeline


[Important]Important

If your build fails on the 'deploy previous version to stage' step due to a missing jar, you probably forgot to clear the tags in your repository. Typically, that happens when you have removed the Artifactory volume with a deployed jar while a tag in the repository still points to it. See here for how to remove the tag.

   

Figure 11.7. Click the manual step to go to stage (remember to kill the apps in the test environment first). To do so, click the ARROW next to the job name

[Important]Important

Servers often run out of resources at the stage step. For that reason, we suggest killing all applications in the test environment. See the FAQ for more detail.

   

Figure 11.8. The full pipeline should look like this

11.2 Declarative pipeline & Blue Ocean

You can also use the declarative pipeline approach with the Blue Ocean UI.

The Blue Ocean UI is available under the blue/ URL (for example, for Docker Machine-based setup: http://192.168.99.100:8080/blue).

The following images show the various steps involved:

Figure 11.9. Open the Blue Ocean UI and click github-webhook-declarative-pipeline

Figure 11.10. Your first run looks like this. Click the Run button

Figure 11.11. Enter the parameters required for the build and click Run

Figure 11.12. A list of pipelines is shown. Click your first run.

Figure 11.13. State whether you want to go to production and click Proceed

Figure 11.14. The build is in progress…

Figure 11.15. The pipeline is done!

[Important]Important

There is no way to restart a pipeline from a specific stage after a failure. See this issue for more information.

[Warning]Warning

Currently, there is no way to introduce manual steps in a performant way. Jenkins blocks an executor when a manual step is required. That means that you run out of executors pretty quickly. See this issue and this StackOverflow question for more information.

11.3 Jenkins Kubernetes customization

You can customize Jenkins for Kubernetes by setting a variety of environment variables.

[Note]Note

You need not set all of the environment variables described in this section to run the demo. They are needed only when you want to make custom changes.

11.3.1 All env vars

The environment variables that are used in all of the jobs are as follows:

| Property Name | Property Description | Default value |
| --- | --- | --- |
| BUILD_OPTIONS | Additional options you would like to pass to the Maven / Gradle build | |
| DOCKER_REGISTRY_ORGANIZATION | Name of the Docker organization to which Docker images should be deployed | scpipelines |
| DOCKER_REGISTRY_CREDENTIAL_ID | Credential ID used to push Docker images | docker-registry |
| DOCKER_SERVER_ID | Server ID in settings.xml and Maven builds | docker-repo |
| DOCKER_EMAIL | Email used to connect to the Docker registry and in Maven builds | [email protected] |
| DOCKER_REGISTRY_URL | URL of the Docker registry | https://index.docker.io/v1/ |
| PAAS_TEST_API_URL | URL of the API of the Kubernetes cluster for the test environment | 192.168.99.100:8443 |
| PAAS_STAGE_API_URL | URL of the API of the Kubernetes cluster for the stage environment | 192.168.99.100:8443 |
| PAAS_PROD_API_URL | URL of the API of the Kubernetes cluster for the prod environment | 192.168.99.100:8443 |
| PAAS_TEST_CA_PATH | Path to the certificate authority for the test environment | /usr/share/jenkins/cert/ca.crt |
| PAAS_STAGE_CA_PATH | Path to the certificate authority for the stage environment | /usr/share/jenkins/cert/ca.crt |
| PAAS_PROD_CA_PATH | Path to the certificate authority for the prod environment | /usr/share/jenkins/cert/ca.crt |
| PAAS_TEST_CLIENT_CERT_PATH | Path to the client certificate for the test environment | /usr/share/jenkins/cert/apiserver.crt |
| PAAS_STAGE_CLIENT_CERT_PATH | Path to the client certificate for the stage environment | /usr/share/jenkins/cert/apiserver.crt |
| PAAS_PROD_CLIENT_CERT_PATH | Path to the client certificate for the prod environment | /usr/share/jenkins/cert/apiserver.crt |
| PAAS_TEST_CLIENT_KEY_PATH | Path to the client key for the test environment | /usr/share/jenkins/cert/apiserver.key |
| PAAS_STAGE_CLIENT_KEY_PATH | Path to the client key for the stage environment | /usr/share/jenkins/cert/apiserver.key |
| PAAS_PROD_CLIENT_KEY_PATH | Path to the client key for the prod environment | /usr/share/jenkins/cert/apiserver.key |
| PAAS_TEST_CLIENT_TOKEN_PATH | Path to the file containing the token for the test environment | |
| PAAS_STAGE_CLIENT_TOKEN_PATH | Path to the file containing the token for the stage environment | |
| PAAS_PROD_CLIENT_TOKEN_PATH | Path to the file containing the token for the prod environment | |
| PAAS_TEST_CLIENT_TOKEN_ID | ID of the credential containing the access token for the test environment | |
| PAAS_STAGE_CLIENT_TOKEN_ID | ID of the credential containing the access token for the stage environment | |
| PAAS_PROD_CLIENT_TOKEN_ID | ID of the credential containing the access token for the prod environment | |
| PAAS_TEST_CLUSTER_NAME | Name of the cluster for the test environment | minikube |
| PAAS_STAGE_CLUSTER_NAME | Name of the cluster for the stage environment | minikube |
| PAAS_PROD_CLUSTER_NAME | Name of the cluster for the prod environment | minikube |
| PAAS_TEST_CLUSTER_USERNAME | Name of the user for the test environment | minikube |
| PAAS_STAGE_CLUSTER_USERNAME | Name of the user for the stage environment | minikube |
| PAAS_PROD_CLUSTER_USERNAME | Name of the user for the prod environment | minikube |
| PAAS_TEST_SYSTEM_NAME | Name of the system for the test environment | minikube |
| PAAS_STAGE_SYSTEM_NAME | Name of the system for the stage environment | minikube |
| PAAS_PROD_SYSTEM_NAME | Name of the system for the prod environment | minikube |
| PAAS_TEST_NAMESPACE | Namespace for the test environment | sc-pipelines-test |
| PAAS_STAGE_NAMESPACE | Namespace for the stage environment | sc-pipelines-stage |
| PAAS_PROD_NAMESPACE | Namespace for the prod environment | sc-pipelines-prod |
| KUBERNETES_MINIKUBE | Whether to connect to Minikube | true |
| REPO_WITH_BINARIES_FOR_UPLOAD | URL of the repository with the deployed jars | http://artifactory:8081/artifactory/libs-release-local |
| REPO_WITH_BINARIES_CREDENTIAL_ID | Credential ID used for the repository with jars | repo-with-binaries |
| M2_SETTINGS_REPO_ID | The ID of the server from Maven settings.xml | artifactory-local |
| JDK_VERSION | The name of the JDK installation | jdk8 |
| PIPELINE_VERSION | The version of the pipeline (ultimately, also the version of the jar) | 1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION |
| GIT_EMAIL | The email used by Git to tag the repository | [email protected] |
| GIT_NAME | The name used by Git to tag the repository | Pivo Tal |
| AUTO_DEPLOY_TO_STAGE | Whether deployment to stage should be automatic | false |
| AUTO_DEPLOY_TO_PROD | Whether deployment to prod should be automatic | false |
| API_COMPATIBILITY_STEP_REQUIRED | Whether the API compatibility step is required | true |
| DB_ROLLBACK_STEP_REQUIRED | Whether the DB rollback step is present | true |
| DEPLOY_TO_STAGE_STEP_REQUIRED | Whether the deploy-to-stage step is present | true |
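As a concrete illustration of the default PIPELINE_VERSION, its embedded Groovy script formats a yyMMdd_HHmmss timestamp into the version string. The same value can be produced in shell; this is a sketch for understanding the format, not part of the pipeline itself:

```shell
# Sketch: the default PIPELINE_VERSION embeds a yyMMdd_HHmmss timestamp.
TIMESTAMP="$(date +%y%m%d_%H%M%S)"
PIPELINE_VERSION="1.0.0.M1-${TIMESTAMP}-VERSION"
echo "${PIPELINE_VERSION}"
```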

11.4 Preparing to Connect to GCE

[Important]Important

Skip this step if you do not use GCE.

In order to use GCE, we need to have gcloud running. If you already have the CLI installed, skip this step. If not, run the following command to download the CLI and start an installer:

$ ./tools/k8s-helper.sh download-gcloud

Next, configure gcloud. Run gcloud init and log in to your cluster. You are redirected to a login page. Pick the proper Google account and log in.

Pick an existing project or create a new one.

Go to your platform page (click Container Engine) in GCP and connect to your cluster with the following values:

$ CLUSTER_NAME=...
$ ZONE=us-east1-b
$ PROJECT_NAME=...
$ gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${ZONE} --project ${PROJECT_NAME}
$ kubectl proxy

The Kubernetes dashboard runs at http://localhost:8001/ui/.

We need a Persistent Disk for our Jenkins installation. Create it as follows:

$ ZONE=us-east1-b
$ gcloud compute disks create --size=200GB --zone=${ZONE} sc-pipelines-jenkins-disk

Once the disk has been created, you need to format it. See the instructions at https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting.

11.5 Connecting to a Kubo or GCE Cluster

[Important]Important

Skip this step if you do not use Kubo or GCE

This section describes how to deploy Jenkins and Artifactory to a Kubernetes cluster deployed with Kubo.

[Tip]Tip

To see the dashboard, run kubectl proxy and access localhost:8001/ui.

  1. Log in to the cluster.
  2. Deploy Jenkins and Artifactory to the cluster:

    • ./tools/k8s-helper.sh setup-tools-infra-vsphere for a cluster deployed on vSphere
    • ./tools/k8s-helper.sh setup-tools-infra-gce for a cluster deployed to GCE
  3. Forward the ports so that you can access the Jenkins UI from your local machine, by using the following settings:

$ NAMESPACE=default
$ JENKINS_POD=jenkins-1430785859-nfhx4
$ LOCAL_PORT=32044
$ CONTAINER_PORT=8080
$ kubectl port-forward --namespace=${NAMESPACE} ${JENKINS_POD} ${LOCAL_PORT}:${CONTAINER_PORT}

  4. Go to Credentials. Click System and Global credentials, as the 'Click Global credentials' screenshot shows (https://raw.githubusercontent.com/spring-cloud/spring-cloud-pipelines/master/docs-sources/src/main/asciidoc/images/jenkins/kubo_credentials.png).
  5. Update the git, repo-with-binaries, and docker-registry credentials.
  6. Run the jenkins-pipeline-k8s-seed seed job and fill it out with the following data:
  7. Set kubernetes.default:443 (or KUBERNETES_API:KUBERNETES_PORT) as the value of:

    • PAAS_TEST_API_URL
    • PAAS_STAGE_API_URL
    • PAAS_PROD_API_URL
  8. Set /var/run/secrets/kubernetes.io/serviceaccount/ca.crt as the value of:

    • PAAS_TEST_CA_PATH
    • PAAS_STAGE_CA_PATH
    • PAAS_PROD_CA_PATH
  9. Uncheck the Kubernetes Minikube value and clear the following variables:

    • PAAS_TEST_CLIENT_CERT_PATH
    • PAAS_STAGE_CLIENT_CERT_PATH
    • PAAS_PROD_CLIENT_CERT_PATH
    • PAAS_TEST_CLIENT_KEY_PATH
    • PAAS_STAGE_CLIENT_KEY_PATH
    • PAAS_PROD_CLIENT_KEY_PATH
  10. Set /var/run/secrets/kubernetes.io/serviceaccount/token as the value of:

    • PAAS_TEST_CLIENT_TOKEN_PATH
    • PAAS_STAGE_CLIENT_TOKEN_PATH
    • PAAS_PROD_CLIENT_TOKEN_PATH
  11. Set the cluster name (you can get it by calling kubectl config current-context) as the value of:

    • PAAS_TEST_CLUSTER_NAME
    • PAAS_STAGE_CLUSTER_NAME
    • PAAS_PROD_CLUSTER_NAME
  12. Set the system name (you can get it by calling kubectl config current-context) as the value of:

    • PAAS_TEST_SYSTEM_NAME
    • PAAS_STAGE_SYSTEM_NAME
    • PAAS_PROD_SYSTEM_NAME
  13. Update the DOCKER_EMAIL property with your email address.
  14. Update the DOCKER_REGISTRY_ORGANIZATION property with your Docker organization name.
  15. If you do not want to upload the images to Docker Hub, update DOCKER_REGISTRY_URL. A filled out seed job is shown in the pks_seed screenshot (https://raw.githubusercontent.com/spring-cloud/spring-cloud-pipelines/master/docs-sources/src/main/asciidoc/images/jenkins/pks_seed.png).
  16. Run the pipeline.