3. Opinionated Implementation

This section describes the full flow of the pipeline for the demo applications.

[Important]Important

Your applications need not have the same dependencies (such as Eureka) as this demo.

For demo purposes, we provide a Docker Compose setup with Artifactory, Concourse, and Jenkins tools. Regardless of the CD application, for the pipeline to pass, you need one of the following:

  • A Cloud Foundry instance (for example, Pivotal Web Services or PCF Dev).
  • A Kubernetes cluster (for example, Minikube).

[Tip]Tip

In the demos, we show you how to build the github-webhook project first. That is because github-analytics needs the stubs of github-webhook to pass its tests. We also use references to the github-analytics project, since it contains more interesting pieces as far as testing is concerned.

3.1 Build

The following image shows the results of building the demo pipeline (which the rest of this chapter describes):

Figure 3.1. Build and upload artifacts


In this step, we generate a version of the pipeline. Next, we run unit, integration, and contract tests. Finally, we:

  • Publish a fat jar of the application.
  • Publish a Spring Cloud Contract jar containing stubs of the application.
  • For Kubernetes, upload a Docker image of the application.

During this phase, we run a Maven build by using Maven Wrapper or a Gradle build by using Gradle Wrapper, with unit and integration tests. We also tag the repository with dev/${version}. That way, in each subsequent step of the pipeline, we can retrieve the tagged version. Also, we know exactly which version of the pipeline corresponds to which Git hash.
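
For a Maven-based project, this phase boils down to roughly the following sketch (the version string, repository, and image coordinates are illustrative; a Gradle project uses ./gradlew with the equivalent tasks):

    # Illustrative version string generated by the pipeline.
    VERSION="1.0.0.M1-20180607_144854-VERSION"

    # Run unit and integration tests and build the fat jar (Maven Wrapper).
    ./mvnw clean verify

    # Publish the fat jar and the Spring Cloud Contract stubs jar to the
    # binary repository (Artifactory in the demo setup).
    ./mvnw deploy -DskipTests

    # For Kubernetes, also build and push a Docker image of the application
    # (registry and image name are illustrative).
    docker build -t docker.example.com/github-analytics:${VERSION} .
    docker push docker.example.com/github-analytics:${VERSION}

    # Tag the repository so that every later step can retrieve exactly this state.
    git tag "dev/${VERSION}"
    git push origin "dev/${VERSION}"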

Once the artifact is built, we run an API compatibility check, as follows:

  • We search for the latest production deployment.
  • We retrieve the contracts that were used by that deployment.
  • From the contracts, we generate API tests to see if the current implementation fulfills the HTTP and messaging contracts that the current production deployment has defined (we check the backward compatibility of the API), as sketched below.
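
This sketch assumes that production releases are tagged with prod/${version} and uses a hypothetical apicompatibility Maven profile and latest.production.version property to pass that version to the build:

    # Find the latest production release (the most recent prod/ tag).
    LATEST_PROD_TAG="$(git tag --list 'prod/*' --sort=-creatordate | head -n 1)"

    if [ -n "${LATEST_PROD_TAG}" ]; then
      # Run the API tests generated from the contracts of that release.
      ./mvnw clean verify -Papicompatibility \
        -Dlatest.production.version="${LATEST_PROD_TAG#prod/}"
    else
      echo "No production release found - skipping the API compatibility check"
    fi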

3.2 Test

The following image shows the result of doing smoke tests and rolling back:

Figure 3.2. Smoke test and rollback test on test environment


Here, we:

  • Start a RabbitMQ service in PaaS.
  • Deploy the Eureka infrastructure application to PaaS.
  • Download the fat jar from the binary repository (Artifactory in the demo setup) and upload it to PaaS. We want the application to run in isolation (surrounded by stubs).
[Tip]Tip

Currently, due to port constraints in Cloud Foundry, we cannot run multiple stubbed HTTP services in the cloud. To fix this issue, we run the application with the smoke Spring profile, on which you can stub out all HTTP calls to return a mocked response.

  • If the application uses a database, the database gets upgraded at this point by Flyway, Liquibase, or any other migration tool, once the application starts.
  • From the project’s Maven or Gradle build, we extract the stubrunner.ids property that contains all the groupId:artifactId:version:classifier notations of dependent projects for which the stubs should be downloaded.
  • We upload Stub Runner Boot and pass the extracted stubrunner.ids to it. That way, we have a running application in Cloud Foundry that downloads all the necessary stubs of our application.
  • From the checked-out code, we run the tests available under the smoke profile. In the case of the GitHub Analytics application, we trigger a message from the GitHub Webhook application’s stub and send the message by RabbitMQ to GitHub Analytics. Then we check whether the message count has increased.
  • Once the tests pass, we search for the last production release. Once the application is deployed to production, we tag it with prod/${version}. If there is no such tag (there was no production release), no rollback tests are run. If there was a production release, the tests get executed.
  • Assuming that there was a production release, we check out the code that corresponds to that release (we check out the tag), download the appropriate artifact (either a JAR for Cloud Foundry or a Docker image for Kubernetes), and we upload it to PaaS.
[Important]Important

The old artifact runs against the NEW version of the database.

We run the old smoke tests against the freshly deployed application, surrounded by stubs. If those tests pass, we have a high probability that the application is backwards compatible (the whole flow is sketched below). The default behavior is that, after all of those steps, the user can manually click to deploy the application to a stage environment.
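
On Cloud Foundry, this flow could look roughly like the following (application, service, plan, and profile names are illustrative and depend on your marketplace and project setup):

    # Provision the middleware and the infrastructure application on the test space.
    cf create-service p-rabbitmq standard github-rabbitmq
    cf push eureka-github -p eureka.jar

    # Deploy the application under test with the smoke profile active,
    # so that outgoing HTTP calls are stubbed out.
    cf push github-analytics -p github-analytics.jar --no-start
    cf set-env github-analytics SPRING_PROFILES_ACTIVE smoke
    cf start github-analytics

    # Deploy Stub Runner Boot with the stub coordinates extracted from
    # the project's stubrunner.ids property.
    cf push stubrunner -p stub-runner-boot.jar --no-start
    cf set-env stubrunner STUBRUNNER_IDS "com.example:github-webhook:+:stubs"
    cf start stubrunner

    # Run the smoke tests from the checked-out code.
    ./mvnw clean verify -Psmoke

    # Rollback test: if there was a production release, deploy that old
    # artifact (it runs against the NEW database schema) and run its smoke tests.
    LATEST_PROD_TAG="$(git tag --list 'prod/*' --sort=-creatordate | head -n 1)"
    if [ -n "${LATEST_PROD_TAG}" ]; then
      git checkout "${LATEST_PROD_TAG}"
      cf push github-analytics -p old-github-analytics.jar
      ./mvnw clean verify -Psmoke
    fi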

3.3 Stage

The following image shows the result of deploying to a stage environment:

Figure 3.3. End to end tests on stage environment


Here, we:

  • Start a RabbitMQ service in PaaS.
  • Deploy the Eureka infrastructure application to PaaS.
  • Download the artifact (either a JAR for Cloud Foundry or a Docker image for Kubernetes) and upload it to PaaS.

Next, we have a manual step in which, from the checked-out code, we run the tests available under the e2e profile. In the case of the GitHub Analytics application, we send an HTTP message to the GitHub Analytics endpoint. Then we check whether the received message count has increased.
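
Assuming a Maven project with an e2e profile and a hypothetical application.url property pointing at the stage deployment, the test run could be as simple as:

    # Run the end-to-end tests from the checked-out code against the stage environment.
    ./mvnw clean verify -Pe2e \
      -Dapplication.url="https://github-analytics-stage.example.com"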

By default, this step is manual, because the stage environment is often shared between teams and some preparations on databases and infrastructure have to take place before the tests can be run. Ideally, these steps should be fully automatic.

3.4 Prod

The following image shows the result of deploying to a production environment:

Figure 3.4. Deployment to production


The step to deploy to production is manual. However, ideally, it should be automatic.

[Important]Important

This step deploys the application to production. On production, we assume that you have the infrastructure running. That is why, before you run this step, you must run a script that provisions the services on the production environment. For Cloud Foundry, call tools/cf-helper.sh setup-prod-infra. For Kubernetes, call tools/k8s-helper.sh setup-prod-infra.
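
In other words, before the first production deployment, you run one of the following:

    # Cloud Foundry
    tools/cf-helper.sh setup-prod-infra

    # Kubernetes
    tools/k8s-helper.sh setup-prod-infra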

Here, we:

  • Tag the Git repo with prod/${version}.
  • Download the application artifact (either a JAR for Cloud Foundry or a Docker image for Kubernetes).
  • We do a Blue Green deployment (see the sketch after this list):

    • For Cloud Foundry:

      • We rename the current instance of the application (for example, myService to myService-venerable).
      • We deploy the new instance of the application under the myService name.
      • Now, two instances of the same application are running on production.
    • For Kubernetes:

      • We deploy a service with the name of the application (for example, myService).
      • We do a deployment with the name of the application suffixed with the version, with the name escaped to fulfill the DNS name requirements (for example, myService-1-0-0-M1-123-456-VERSION).
      • All deployments of the same application have the same label name, which is equal to the application name (for example, myService).
      • The service routes the traffic based on the name label selector.
      • Now two instances of the same application are running in production.
  • In the Complete switch over, which is a manual step, we stop the old instance.

    [Note]Note

    Remember to run this step only after you have confirmed that both instances work.

  • In the Rollback, which is a manual step,

    • We route all the traffic to the old instance.

      • In CF, we do that by ensuring that blue is running and removing green.
      • In K8S, we do that by scaling the number of instances of green to 0.
    • We remove the latest prod Git tag.
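
A condensed sketch of the production flow on both platforms (application names, versions, and manifest files are illustrative):

    # Tag the release so that later pipeline runs can find the production version.
    VERSION="1.0.0.M1-20180607_144854-VERSION"   # illustrative version string
    git tag "prod/${VERSION}" && git push origin "prod/${VERSION}"

    # --- Cloud Foundry blue-green ---
    cf rename myService myService-venerable        # keep the old (blue) instance around
    cf push myService -p myService.jar             # deploy the new (green) instance
    # Complete switch over (manual): stop the old instance.
    cf stop myService-venerable
    # Rollback (manual): keep blue running, remove green, and drop the prod tag.
    cf delete myService -f
    git push origin --delete "prod/${VERSION}"

    # --- Kubernetes blue-green ---
    kubectl apply -f myservice-service.yml                # service selects pods by the name label
    kubectl apply -f myservice-1-0-0-m1-deployment.yml    # deployment name carries the escaped version
    # Complete switch over (manual): scale the old deployment down.
    kubectl scale deployment myservice-0-9-0 --replicas=0
    # Rollback (manual): scale the new (green) deployment down and drop the prod tag.
    kubectl scale deployment myservice-1-0-0-m1 --replicas=0
    git push origin --delete "prod/${VERSION}"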