You're reading from The DevOps 2.4 Toolkit

Product type: Book
Published in: Nov 2019
Publisher: Packt
ISBN-13: 9781838643546
Edition: 1st Edition

Author: Viktor Farcic

Viktor Farcic is a senior consultant at CloudBees, a member of the Docker Captains group, and an author. He codes using a plethora of languages, starting with Pascal (yes, he is old), Basic (before it got the Visual prefix), ASP (before it got the .NET suffix), C, C++, Perl, Python, ASP.NET, Visual Basic, C#, JavaScript, Java, Scala, and so on. He never worked with Fortran. His current favorite is Go. Viktor's big passions are Microservices, Continuous Deployment, and Test-Driven Development (TDD). He often speaks at community gatherings and conferences. Viktor wrote Test-Driven Java Development, published by Packt Publishing, and The DevOps 2.0 Toolkit. His random thoughts and tutorials can be found in his blog, Technology Conversations.

Creating a Continuous Deployment Pipeline with Jenkins

Having a continuous deployment pipeline capable of a fully automated application life-cycle is a true sign of an organization's maturity.

This is it. The time has come to put all the knowledge we've gained to good use. We are about to define a "real" continuous deployment pipeline in Jenkins. Our goal is to move every commit through a set of steps until the application is installed (or upgraded) and tested in production. We will undoubtedly face some new challenges, but I am confident that we'll manage to overcome them. We already have all the base ingredients; the main thing left is to put them together into a continuous deployment pipeline.

Before we move into the practical sections, we might want to spend a few moments discussing our goals.

Exploring the continuous deployment process

 

Explaining continuous deployment (CDP) is easy. Implementing it is very hard, and the challenges are often hidden and unexpected. Depending on the maturity of your processes, architecture, and code, you might find out that the real problems do not lie in the code of a continuous deployment pipeline, but everywhere else. As a matter of fact, developing a pipeline is the easiest part.

That being said, you might wonder whether you made a mistake by investing your time in reading this book since we are focused mostly on the pipeline that will be executed inside a Kubernetes cluster.

We did not discuss the changes in your other processes. We did not explore what a good architecture that will support CDP pipelines is. We did not dive into how to code your application to be pipeline-friendly. I assumed that you already know all that...

Creating a cluster

We'll start the practical section of the chapter by going to the vfarcic/k8s-specs repository and by making sure that we have the latest revision.

All the commands from this chapter are available in the 07-jenkins-cdp.sh (https://gist.github.com/vfarcic/d0cbca319360eb000098383a09fd65f7) Gist.
cd k8s-specs

git pull

Next, we'll merge your go-demo-3 fork with the upstream. If you forgot the commands, they are available in the go-demo-3-merge.sh (https://gist.github.com/vfarcic/171172b69bb75903016f0676a8fe9388) gist.

It is imperative that you change all the references of vfarcic/go-demo-3 to the address of the image in your Docker Hub account. If, for example, your hub user is jdoe, you should change all vfarcic/go-demo-3 references to jdoe/go-demo-3. Even though I invite you to apply the modifications to all the files of the repository, the...
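The replacement described above can be sketched with grep and sed. The jdoe user and the stand-in manifest below are illustrative, not part of the book's repository:

```shell
# Hypothetical sketch: swap vfarcic/go-demo-3 for your own Docker Hub user.
DH_USER=jdoe

# A stand-in repository with a single manifest referencing the original image
mkdir -p /tmp/go-demo-3-demo
echo "image: vfarcic/go-demo-3:latest" > /tmp/go-demo-3-demo/deployment.yml

# Replace every reference in every file that contains it
cd /tmp/go-demo-3-demo
grep -rl "vfarcic/go-demo-3" . \
    | xargs sed -i "s@vfarcic/go-demo-3@$DH_USER/go-demo-3@g"

cat deployment.yml
```

Note that on macOS, `sed -i` requires an explicit (possibly empty) backup suffix, such as `sed -i ''`.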

Installing Jenkins

We already automated Jenkins installation so that it provides all the features we need out-of-the-box. Therefore, the exercises that follow should be very straightforward. If you are a Docker for Mac or Windows, minikube, or minishift user, we'll need to bring back up the VM we suspended in the previous chapter. Feel free to skip the commands that follow if you are hosting your cluster in AWS or GCP.

cd cd/docker-build

vagrant up

cd ../../

export DOCKER_VM=true

If you prefer running your cluster in AWS with kops or EKS, we'll need to retrieve the AMI ID we stored in docker-ami.log in the previous chapter.

AMI_ID=$(grep 'artifact,0,id' \
    cluster/docker-ami.log \
    | cut -d: -f2)

echo $AMI_ID
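To see what that grep and cut combination extracts, here is a sketch against a fabricated log line in Packer's machine-readable format (the exact contents of your docker-ami.log will differ):

```shell
# Fabricated example of the artifact line Packer writes (illustrative values)
echo '1530000000,amazon-ebs,artifact,0,id,us-east-2:ami-0123456789abcdef0' \
    > /tmp/docker-ami.log

# Everything after the first colon is the AMI ID itself
AMI_ID=$(grep 'artifact,0,id' /tmp/docker-ami.log | cut -d: -f2)

echo $AMI_ID
```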

If GKE is your cluster of choice, we'll need to define variables G_PROJECT and G_AUTH_FILE which...

Defining the build stage

The primary function of the build stage of the continuous deployment pipeline is to build the artifacts and a container image, and to push the image to a registry from which it can be deployed and tested. Of course, we cannot build anything without code, so we'll have to check out the repository as well.

Since building things without running static analysis, unit tests, and other types of validation against static code should be illegal and punishable by public shame, we'll include those steps as well.

We won't deal with building artifacts, nor are we going to run static testing and analysis, from inside the pipeline. Instead, we'll continue relying on Docker's multi-stage builds for all those things, just as we did in the previous chapters.

Finally, we couldn't push to a registry without authentication, so we'll have to log in to Docker...
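As a rough sketch, a build stage along those lines might look like the fragment of a scripted pipeline below. The image name (jdoe/go-demo-3), the docker credentials ID, and the node label are assumptions made for illustration, not the book's exact code:

```shell
# Write the illustrative Jenkins stage to a file so we can inspect it
cat > /tmp/Jenkinsfile-build <<'EOF'
stage('build') {
    node('docker') {
        checkout scm
        // Tag with the branch name to mark the image as not yet fully tested
        sh "docker image build -t jdoe/go-demo-3:${env.BRANCH_NAME}-beta ."
        withCredentials([usernamePassword(
                credentialsId: 'docker',
                usernameVariable: 'USER',
                passwordVariable: 'PASS')]) {
            sh 'docker login -u $USER -p $PASS'
        }
        sh "docker image push jdoe/go-demo-3:${env.BRANCH_NAME}-beta"
    }
}
EOF
cat /tmp/Jenkinsfile-build
```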

Defining the functional testing stage

For the functional testing stage, the first step is to install the application under test. To avoid the potential problems of installing the same release twice, we'll use helm upgrade instead of install.

As you already know, Helm only acknowledges that the resources are created, not that all the Pods are running. To mitigate that, we'll wait for rollout status before proceeding with tests.

Once the application is rolled out, we'll run the functional tests. Please note that, in this case, we will run only one set of tests. In a real-world scenario, there would probably be others like, for example, performance tests or front-end tests for different browsers.

When running multiple sets of different tests, consider using the parallel construct. More information can be found in the Parallelism and Distributed Builds with...
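Put together, a stage along the lines described above might be sketched as follows. The Helm release name, the Namespace, and the test commands are placeholders, not the chapter's exact code:

```shell
# Illustrative functional-testing stage written to a file for inspection
cat > /tmp/Jenkinsfile-func <<'EOF'
stage('func-test') {
    try {
        // Upgrade-or-install avoids failures when the release already exists
        sh "helm upgrade -i go-demo-3 helm/go-demo-3 --namespace go-demo-3-build"
        // Helm returns as soon as resources are created; wait for the rollout
        sh "kubectl -n go-demo-3-build rollout status deployment go-demo-3"
        // Multiple test suites could run concurrently with the parallel step
        parallel(
            functional: { sh "run-functional-tests.sh" },
            performance: { sh "run-performance-tests.sh" }
        )
    } finally {
        // Remove the release under test whether the tests passed or not
        sh "helm delete go-demo-3 --purge"
    }
}
EOF
cat /tmp/Jenkinsfile-func
```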

Defining the release stage

In the release stage, we'll push the Docker images to the registry, as well as the project's Helm Chart. The images will be new tags of the images under test, but this time they will be named using a convention that clearly indicates that they are production-ready.

In the build stage, we tagged images by including the branch name, making it clear that an image is not yet thoroughly tested. Now that we have executed all sorts of tests that validate that the release is indeed working as expected, we can re-tag the images so that they do not include branch names. That way, everyone in our organization can easily distinguish yet-to-be-tested releases from production-ready ones.
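One way to express such a convention is sketched below. The user, version, and branch values are illustrative, not the book's exact scheme, and the docker commands are echoed rather than executed:

```shell
# Hypothetical naming convention: builds carry the branch name,
# production-ready releases drop it.
VERSION=1.0.0
BRANCH=master
BETA_IMAGE="jdoe/go-demo-3:${VERSION}-${BRANCH}"
RELEASE_IMAGE="jdoe/go-demo-3:${VERSION}"

# The commands a release stage would run (echoed here instead of executed)
{
echo "docker tag $BETA_IMAGE $RELEASE_IMAGE"
echo "docker image push $RELEASE_IMAGE"
} | tee /tmp/release-cmds.txt
```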

Since we cannot know (easily) whether the Chart included in the project's repository changed or not, during this stage, we'll push it to ChartMuseum. If the...

Defining the deploy stage

We're almost finished with the pipeline, at least in its current form.

The purpose of the deploy stage is to install the new release to production and to do the last round of tests that only verify whether the new release integrates with the rest of the system. Those tests are often elementary since they do not validate the release on the functional level. We already know that the features work as expected, and the immutability of the containers guarantees that what was deployed as a test release is the same as what will be upgraded in production. What we're not yet sure about is whether there is a problem related to the configuration of the production environment or, in our case, the production Namespace.

If something goes wrong, we need to be able to act swiftly and roll back the release. I'll skip the discussion about the inability to roll back when...
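A deploy stage with a rollback path might be sketched like this. The release name, the Namespace, and the integration test command are illustrative assumptions, not the book's exact code:

```shell
# Illustrative deploy stage with rollback, written to a file for inspection
cat > /tmp/Jenkinsfile-deploy <<'EOF'
stage('deploy') {
    try {
        sh "helm upgrade -i go-demo-3 helm/go-demo-3 --namespace go-demo-3"
        sh "kubectl -n go-demo-3 rollout status deployment go-demo-3"
        // Light integration tests only; functionality was already validated
        sh "run-production-tests.sh"
    } catch (e) {
        // Revision 0 tells Helm to roll back to the previous release
        sh "helm rollback go-demo-3 0"
        error "Deployment of the new release failed"
    }
}
EOF
cat /tmp/Jenkinsfile-deploy
```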

What are we missing in our pipeline?

We already discussed some steps that we might be missing. We might want to store test results in SonarQube. We might want to generate release notes and store them in GitHub. We might need to run performance tests. There are many things we could have done, but we didn't. Those additional steps differ significantly from one organization to another. Even within a company, one team might need different steps than another. Guessing which ones you might need would be an exercise in futility. I would have almost certainly guessed wrong.

One step that almost everyone needs is notification of failure. We need to be notified when something goes wrong and fix the issue. However, there are too many destinations where those notifications might need to be sent. Some prefer email, while others opt for chats. In the latter case, it could be Slack...
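As one possible form of such a notification, a declarative pipeline can send mail (or a chat message, given the right plugin) from a post section. The address below is, of course, a placeholder:

```shell
# Illustrative failure notification, written to a file for inspection
cat > /tmp/Jenkinsfile-post <<'EOF'
post {
    failure {
        // Runs only when the build fails; other conditions include
        // success, unstable, and always
        mail to: 'devops@example.com',
             subject: "Build ${env.BUILD_NUMBER} of ${env.JOB_NAME} failed",
             body: "See ${env.BUILD_URL} for details."
    }
}
EOF
cat /tmp/Jenkinsfile-post
```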

Reusing pipeline snippets through global pipeline libraries

The pipeline we designed works as we expect. However, we'll have a problem on our hands if other teams start copying and pasting the same script for their pipelines. We'd end up with a lot of duplicated code that will be hard to maintain.

Most likely, it will get worse than simple code duplication since not all pipelines will be the same. There's a big chance each is going to be different, so copying and pasting will only be the first action. People will find the pipeline that is closest to what they're trying to accomplish, replicate it, and then change it to suit their needs. Some steps are likely going to be the same for many (if not all) projects, while others will be specific to only one, or just a few, pipelines.

The more pipelines we design, the more patterns will emerge. Everyone...
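To make the idea concrete, a shared step lives as a Groovy file in the vars directory of the library repository and becomes callable from any pipeline by its file name. The body below is an illustrative guess at what a k8sBuildImageBeta-style function might contain, not the repository's actual code:

```shell
# Illustrative global pipeline library step, written to a vars directory
mkdir -p /tmp/jenkins-shared-libraries/vars
cat > /tmp/jenkins-shared-libraries/vars/k8sBuildImageBeta.groovy <<'EOF'
// Invoked from a pipeline as: k8sBuildImageBeta('jdoe/go-demo-3')
def call(image) {
    sh "docker image build -t ${image}:${env.BRANCH_NAME}-beta ."
    sh "docker image push ${image}:${env.BRANCH_NAME}-beta"
}
EOF
cat /tmp/jenkins-shared-libraries/vars/k8sBuildImageBeta.groovy
```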

Consulting global pipeline libraries documentation

We already saw that we can open a repository with global pipeline libraries and consult the functions to find out what they do. While the developer in me prefers that option, many might find it too complicated and might prefer something more "non-developer friendly". Fortunately, there is an alternative way to document and consult libraries.

Let's go back to the forked repository with the libraries.

open "https://github.com/$GH_USER/jenkins-shared-libraries/tree/master/vars"

If you pay closer attention, you'll notice that all Groovy files with names that start with k8s have an accompanying .txt file. Let's take a closer look at one of them.

curl "https://raw.githubusercontent.com/$GH_USER/jenkins-shared-libraries/master/vars/k8sBuildImageBeta.txt"

The output is as follows...

Using Jenkinsfile and Multistage builds

The pipeline we designed has at least two significant shortcomings. It is not aware of branches, and it is not in source control. Every time we instructed Jenkins to use the git step, it pulled the latest commit from the master branch. While that might be OK for demos, it is unacceptable in real-world situations. Our pipeline must pull the commit that initiated a build from the correct branch. In other words, no matter where we push a commit, that same commit must be used by the pipeline.

If we start processing all commits, no matter which branch they come from, we will soon realize that it does not make sense to always execute the same stages. As an example, the release and deploy stages should be executed only if a commit is made to the master branch. Otherwise, we'd always create a new production release, even if...
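With a Jenkinsfile checked into the repository, restricting a stage to commits on the master branch can be done with a when directive in a declarative pipeline. A minimal sketch (the stage contents are placeholders):

```shell
# Illustrative branch-aware Jenkinsfile, written to a file for inspection
cat > /tmp/Jenkinsfile <<'EOF'
pipeline {
    agent none
    stages {
        stage('release') {
            // Executed only for commits pushed to the master branch
            when { branch 'master' }
            steps { echo 'release and deploy steps go here' }
        }
    }
}
EOF
cat /tmp/Jenkinsfile
```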

What now?

We are, finally, finished designing the first iteration of a fully functioning continuous deployment pipeline. All the subjects we explored in previous chapters and all the problems we solved led us to this point. Everything we learned before was a prerequisite for the pipeline we just created.

We succeeded! We are victorious! And we deserve a break.

Before you run away, there are two things I'd like to comment on.

Our builds were very slow. Realistically, they should be at least twice as fast. However, we are operating in a tiny cluster and, more importantly, the go-demo-3-build Namespace has limited resources and very low defaults. Kubernetes throttled the CPU usage of the containers involved in the builds to maintain the default values we set on that Namespace. That was intentional. I wanted to keep the cluster and the Namespaces small so that the costs are kept to a minimum....
