Given that we already explored continuous deployment, you might be wondering why we are even talking about continuous delivery at this point. There are a few reasons for that. First of all, I am conscious that many of you will not or cannot implement continuous deployment. Your tests might not be as reliable as you'd need them to be. Your processes might not allow full automation. You might have to follow regulations that prevent you from reaching nirvana. There could be many other reasons. The point is that not everyone...
You're reading from The DevOps 2.4 Toolkit
Creating a cluster
Just as before, we'll start the practical part by making sure that we have the latest version of the k8s-specs repository.
cd k8s-specs

git pull
Unlike in the previous chapters, you cannot use an existing cluster this time. The reason behind that lies in the reduced requirements. This time, the cluster should NOT have ChartMuseum. Soon you'll see why. What we need are the same hardware specs (excluding GKE), with NGINX Ingress and Tiller running inside the cluster, and with the environment variable LB_IP holding the IP through which we can access the external load balancer, or the IP of the VM in the case of single-VM local clusters like minikube, minishift, and Docker for Mac or Windows...
Defining the whole production environment
All the chapters until this one followed the same pattern. We'd learn about a new tool and, from there on, we'd streamline its installation through Gists in all subsequent chapters. As an example, we introduced ChartMuseum a few chapters ago and learned how to install it; from there on, there was no point in reiterating the same set of steps in the chapters that followed. Instead, we had the installation steps in Gists. Knowing that, you might be wondering why we did not follow the same pattern now. Why was ChartMuseum excluded from the Gists we're using in this chapter? Why isn't Jenkins there as well? Are we going to install ChartMuseum and Jenkins with a different configuration? We're not. Both will have the same configuration, but they will be installed in a slightly different way.
We already saw the benefits...
What is the continuous delivery pipeline?
Now that we have a cluster and the third-party applications running in the production environment, we can turn our attention towards defining a continuous delivery pipeline.
Before we proceed, we'll recap the definitions of continuous deployment and continuous delivery.
When...
Exploring application's repository and preparing the environment
Before I wrote this chapter, I forked the vfarcic/go-demo-3 (https://github.com/vfarcic/go-demo-3) repository into vfarcic/go-demo-5 (https://github.com/vfarcic/go-demo-5). Even though most of the code of the application is still the same, I thought it would be easier to apply and demonstrate the changes in a new repository instead of creating a new branch or doing some other workaround that would allow us to have both processes in the same repository. All in all, go-demo-5 is a copy of go-demo-3 on top of which I made some changes that we'll comment on soon.
Since we'll need to change a few configuration files and push them back to the repository, you should fork vfarcic/go-demo-5, just as you forked vfarcic/k8s-prod.
Next, we'll clone the repository before we explore the relevant files.
cd ....
Switching from Scripted to Declarative Pipeline
"A long time ago in a galaxy far, far away, a group of Jenkins contributors decided to reinvent the way Jenkins jobs are defined and how they operate." (A couple of years in software terms is a lot, and Jenkins contributors are indeed spread throughout the galaxy).
The new type of jobs became known as Jenkins Pipeline. It was well received by the community, and adoption started almost instantly. Everything was excellent, and the benefits of using Pipeline compared to FreeStyle jobs were evident from the start. However, it wasn't easy for everyone to adopt Pipeline. Those who were used to scripting, and especially those familiar with Groovy, had no difficulty switching. But many people used Jenkins without being coders. They did not find Pipeline to be as easy as we thought it would be.
While I do believe that there...
Demystifying Declarative Pipeline through a practical example
Let's take a look at Jenkinsfile.orig, which we'll use as a base to generate a Jenkinsfile containing the correct address of the cluster and the GitHub user.
cat Jenkinsfile.orig
The output is too big for us to explore it in one go, so we'll comment on each section separately. The first in line is the options block.
...
options {
  buildDiscarder logRotator(numToKeepStr: '5')
  disableConcurrentBuilds()
}
...
The first option will result in only the last five builds being preserved in history. Most of the time there is no reason for us to keep all the builds we ever made. The last successful build of a branch is often the only one that matters. We set them to five just to prove that I'm not cheap. By discarding the old builds, we're ensuring that Jenkins will perform faster...
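For orientation, here is a minimal Declarative Pipeline skeleton showing where such an options block sits. The skeleton is illustrative only; the stage and its steps are made up and are not part of the book's actual pipeline.

```groovy
// Illustrative Declarative Pipeline skeleton, not the real Jenkinsfile.
pipeline {
  agent none
  options {
    // Keep only the last five builds in history.
    buildDiscarder logRotator(numToKeepStr: '5')
    // Prevent two builds of this job from running in parallel.
    disableConcurrentBuilds()
  }
  stages {
    stage('build') {
      steps {
        echo 'Building...'
      }
    }
  }
}
```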
Creating and running a continuous delivery job
That's it. We explored the (soon-to-be) Jenkinsfile that contains our Continuous Delivery Pipeline and KubernetesPod.yaml that contains the Pod definition that will be used to create Jenkins agents. There are a few other things we need to do but, before we discuss them, we'll change the address and the Docker Hub user in Jenkinsfile.orig, store the output as Jenkinsfile, and push the changes to the forked GitHub repository.
We'll use a slightly modified version of the Jenkinsfile. Just as in the previous chapter, we'll add the ocCreateEdgeRouteBuild step that will accomplish the same results as if we had the NGINX Ingress controller. Please use Jenkinsfile.oc instead of Jenkinsfile.orig in the command that follows.
cat Jenkinsfile.orig \
  | sed -e "s@acme.com@$ADDR@g" ...
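The substitution itself can be sketched in isolation. Everything below (the file content, the address, and the Docker Hub user) is a made-up stand-in, not the real Jenkinsfile.orig or its values:

```shell
# Hypothetical stand-ins for your cluster address and Docker Hub user.
ADDR=cd.example.local
DH_USER=jane

# A fake Jenkinsfile.orig with the placeholders the real file uses.
printf 'host: acme.com\nuser: vfarcic\n' > Jenkinsfile.orig

# Replace the placeholders and store the result as Jenkinsfile.
cat Jenkinsfile.orig \
  | sed -e "s@acme.com@$ADDR@g" \
        -e "s@vfarcic@$DH_USER@g" \
  | tee Jenkinsfile
```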
What is GitOps and do we want it?
"Git is the only source of truth." If you understand that sentence, you know GitOps. Every time we want to apply a change, we need to push a commit to Git. Want to change the configuration of your servers? Commit a change to Git, and let an automated process propagate it to servers. Want to upgrade ChartMuseum? Change requirements.yaml, push the change to the k8s-prod repository, and let an automated process do the rest. Want to review a change before applying it? Make a pull request. Want to rollback a release? You probably get the point, and I can save you from listing hundreds of other "want to" questions.
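The whole loop can be sketched with nothing but Git. The repository and file below are throwaway stand-ins for k8s-prod and its requirements.yaml, and the versions are made up:

```shell
# A throwaway local repo standing in for k8s-prod.
mkdir -p gitops-demo
cd gitops-demo
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

# The current production state, committed to Git.
echo "version: 0.0.1" > requirements.yaml
git add requirements.yaml
git commit -q -m "Initial production state"

# Want to upgrade? Commit a change; an automated process applies it.
sed -i "s/0.0.1/0.0.2/" requirements.yaml
git commit -aq -m "Upgrade to 0.0.2"

# Git history is the audit trail of everything that happened to production.
git log --oneline
```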
Did we do GitOps in the previous chapter? Was our continuous deployment process following GitOps? The answer to both questions is no. We did keep the code, configurations, and Kubernetes definitions in Git. Most of it...
Upgrading the production environment using GitOps practices
Right now, our production environment contains Jenkins and ChartMuseum. On the other hand, we created a new production-ready release of go-demo-5. Now we should let our business, marketing, or some other department make a decision on whether they'd like to deploy the new release to production and when should that happen. We'll imagine that they gave us the green light to install the go-demo-5 release and that it should be done now. Our users are ready for it.
We'll deploy the new release manually first. That way we'll confirm that our deployment process works as expected. Later on, we'll try to automate the process through Jenkins.
Our whole production environment is stored in the k8s-prod repository. The applications that constitute it are defined in requirements.yaml file. Let's take another...
Creating a Jenkins job that upgrades the whole production environment
Before we upgrade the production environment, we'll create one more release of go-demo-5 so that we have something new to deploy.
open "http://$JENKINS_ADDR/blue/organizations/jenkins/go-demo-5/branches"
We opened the branches screen of the go-demo-5 job.
Please click the play button on the right side of the master row and wait until the new build is finished.
Lo and behold! Our build failed! If you explore the build in detail, you'll discover why it's broken. You'll see the Did you forget to increment the Chart version? message.
Our job does not allow us to push a commit to the master branch without bumping the version of the go-demo-5 chart. That way, we guarantee that every production-ready release is versioned correctly. Let's fix that.
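Such a guard can be sketched in a few lines of shell. Everything below is hypothetical (the file contents are made up); the real job compares the chart version against the one already pushed to ChartMuseum:

```shell
# Fake inputs: the chart's current version and the last pushed version.
echo "version: 0.0.1" > Chart.yaml
echo "0.0.1" > pushed-version

NEW_VERSION=$(grep '^version:' Chart.yaml | awk '{print $2}')
OLD_VERSION=$(cat pushed-version)

if [ "$NEW_VERSION" = "$OLD_VERSION" ]; then
  # A real pipeline would fail the build at this point.
  MESSAGE="Did you forget to increment the Chart version?"
else
  MESSAGE="Chart version bumped. Proceeding."
fi
echo "$MESSAGE"
```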
...
Automating upgrade of the production environment
Now that we have a new release waiting, we would go through the same process as before. Someone would make a decision whether the release should be deployed to production or to let it rot until the new one comes along. If the decision is made that our users should benefit from the features available in that release, we'd need to update a few files in our k8s-prod repository.
cd ../k8s-prod
The first file we'll update is helm/requirements.yaml. Please open it in your favorite editor and change the go-demo-5 version to match the version of the Chart we pushed a few moments ago.
We should also increase the version of the prod-env Chart as a whole. Open helm/Chart.yaml and bump the version.
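As an illustration, the relevant parts of the two files might end up looking like this after the bump. All version numbers and the repository URL are placeholders, not the values from the book:

```yaml
# helm/requirements.yaml (illustrative values)
dependencies:
- name: go-demo-5
  repository: "http://cm.acme.com"
  version: 0.0.2
---
# helm/Chart.yaml (illustrative values)
name: prod-env
version: 0.0.2
```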
Let's take a look at Jenkinsfile.orig from the repository.
cat Jenkinsfile.orig
The output is as follows.
import java.text...
High-level overview of the continuous delivery pipeline
Let's step back and paint a high-level picture of the continuous delivery pipeline we created. To be more precise, we'll draw a diagram instead of painting anything. But, before we dive into a continuous delivery diagram, we'll refresh our memory with the one we used before for describing continuous deployment.
The continuous deployment pipeline contains all the steps from pushing a commit to deploying and testing a release in production.
Continuous delivery removes one of the stages from the continuous deployment pipeline. We do NOT want to deploy a new release automatically. Instead, we want humans to decide whether a release should be upgraded in production. If it should, we need to decide when that will happen. Those (human) decisions are, in our case, happening...
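If you'd rather keep that decision inside a single pipeline, Declarative syntax offers the input step, which pauses a build until a human approves or aborts it. The sketch below is illustrative and is not the approach used in this chapter:

```groovy
// Illustrative sketch: pausing a Declarative Pipeline for a human decision.
pipeline {
  agent none
  stages {
    stage('deploy') {
      steps {
        // The build waits here until someone approves or aborts it.
        input message: 'Upgrade the production environment to this release?'
        echo 'Deploying...'
      }
    }
  }
}
```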
To continuously deploy or to continuously deliver?
Should we use the continuous deployment (CDP) or the continuous delivery (CD) process? That's a hard question to answer, and the answer depends mainly on your internal processes. Still, there are a few questions that might guide us.
- Are your applications truly independent and can be deployed without changing anything else in your cluster?
- Do you have such a high level of trust in your automated tests that you are confident that there's no need for manual actions?
- Are the teams working on applications authorized to make decisions on what to deploy to production and when?
- Are those teams self-sufficient, without depending on other teams?
- Do you really want to upgrade production with every commit to the master branch?
If you answered with no to at least one of those questions, you cannot do continuous deployment. You should aim for continuous...
What now?
We are finished with the exploration of continuous delivery processes. Destroy the cluster if you created it only for the purpose of this chapter. Have a break. You deserve it.