Defining Continuous Deployment

The work on defining Continuous Deployment (CDP) steps should not start in Jenkins or any other similar tool. Instead, we should focus on Shell commands and scripts and turn our attention to the CI/CD tools only once we are confident that we can execute the full process with only a few commands.
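
To illustrate the idea, here is a minimal sketch of such a wrapper script. The script name and the individual commands are hypothetical; they only show the shape a script-first pipeline might take.

#!/usr/bin/env sh
# cdp.sh - a hypothetical wrapper around the pipeline steps,
# runnable locally, from an IDE, or from any CI/CD tool.
set -e

TAG=${1:-latest}    # the image tag is passed as the first argument
# DH_USER is assumed to hold your Docker Hub user

go test ./... --run UnitTest                  # unit tests and static analysis
docker image build -t $DH_USER/go-demo-3:$TAG .
docker image push $DH_USER/go-demo-3:$TAG
kubectl apply -f k8s/build.yml                # deploy the release under test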

We should be able to execute most of the CDP steps from anywhere. Developers should be able to run them locally from a Shell. Others might want to integrate them into their favorite IDEs. The number of ways all or parts of the CDP steps can be executed is vast, and running them as part of every commit is only one of those permutations. The way we execute CDP steps should be agnostic to the way we define them. If we add the need for very high (if not complete) automation, it is clear that the steps must be simple commands or Shell scripts. Adding anything...

To continuously deliver or to continuously deploy?

Everyone wants to implement continuous delivery or deployment. After all, the benefits are too significant to be ignored: increased speed of delivery, increased quality, decreased costs, people freed to dedicate time to what brings value, and so on. Those improvements are music to any decision maker, especially one with a business background. If a tech geek can articulate the benefits continuous delivery brings to the table, when asking a business representative for a budget, the response is almost always "Yes! Do it."

By now, you might be confused with the differences between continuous integration, delivery, and deployment, so I'll do my best to walk you through the primary objectives behind each.

You are doing continuous integration (CI) if you have a set of automated processes...

Defining continuous deployment goals

The continuous deployment process is relatively easy to explain, even though implementation might get tricky. We'll split our requirements into two groups. We'll start with a discussion about the overall goals that should be applied to the whole process. To be more precise, we'll talk about what I consider non-negotiable requirements.

A pipeline needs to be secure. In the past, before Kubernetes was born, that would typically not have been a problem. We would run the pipeline steps on separate servers. We'd have one dedicated to building and another for testing. We might have one for integration and another for performance tests. Once we adopt container schedulers and move into clusters, we lose control of the servers. Even though it is possible to run something on a specific server, that is highly discouraged in Kubernetes. We should...

Defining continuous deployment steps

We'll try to define a minimum set of steps any continuous deployment pipeline should execute. Do not take them literally. Every company is different, and every project has something special. You will likely have to extend them to suit your particular needs. However, that should not be a problem. Once we get a grip on the mandatory steps, extending the process should be relatively straightforward, unless you need to interact with tools that have neither a well-defined API nor a good CLI. If that's the case, my recommendation is to drop those tools. They're not worthy of the suffering they often impose.

We can split the pipeline into several stages. We'll need to build the artifacts (after running static tests and analysis). We have to run functional tests because unit testing is not enough. We need to create a release...

Creating a cluster

We'll start the hands-on part by going back to the local copy of the vfarcic/k8s-specs repository and pulling the latest version.

All the commands from this chapter are available in the 03-manual-cd.sh (https://gist.github.com/vfarcic/bf33bf65299870b68b3de8dbe1b21c36) Gist.
cd k8s-specs

git pull

Just as in the previous chapters, we'll need a cluster if we are to do the hands-on exercises. The rules are still the same. You can continue using the same cluster as before, or you can switch to a different Kubernetes flavor. You can continue using one of the Kubernetes distributions listed as follows or be adventurous and try something different. If you go with the latter, please let me know how it went, and I'll test it myself and incorporate it into the list.

Beware! The minimum requirements for the cluster are now slightly higher....

Creating Namespaces dedicated to continuous deployment processes

If we are to accomplish a reasonable level of security for our pipelines, we need to run them in dedicated Namespaces. Our cluster already has RBAC enabled, so we'll need a ServiceAccount as well. Since security alone is not enough, we also need to make sure that our pipeline does not affect other applications. We'll accomplish that by creating a LimitRange and a ResourceQuota.
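
As a rough sketch of those resources (the names and the limits are illustrative, not the exact definitions from the go-demo-3 repository), the combination might look as follows.

apiVersion: v1
kind: Namespace
metadata:
  name: go-demo-3-build
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build
  namespace: go-demo-3-build
---
apiVersion: v1
kind: LimitRange
metadata:
  name: build
  namespace: go-demo-3-build
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: 100Mi
      cpu: 100m
    default:
      memory: 200Mi
      cpu: 200m
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: build
  namespace: go-demo-3-build
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 3Gi
    limits.cpu: "3"
    limits.memory: 4Gi
    pods: "15"

The LimitRange gives every container sensible defaults when it does not declare its own requests and limits, while the ResourceQuota caps the total resources the pipeline can consume, so a runaway build cannot starve the rest of the cluster.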

I believe that in most cases we should store everything an application needs in the same repository. That makes maintenance much simpler and enables the team in charge of that application to be in full control, even though that team might not have all the permissions to create the resources in a cluster.

We'll continue using the go-demo-3 repository but, since we'll have to change a few things, it is better if you apply the...

Defining a Pod with the tools

Every application is different, and the tools we need for a continuous deployment pipeline vary from one case to another. For now, we'll focus on those we'll need for our go-demo-3 application.

Since the application is written in Go, we'll need the golang image to download the dependencies and run the tests. We'll have to build Docker images, so we should probably add a docker container as well. Finally, we'll have to execute quite a few kubectl commands. For those of you using OpenShift, we'll need oc as well. All in all, we need a Pod with golang, docker, kubectl, and (for some of you) oc.

The go-demo-3 repository already contains a definition of a Pod with all those containers, so let's take a closer look at it.

cat k8s/cd.yml

The output is as follows.

apiVersion: v1
kind: Pod
metadata:
  name: cd
  namespace...
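
The output above is truncated. As a rough sketch only, a Pod bundling those tool containers might look something like the following; the image tags, commands, and volumes are assumptions rather than the exact content of k8s/cd.yml.

apiVersion: v1
kind: Pod
metadata:
  name: cd
  namespace: go-demo-3-build
spec:
  containers:
  - name: docker
    image: docker:18.06
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  - name: golang
    image: golang:1.12
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  - name: kubectl
    image: vfarcic/kubectl
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  volumes:
  - name: workspace
    emptyDir: {}

The long sleep keeps the containers alive so that we can exec into them and run the steps interactively, while the shared emptyDir volume lets the containers pass the cloned code and build artifacts to one another.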

Executing continuous integration inside containers

The first stage in our continuous deployment pipeline will contain quite a few steps. We'll need to check out the code, run unit tests and any other static analysis, build a Docker image, and push it to the registry. If we define CI as a set of automated steps followed by manual operations and validations, we can say that the steps we are about to execute qualify as CI.

The only thing we truly need to make all those steps work is a Docker client with access to a Docker server. One of the containers of the cd Pod already contains it. If you take another look at the definition, you'll see that we are mounting the Docker socket so that the Docker client inside the container can issue commands to the Docker server running on the host. Otherwise, we would be running Docker-in-Docker, and that is not a very good...
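
As a hedged illustration of what that mounting looks like, the relevant fragment of such a Pod definition (the values are representative, not the literal content of cd.yml) follows.

  containers:
  - name: docker
    image: docker:18.06
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock
      type: Socket

With the host's socket mounted, every docker command issued inside the container is executed by the Docker server on the node, so the images we build end up in the node's local image cache.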

Running functional tests

Which steps do we need to execute in the functional testing phase? We need to deploy the new release of the application. Without it, there would be nothing to test. All the static tests were already executed when we built the image, so everything we do from now on will need a live application.

Deploying the application is not enough; we'll also have to validate that it rolled out successfully. Otherwise, we'll have to abort the process.
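
A minimal sketch of such a validation follows; the Deployment name api is an assumption about the go-demo-3 definition.

kubectl -n go-demo-3-build \
    rollout status deployment api

The command blocks until the rollout finishes and, assuming a progress deadline is set in the Deployment, returns a non-zero exit code if the rollout fails, which is exactly what a script needs to decide whether to abort.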

We'll have to be cautious about how we deploy the new release. Since we'll run it in the same cluster as production, we need to be careful that one does not affect the other. We already have a Namespace that provides some level of isolation. However, we'll have to be attentive not to use the same path or domain in Ingress as the one used for production. The two need to be accessible separately from...

Creating production releases

We are ready to create our first production release. We trust our tests, and they proved that it is relatively safe to deploy to production. Since we cannot deploy thin air, we need to create a production release first.

We will not rebuild the image. The artifact we produced (our Docker image) and confirmed through our tests is the one we care about. Rebuilding it would not only be a waste, but it could also potentially produce a different artifact than the one we tested. That must never happen!

Please make sure to replace [...] with your Docker Hub user in one of the commands that follow.

kubectl -n go-demo-3-build \
    exec -it cd -c docker -- sh

export DH_USER=[...]

docker image tag \
    $DH_USER/go-demo-3:1.0-beta \
    $DH_USER/go-demo-3:1.0

docker image push \
    $DH_USER/go-demo-3:1.0

We went back to the docker container...

Deploying to production

We already saw that prod.yml is almost the same as the build.yml we deployed earlier, so there's probably no need to go through it in detail. The only substantial difference is that we'll create the resources in the go-demo-3 Namespace, and that we'll leave Ingress with its original path /demo.

kubectl -n go-demo-3-build \
    exec -it cd -c kubectl -- sh

cat k8s/prod.yml \
    | sed -e "s@:latest@:1.0@g" \
    | tee /tmp/prod.yml

kubectl apply -f /tmp/prod.yml --record

We used sed to convert latest to the tag we built a short while ago, and we applied the definition. This was the first release, so all the resources were created.

Subsequent releases will follow the rolling update process. Since that is something Kubernetes does out-of-the-box, the command will always be the same.
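
For example, a hypothetical second release would repeat the exact same commands with nothing changed but the tag.

cat k8s/prod.yml \
    | sed -e "s@:latest@:2.0@g" \
    | tee /tmp/prod.yml

kubectl apply -f /tmp/prod.yml --record

Kubernetes would detect the changed image and replace the old Pods with new ones gradually, keeping the application available throughout the update.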

Next, we'll wait until...

Running production tests

The process for running production tests is the same as the functional testing we executed earlier. The difference is in the tests we execute, not in how we run them.

The goal of production tests is not to validate all the units of our application. Unit tests did that. Nor is it to validate anything on the functional level. Functional tests did that. Instead, they are very light tests with the simple goal of validating whether the newly deployed application is correctly integrated with the rest of the system. Can we connect to the database? Can we access the application from outside the cluster (as our users will)? Those are a few of the questions we're concerned with when running this last round of tests.
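
As a trivial illustration of such a check (the address variable and the endpoint are placeholders for however your cluster and the application are exposed):

# Can users reach the application from outside the cluster?
curl -i "http://$CLUSTER_DNS/demo/hello"

A 200 response is enough to tell us that Ingress, the Service, and the new Pods are wired together correctly.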

The tests are written in Go, and we still have the golang container running. All we have to do is go through similar steps as before.

kubectl...
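
The command above is truncated. As a plausible shape for those steps (the environment variable and the test filter are assumptions about how the go-demo-3 tests are organized):

kubectl -n go-demo-3-build \
    exec -it cd -c golang -- sh

# Inside the container: run only the production-level tests
ADDRESS=$CLUSTER_DNS go test ./... --run ProductionTest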

Cleaning up pipeline leftovers

The last step in our manually-executed pipeline is to remove all the resources we created, except the production release. Since they are all Pods in the same Namespace, that should be reasonably easy. We can remove them all from go-demo-3-build.

kubectl -n go-demo-3-build \
    delete pods --all

The output is as follows.

pod "cd" deleted

Figure 3-7: The cleanup stage of a continuous deployment pipeline

That's it. Our continuous deployment pipeline is finished. Or, to be more precise, we defined all the steps of the pipeline. We are yet to automate everything.

Did we do it?

We only partially succeeded in defining our continuous deployment stages. We did manage to execute all the necessary steps. We cloned the code, we ran unit tests, and we built the binary and the Docker image. We deployed the application under test without affecting the production release, and we ran functional tests. Once we confirmed that the application works as expected, we updated production with the new release. The new release was deployed through rolling updates but, since it was the first release, we did not see their effect. Finally, we ran another round of tests to confirm that the rolling updates were successful and that the new release is integrated with the rest of the system.

You might be wondering why I said that "we only partially succeeded." We executed the full pipeline. Didn't we?

One of the problems we're facing is that our...

What now?

We're done, for now. Please destroy the cluster if you're not planning to jump to the next chapter right away, assuming it is dedicated to the exercises in this book. Otherwise, execute the command that follows to remove everything we did.

kubectl delete ns \
    go-demo-3 go-demo-3-build