You're reading from The DevOps 2.4 Toolkit (1st Edition), published by Packt in Nov 2019 (ISBN-13: 9781838643546).
Author: Viktor Farcic
Viktor Farcic is a senior consultant at CloudBees, a member of the Docker Captains group, and an author. He codes using a plethora of languages, starting with Pascal (yes, he is old), Basic (before it got the Visual prefix), ASP (before it got the .NET suffix), C, C++, Perl, Python, ASP.NET, Visual Basic, C#, JavaScript, Java, Scala, and so on. He never worked with Fortran. His current favorite is Go. Viktor's big passions are Microservices, Continuous Deployment, and Test-Driven Development (TDD). He often speaks at community gatherings and conferences. Viktor wrote Test-Driven Java Development (Packt Publishing) and The DevOps 2.0 Toolkit. His random thoughts and tutorials can be found on his blog, Technology Conversations.

Installing and Setting Up Jenkins

When used by engineers, UIs are evil. They sidetrack us from repeatability and automation.

UIs do have their purpose. They are supposed to provide enough colors and random graphs for CIOs, CTOs, other C-level executives, and mid-level managers. Management works in multi-color, while engineers should be limited to dual-color terminals, mixed with the slightly richer color palette of the IDEs and editors we use to code. We produce commits, while managers fake interest by looking at UIs.

The preceding paragraph is a bit exaggerated. It's not true that UIs are useful only to managers, nor that they fake interest. At least, that's not true for all of them. UIs do provide a lot of value but, unfortunately, they are often abused to the point of postponing or even preventing automation. We'll try to make an additional effort to remove Jenkins...

Creating a Cluster and retrieving its IP

You already know what the first steps are. Create a new cluster or reuse the one you dedicated to the exercises.

We'll start by going to the local copy of the vfarcic/k8s-specs repository and making sure that we have the latest revision.

All the commands from this chapter are available in the 06-jenkins-setup.sh (https://gist.github.com/vfarcic/4ea447d106c96cb088bc8d616719f6e8) Gist.
cd k8s-specs

git pull

We'll need a few files from the go-demo-3 repository you cloned in one of the previous chapters. To be on the safe side, please merge it with the upstream. If you forgot the commands, they are available in the go-demo-3-merge.sh gist (https://gist.github.com/vfarcic/171172b69bb75903016f0676a8fe9388).
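The general shape of such a merge can be demonstrated with throwaway local repositories (everything below is illustrative; the actual commands for go-demo-3 are in the gist):

```shell
# Demonstrates the general git flow for picking up upstream changes,
# using throwaway local repositories. Names and paths are illustrative;
# the real commands for go-demo-3 are in the referenced gist.
set -e
cd "$(mktemp -d)"
git init -q upstream                  # stands in for the upstream repo
cd upstream
git config user.email demo@example.com
git config user.name demo
echo one > file.txt
git add file.txt
git commit -qm "initial commit"
cd ..
git clone -q upstream fork            # stands in for your clone
cd upstream
echo two >> file.txt
git commit -qam "upstream change"     # a change made after you cloned
cd ../fork
git pull -q                           # fast-forwards the clone to upstream
cat file.txt
```

After the pull, the clone contains both the original line and the upstream change, which is all "merging with the upstream" means here.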

The requirements are the same as those from the previous chapters. The only difference is that I will assume that you'...

Running Jenkins

We'll need a domain which we'll use to set the Ingress hostname and through which we'll be able to open the Jenkins UI. We'll continue using the nip.io service to generate domains. Just as before, remember that this is only a temporary solution and that you should use "real" domains with the IP of your external load balancer instead.

JENKINS_ADDR="jenkins.$LB_IP.nip.io"

echo $JENKINS_ADDR

The output of the latter command should provide a visual confirmation that the address we'll use for Jenkins looks OK. In my case, it is jenkins.52.15.140.221.nip.io.
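Since nip.io simply embeds the IP in the host name, there's nothing to configure; any lookup is answered by parsing the IP back out of the name. A quick illustration using the example address from my output above (substitute your own LB_IP):

```shell
# nip.io resolves any name of the form <anything>.<IP>.nip.io to <IP>.
# The IP below is the example value from the output above, not one to reuse.
LB_IP="52.15.140.221"
JENKINS_ADDR="jenkins.$LB_IP.nip.io"
# The embedded IP can be parsed straight back out of the host name:
echo "$JENKINS_ADDR" | cut -d. -f2-5   # prints 52.15.140.221
```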

A note to minishift users
Helm will try to install the Jenkins Chart with the process in a container running as user 0. By default, that is not allowed in OpenShift. We'll skip discussing the best approach to correct the issue, and I'll assume you already...

Using Pods to run tools

We won't explore how to write a continuous deployment pipeline in this chapter. That is reserved for the next one. Right now, we are only concerned with whether our Jenkins setup is working as expected. We need to know whether Jenkins can interact with Kubernetes, whether we can run the tools we need as Pods, and whether those Pods can be spun up across different Namespaces. On top of that, we still need to solve the issue of building container images. Since we already established that it is not a good idea to mount a Docker socket, nor to run containers in privileged mode, we need to find a valid alternative. In parallel to solving those and a few other challenges we'll encounter, we cannot lose focus on automation. Everything we do has to be converted into an automated setup unless we make a conscious decision that it is not worth the trouble.

I'm jumping...

Running builds in different Namespaces

One of the significant disadvantages of the script we used inside my-k8s-job is that it runs in the same Namespace as Jenkins. We should separate builds from Jenkins and thus ensure that they do not affect its stability.

We can create a system where each application has two Namespaces; one for testing and the other for production. We can define quotas, limitations, and other things we are used to defining at the Namespace level. As a result, we can guarantee that testing an application will not affect the production release. With Namespaces, we can separate one set of applications from another. At the same time, we'll reduce the chance that one team will accidentally mess with the applications of another. Our end goal is to be secure without limiting our teams. By giving them freedom within their own Namespaces, we can be secure without...
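As an illustration, such a pair of Namespaces with a quota on the build side might look like the following sketch (the Namespace names match those used later in this chapter; the quota values are arbitrary assumptions, not taken from the book):

```yaml
# Hedged sketch: two Namespaces per application, with a ResourceQuota
# limiting the build/testing side. The quota values are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: go-demo-3-build    # testing and builds
---
apiVersion: v1
kind: Namespace
metadata:
  name: go-demo-3          # production releases
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: build
  namespace: go-demo-3-build
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 3Gi
    limits.cpu: "3"
    limits.memory: 4Gi
    pods: "15"
```

With a quota like that in place, a runaway build can exhaust only its own Namespace's budget, never the resources reserved for production.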

Creating nodes for building container images

We already discussed that mounting a Docker socket is a bad idea due to security risks. Running Docker in Docker would require privileged access, and that is almost as unsafe as mounting the Docker socket. On top of that, both options have other downsides. Using the Docker socket would introduce processes unknown to Kubernetes and could interfere with its scheduling capabilities. Running Docker in Docker could mess with networking. There are other reasons why neither option is good, so we need to look for an alternative.

Recently, new projects have sprung up attempting to help with building container images. Good examples are img (https://github.com/genuinetools/img), orca-build (https://github.com/cyphar/orca-build), umoci (https://github.com/openSUSE/umoci), buildah (https://github.com/containers/buildah), FTL (https://github.com/GoogleCloudPlatform...

Testing Docker builds outside the cluster

No matter whether you chose to use static VMs or decided to create them dynamically in AWS or GCE, the steps to test them are the same. From Jenkins' perspective, all that matters is that there are agent nodes with the label docker.
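In a Pipeline, such an agent is requested by its label. A minimal sketch (the stage content and image tag are illustrative, not the book's script):

```groovy
// Request any agent carrying the "docker" label; the Docker commands
// then run on that VM instead of inside the Kubernetes cluster.
node("docker") {
    stage("build") {
        checkout scm
        // Illustrative image name; replace with your own repository/tag.
        sh "docker image build -t vfarcic/go-demo-3:1.0-beta ."
    }
}
```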

We'll modify our Pipeline to use the node labeled docker.

open "http://$JENKINS_ADDR/job/my-k8s-job/configure"

Please click the Pipeline tab and replace the script with the one that follows.

podTemplate(
    label: "kubernetes",
    namespace: "go-demo-3-build",
    serviceAccount: "build",
    yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kubectl
    image: vfarcic/kubectl
    command: ["sleep"]
    args: ["100000"]
  - name: oc...

Automating Jenkins installation and setup

One of the critical parts of Jenkins automation is the management of credentials. Jenkins uses the hudson.util.Secret and master.key files to encrypt all the credentials. The two are stored in the secrets directory inside the Jenkins home directory. The credentials we uploaded or pasted are stored in credentials.xml. On top of those, each plugin (for example, Google Cloud) can add its own files with credentials.

We need the secrets as well as the credentials if we are to automate the Jenkins setup. One solution could be to generate the secrets, use them to encrypt the credentials, and store them as Kubernetes Secrets or ConfigMaps. However, that is a tedious process.
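For illustration, that approach would amount to something like the following sketch (the Namespace and the values are placeholders; only the file names come from the text above):

```yaml
# Hedged sketch: Jenkins' encryption files stored as a Kubernetes Secret.
# The data values must be the base64-encoded file contents; these are
# placeholders, and the Namespace is an assumption.
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-secrets
  namespace: jenkins
type: Opaque
data:
  master.key: PHBsYWNlaG9sZGVyPg==
  hudson.util.Secret: PHBsYWNlaG9sZGVyPg==
```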

Since we already have a fully configured Jenkins, we might just as well copy the files. We'll persist the files we need to the local directories cluster/jenkins and cluster/jenkins...

What now?

If we exclude the case of entering the AWS key, our Jenkins setup is fully automated. The Kubernetes plugin is preconfigured to support Pods running in other Namespaces, the Google and AWS clouds will be set up if we choose to use them, and credentials are copied to the correct locations, using the same encryption keys as those used to encrypt them in the first place. All in all, we're finally ready to work on our continuous deployment pipeline. The next chapter will be the culmination of everything we did thus far.

Please note that the current setup is designed to support "one Jenkins master per team" strategy. Even though you could use the experience you gained so far to run a production-ready Jenkins master that will serve everyone in your company, it is often a better strategy to have one master per team. That approach provides quite a few...

