You're reading from Kubernetes – An Enterprise Guide - Second Edition

Product type: Book
Published in: Dec 2021
Publisher: Packt
ISBN-13: 9781803230030
Edition: 2nd Edition
Authors (2):

Marc Boorshtein

Marc Boorshtein has been a software engineer and consultant for 20 years and is currently the CTO (Chief Technology Officer) of Tremolo Security, Inc. Marc has spent most of his career building identity management solutions for large enterprises, U.S. Government civilian agencies, and local government public safety systems.

Scott Surovich

Scott Surovich has been involved in the industry for over 25 years and is currently the Global Container Engineering Lead at a tier 1 bank, where, as the global on-premises Kubernetes product owner, he architects and delivers cluster standards, including the surrounding ecosystem. His previous roles include working on other global engineering teams, including Windows, Linux, and virtualization.

Provisioning a Platform

Every chapter in this book, up until this point, has focused on the infrastructure of your cluster. We have explored how to deploy Kubernetes, how to secure it, and how to monitor it. What we haven't talked about is how to deploy applications.

In this, our final chapter, we're going to work on building an application deployment platform using what we've learned about Kubernetes. We're going to build our platform based on some common enterprise requirements. Where we can't directly implement a requirement, because building a platform on Kubernetes can fill its own book, we'll call it out and provide some insights.

In this chapter, we will cover the following topics:

  • Designing a pipeline
  • Preparing our cluster
  • Deploying GitLab
  • Deploying Tekton
  • Deploying ArgoCD
  • Automating project onboarding using OpenUnison

You'll have a good starting point for building out your own GitOps...

Technical requirements

To perform the exercises in this chapter, you will need a clean KinD cluster with a minimum of 16 GB of memory, 75 GB storage, and 4 CPUs. The system we will build is minimalist but still requires considerable horsepower to run.

You can access the code for this chapter at the following GitHub repository: https://github.com/PacktPublishing/Kubernetes---An-Enterprise-Guide-2E/tree/main/chapter14.

Designing a pipeline

The term "pipeline" is used extensively in the Kubernetes and DevOps world. Very simply, a pipeline is a process, usually automated, that takes code and gets it running. This usually involves the following:

Figure 14.1: A simple pipeline

Let's quickly run through the steps involved in this process:

  1. Storing the source code in a central repository, usually Git
  2. When code is committed, building it and generating artifacts, usually a container
  3. Telling the platform – in this case, Kubernetes – to roll out the new containers and shut down the old ones

This is about as basic as a pipeline can get, and it isn't of much use in most deployments. In addition to building our code and deploying it, we want to make sure we scan containers for known vulnerabilities. We may also want to run our containers through some automated testing before going into production. In enterprise deployments, there's...

Preparing our cluster

Before we begin deploying our technology stack, we need to do a couple of things. I recommend starting with a fresh cluster. If you're using the KinD cluster from this book, start with a new cluster. We're deploying several components that need to be integrated and it will be simpler and easier to start fresh rather than potentially struggling with previous configurations. Before we start deploying the applications that will make up our stack, we're going to deploy JetStack's cert-manager to automate certificate issuing, a simple container registry, and OpenUnison for authentication and automation.
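As a hedged sketch of where cert-manager fits: once the CA keypair from the next step exists, cert-manager can issue certificates from it via a `ClusterIssuer`. The `Secret` name below is illustrative, not the book's actual value; the book's own manifests live in the chapter14 directory.

```yaml
# Illustrative only: a cert-manager ClusterIssuer backed by our own CA.
# Assumes the CA certificate and key have been stored in a Secret named
# "ca-key-pair" in the cert-manager namespace (the name is hypothetical).
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: enterprise-ca
spec:
  ca:
    secretName: ca-key-pair
```

With an issuer like this in place, any `Certificate` resource (or ingress-shim annotation) referencing `enterprise-ca` gets a certificate signed by our root rather than a self-signed one.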

Before creating your cluster, let's generate a root certificate for our certificate authority (CA) and make sure our host trusts it. This is important so that we can push a sample container without worrying about trust issues:

  1. Create a self-signed certificate that we'll use as our CA. The chapter14/shell directory of the...
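
The book ships its own script for this step in the chapter14/shell directory; as an illustration of what such a step does, a root certificate can be generated with openssl along these lines (filenames and the CN are examples, not the book's values):

```shell
# Illustrative only: generate a self-signed root certificate to act as a CA.
# Requires OpenSSL 1.1.1+ for the -addext flag; filenames are examples.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout ca.key -out ca.crt \
  -subj "/CN=enterprise-ca" \
  -addext "basicConstraints=critical,CA:TRUE"

# Inspect the result to confirm the subject we set
openssl x509 -in ca.crt -noout -subject
```

The resulting ca.crt is what you would add to your host's trust store so that pushes to the registry don't fail TLS verification.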

Deploying GitLab

When building a GitOps pipeline, one of the most important components is a Git repository. GitLab has many components besides just Git, including a UI for navigating code, a web-based integrated development environment (IDE) for editing code, and a robust identity implementation to manage access to projects in a multi-tenant environment. This makes it a great solution for our platform since we can map our "roles" to GitLab groups.

In this section, we're going to deploy GitLab into our cluster and create two simple repositories that we'll use later when we deploy Tekton and ArgoCD. We'll focus on the automation steps when we revisit OpenUnison to automate our pipeline deployments.

GitLab deploys with a Helm chart. For this book, we built a custom values file to run a minimal install. While GitLab comes with features that are similar to ArgoCD and Tekton, we won't be using them. We also didn't want to worry about high availability...
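
To give a sense of what "minimal install" means, the sketch below shows the kind of values a trimmed-down GitLab chart deployment uses. The keys shown are from the upstream chart, but the domain is a placeholder and the exact keys can vary by chart version; the book's actual values file is in the chapter14 repository.

```yaml
# Illustrative minimal values for the GitLab Helm chart; not the book's file.
global:
  hosts:
    domain: apps.local      # hypothetical domain for our lab cluster
certmanager:
  install: false            # we already run our own cert-manager
prometheus:
  install: false            # monitoring was covered in earlier chapters
gitlab-runner:
  install: false            # Tekton, not GitLab CI, will build our images
```

Disabling the bundled runner, Prometheus, and cert-manager keeps the footprint small and avoids duplicating components our platform already provides.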

Deploying Tekton

Tekton is the pipeline system we're using for our platform. Originally part of the Knative project for building function-as-a-service workloads on Kubernetes, Tekton has since broken out into its own project. The biggest difference between Tekton and other pipeline technologies you may have run is that Tekton is Kubernetes-native: everything from its execution system to its pipeline definitions and automation webhooks runs on just about any Kubernetes distribution you can find. For example, we'll be running it in KinD, and Red Hat has moved to Tekton as the main pipeline technology for OpenShift, starting with version 4.1.

The process of deploying Tekton is pretty straightforward. Tekton is a series of operators that look for the creation of custom resources that define a build pipeline. The deployment itself only takes a couple of kubectl commands:

$ kubectl create ns tekton-pipelines
$ kubectl create -f chapter14/yaml/tekton-pipelines-policy.yaml
$ kubectl apply ...
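
Once the operators are running, a build pipeline is defined entirely as custom resources. As a hedged sketch (the task name and image here are illustrative, not from the book's pipeline), the smallest useful unit is a Task, whose steps are just containers run in order:

```yaml
# Illustrative Tekton Task, not the book's actual build task.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-build
spec:
  steps:
    - name: say-hello
      image: alpine:3.15
      script: |
        #!/bin/sh
        echo "building..."
```

Tasks are then sequenced by a Pipeline resource and executed by creating PipelineRun objects, which the Tekton operators watch for and turn into pods.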

Deploying ArgoCD

So far, we have a way to get into our cluster, a way to store code, and a system for building our code and generating images. The last component of our platform is our GitOps controller. This is the piece that lets us commit manifests to our Git repository and make changes to our cluster. ArgoCD is a tool from Intuit that provides a great UI and is driven by a combination of custom resources and Kubernetes-native ConfigMap and Secret objects. It has a CLI tool, and both the web and CLI tools are integrated with OpenID Connect, so it will be easy to add SSO with OpenUnison.

Let's deploy ArgoCD and use it to launch our hello-python web service:

  1. Deploy using the standard YAML from https://argo-cd.readthedocs.io/en/stable/:
    $ kubectl create namespace argocd
    $ kubectl apply -f chapter14/argocd/argocd-policy.yaml
    $ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    
  2. Create...
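
To make the GitOps relationship concrete, an ArgoCD Application custom resource ties a Git repository to a target namespace. The sketch below is illustrative: the repository URL, path, and namespace are placeholders, not the book's actual values.

```yaml
# Illustrative ArgoCD Application; repoURL, path, and namespace are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hello-python
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/devops/hello-python-operations.git
    targetRevision: main
    path: src
  destination:
    server: https://kubernetes.default.svc
    namespace: python-hello
  syncPolicy:
    automated: {}
```

With `syncPolicy.automated` set, ArgoCD continuously reconciles the cluster against the manifests in the repository, so a merged commit is all it takes to roll out a change.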

Automating project onboarding using OpenUnison

Earlier in this chapter, we deployed the OpenUnison NaaS portal. This portal lets users request new namespaces and allows developers to request access to those namespaces via a self-service interface. The workflows built into this portal are very basic but create the namespace and the appropriate RoleBinding objects. What we want to do is build a workflow that integrates our platform and creates all of the objects we created manually earlier in this chapter. The goal is to be able to deploy a new application into our environment without having to run kubectl commands (or at least while minimizing their use).

This will require careful planning. Here's how our developer workflow will run:

Figure 14.6: Platform developer workflow

Let's quickly run through the workflow that we see in the preceding figure:

  1. An application owner will request an application be created.
  2. The infrastructure...

Deploying an application

So far, we've explored the theory of building pipelines and workflows, and we have also deployed a technology stack that implements that theory. The last step is to walk through the process of deploying an application in our cluster. There will be three actors in this flow.

Summary

Coming into this chapter, we hadn't spent much time on deploying applications. We wanted to close things out with a brief introduction to application deployment and automation. We learned about pipelines, how they are built, and how they run on a Kubernetes cluster. We explored the process of building a platform by deploying GitLab for source control, built out a Tekton pipeline to work in a GitOps model, and used ArgoCD to make the GitOps model a reality. Finally, we automated the entire process with OpenUnison.

Using the information in this chapter should give you some direction as to how you want to build your own platform. Using the practical examples in this chapter will help you map the requirements of your organization to the technology needed to automate your infrastructure. The platform we built in this chapter is far from complete. It should give you a map for planning your own platform that matches your needs.

Finally, thank you! Thank you for joining...

Questions

  1. True or false: A pipeline must be implemented to make Kubernetes work.
    1. True
    2. False
  2. What are the minimum steps of a pipeline?
    1. Build, scan, test, and deploy
    2. Build and deploy
    3. Scan, test, deploy, and build
    4. None of the above
  3. What is GitOps?
    1. Running GitLab on Kubernetes
    2. Using Git as an authoritative source for operations configuration
    3. A silly marketing term
    4. A product from a new start-up
  4. What is the standard for writing pipelines?
    1. All pipelines should be written in YAML.
    2. There are no standards; every project and vendor has its own implementation.
    3. JSON combined with Go.
    4. Rust.
  5. How do you deploy a new instance of a container in a GitOps model?
    1. Use kubectl to update the Deployment or StatefulSet in the namespace.
    2. Update the Deployment...


Username | Role                 | Notes
mmosley  | System administrator | Has overall control of the cluster. Responsible for approving new applications.
jjackson | Application owner    | Requests a new application. Is responsible for adding developers and merging pull requests.
...