Chapter 10. Designing for High Availability and Scalability

This chapter covers advanced concepts such as high availability and scalability, along with the requirements that Kubernetes operators will need to meet in order to run Kubernetes in production. We'll take a look at the Platform as a Service (PaaS) offerings from Google and Azure, and we'll use the familiar principles of running production workloads in a cloud environment.

We'll cover the following topics in this chapter:

  • Introduction to high availability
  • High availability best practices
  • Multi-region setups
  • Security best practices
  • Setting up high availability on the hosted Kubernetes PaaS
  • Cluster life cycle events
  • How to use admission controllers
  • Getting involved with the workloads API
  • What is a custom resource definition (CRD)?

Technical requirements


You'll need to have access to your Google Cloud Platform account in order to explore some of these options. You can also use a local Minikube setup to test some of these features, but many of the principles and approaches we'll discuss here require servers in the cloud.

 

Introduction to high availability


In order to understand our goals for this chapter, we first need to talk about the more general terms of high availability and scalability. Let's look at each individually to understand how the pieces work together.

We'll discuss the required terminology and begin to understand the building blocks that we'll use to conceptualize, construct, and run a Kubernetes cluster in the cloud.

Let's dig into high availability, uptime, and downtime.

How do we measure availability?

High availability (HA) is the idea that your application is available, meaning reachable, to your end users. In order to create highly available applications, your application code and the frontend that users interact with need to be available the majority of the time. This term comes from the system design field, which defines the architecture, interface, data, and modules of a system in order to satisfy a given set of requirements. There are many examples of system design in disciplines from...
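
To make this more concrete, availability is usually expressed as the percentage of time a system is reachable over a given period, and is discussed in terms of "nines". A common back-of-the-envelope formulation (standard arithmetic, not specific to Kubernetes) is:

    \text{availability} = \frac{\text{uptime}}{\text{uptime} + \text{downtime}}

Over a year, the downtime budget each extra nine leaves you is roughly:

  • 99.9% (three nines): about 8.77 hours of downtime per year
  • 99.99% (four nines): about 52.6 minutes per year
  • 99.999% (five nines): about 5.26 minutes per year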

HA best practices


In order to build HA Kubernetes systems, it's important to note that availability is as often a function of people and process as it is of technology failure. While hardware and software fail often, humans and their involvement in the process are a very predictable drag on the availability of all systems.

It's important to note that this book won't get into how to design a microservices architecture for failure, which is a huge part of coping with some (or all) system failures in a cluster scheduling and networking system such as Kubernetes.

There's another important concept to consider: graceful degradation.

Graceful degradation is the idea that you build functionality in layers and modules, so that even with the catastrophic failure of some pieces of the system, you're still able to provide some level of availability. There is a corresponding term, progressive enhancement, which is followed in web design, but we won't be using that pattern here. Graceful...
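
As a small Kubernetes-flavored illustration of this idea, here is a minimal sketch (the names, image, probe path, and replica counts are hypothetical, and the PodDisruptionBudget API version assumes a cluster from roughly the Kubernetes 1.11/1.12 era this book targets): several replicas behind a readiness probe, plus a PodDisruptionBudget, let part of the system fail while the rest keeps serving.

    # Deployment with more than one replica, so losing a pod reduces capacity
    # rather than causing a full outage (hypothetical names and image).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web
            image: nginx:1.15
            readinessProbe:          # an unready pod is removed from the Service
              httpGet:               # endpoints instead of taking requests down with it
                path: /
                port: 80
    ---
    # PodDisruptionBudget: keep at least two replicas available during
    # voluntary disruptions such as node drains.
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: web-frontend-pdb
    spec:
      minAvailable: 2
      selector:
        matchLabels:
          app: web-frontend

If one replica fails, the remaining replicas keep serving at reduced capacity, which is exactly the degraded-but-available behavior described above.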

Cluster life cycle


There are a few more key topics that we should cover so that you're armed with the knowledge you need to create highly available Kubernetes clusters. Let's discuss how you can use admission controllers, workloads, and custom resource definitions to extend your cluster.

Admission controllers

Admission controllers are pieces of Kubernetes code that allow you to intercept calls to the Kubernetes API server after they have been authenticated and authorized. There are standard admission controllers included with the core Kubernetes system, and people also write their own. There are two controllers that are more important than the rest:

  • The MutatingAdmissionWebhook is responsible for calling webhooks that mutate, in serial, a given request. This controller only runs during the mutating phase of admission. You can use a controller like this in order to build business logic into your cluster and customize admission logic with operations such as CREATE...
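
To give a sense of what registering such a webhook looks like, here is a minimal sketch of a MutatingWebhookConfiguration; the webhook name, Service, path, and CA bundle placeholder are all hypothetical, and the v1beta1 API version assumes a cluster from roughly the era this book targets:

    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: MutatingWebhookConfiguration
    metadata:
      name: pod-defaulter
    webhooks:
    - name: pod-defaulter.example.com
      clientConfig:
        service:
          name: pod-defaulter        # hypothetical Service fronting your webhook server
          namespace: default
          path: "/mutate"
        caBundle: "<base64-encoded-CA-bundle>"   # CA that signed the webhook's serving certificate
      rules:
      - operations: ["CREATE"]       # only intercept pod creation
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
      failurePolicy: Ignore          # don't block admission if the webhook is unreachable

With this in place, the API server sends an AdmissionReview to the referenced Service for every pod CREATE, and the webhook can patch the object before it is persisted.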

Summary


In this chapter, we looked into the core components of HA. We explored the ideas of availability, uptime, and fragility, and took those concepts further to see how we could achieve five nines of uptime.

Additionally, we explored the key components of a highly available cluster, etcd and the control plane nodes, and left room to imagine other ways we could build HA into our clusters using hosted PaaS offerings from the major cloud providers.

Later, we looked at the cluster life cycle and dug into advanced capabilities with a number of key features of the Kubernetes system: admission controllers, the workloads API, and CRDs.

Lastly, we created a CRD on a GKE cluster within GCP in order to understand how to begin building these custom pieces of software.
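
As a reminder of what that involves, here is a minimal sketch of a CRD manifest; the group, kind, and names are hypothetical rather than the ones used in the chapter, and the v1beta1 API version assumes a cluster from roughly the era this book targets:

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: backups.example.com      # must be <plural>.<group>
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: backups
        singular: backup
        kind: Backup
        shortNames:
        - bk

Once this is applied, the API server serves the new resource like any built-in type, and kubectl can create, list, and delete Backup objects.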

 

 

Questions


  1. What are some ways to measure the quality of an application?
  2. What is the definition of uptime?
  3. How many nines of availability should a Kubernetes system strive for?
  4. What does it mean for a system to fail in predefined ways, while still providing reduced functionality?
  5. Which PaaS provides highly available master and worker nodes across multiple availability zones?
  6. What's a stacked node?
  7. What's the name of the API that collects all of the controllers in a single, unified API?

Further reading


If you'd like to read more about high availability and mastering Kubernetes, check out the following Packt resources:
