Chapter 14. Hardening Kubernetes
In this chapter, we'll look at considerations for moving to production. We will also show you some helpful tools and third-party projects that are available in the Kubernetes community at large and where you can go to get more help.
This chapter will discuss the following topics:
- Production characteristics
- Lessons learned from Kubernetes production
- Hardening the cluster
- The Kubernetes ecosystem
- Where can you get help?
So far in this book, we have walked through a number of typical operations using Kubernetes. As we have seen, Kubernetes offers a variety of features and abstractions that ease the burden of day-to-day management of container deployments.
There are many characteristics that define a production-ready system for containers. The following diagram provides a high-level view of the major concerns for production-ready clusters. This is by no means an exhaustive list, but it's meant to provide some solid ground for heading into production operations:
Production characteristics for container operations
We saw how the core concepts and abstractions of Kubernetes address a few of these concerns. The service abstraction has built-in service discovery and health checking at both the service and application level. We also get seamless application updates and scalability from the replication controller and deployment constructs. All of the core abstractions of services, replication controllers...
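To make these abstractions concrete, here is a minimal sketch of a Deployment with an application-level liveness probe, fronted by a Service for discovery and load balancing. The names (`web`), image, and port values are illustrative, not taken from a specific example in this book:

```yaml
# Hypothetical sketch: a Deployment with a liveness probe,
# fronted by a Service for discovery and load balancing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # illustrative image
        ports:
        - containerPort: 80
        livenessProbe:         # application-level health checking
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Applying this with `kubectl apply -f` gives you rolling updates, restart-on-failure, and a stable Service endpoint without any extra wiring.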
Lessons learned from production
Kubernetes has been around long enough now that there are a number of companies running Kubernetes. In our day jobs, we've seen Kubernetes run in production across a number of different industry verticals and in numerous configurations. Let's explore what folks across the industry are doing when providing customer-facing workloads. At a high level, there are several key areas:
- Make sure to set limits in your cluster.
- Use the appropriate workload types for your application.
- Label everything! Labels are very flexible and can contain a lot of information that can help identify an object, route traffic, or determine placement.
- Don't rely on default values; tweak the defaults for the core Kubernetes components to suit your workloads.
- Use load balancers as opposed to exposing services directly on a node's port.
- Build your Infrastructure as Code and use provisioning tools such as CloudFormation or Terraform, and configuration tools such as Chef, Ansible, or Puppet.
- Consider not running stateful...
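Several of the recommendations above can be sketched in manifests. The following is a hypothetical example, assuming a namespace called `production` and an `app: web` workload; the specific quantities are placeholders you would tune for your own cluster. It shows per-container limits via a `LimitRange`, namespace-wide caps via a `ResourceQuota`, liberal labeling, and exposing a workload through a load balancer rather than a node port:

```yaml
# Hypothetical sketch: cluster limits, labels, and a load-balanced Service.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: production    # illustrative namespace
spec:
  limits:
  - type: Container
    default:               # limits applied when a container sets none
      cpu: 500m
      memory: 256Mi
    defaultRequest:        # requests applied when a container sets none
      cpu: 100m
      memory: 128Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: production
spec:
  hard:                    # namespace-wide caps
    requests.cpu: "10"
    requests.memory: 20Gi
    pods: "50"
---
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  labels:
    app: web
    tier: frontend         # label everything: routing, placement, auditing
spec:
  type: LoadBalancer       # prefer this over exposing a node's port directly
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Keeping manifests like these in version control alongside your Terraform or CloudFormation templates is what makes the Infrastructure as Code recommendation practical.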
Let's look at some other common recommendations for hardening your cluster in production. These use cases cover both intentional, malicious actions against your cluster and accidental misuse. Let's take a look at what we can do to secure things.
First off, you want to ensure that access to the Kubernetes API is controlled. Given that all actions in Kubernetes are API-driven, we should secure this interface first. We can control access to this API with several settings:
- Encrypt all traffic: In order to keep communication secure, you should make sure that Transport Layer Security (TLS) is set up for API communication in the cluster. Most of the installation methods we've reviewed in this book create the necessary component certificates, but it's still up to cluster operators to identify any in-use local ports that may not use the more secure settings.
- Authenticate your access: Just as with any large-scale computer system, you want to ensure that the identity of a user...
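As a rough illustration of these settings, the following shows the kinds of flags you might pass to the API server. The file paths are placeholders, and flag availability can vary between Kubernetes versions, so treat this as a sketch and check the flag reference for your release:

```shell
# Illustrative kube-apiserver flags (paths are placeholders):
#   --tls-cert-file / --tls-private-key-file  serve the API over TLS
#   --client-ca-file                          verify client certificates
#   --anonymous-auth=false                    reject unauthenticated requests
#   --authorization-mode=RBAC                 authorize via RBAC policies
kube-apiserver \
  --tls-cert-file=/path/to/apiserver.crt \
  --tls-private-key-file=/path/to/apiserver.key \
  --client-ca-file=/path/to/ca.crt \
  --anonymous-auth=false \
  --authorization-mode=RBAC
```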
Since the Kubernetes project's initial release, there has been a growing ecosystem of partners. We looked at CoreOS, Sysdig, and many others in the previous chapters, but there are a variety of projects and companies in this space. We will highlight a few that may be useful as you move toward production. This is by no means an exhaustive list and it is merely meant to provide some interesting starting points.
In many situations, organizations will not want to place their applications and/or intellectual property in public repositories. For those cases, a private registry solution is helpful in securely integrating deployments end to end.
Google Cloud offers the Google Container Registry at https://cloud.google.com/container-registry/.
Docker has its own trusted registry offering at https://www.docker.com/docker-trusted-registry.
Quay, from the CoreOS team, also provides secure private registries with vulnerability scanning, and can be found at https...
In this chapter, we left a few breadcrumbs to guide you on your continuing journey with Kubernetes. You should have a solid set of production characteristics to get you started. There is a wide community in both the Docker and Kubernetes worlds. There are also a few additional resources that we provided if you need a friendly face along the way.
By now, you have seen the full spectrum of container operations with Kubernetes. You should be more confident in how Kubernetes can streamline the management of your container deployments and how you can plan to move containers off developer laptops and onto production servers. Now get out there and start shipping your containers!
The Kubernetes project is an open source effort, so there is a broad community of contributors and enthusiasts. One great resource in order to find more assistance is the Kubernetes Slack channel: http://slack.kubernetes.io/.
There is also a Kubernetes group on Google groups. You can join it at https://groups.google.com/forum/#!forum/kubernetes-users.
If you enjoyed this book, you can find more of my articles, how-tos, and various musings on my blogs and Twitter page: