Chapter 2. Building a Foundation with Core Kubernetes Constructs

This chapter will cover the core Kubernetes constructs, namely pods, services, replication controllers, replica sets, and labels. We will describe the Kubernetes components, the dimensions of the API, and Kubernetes objects, and we will dig into the major Kubernetes cluster components. A few simple application examples are included to demonstrate each construct. This chapter also covers basic operations for your cluster. Finally, health checks and scheduling are introduced with a few examples.

The following topics will be covered in this chapter:

  • Kubernetes' overall architecture
  • The context of Kubernetes architecture within system theory
  • Introduction to core Kubernetes constructs, architecture, and components
  • How labels can simplify the management of a Kubernetes cluster
  • Monitoring services and container health
  • Setting up scheduling constraints based on available cluster resources

Technical requirements


You'll need to have your Google Cloud Platform account enabled and be logged in, or you can use a local Minikube instance of Kubernetes instead. You can also use Play with Kubernetes over the web: https://labs.play-with-k8s.com/.
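If you choose the Minikube route, a quick sanity check such as the following confirms that the local cluster is up before you continue (the exact output will vary by version):

# Start a local single-node cluster (requires minikube and kubectl to be installed)
minikube start

# Confirm that kubectl can reach the cluster
kubectl cluster-info
kubectl get nodes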

Here's the GitHub repository for this chapter: https://github.com/PacktPublishing/Getting-Started-with-Kubernetes-third-edition/tree/master/Code-files/Chapter02.

The Kubernetes system

To understand the complex architecture and components of Kubernetes, we should take a step back and look at the landscape of the overall system in order to understand the context and place of each moving piece. This book focuses mainly on the technical pieces and processes of the Kubernetes software, but let's examine the system from a top-down perspective. In the following diagram, you can see the major parts of the Kubernetes system, which is a great way to think about the classification of the parts we'll describe and utilize in this book:

Let's take a look at each piece...

Core constructs


Now, let's dive a little deeper and explore some of the core abstractions Kubernetes provides. These abstractions will make it easier to think about our applications and ease the burden of life cycle management, high availability, and scheduling.

 

Pods

Pods allow you to keep related containers close in terms of the network and hardware infrastructure. Data can live near the application, so processing can be done without incurring high latency from network traversal. Similarly, common data can be stored on volumes that are shared between a number of containers. Pods essentially allow you to logically group containers and pieces of your application stack together.

While pods may run one or more containers inside, the pod itself may be one of many running on a Kubernetes node (minion). As we'll see, pods give us a logical group of containers across which we can then replicate, schedule, and balance service endpoints.

Pod example

Let's take a quick look at a pod in action...
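As a minimal sketch of what such a definition looks like, the following manifest runs a single container in one pod; the pod name here is illustrative, while the image and port match the controller definitions used later in this chapter. It can be created with kubectl create -f followed by the file name:

apiVersion: v1
kind: Pod
metadata:
  # Illustrative pod name; the label matches the rest of this chapter's examples
  name: node-js-pod
  labels:
    name: node-js
spec:
  containers:
  - name: node-js
    image: jonbaier/node-express-info:latest
    ports:
    - containerPort: 80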

Our first Kubernetes application


Before we move on, let's take a look at these three concepts in action. Kubernetes ships with a number of examples installed, but we'll create a new example from scratch to illustrate some of the concepts.

 

We already created a pod definition file, but as you learned, there are many advantages to running our pods via replication controllers. Again, using the book-examples/02_example folder we made earlier, we'll create some definition files and start a cluster of Node.js servers using a replication controller approach. Additionally, we'll add a public face to it with a load-balanced service.

Use your favorite editor to create the following file and name it nodejs-controller.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js
  labels:
    name: node-js
spec:
  replicas: 3
  selector:
    name: node-js
  template:
    metadata:
      labels:
        name: node-js
    spec:
      containers:
      - name: node-js
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
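To give these replicas the load-balanced public face mentioned above, a companion service that selects the same name: node-js label can be defined alongside the controller. The following is a minimal sketch; the service name, file name, and port value are assumptions rather than the book's exact listing. Save it as, say, nodejs-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: node-js
  labels:
    name: node-js
spec:
  # Ask the cloud provider for an external load balancer
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    name: node-js

Both definitions can then be submitted with kubectl create -f nodejs-controller.yaml and kubectl create -f nodejs-service.yaml, after which kubectl get pods and kubectl get services should show the three node-js replicas and the new service.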

Health checks


Kubernetes provides three layers of health checking. First, in the form of HTTP or TCP checks, K8s can attempt to connect to a particular endpoint and give a status of healthy on a successful connection. Second, application-specific health checks can be performed using command-line scripts. We can also use an exec check to run a health-check command from within your container; anything that exits with a 0 status will be considered healthy.

Let's take a look at a few health checks in action. First, we'll create a new controller definition file named nodejs-health-controller.yaml that includes a health check:

apiVersion: v1 
kind: ReplicationController 
metadata: 
  name: node-js 
  labels: 
    name: node-js 
spec: 
  replicas: 3 
  selector: 
    name: node-js 
  template: 
    metadata: 
      labels: 
        name: node-js 
    spec: 
      containers: 
      - name: node-js 
        image: jonbaier/node-express-info:latest 
        ports: 
        - containerPort: 80 
        livenessProbe:
          # HTTP liveness check; the path and timing values below are illustrative
          httpGet:
            path: /status/
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1
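The probe above is an HTTP check against the container. For the command-based variant described earlier, the livenessProbe block can use exec instead, in which case any command that exits with status 0 is treated as healthy. A minimal sketch, with an arbitrary placeholder command and file path:

        livenessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 30
          timeoutSeconds: 1

Once the controller is created with kubectl create -f nodejs-health-controller.yaml, failed probes and the resulting restarts show up in the events section of kubectl describe pods.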

Application scheduling


Now that we understand how to run containers in pods and even recover from failure, it may be useful to understand how new containers are scheduled on our cluster nodes.

As mentioned earlier, the default behavior for the Kubernetes scheduler is to spread container replicas across the nodes in our cluster. In the absence of all other constraints, the scheduler will place new pods on nodes with the least number of other pods belonging to matching services or replication controllers.

Additionally, the scheduler provides the ability to add constraints based on resources available to the node. Today, this includes minimum CPU and memory allocations. In terms of Docker, these use the CPU-shares and memory limit flags under the covers.
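As a sketch of how such constraints are declared, CPU and memory can be set per container in the pod specification; the pod name and the values below are illustrative rather than a recommendation:

apiVersion: v1
kind: Pod
metadata:
  name: node-js-constraints
  labels:
    name: node-js
spec:
  containers:
  - name: node-js
    image: jonbaier/node-express-info:latest
    ports:
    - containerPort: 80
    resources:
      # Minimum resources the scheduler must find free on a node
      requests:
        memory: "512Mi"
        cpu: "500m"
      # Hard caps enforced on the running container
      limits:
        memory: "1Gi"
        cpu: "1"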

When additional constraints are defined, Kubernetes will check a node for available resources. If a node does not meet all the constraints, it will move to the next. If no nodes can be found that meet the criteria, then we will see a scheduling...

Summary


We took a look at the overall architecture for Kubernetes, as well as the core constructs provided to build your services and application stacks. You should have a better understanding of how these abstractions make it easier to manage the life cycle of your stack and/or services as a whole and not just the individual components. Additionally, we took a first-hand look at how to manage some simple day-to-day tasks using pods, services, and replication controllers. We also looked at how to use Kubernetes to automatically respond to outages via health checks. Finally, we explored the Kubernetes scheduler and some of the constraints users can specify to influence scheduling placement.

In the next chapter, we'll dive into the networking layer of Kubernetes. We'll see how networking is done and also look at the core Kubernetes proxy that is used for traffic routing. We'll also look at service discovery and logical namespace groupings.

Questions


  1. What are the three types of health checks?
  2. What is the replacement technology for Replication Controllers?
  3. Name all five layers of the Kubernetes system
  4. Name two network plugins for Kubernetes
  5. What are two of the options for container runtimes available to Kubernetes?
  6. What are the three main components of the Kubernetes architecture?
  7. Which type of selector filters keys and values according to a specific value?

 

 
