Kubernetes Cookbook

By Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu

About this book

Kubernetes is Google’s solution to managing a cluster of containers. Kubernetes provides a declarative API to manage clusters while giving us a lot of flexibility. This book will provide you with recipes to better manage containers in different scenarios in production using Kubernetes.

We will start by giving you a quick brush up on how Kubernetes works with containers, along with an overview of the main Kubernetes features, such as Pods, Replication Controllers, and more. Next, we will teach you how to create a Kubernetes cluster and how to run programs on Kubernetes. We'll explain features such as High Availability Kubernetes master setup, using Kubernetes with Docker, and orchestration with Kubernetes using AWS. Later, we will show you how to use the Kubernetes UI, and how to set up and manage Kubernetes clusters on the cloud and bare metal.

Upon completion of this book, you will be able to use Kubernetes in production and will have a better understanding of how to manage your containers using Kubernetes.

Publication date: June 2016
Publisher: Packt
Pages: 376
ISBN: 9781785880063

 

Chapter 1. Building Your Own Kubernetes

In this chapter, we will cover the following topics:

  • Exploring architecture

  • Preparing your environment

  • Building datastore

  • Creating an overlay network

  • Configuring master

  • Configuring nodes

  • Running your first container in Kubernetes

 

Introduction


Welcome to the journey of Kubernetes! In this very first section, you will learn how to build your own Kubernetes cluster. Along with understanding each component and connecting them together, you will learn how to run your first container on Kubernetes. Having a Kubernetes cluster of your own will help you follow the study in the chapters ahead.

 

Exploring architecture


Kubernetes is an open source container management tool. It is a lightweight and portable application written in Go (https://golang.org). You can set up a Kubernetes cluster on a Linux-based OS to deploy, manage, and scale Docker container applications on multiple hosts.

Getting ready

Kubernetes is constructed using several components, as follows:

  • Kubernetes master

  • Kubernetes nodes

  • etcd

  • Overlay network (flannel)

These components are connected over the network, as shown in the following diagram:

The preceding image can be summarized as follows:

  • Kubernetes master connects to etcd via HTTP or HTTPS to store the data. It also connects to flannel to access the container applications.

  • Kubernetes nodes connect to the Kubernetes master via HTTP or HTTPS to receive commands and report their status.

  • Kubernetes nodes use an overlay network (for example, flannel) to provide connectivity between their container applications.

How to do it…

In this section, we are going to explain the features of Kubernetes master and nodes; both of them realize the main functions of the Kubernetes system.

Kubernetes master

Kubernetes master is the main component of the Kubernetes cluster. It provides several functions, such as the following:

  • Authorization and authentication

  • RESTful API entry point

  • Scheduling container deployment onto the Kubernetes nodes

  • Scaling and replication control

  • Reading and storing the configuration

  • Command Line Interface

The next image shows how the master daemons work together to fulfill the aforementioned functions:

Several daemon processes form the Kubernetes master's functionality, such as kube-apiserver, kube-scheduler, and kube-controller-manager. The hyperkube wrapper can launch all of them.

In addition, the Kubernetes Command Line Interface kubectl can control the Kubernetes master functionality.

API server (kube-apiserver)

The API server provides an HTTP- or HTTPS-based RESTful API, which is the hub between Kubernetes components, such as kubectl, the scheduler, the replication controller, the etcd datastore, and the kubelet and kube-proxy processes that run on the Kubernetes nodes.

Scheduler (kube-scheduler)

The scheduler helps to choose which node each container runs on. It uses a simple algorithm that defines the priority for dispatching and binding containers to nodes based on factors such as:

  • CPU

  • Memory

  • The number of containers already running

Controller manager (kube-controller-manager)

Controller manager performs cluster operations. For example:

  • Manages Kubernetes nodes

  • Creates and updates the Kubernetes internal information

  • Attempts to change the current status to the desired status

Command Line Interface (kubectl)

After you install Kubernetes master, you can use the Kubernetes Command Line Interface kubectl to control the Kubernetes cluster. For example, kubectl get cs returns the status of each component. Also, kubectl get nodes returns a list of Kubernetes nodes:

//see the ComponentStatuses
# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   nil
scheduler            Healthy   ok                   nil
etcd-0               Healthy   {"health": "true"}   nil

//see the nodes
# kubectl get nodes
NAME          LABELS                               STATUS    AGE
kub-node1   kubernetes.io/hostname=kub-node1   Ready     26d
kub-node2   kubernetes.io/hostname=kub-node2   Ready     26d

Kubernetes node

Kubernetes node is a slave node in the Kubernetes cluster. It is controlled by the Kubernetes master to run container applications using Docker (http://docker.com) or rkt (http://coreos.com/rkt/docs/latest/). In this book, we will use the Docker container runtime as the default engine.

Tip

Node or slave?

The terminology of slave is used in the computer industry to represent the cluster worker node; however, it is also associated with discrimination. The Kubernetes project uses node instead.

The following image displays the role and tasks of the daemon processes on a node:

A node also has two daemon processes, named kubelet and kube-proxy, to support its functionality.

kubelet

kubelet is the main process on a Kubernetes node; it communicates with the Kubernetes master to handle the following operations:

  • Periodically accessing the API server to check and report

  • Performing container operations

  • Running an HTTP server to provide simple APIs
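
For example, you can query the kubelet's HTTP server directly to confirm that it is up. The following is only a minimal sketch; it assumes the default read-only port 10255 of this Kubernetes generation, so the port and endpoints may differ in your setup:

//check the kubelet health endpoint (read-only port 10255 is an assumed default)
$ curl http://localhost:10255/healthz
ok

//list the pods currently managed by this kubelet
$ curl http://localhost:10255/pods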

Proxy (kube-proxy)

The proxy handles the network proxy and load balancer for each container. It changes the Linux iptables rules (the nat table) to control TCP and UDP packets across the containers.

After the kube-proxy daemon starts, it will configure the iptables rules; you can run iptables -t nat -L or iptables -t nat -S to check the nat table rules, as follows:

//the result will vary and is dynamically changed by kube-proxy
# sudo iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N FLANNEL
-N KUBE-NODEPORT-CONTAINER
-N KUBE-NODEPORT-HOST
-N KUBE-PORTALS-CONTAINER
-N KUBE-PORTALS-HOST
-A PREROUTING -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-CONTAINER
-A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-CONTAINER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-HOST
-A OUTPUT -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-HOST
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.168.90.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 192.168.0.0/16 -j FLANNEL
-A FLANNEL -d 192.168.0.0/16 -j ACCEPT
-A FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE

How it works…

There are two more components to complement the Kubernetes nodes' functionalities, the datastore etcd and the overlay network flannel. You can learn how they support the Kubernetes system in the following paragraphs.

etcd

etcd (https://coreos.com/etcd/) is a distributed key-value datastore. It can be accessed via a RESTful API to perform CRUD operations over the network. Kubernetes uses etcd as its main datastore.

You can explore the Kubernetes configuration and status in etcd (/registry) using the curl command as follows:

//example: etcd server is 10.0.0.1 and default port is 2379
# curl -L "http://10.0.0.1:2379/v2/keys/registry"

{"action":"get","node":{"key":"/registry","dir":true,"nodes":[{"key":"/registry/namespaces","dir":true,"modifiedIndex":15,"createdIndex":15},{"key":"/registry/serviceaccounts","dir":true,"modifiedIndex":16,"createdIndex":16},{"key":"/registry/services","dir":true,"modifiedIndex":17,"createdIndex":17},{"key":"/registry/ranges","dir":true,"modifiedIndex":76,"createdIndex":76},{"key":"/registry/nodes","dir":true,"modifiedIndex":740,"createdIndex":740},{"key":"/registry/pods","dir":true,"modifiedIndex":794,"createdIndex":794},{"key":"/registry/controllers","dir":true,"modifiedIndex":810,"createdIndex":810},{"key":"/registry/events","dir":true,"modifiedIndex":6,"createdIndex":6}],"modifiedIndex":6,"createdIndex":6}}

Overlay network

Network communication between containers is the most difficult part, because when a Docker container starts, an IP address is assigned dynamically; the container application needs to know its peer's IP address and port number.

If the containers' network communication is only within a single host, you can use Docker links to generate environment variables to discover the peer. However, Kubernetes usually works as a cluster, and the ambassador pattern or an overlay network can help to connect every node. Kubernetes uses an overlay network to manage communication between multiple containers.

For the overlay network, Kubernetes has several options, but using flannel is the easiest solution.

Flannel

Flannel also uses etcd to store its settings and status. You can use the curl command to explore the configuration (/coreos.com/network) and status, as follows:

//overlay network CIDR is 192.168.0.0/16
# curl -L "http://10.0.0.1:2379/v2/keys/coreos.com/network/config"

{"action":"get","node":{"key":"/coreos.com/network/config","value":"{ \"Network\": \"192.168.0.0/16\" }","modifiedIndex":144913,"createdIndex":144913}}

//Kubernetes assigns some subnets to containers
# curl -L "http://10.0.0.1:2379/v2/keys/coreos.com/network/subnets"

{"action":"get","node":{"key":"/coreos.com/network/subnets","dir":true,"nodes":[{"key":"/coreos.com/network/subnets/192.168.90.0-24","value":"{\"PublicIP\":\"10.97.217.158\"}","expiration":"2015-11-05T08:16:21.995749971Z","ttl":38993,"modifiedIndex":388599,"createdIndex":388599},{"key":"/coreos.com/network/subnets/192.168.76.0-24","value":"{\"PublicIP\":\"10.97.217.148\"}","expiration":"2015-11-05T04:32:45.528111606Z","ttl":25576,"modifiedIndex":385909,"createdIndex":385909},{"key":"/coreos.com/network/subnets/192.168.40.0-24","value":"{\"PublicIP\":\"10.97.217.51\"}","expiration":"2015-11-05T15:18:27.335916533Z","ttl":64318,"modifiedIndex":393675,"createdIndex":393675}],"modifiedIndex":79,"createdIndex":79}}

See also

This section describes the basic architecture and methodology of Kubernetes and its related components. Understanding Kubernetes is not easy, but a step-by-step lesson on how to set up, configure, and manage Kubernetes is really fun.

The following recipes describe how to install and configure related components:

  • Building datastore

  • Creating an overlay network

  • Configuring master

  • Configuring nodes

 

Preparing your environment


Before heading out on the journey of building our own cluster, we have to prepare the environment in order to build the components described in the previous recipe.

There are different solutions for creating such a Kubernetes cluster, for example:

  • Local-machine solutions that include:

    • Docker-based

    • Vagrant

    • Linux machine

  • Hosted solution that includes:

    • Google Container Engine

  • Custom solutions

A local-machine solution is suitable if we just want to build a development environment or do a proof of concept quickly. By using Docker (https://www.docker.com) or Vagrant (https://www.vagrantup.com), we can easily build the desired environment on a single machine; however, this is not practical if we want to build a production environment. A hosted solution is the easiest starting point if we want to build it in the cloud.

Google Container Engine, which Google has used for many years, naturally has comprehensive support, and we do not need to care much about installation and settings. Kubernetes can also run on different clouds and on-premises VMs via custom solutions. We will build Kubernetes clusters from scratch on Linux-based virtual machines (CentOS 7.1) in the following chapters. This solution is suitable for any Linux machines in both cloud and on-premises environments.

Getting ready

It is recommended that you have at least four Linux servers: one for the master, one for etcd, and two for the nodes. If you want to build a high-availability cluster, more servers for each component are preferred. We will build three types of servers in the following sections:

  • Kubernetes master

  • Kubernetes node

  • etcd

Flannel is not confined to a single machine; it is required on all the nodes. Communication between containers and services is powered by flannel, which is an etcd-backed overlay network for containers.

Hardware resource

The suggested hardware spec of each component is shown in the following table. Please note that a large number of requests between the API server and etcd might cause longer response times when manipulating the cluster. In a normal situation, increasing resources can resolve this problem:

Component          Kubernetes master    etcd
CPU count          1                    1
Memory             2 GB                 2 GB

For the nodes, the default maximum number of pods on one node is 40. However, node capacity is configurable when adding a node. You have to measure how many resources your hosted services and applications will need in order to decide how many nodes of a given spec you should have, with proper monitoring of the production workload.

Tip

Check out your node capacity

On your master, you can install jq with yum install jq and then use kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, capacity: .status.capacity}' to check the capacity of each node, including CPU, memory, and the maximum number of pods:

// check out your node capacity
$ kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, capacity: .status.capacity}'
{
  "name": "kub-node1",
  "capacity": {
    "cpu": "1",
    "memory": "1021536Ki",
    "pods": "40"
  }
}
{
  "name": "kub-node2",
  "capacity": {
    "cpu": "1",
    "memory": "1021536Ki",
    "pods": "40"
  }
}

Operating system

The OS of the nodes can vary, but the kernel version must be 3.10 or later. The following OSs ship with kernel 3.10+:

  • CentOS 7 or later

  • RHEL 7 or later

  • Ubuntu Vivid 15.04 / Ubuntu Trusty 14.04 (LTS) / Ubuntu Saucy 13.10

Note

Beware of the Linux kernel version

Docker requires a kernel version of at least 3.10 on CentOS or Red Hat Enterprise Linux, and at least 3.13 on Ubuntu Precise 12.04 (LTS). Using an unsupported kernel can sometimes cause data loss or kernel panics. It is recommended that you fully update the system before building Kubernetes. You can use uname -r to check the kernel you're currently using. For more information on checking the kernel version, please refer to http://www.linfo.org/find_kernel_version.html.
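
For example, on a CentOS 7.1 host the check looks similar to the following; the exact kernel build string will differ on your machine:

//check the running kernel version
$ uname -r
3.10.0-229.el7.x86_64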

How to do it…

To ensure each component works perfectly in the Kubernetes cluster, we must install the correct packages on each machine: master, node, and etcd.

Kubernetes master

Kubernetes master should be installed on a Linux-based OS. For the examples in this book, we will use CentOS 7.1. The following packages are required on the master:

  • Kubernetes

  • Flannel (optional)

  • iptables (at least 1.4.11+ is preferred)

Kubernetes (https://github.com/kubernetes/kubernetes/releases) has a couple of fast-paced releases. The flannel daemon is optional on the master; however, if you would like to launch the Kubernetes UI, flannel (https://github.com/coreos/flannel/releases) is required. Otherwise, the Kubernetes UI will fail to be accessed via https://<kubernetes-master>/ui.

Note

Beware of iptables version

Kubernetes uses iptables to implement the service proxy. iptables version 1.4.11+ is recommended for Kubernetes. Otherwise, the iptables rules might get out of control and keep increasing. You can use yum info iptables to check the current version of iptables.
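
A quick check might look like the following; the version shown is just an example from a CentOS 7 host:

//check the installed iptables version
$ iptables --version
iptables v1.4.21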

Kubernetes nodes

On Kubernetes nodes, we have to prepare the following:

  • Kubernetes

  • Flannel daemon

  • Docker (at least 1.6.2+ is preferred)

  • iptables (at least 1.4.11+ is preferred)

Note

Beware of Docker version and dependencies

Sometimes, you'll get an unknown error, such as target image is not found, when using an incompatible Docker version. You can always use the docker version command to check the current version you've installed. The recommended version we tested is at least 1.7.1+. Before building the cluster, you can start the service by using the service docker start command and make sure it can be contacted using docker ps.

Docker has different package names and dependency packages in Linux distributions. In Ubuntu, you could use curl -sSL https://get.docker.com/ | sh. For more information, check out the Docker installation document (http://docs.docker.com/v1.8/installation) to find your preferred Linux OS.
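
Putting those checks together, a quick verification on a node might look like the following; the absence of containers in the output is only an example:

//make sure the Docker daemon is running and responds
$ sudo service docker start
$ docker version
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES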

etcd

etcd, a distributed, reliable key-value store for shared configuration and service discovery, is developed by CoreOS. Its release page is https://github.com/coreos/etcd/releases. The only prerequisite we need is the etcd package.

See also

After preparing the environment, it is time to build up your Kubernetes. Check out the following recipes for that:

  • Building datastore

  • Creating an overlay network

  • Configuring master

  • Configuring nodes

  • The Setting resource in nodes recipe in Chapter 7, Advanced Cluster Administration

  • The Monitoring master and node recipe in Chapter 8, Logging and Monitoring

 

Building datastore


In order to persist Kubernetes cluster information, we need to set up a datastore. Kubernetes uses etcd as its standard datastore. This section will guide you through building the etcd server.

How to do it…

The etcd database requires a Linux OS; some Linux distributions provide an etcd package and some don't. This section describes how to install etcd.

Red Hat Enterprise Linux 7 or CentOS 7

Red Hat Enterprise Linux (RHEL) 7, CentOS 7, or later has an official package for etcd. You can install it via the yum command, as follows:

//install the etcd package on RHEL/CentOS Linux
sudo yum update -y
sudo yum install etcd 

Ubuntu Linux 15.10 Wily Werewolf

Ubuntu 15.10 or later has an official package for etcd as well. You can install it via the apt-get command, as follows:

//install the etcd package on Ubuntu Linux
sudo apt-get update -y
sudo apt-get install etcd

Other Linux

If you are using a different Linux version, such as Amazon Linux, you can download a binary from the official website and install it as follows.

Download a binary

etcd is provided via https://github.com/coreos/etcd/releases. OS X (darwin-amd64), Linux, and Windows binaries, as well as the source code, are available for download.

Tip

Note that there are no 32-bit binaries provided, due to a Go runtime issue. You must prepare a 64-bit Linux OS.

On your Linux machine, use the curl command to download the etcd-v2.2.1-linux-amd64.tar.gz binary:

// follow redirection(-L) and use remote name (-O)
curl -L -O https://github.com/coreos/etcd/releases/download/v2.2.1/etcd-v2.2.1-linux-amd64.tar.gz

Creating a user

For security reasons, create a local user and group that own the etcd files:

  1. Run the following useradd command:

    //options
    //    create group(-U), home directory(-d), and create it(-m)
    //    name in GCOS field (-c), login shell(-s)
    $ sudo useradd -U -d /var/lib/etcd -m -c "etcd user" -s /sbin/nologin etcd
    
  2. You can check /etc/passwd to see whether the etcd user has been created or not:

    //search etcd user on /etc/passwd, uid and gid is vary
    $ grep etcd /etc/passwd
    etcd:x:997:995:etcd user:/var/lib/etcd:/sbin/nologin
    

    Tip

    You can delete the user at any time; type sudo userdel -r etcd to delete the etcd user.

Install etcd
  1. After downloading an etcd binary, use the tar command to extract files:

    $ tar xf etcd-v2.2.1-linux-amd64.tar.gz 
    $ cd etcd-v2.2.1-linux-amd64
    
    //use ls command to see that there are documentation and binaries 
    $ ls
    Documentation  README-etcdctl.md  README.md  etcd  etcdctl 
    
  2. The etcd daemon and the etcdctl command need to be copied to /usr/local/bin. Also, create /etc/etcd/etcd.conf as the settings file:

    $ sudo cp etcd etcdctl /usr/local/bin/
    
    //create etcd.conf
    $ sudo mkdir -p /etc/etcd/
    $ sudo touch /etc/etcd/etcd.conf
    $ sudo chown -R etcd:etcd /etc/etcd
    

How it works…

Let's test run the etcd daemon to explore the etcd functionality. Type the etcd command with the name and data-dir arguments as follows:

//for the testing purpose, create data file under /tmp
$ etcd --name happy-etcd --data-dir /tmp/happy.etcd &

Then, you will see several output logs as follows:

Now, you can try to use the etcdctl command to access etcd and to load and store the data as follows:

//set value "hello world" to the key /my/happy/data 
$ etcdctl set /my/happy/data "hello world"

//get value for key /my/happy/data
$ etcdctl get /my/happy/data
hello world

In addition, by default, etcd opens TCP port 2379 for RESTful API access, so you can also try to use an HTTP client, such as the curl command, to access data as follows:

//get value for key /my/happy/data using cURL
$ curl -L http://localhost:2379/v2/keys/my/happy/data
{"action":"get","node":{"key":"/my/happy/data","value":"hello world","modifiedIndex":4,"createdIndex":4}}

//set value "My Happy world" to the key /my/happy/data using cURL
$ curl http://127.0.0.1:2379/v2/keys/my/happy/data -XPUT -d value="My Happy world"

//get value for key /my/happy/data using etcdctl 
$ etcdctl get /my/happy/data
My Happy world

Okay! Now, you can delete the key using the curl command as follows:

$ curl http://127.0.0.1:2379/v2/keys/my?recursive=true -XDELETE

//no more data returned afterward
$ curl http://127.0.0.1:2379/v2/keys/my/happy/data
{"errorCode":100,"message":"Key not found","cause":"/my","index":10}

$ curl http://127.0.0.1:2379/v2/keys/my/happy
{"errorCode":100,"message":"Key not found","cause":"/my","index":10}

$ curl http://127.0.0.1:2379/v2/keys/my
{"errorCode":100,"message":"Key not found","cause":"/my","index":10}

Auto startup script

Depending on whether your Linux is systemd or init based, there are different ways to make an auto startup script.

If you are not sure, check the process ID 1 on your system. Type ps -P 1 to see the process name as follows:

//This Linux is systemd based
$ ps -P 1
  PID PSR TTY      STAT   TIME COMMAND
    1   0 ?        Ss     0:03 /usr/lib/systemd/systemd --switched-root --system
//This Linux is init based
# ps -P 1
  PID PSR TTY      STAT   TIME COMMAND
    1   0 ?        Ss     0:01 /sbin/init

Startup script (systemd)

If you are using systemd-based Linux, such as RHEL 7, CentOS 7, or Ubuntu 15.04 or later, you need to prepare the /usr/lib/systemd/system/etcd.service file as follows:

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
User=etcd
ExecStart=/usr/local/bin/etcd

[Install]
WantedBy=multi-user.target

After that, register to systemd using the systemctl command as follows:

# sudo systemctl enable etcd

Then, restart the system or type sudo systemctl start etcd to launch the etcd daemon. You can check the etcd service status using sudo systemctl status -l etcd.
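
Once the daemon is running, you can quickly confirm that it responds. The following sketch assumes etcd is still listening on the default client port 2379; the member ID shown will differ:

//check that the etcd member is healthy
$ etcdctl cluster-health
member ce2a822cea30bfca is healthy: got healthy result from http://localhost:2379
cluster is healthy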

Startup script (init)

If you are using init-based Linux, such as Amazon Linux, use the traditional way of preparing the /etc/init.d/etcd script as follows:

#!/bin/bash
#
# etcd This shell script takes care of starting and stopping etcd
#
# chkconfig: - 60 74
# description: etcd

### BEGIN INIT INFO
# Provides: etcd
# Required-Start: $network $local_fs $remote_fs
# Required-Stop: $network $local_fs $remote_fs
# Should-Start: $syslog $named ntpdate
# Should-Stop: $syslog $named
# Short-Description: start and stop etcd
# Description: etcd
### END INIT INFO

# Source function library.
. /etc/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

prog=/usr/local/bin/etcd
etcd_conf=/etc/etcd/etcd.conf
lockfile=/var/lock/subsys/`basename $prog`
hostname=`hostname`

start() {
  # Start daemon.
. $etcd_conf
  echo -n $"Starting $prog: "
  daemon --user=etcd $prog > /var/log/etcd.log 2>&1 &
  RETVAL=$?
  echo
  [ $RETVAL -eq 0 ] && touch $lockfile
  return $RETVAL
}
stop() {
  [ "$EUID" != "0" ] && exit 4
        echo -n $"Shutting down $prog: "
  killproc $prog
  RETVAL=$?
        echo
  [ $RETVAL -eq 0 ] && rm -f $lockfile
  return $RETVAL
}

# See how we were called.
case "$1" in
  start)
  start
  ;;
  stop)
  stop
  ;;
  status)
  status $prog
  ;;
  restart)
  stop
  start
  ;;
  reload)
  exit 3
  ;;
  *)
  echo $"Usage: $0 {start|stop|status|restart|reload}"
  exit 2
esac

After that, register the init script using the chkconfig command as follows:

//set file permission correctly
$ sudo chmod 755 /etc/init.d/etcd
$ sudo chown root:root /etc/init.d/etcd

//auto start when boot Linux
$ sudo chkconfig --add etcd
$ sudo chkconfig etcd on

Then, restart the system or type /etc/init.d/etcd start to launch the etcd daemon.

Configuration

The file /etc/etcd/etcd.conf is used to change the configuration of etcd, such as the data file path and TCP port number.

The minimal configuration is as follows:

  • ETCD_NAME: The instance name, for example, myhappy-etcd.

  • ETCD_DATA_DIR: The data file path, for example, /var/lib/etcd/myhappy.etcd. The file path must be owned by the etcd user.

  • ETCD_LISTEN_CLIENT_URLS: The listen URL and TCP port number, for example, http://0.0.0.0:8080. Specifying 0.0.0.0 binds all IP addresses; otherwise, use localhost to accept connections only from the same machine.

  • ETCD_ADVERTISE_CLIENT_URLS: The etcd URL advertised to the other cluster instances, for example, http://localhost:8080. This is used for clustering configuration.

Note that if you are using init-based Linux, you need to use the export directive to set the environment variables, as follows:

$ cat /etc/etcd/etcd.conf

export ETCD_NAME=myhappy-etcd
export ETCD_DATA_DIR="/var/lib/etcd/myhappy.etcd"
export ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:8080"
export ETCD_ADVERTISE_CLIENT_URLS="http://localhost:8080"

On the other hand, systemd-based Linux doesn't need the export directive as follows:

$ cat /etc/etcd/etcd.conf
ETCD_NAME=myhappy-etcd
ETCD_DATA_DIR="/var/lib/etcd/myhappy.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:8080"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:8080"

See also

This section described how to configure etcd. It is easy and simple to operate via the RESTful API, yet powerful. However, you need to be aware of its security and availability. The following recipes will describe how to ensure that etcd is secure and robust:

  • Exploring architecture

  • The Clustering etcd recipe in Chapter 4, Building a High Availability Cluster

  • The Authentication and authorization recipe in Chapter 7, Advanced Cluster Administration

  • The Working with etcd log recipe in Chapter 8, Logging and Monitoring

 

Creating an overlay network


Kubernetes abstracts the networking to enable communication between containers across nodes. The basic unit that makes this possible is called a pod, which is the smallest deployment unit in Kubernetes, with a shared context in a containerized environment. Containers within a pod can communicate with each other by port over localhost. Kubernetes will deploy the pods across the nodes.

Then, how do pods talk to each other?

Kubernetes allocates each pod an IP address in a shared networking namespace so that pods can communicate with other pods across the network. There are a couple of ways to achieve this implementation. The easiest and most cross-platform way is to use flannel.

Flannel gives each host an IP subnet, which Docker can accept and use to allocate IPs to individual containers. Flannel uses etcd to store the IP mapping information, and has a couple of backend choices for forwarding the packets. The easiest backend choice is a TUN device, which encapsulates IP fragments in UDP packets. The port is 8285 by default.

Flannel also supports in-kernel VXLAN as a backend to encapsulate the packets. It may provide better performance than the UDP backend because it does not run in user space. Another popular choice is using advanced routing rules on Google Compute Engine (https://cloud.google.com/compute/docs/networking#routing). We'll use both UDP and VXLAN as examples in this section.

flanneld is the flannel agent; it watches the information in etcd, allocates the subnet lease on each host, and routes the packets. What we will do in this section is get flanneld up and running and allocate a subnet for each host.

Note

If you're struggling to decide which backend should be used, here is a simple performance comparison between UDP and VXLAN. We used qperf (http://linux.die.net/man/1/qperf) to measure packet transfer performance between containers. One-way TCP streaming bandwidth through UDP is 0.3x slower than VXLAN when there is some load on the hosts. If you prefer building Kubernetes on the cloud, GCP is the easiest choice.
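
If you want to run a similar measurement yourself, a rough sketch with qperf looks like the following; the package name, container IP, and chosen tests are assumptions based on the qperf man page:

//on the first container: install and start the qperf server (it listens by default)
$ yum install -y qperf
$ qperf

//on the second container: measure one-way TCP bandwidth and latency to the first one
$ qperf 192.168.50.2 tcp_bw tcp_lat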

Getting ready

Before installing flannel, make sure you have the etcd endpoint. Flannel needs etcd as its datastore. If Docker is running, stop the Docker service first and delete docker0, which is a virtual bridge created by Docker:

# Stop docker service
$ service docker stop

# delete docker0
$ ip link delete docker0

Installation

Using the etcdctl command we learned about in the previous recipe, insert the desired configuration into etcd with the key /coreos.com/network/config:

  • Network: The IPv4 network for flannel to allocate to the entire virtual network

  • SubnetLen: The subnet prefix length of each host's subnet; the default is 24

  • SubnetMin: The beginning of the IP range for flannel subnet allocation

  • SubnetMax: The end of the IP range for flannel subnet allocation

  • Backend: The backend choice for forwarding the packets; the default is udp

# insert desired CIDR for the overlay network Flannel creates
$ etcdctl set /coreos.com/network/config '{ "Network": "192.168.0.0/16" }'

By default, flannel will assign IP addresses within 192.168.0.0/16 to the overlay network, with a /24 per host, but you can also overwrite its default settings and insert them into etcd:

$ cat flannel-config-udp.json
{
    "Network": "192.168.0.0/16",
    "SubnetLen": 28,
    "SubnetMin": "192.168.10.0",
    "SubnetMax": "192.168.99.0",
    "Backend": {
        "Type": "udp",
        "Port": 7890
    }
}

Use the etcdctl command to insert the flannel-config-udp.json configuration:

# insert the key by json file
$ etcdctl set /coreos.com/network/config < flannel-config-udp.json

Then, flannel will allocate a /28 subnet to each host and only issue subnets within 192.168.10.0 and 192.168.99.0. The backend will still be udp, and the default port will be changed from 8285 to 7890.
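
To confirm the new settings have taken effect, restart flanneld on each host (flannel reads its configuration at startup) and list the leases again; the subnet keys below are only illustrative:

//each host should now hold a /28 lease inside the configured range
$ etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/192.168.10.0-28
/coreos.com/network/subnets/192.168.10.16-28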

We could also use VXLAN to encapsulate the packets and use etcdctl to insert the configuration:

$ cat flannel-config-vxlan.json
{
    "Network": "192.168.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
    }
}

# insert the key by json file
$ etcdctl set /coreos.com/network/config < flannel-config-vxlan.json

You can then see the configuration you have set using etcdctl:

$ etcdctl get /coreos.com/network/config
{
    "Network": "192.168.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
     }
}

CentOS 7 or Red Hat Enterprise Linux 7

RHEL 7, CentOS 7, or later has an official package for flannel. You can install it via the yum command:

# install flannel package
$ sudo yum install flannel

After the installation, we have to configure the etcd server address so that the flannel service can use it:

$ cat /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="<your etcd server>"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

We should keep flanneld up and running at all times, starting from when we boot up the server. Using systemctl does the trick:

# Enable flanneld service by default
$ sudo systemctl enable flanneld

# start flanneld
$ sudo service flanneld start

# check if the service is running
$ sudo service flanneld status

Other Linux options

You can always download a binary as an alternative. The official CoreOS flannel release page is here: https://github.com/coreos/flannel/releases. Choose the package with the Latest release tag; it will always include the latest bug fixes:

# download flannel package
$ curl -L -O https://github.com/coreos/flannel/releases/download/v0.5.5/flannel-0.5.5-linux-amd64.tar.gz

# extract the package
$ tar zxvf flannel-0.5.5-linux-amd64.tar.gz

# copy flanneld to $PATH
$ sudo cp flannel-0.5.5/flanneld /usr/local/bin

If you used the startup script (systemd) in the etcd section, you will probably choose the same way to describe flanneld:

$ cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
Wants=etcd.service
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
Restart=on-failure

RestartSec=5s

[Install]
WantedBy=multi-user.target

Then, enable the service on bootup using sudo systemctl enable flanneld.

Alternatively, you can use a startup script (init) under /etc/init.d/flanneld if you're using init-based Linux:

#!/bin/bash

# flanneld  This shell script takes care of starting and stopping flanneld
#

# Source function library.
. /etc/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

prog=/usr/local/bin/flanneld
lockfile=/var/lock/subsys/`basename $prog`

After you have sourced and set the variables, you should implement start, stop, status, and restart for the service. The only thing you need to take care of is adding the etcd endpoint to the configuration when the daemon starts:

start() {
  # Start daemon.
  echo -n $"Starting $prog: "
  daemon $prog \
    --etcd-endpoints="<your etcd server>" \
    -ip-masq=true \
    > /var/log/flanneld.log 2>&1 &
  RETVAL=$?
  echo
  [ $RETVAL -eq 0 ] && touch $lockfile
  return $RETVAL
}

stop() {
  [ "$EUID" != "0" ] && exit 4
        echo -n $"Shutting down $prog: "
  killproc $prog
  RETVAL=$?
        echo
  [ $RETVAL -eq 0 ] && rm -f $lockfile
  return $RETVAL
}

case "$1" in
  start)
  start
  ;;
  stop)
  stop
  ;;
  status)
  status $prog
  ;;
  restart|force-reload)
  stop
  start
  ;;
  try-restart|condrestart)
  if status $prog > /dev/null; then
      stop
      start
  fi
  ;;
  reload)
  exit 3
  ;;
  *)
  echo $"Usage: $0 {start|stop|status|restart|try-restart|force-reload}"
  exit 2
esac

Tip

If flannel gets stuck when starting up

Check that your etcd endpoint is accessible and that the key listed in FLANNEL_ETCD_KEY exists:

# FLANNEL_ETCD_KEY="/coreos.com/network/config"
$ curl -L http://<etcd endpoint>:2379/v2/keys/coreos.com/network/config 

You could also check out flannel logs using sudo journalctl -u flanneld.

After the flannel service starts, you should be able to see the file /run/flannel/subnet.env and the flannel0 bridge in the ifconfig output.

How to do it…

To ensure flannel works well and transmits the packets from the Docker virtual interface, we need to integrate it with Docker.

Flannel networking configuration

  1. After flanneld is up and running, use the ifconfig or ip command to see whether there is a flannel0 virtual bridge in the interface list:

    # check current ipv4 range
    $ ip a | grep flannel | grep inet
        inet 192.168.50.0/16 scope global flannel0
    

    We can see from the preceding example that the subnet lease of flannel0 is 192.168.50.0/16.

  2. Whenever your flanneld service starts, flannel will acquire the subnet lease, save it in etcd, and then write out the environment variable file /run/flannel/subnet.env by default; you can change the default path using the --subnet-file parameter when launching it:

    # check out flannel subnet configuration on this host
    $ cat /run/flannel/subnet.env
    FLANNEL_SUBNET=192.168.50.1/24
    FLANNEL_MTU=1472
    FLANNEL_IPMASQ=true
    

Integrating with Docker

There are a couple of parameters supported by the Docker daemon. In /run/flannel/subnet.env, flannel has already allocated one subnet with the suggested MTU and IPMASQ settings. The corresponding parameters in Docker are:

  • --bip="": Specify the network bridge IP (docker0)

  • --mtu=0: Set the container network MTU (for docker0 and veth)

  • --ip-masq=true: (Optional) Enable IP masquerading

  1. We can pass the variables listed in /run/flannel/subnet.env to the Docker daemon:

    # import the environment variables from subnet.env
    $ . /run/flannel/subnet.env
    
    # launch docker daemon with flannel information
    $ docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
    # Or if your docker version is 1.8 or higher, use subcommand daemon instead
    $ docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
    
  2. Alternatively, you can also specify them into OPTIONS of /etc/sysconfig/docker, which is the Docker configuration file in CentOS:

    ### in the file - /etc/sysconfig/docker
    # set the variables into OPTIONS 
    $ OPTIONS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} --ip-masq=${FLANNEL_IPMASQ}"
    

    In the preceding example, ${FLANNEL_SUBNET} is replaced by 192.168.50.1/24 and ${FLANNEL_MTU} by 1472 in /etc/sysconfig/docker.

  3. Start Docker using service docker start and type ifconfig; you should be able to see the virtual network device docker0 and the IP address allocated to it by flannel.
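
    A quick check after starting Docker might look like the following; the addresses simply follow the earlier example subnet:

    # confirm docker0 took the flannel-provided address
    $ sudo service docker start
    $ ifconfig docker0 | grep "inet "
            inet 192.168.50.1  netmask 255.255.255.0  broadcast 0.0.0.0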

How it works…

Two virtual bridges, named flannel0 and docker0, were created in the previous steps. Let's take a look at their IP ranges using the ip command:

# checkout IPv4 network in local
$ ip -4 a | grep inet
    inet 127.0.0.1/8 scope host lo
    inet 10.42.1.171/24 brd 10.42.21.255 scope global dynamic ens160
    inet 192.168.50.0/16 scope global flannel0
    inet 192.168.50.1/24 scope global docker0

The host IP address is 10.42.1.171/24, flannel0 is 192.168.50.0/16, docker0 is 192.168.50.1/24, and the routes are set up for the full flat IP range:

# check the route 
$ route -n 
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.42.1.1       0.0.0.0         UG    100    0        0 ens160
192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 flannel0
192.168.50.0    0.0.0.0         255.255.255.0   U     0      0        0 docker0

Let's go a little deeper to see how etcd stores the flannel subnet information. You can retrieve the network configuration by using the etcdctl command against etcd:

# get network config
$ etcdctl get /coreos.com/network/config
{ "Network": "192.168.0.0/16" }

# show all the subnet leases
$ etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/192.168.50.0-24

The preceding example shows that the network CIDR is 192.168.0.0/16 and that there is one subnet lease. Check the value of the key; it is exactly the IP address of eth0 on the host:

# show the value of the key of `/coreos.com/network/subnets/192.168.50.0-24`
$ etcdctl get /coreos.com/network/subnets/192.168.50.0-24
{"PublicIP":"10.42.1.171"}

If you're using a backend solution other than simple UDP, you might see more configuration, as follows:

# show the value when using different backend
$ etcdctl get /coreos.com/network/subnets/192.168.50.0-24
{"PublicIP":"10.97.1.171","BackendType":"vxlan","BackendData":{"VtepMAC":"ee:ce:55:32:65:ce"}}

The following is an illustration of how a packet from Pod1 goes through the overlay network to Pod4. As we discussed before, every pod has its own IP address, and the packets are encapsulated so that the pod IPs are routable. The packet from Pod1 goes through the veth (virtual network interface) device that connects to docker0, and is routed to flannel0. The traffic is encapsulated by flanneld and sent to the host (10.42.1.172) of the target pod.
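
On the sending host, you can also ask the kernel which interface it would use for a remote pod IP; the following is a minimal check using the example addresses from the test below, so the output will differ in your environment:

//traffic to a pod on the other host is routed through the flannel0 TUN device
$ ip route get 192.168.65.2
192.168.65.2 dev flannel0  src 192.168.50.0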

Let's perform a simple test by running two individual containers to see whether flannel works well. Assume we have two hosts (10.42.1.171 and 10.42.1.172) with different subnets allocated by flannel using the same etcd backend, and that we have launched a container by running docker run -it ubuntu /bin/bash on each host:

Container 1 on host 1 (10.42.1.171):

root@<container1>:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:c0:a8:3a:08
          inet addr:192.168.50.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:c0ff:fea8:3a08/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:8951  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)
root@<container1>:/# ping 192.168.65.2
PING 192.168.65.2 (192.168.65.2) 56(84) bytes of data.
64 bytes from 192.168.65.2: icmp_seq=2 ttl=62 time=0.967 ms
64 bytes from 192.168.65.2: icmp_seq=3 ttl=62 time=1.00 ms

Container 2 on host 2 (10.42.1.172):

root@<container2>:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:c0:a8:04:0a
          inet addr:192.168.65.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:c0ff:fea8:40a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:8973  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

We can see that the two containers can communicate with each other using ping. Let's observe the packets on host2 using tcpdump, a command-line tool that dumps traffic on a network:

# install tcpdump on host2
$ yum install -y tcpdump

# observe the UDP traffic from host2
$ tcpdump host 10.42.1.172 and udp
11:20:10.324392 IP 10.42.1.171.52293 > 10.42.1.172.6177: UDP, length 106
11:20:10.324468 IP 10.42.1.172.47081 > 10.42.1.171.6177: UDP, length 106
11:20:11.324639 IP 10.42.1.171.52293 > 10.42.1.172.6177: UDP, length 106
11:20:11.324717 IP 10.42.1.172.47081 > 10.42.1.171.6177: UDP, length 106

The traffic between the containers is encapsulated in UDP through port 6177 using flanneld.

See also

After setting up and understanding the overlay network, we have a good understanding of how flannel acts in Kubernetes. Check out the following recipes:

  • The Working with pods, Working with services recipes in Chapter 2, Walking through Kubernetes Concepts

  • The Forwarding container ports recipe in Chapter 3, Playing with Containers

  • The Authentication and authorization recipe in Chapter 7, Advanced Cluster Administration

 

Configuring master


The master node of Kubernetes works as the control center of containers. Its duties include serving as a portal for end users, assigning tasks to nodes, and gathering information. In this recipe, we will see how to set up the Kubernetes master. There are three daemon processes on the master:

  • API Server

  • Scheduler

  • Controller Manager

We can either start them using the wrapper command, hyperkube, or start them individually as daemons. Both solutions are covered in this section.
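
As a quick illustration of the hyperkube wrapper, each master daemon is available as a subcommand. The following is only a sketch: the etcd endpoint and CIDR are placeholders, and the flags mirror the init script shown later in this recipe:

//run each master daemon as a hyperkube subcommand (in the foreground, for a quick test)
$ hyperkube apiserver --etcd_servers=<etcd endpoint URL>:<etcd exposed port> \
    --service-cluster-ip-range=<CIDR of overlay network> \
    --address=0.0.0.0 --port=8080
$ hyperkube scheduler --master=127.0.0.1:8080
$ hyperkube controller-manager --master=127.0.0.1:8080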

Getting ready

Before deploying the master node, make sure you have the etcd endpoint ready, which acts as the datastore of Kubernetes. You have to check whether it is accessible and also whether it has been configured with the overlay network's Classless Inter-Domain Routing (CIDR, https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range. You can check this using the following command line:

// Check both etcd connection and CIDR setting
$ curl -L <etcd endpoint URL>/v2/keys/coreos.com/network/config

If the connection is successful but the etcd configuration does not have the expected CIDR value, you can push the value through curl as well:

$ curl -L <etcd endpoint URL>/v2/keys/coreos.com/network/config -XPUT -d value="{ \"Network\": \"<CIDR of overlay network>\" }"

Tip

Besides this, please record the following items: the URL of the etcd endpoint, the port exposed by the etcd endpoint, and the CIDR of the overlay network. You will need them while configuring the master's services.

How to do it…

In order to build up a master, we propose the following steps: installing the packages or binaries, starting the daemons, and then performing verification. Follow the procedure and you'll eventually have a working master.

Installation

Here, we offer two kinds of installation procedures:

  • One is for RHEL-based OSs with a package manager; the master daemons are controlled by systemd

  • The other is for other Linux distributions; we build up the master with binary files and service init scripts

CentOS 7 or Red Hat Enterprise Linux 7
  1. RHEL 7, CentOS 7, or later have an official package for Kubernetes. You can install them via the yum command:

    // install Kubernetes master package
    # yum install kubernetes-master kubernetes-client
    

    The kubernetes-master package contains the master daemons, while kubernetes-client installs a tool called kubectl, which is the Command Line Interface for communicating with the Kubernetes system. Since the master node serves as an endpoint for requests, with kubectl installed, users can easily control container applications and the environment through commands.

    Note

    CentOS 7's RPM of Kubernetes

    There are five Kubernetes RPMs (the .rpm files, https://en.wikipedia.org/wiki/RPM_Package_Manager) for different functionalities: kubernetes, kubernetes-master, kubernetes-client, kubernetes-node, and kubernetes-unit-test.

    The first one, kubernetes, is like a meta-package for the following three items: it installs kubernetes-master, kubernetes-client, and kubernetes-node at once. The one named kubernetes-node is for node installation. The last one, kubernetes-unit-test, contains not only testing scripts, but also Kubernetes template examples.

  2. Here are the files after yum install:

    // profiles as environment variables for services
    # ls /etc/kubernetes/
    apiserver  config  controller-manager  scheduler
    // systemd files
    # ls /usr/lib/systemd/system/kube-*
    /usr/lib/systemd/system/kube-apiserver.service           /usr/lib/systemd/system/kube-scheduler.service
    /usr/lib/systemd/system/kube-controller-manager.service
    
  3. Next, we will leave the systemd files as they are and modify the values in the configuration files under the directory /etc/kubernetes to establish a connection with etcd. The file named config is a shared environment file for several Kubernetes daemon processes. For basic settings, simply change the items in apiserver:

    # cat /etc/kubernetes/apiserver
    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--address=0.0.0.0"
    
    # The port on the local server to listen on.
    KUBE_API_PORT="--insecure-port=8080"
    
    # Port nodes listen on
    # KUBELET_PORT="--kubelet_port=10250"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd_servers=<etcd endpoint URL>:<etcd exposed port>"
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=<CIDR of overlay network>"
    
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
    
    # Add your own!
    KUBE_API_ARGS="--cluster_name=<your cluster name>"
    
  4. Then, start the daemons kube-apiserver, kube-scheduler, and kube-controller-manager one by one; the systemctl command can help with management. Be aware that kube-apiserver should always start first, since kube-scheduler and kube-controller-manager connect to the Kubernetes API server when they start running:

    // start services
    # systemctl start kube-apiserver
    # systemctl start kube-scheduler
    # systemctl start kube-controller-manager
    // enable services for starting automatically while server boots up.
    # systemctl enable kube-apiserver
    # systemctl enable kube-scheduler
    # systemctl enable kube-controller-manager
    

Adding daemon dependency
  1. Although systemd does not return error messages when the API server is not running, both kube-scheduler and kube-controller-manager get connection errors and do not provide regular services:

    $ sudo systemctl status kube-scheduler -l --output=cat
    kube-scheduler.service - Kubernetes Scheduler Plugin
       Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled)
       Active: active (running) since Thu 2015-11-19 07:21:57 UTC; 5min ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 2984 (kube-scheduler)
       CGroup: /system.slice/kube-scheduler.service
                └─2984 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=127.0.0.1:8080
    E1119 07:27:05.471102    2984 reflector.go:136] Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse: dial tcp 127.0.0.1:8080: connection refused
    
  2. Therefore, in order to prevent the start order from affecting operation, you can add two settings under the [Unit] section in /usr/lib/systemd/system/kube-scheduler and /usr/lib/systemd/system/kube-controller-manager:

    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=kube-apiserver.service
    Wants=kube-apiserver.service

    With the preceding settings, we can make sure kube-apiserver is the first started daemon.

  3. Furthermore, if you expect the scheduler and the controller manager to always run alongside a healthy API server (meaning that if kube-apiserver is stopped, kube-scheduler and kube-controller-manager will be stopped as well), you can change the [Unit] item Wants to Requires, as follows:

    Requires=kube-apiserver.service

    Requires has stricter restrictions. In case the daemon kube-apiserver crashes, kube-scheduler and kube-controller-manager will also be stopped. On the other hand, a configuration with Requires makes debugging the master installation harder. It is recommended that you enable this parameter once you are sure every setting is correct.

Other Linux options

It is also possible to download a binary file for installation. The official website for the latest release is here: https://github.com/kubernetes/kubernetes/releases:

  1. We are going to install the version tagged as Latest release and start all the daemons with the wrapper command hyperkube:

    // download Kubernetes package
    # curl -L -O https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.1.2/kubernetes.tar.gz
    
    // extract the tarball to specific local, here we put it under /opt. the KUBE_HOME would be /opt/kubernetes
    # tar zxvf kubernetes.tar.gz -C /opt/
    
    // copy all binary files to system directory
    # cp /opt/kubernetes/server/bin/* /usr/local/bin/
    
  2. The next step is to create a startup script (init), which would cover three master daemons and start them individually:

    # cat /etc/init.d/kubernetes-master
    #!/bin/bash
    #
    # This shell script takes care of starting and stopping kubernetes master
    
    # Source function library.
    . /etc/init.d/functions
    
    # Source networking configuration.
    . /etc/sysconfig/network
    
    prog=/usr/local/bin/hyperkube
    lockfile=/var/lock/subsys/`basename $prog`
    hostname=`hostname`
    logfile=/var/log/kubernetes.log
    
    CLUSTER_NAME="<your cluster name>"
    ETCD_SERVERS="<etcd endpoint URL>:<etcd exposed port>"
    CLUSTER_IP_RANGE="<CIDR of overlay network>"
    MASTER="127.0.0.1:8080"
    
  3. To manage your Kubernetes settings more easily and clearly, we put the declaration of changeable variables at the beginning of this init script. Please double-check that the etcd URL and overlay network CIDR are the same as in your previous installation:

    start() {
    
      # Start daemon.
      echo $"Starting apiserver: "
      daemon $prog apiserver \
      --service-cluster-ip-range=${CLUSTER_IP_RANGE} \
      --port=8080 \
      --address=0.0.0.0 \
      --etcd_servers=${ETCD_SERVERS} \
      --cluster_name=${CLUSTER_NAME} \
      > ${logfile}_apiserver 2>&1 &
    
      echo $"Starting controller-manager: "
      daemon $prog controller-manager \
      --master=${MASTER} \
      > ${logfile}_controller-manager 2>&1 &
    
      echo $"Starting scheduler: "
      daemon $prog scheduler \
      --master=${MASTER} \
      > ${logfile}_scheduler 2>&1 &
    
      RETVAL=$?
      [ $RETVAL -eq 0 ] && touch $lockfile
      return $RETVAL
    }
    
    stop() {
      [ "$EUID" != "0" ] && exit 4
            echo -n $"Shutting down $prog: "
      killproc $prog
      RETVAL=$?
            echo
      [ $RETVAL -eq 0 ] && rm -f $lockfile
      return $RETVAL
    }
  4. Next, feel free to attach the following lines as the last part in the script for general service usage:

    # See how we were called.
    case "$1" in
      start)
      start
      ;;
      stop)
      stop
      ;;
      status)
      status $prog
      ;;
      restart|force-reload)
      stop
      start
      ;;
      try-restart|condrestart)
      if status $prog > /dev/null; then
          stop
          start
      fi
      ;;
      reload)
      exit 3
      ;;
      *)
      echo $"Usage: $0 {start|stop|status|restart|try-restart|force-reload}"
      exit 2
    esac
  5. Now, it is good to start the service named kubernetes-master:

    $ sudo service kubernetes-master start
    

Note

At the time of writing this book, the latest version of Kubernetes was 1.1.2. So, we will use 1.1.2 in the examples for most of the chapters.

Verification

  1. After starting all three daemons on the master node, you can verify whether they are running properly by checking the service status. Both the systemctl and service commands are able to get the logs:

    # systemctl status <service name>
    
  2. For a more detailed log in history, you can use the command journalctl:

    # journalctl -u <service name> --no-pager --full
    

    Once you find a line showing Started... in the output, you can confirm that the service setup has passed the verification.

  3. Additionally, the main Kubernetes command, kubectl, can begin operating:

    // check Kubernetes version
    # kubectl version
    Client Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitCommit:"b9a88a7d0e357be2174011dd2b127038c6ea8929", GitTreeState:"clean"}
    Server Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitCommit:"b9a88a7d0e357be2174011dd2b127038c6ea8929", GitTreeState:"clean"}
    

See also

From the recipe, you know how to create your own Kubernetes master. You can also check out the following recipes:

  • Exploring architecture

  • Configuring nodes

  • The Building multiple masters recipe in Chapter 4, Building a High Availability Cluster

  • The Building the Kubernetes infrastructure in AWS recipe in Chapter 6, Building Kubernetes on AWS

  • The Authentication and authorization recipe in Chapter 7, Advanced Cluster Administration

 

Configuring nodes


A node is the worker machine in the Kubernetes cluster. In order to let the master take a node under its supervision, the node runs an agent called kubelet, which registers the node with a specific master. After registering, the kubelet daemon also handles container operations and reports resource utilization and container statuses to the master. The other daemon running on the node is kube-proxy, which manages TCP/UDP packet forwarding between containers. In this section, we will show you how to configure a node.

Getting ready

Since the node is the worker of Kubernetes and its most important duty is to run containers, you have to make sure that Docker and flanneld are installed first. Kubernetes relies on Docker to run applications in containers, and flanneld lets the pods on separate nodes communicate with each other.
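
As a quick sanity check, assuming both daemons are managed by systemd, you can confirm that they are active before going any further:

// confirm both prerequisites are running
# systemctl is-active docker flanneld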

After you have installed both daemons, according to the file /run/flannel/subnet.env, the network interface docker0 should be in the same subnet as flannel0:

# cat /run/flannel/subnet.env
FLANNEL_SUBNET=192.168.31.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=true

// check that docker0 and flannel0 are in the same subnet
# ifconfig docker0 ; ifconfig flannel0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.31.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:6e:b9:a7:51  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
flannel0: flags=81<UP,POINTOPOINT,RUNNING>  mtu 8973
        inet 192.168.31.0  netmask 255.255.0.0  destination 192.168.11.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

If docker0 is in a different CIDR range, you can use the following service scripts as a reference for a reliable Docker service setup:

# cat /etc/sysconfig/docker
# /etc/sysconfig/docker
#
# Other arguments to pass to the docker daemon process
# These will be parsed by the sysv initscript and appended
# to the arguments list passed to docker -d, or docker daemon where docker version is 1.8 or higher

. /run/flannel/subnet.env 

other_args="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
DOCKER_CERT_PATH=/etc/docker

Alternatively, when using systemd, the unit configuration itself handles the dependency:

$ cat /etc/systemd/system/docker.service.requires/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

[Install]
RequiredBy=docker.service

$ cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=192.168.31.1/24"
DOCKER_OPT_MTU="--mtu=8973"
DOCKER_NETWORK_OPTIONS=" --bip=192.168.31.1/24 --mtu=8973 "

Once you have corrected the Docker service script, stop the Docker service, remove its network interface, and start it again.
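
The exact commands vary by distribution, but a minimal sketch of that reset, assuming Docker is managed by systemd and the iproute2 tools are available, looks like this:

// stop Docker, remove the stale docker0 bridge, then start Docker with the corrected options
# systemctl stop docker
# ip link set docker0 down
# ip link delete docker0
# systemctl start docker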

For more details on the flanneld setup and Docker integration, please refer to the recipe Creating an overlay network.

You can even configure the master host to act as a node as well; just install the necessary daemons on it.
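
For example, with the RPM-based setup, a minimal sketch would be to install the node package on the master host and point both daemons at the local API server (the configuration files are described in the next section):

// install kubelet and kube-proxy on the master host
# yum install kubernetes-node
// set the master endpoint to 127.0.0.1:8080 in /etc/kubernetes/config and /etc/kubernetes/kubelet
# systemctl start kubelet kube-proxy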

How to do it…

Once you verify that Docker and flanneld are good to go on your node host, continue to install the Kubernetes package for the node. We'll cover both RPM and tarball setup.

Installation

As with the Kubernetes master installation, a Linux OS that has the yum package management utility can easily install the node package from the command line. Alternatively, we can install the latest version by downloading a tarball and copying the binary files into the appropriate system directory, which works for every Linux distribution. You can try either solution for your deployment.

CentOS 7 or Red Hat Enterprise Linux 7
  1. First, we will install the package kubernetes-node, which is what we need for the node:

    // install kubernetes node package
    $ yum install kubernetes-node
    

    The package kubernetes-node includes two daemon processes, kubelet and kube-proxy.

  2. We need to modify two configuration files so that the node can access the master:

    # cat /etc/kubernetes/config
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow_privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=<master endpoint>:8080"
    
  3. In the configuration file, change the master location argument to the URL/IP of the machine where you installed the master. If you exposed the API server on a port other than 8080, remember to update that as well:

    # cat /etc/kubernetes/kubelet
    ###
    # kubernetes kubelet (node) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname_override=127.0.0.1"
    
    # location of the api-server
    KUBELET_API_SERVER="--api_servers=<master endpoint>:8080"
    
    # Add your own!
    KUBELET_ARGS=""
    

    Here, we bind the kubelet address to all interfaces and set the master location.

  4. Then, start the services using systemctl. There is no start-up dependency between kubelet and kube-proxy:

    // start services
    # systemctl start kubelet
    # systemctl start kube-proxy
    // enable services for starting automatically while server boots up.
    # systemctl enable kubelet
    # systemctl enable kube-proxy
    // check the status of services
    # systemctl status kubelet
    # systemctl status kube-proxy
    
Other Linux options
  1. We can also download the latest Kubernetes binaries and write a customized init script for the node configuration. The latest Kubernetes release tarballs are published at https://github.com/kubernetes/kubernetes/releases:

    // download Kubernetes package
    # curl -L -O https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.1.2/kubernetes.tar.gz
    
    // extract the tarball to a local directory; here we put it under /opt, so KUBE_HOME is /opt/kubernetes
    # tar zxvf kubernetes.tar.gz -C /opt/
    
    // copy all binary files to system directory
    # cp /opt/kubernetes/server/bin/* /usr/local/bin/
    
  2. Next, a file named kubernetes-node is created under /etc/init.d with the following content:

    # cat /etc/init.d/kubernetes-node
    #!/bin/bash
    #
    # kubernetes    This shell script takes care of starting and stopping kubernetes
    
    # Source function library.
    . /etc/init.d/functions
    
    # Source networking configuration.
    . /etc/sysconfig/network
    
    prog=/usr/local/bin/hyperkube
    lockfile=/var/lock/subsys/`basename $prog`
    MASTER_SERVER="<master endpoint>"
    hostname=`hostname`
    logfile=/var/log/kubernetes.log
    
  3. Be sure to provide the master URL/IP for accessing the Kubernetes API server. If you are installing the node package on the master host as well, that is, making the master also work as a node, the API server is available on the local host, so you can use localhost or 127.0.0.1 as <master endpoint>:

    start() {
        # Start daemon.
        echo $"Starting kubelet: "
        daemon $prog kubelet \
            --api_servers=http://${MASTER_SERVER}:8080 \
            --v=2 \
            --address=0.0.0.0 \
            --enable_server \
            --hostname_override=${hostname} \
            > ${logfile}_kubelet 2>&1 &
    
        echo $"Starting proxy: "
        daemon $prog proxy \
            --master=http://${MASTER_SERVER}:8080 \
            --v=2 \
            > ${logfile}_proxy 2>&1 &
    
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch $lockfile
        return $RETVAL
    }
    stop() {
        [ "$EUID" != "0" ] && exit 4
            echo -n $"Shutting down $prog: "
        killproc $prog
        RETVAL=$?
            echo
        [ $RETVAL -eq 0 ] && rm -f $lockfile
        return $RETVAL
    }
  4. The following lines provide the general daemon-management actions; attach them to the end of the script:

    # See how we were called.
    case "$1" in
      start)
        start
        ;;
      stop)
        stop
        ;;
      status)
        status $prog
        ;;
      restart|force-reload)
        stop
        start
        ;;
      try-restart|condrestart)
        if status $prog > /dev/null; then
            stop
            start
        fi
        ;;
      reload)
        exit 3
        ;;
      *)
        echo $"Usage: $0 {start|stop|status|restart|try-restart|force-reload}"
        exit 2
    esac
  5. Now, you can start the service with the name of your init script:

    # service kubernetes-node start
    

Verification

To check whether a node is configured properly, the most straightforward way is to check it from the master side:

// run this command on the master
# kubectl get nodes
NAME                               LABELS                                                    STATUS
ip-10-97-217-56.sdi.trendnet.org   kubernetes.io/hostname=ip-10-97-217-56.sdi.trendnet.org   Ready
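
To dig deeper into a particular node, for example to see its capacity and conditions, you can also describe it from the master (the node name is a placeholder; use the name shown in the preceding output):

// inspect a single node in detail
# kubectl describe node <node name>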

See also

It is also recommended that you read the recipes about the architecture of the cluster and the system environment. Since Kubernetes nodes are workers that receive tasks and report to the master, they should be built after the other components. It is good to become more familiar with the whole system before building up the nodes. Furthermore, you can also manage the resources on the nodes. Please check the following recipes for more information:

  • Exploring architecture

  • Preparing your environment

  • The Setting resource in nodes recipe in Chapter 7, Advanced Cluster Administration

 

Running your first container in Kubernetes


Congratulations! You've built your own Kubernetes cluster in the previous sections. Now, let's get on with running your very first container nginx (http://nginx.org/), which is an open source reverse proxy server, load balancer, and web server.

Getting ready

Before we run our first container in Kubernetes, it's better to check that every component works as expected. Follow these steps on the master to check whether the environment is ready to use:

  1. Check whether the Kubernetes components are running:

    # check that all component statuses are healthy
    $ kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok                   nil
    scheduler            Healthy   ok                   nil
    etcd-0               Healthy   {"health": "true"}   nil
    

    Note

    If any one of the components is not running, check out the settings in the previous sections. Restart the related services, such as service kube-apiserver start.

  2. Check the master status:

    # Check master is running
    $ kubectl cluster-info
    Kubernetes master is running at http://localhost:8080
    

    Note

    If the Kubernetes master is not running, restart the service using service kubernetes-master start or /etc/init.d/kubernetes-master start.

  3. Check whether all the nodes are ready:

    # check nodes are all Ready
    $ kubectl get nodes
    NAME          LABELS                               STATUS
    kub-node1   kubernetes.io/hostname=kub-node1   Ready
    kub-node2   kubernetes.io/hostname=kub-node2   Ready
    

    Note

    If a node that should be Ready shows NotReady, go to that node and restart Docker and the node service using service docker start and service kubernetes-node start.

Before we go to the next section, make sure the nodes can access the Docker registry. We will use the nginx image from Docker Hub (https://hub.docker.com/) as an example. If you want to run your own application, be sure to dockerize it first! For a custom application, you need to write a Dockerfile (https://docs.docker.com/v1.8/reference/builder), build the image, and push it to a public/private Docker registry, as sketched below.
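
This is a purely illustrative sketch; the directory layout, image name, and registry endpoint are placeholders rather than anything built earlier in this book:

// a hypothetical Dockerfile that adds static content on top of the official nginx image
$ cat Dockerfile
FROM nginx
COPY ./html /usr/share/nginx/html

// build the image and push it to your public/private Docker registry
$ docker build -t <docker registry endpoint>/my-static-site .
$ docker push <docker registry endpoint>/my-static-site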

Tip

Test your node connectivity with the public/private Docker registry

On your node, try docker pull nginx to test whether you can pull the image from Docker Hub. If you're behind a proxy, add HTTP_PROXY to your Docker configuration file (normally, /etc/sysconfig/docker). If you want to run an image from a private repository on Docker Hub, use docker login on the node to place your credentials in ~/.docker/config.json, copy them into /var/lib/kubelet/.dockercfg in the JSON format, and restart Docker:

# put the credential of docker registry
$ cat /var/lib/kubelet/.dockercfg

{
        "<docker registry endpoint>": {
                "auth": "SAMPLEAUTH=",
                "email": "[email protected]"
        }
}

If you're using your own private registry, specify INSECURE_REGISTRY in the Docker configuration file.
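
On CentOS-style systems, for example, that would typically be a line like the following in /etc/sysconfig/docker (a sketch; adjust the endpoint to your own registry):

# grep INSECURE_REGISTRY /etc/sysconfig/docker
INSECURE_REGISTRY="--insecure-registry <docker registry endpoint>"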

How to do it…

We will use the official Docker image of nginx as an example. The image is prebuilt in Docker Hub (https://hub.docker.com/_/nginx/).

Many official and public images are available on Docker Hub so that you do not need to build it from scratch. Just pull it and set up your custom setting on top of it.

Running an HTTP server (nginx)

  1. On the Kubernetes master, we can use kubectl run to create a certain number of containers. The Kubernetes master will then schedule the pods onto the nodes:

    $ kubectl run <replication controller name> --image=<image name> --replicas=<number of replicas> [--port=<exposing port>]
    
  2. The following example will create two replicas with the name my-first-nginx from the nginx image and expose port 80. We can deploy one or more containers in what is referred to as a pod; in this case, we will deploy one container per pod. Just like normal Docker behavior, if the nginx image doesn't exist locally, it will be pulled from Docker Hub by default:

    # Pull the nginx image and run with 2 replicas, and expose the container port 80
    $ kubectl run my-first-nginx --image=nginx --replicas=2 --port=80
    CONTROLLER       CONTAINER(S)     IMAGE(S)   SELECTOR             REPLICAS
    my-first-nginx   my-first-nginx   nginx      run=my-first-nginx   2
    

    Tip

    The name of the replication controller <my-first-nginx> cannot be duplicated

    Resource names (pods, services, replication controllers, and so on) must be unique within a Kubernetes namespace. If you run the preceding command twice, the following error will pop up:

    Error from server: replicationControllers "my-first-nginx" already exists

  3. Let's check the current status of all the pods using kubectl get pods. Normally, the status of the pods will stay Pending for a while, since it takes some time for the nodes to pull the image from Docker Hub:

    # get all pods
    $ kubectl get pods
    NAME                     READY     STATUS    RESTARTS   AGE
    my-first-nginx-nzygc     1/1       Running   0          1m
    my-first-nginx-yd84h     1/1       Running   0          1m
    

    Tip

    If the pod status is not running for a long time

    You can always use kubectl get pods to check the current status of the pods and kubectl describe pods $pod_name to check the detailed information of a pod. If you make a typo in the image name, you might get an Image not found error message, and if you are pulling images from a private repository or registry without the proper credentials set, you might get an Authentication error message. If you get the Pending status for a long time, check the node capacity and make sure you are not running more replicas than the nodes can hold, as described in the Preparing your environment section. If there are other unexpected error messages, you can stop the pods or the entire replication controller to force the master to schedule the tasks again, as sketched below.
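
    The names here are placeholders and the commands are only a sketch of the clean-up mentioned above:

    // remove a single stuck pod, or tear down the whole replication controller
    $ kubectl delete pod <pod name>
    $ kubectl stop rc <replication controller name>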

  4. After waiting a few seconds, the two pods are in the Running state, and the replication controller reports two replicas:

    # get replication controllers
    $ kubectl get rc
    CONTROLLER         CONTAINER(S)       IMAGE(S)   SELECTOR             REPLICAS
    my-first-nginx     my-first-nginx     nginx      run=my-first-nginx   2
    

Exposing the port for external access

We might also want to create an external IP address for the nginx replication controller. On cloud providers that support an external load balancer (such as Google Compute Engine), the LoadBalancer type will provision a load balancer for external access. On the other hand, you can still expose the port by creating a Kubernetes service as follows, even if you're not running on a platform that supports an external load balancer. We'll describe how to access the service externally later:

# expose port 80 for replication controller named my-first-nginx
$ kubectl expose rc my-first-nginx --port=80 --type=LoadBalancer
NAME             LABELS               SELECTOR             IP(S)     PORT(S)
my-first-nginx   run=my-first-nginx   run=my-first-nginx             80/TCP

We can see the service status we just created:

# get all services
$ kubectl get service
NAME             LABELS               SELECTOR             IP(S)            PORT(S)
my-first-nginx   run=my-first-nginx   run=my-first-nginx   192.168.61.150   80/TCP

Congratulations! You just ran your first container with a Kubernetes pod and exposed port 80 with the Kubernetes service.

Stopping the application

We can stop the application with the kubectl stop command on both the replication controller and the service. Before doing this, we suggest you read through the How it works… section that follows to understand how they work together:

# stop replication controller named my-first-nginx
$ kubectl stop rc my-first-nginx
replicationcontrollers/my-first-nginx

# stop service named my-first-nginx
$ kubectl stop service my-first-nginx
services/my-first-nginx
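
Afterwards, an optional quick check confirms that the pods, the replication controller, and the service are gone (the pods may show as terminating for a short while):

// verify the clean-up
$ kubectl get pods,rc,services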

How it works…

Let's take a closer look at the service using kubectl describe. We created one Kubernetes service with the type LoadBalancer, which dispatches traffic to two endpoints, 192.168.50.4 and 192.168.50.5, on port 80:

$ kubectl describe service my-first-nginx
Name:      my-first-nginx
Namespace:    default
Labels:      run=my-first-nginx
Selector:    run=my-first-nginx
Type:      LoadBalancer
IP:      192.168.61.150
Port:      <unnamed>  80/TCP
NodePort:    <unnamed>  32697/TCP
Endpoints:    192.168.50.4:80,192.168.50.5:80
Session Affinity:  None
No events.

Port here is an abstract service port, which allows any other resource to access the service within the cluster. nodePort indicates the external port that allows external access. targetPort is the port on which the container accepts traffic; by default, it is the same as Port. External access reaches the service through nodePort, the service acts as a load balancer that dispatches the traffic to the pods using Port 80, and the pods then pass the traffic to the corresponding containers through targetPort 80.

On any node, or on the master (if it has flannel installed), you should be able to access the nginx service using the ClusterIP 192.168.61.150 on port 80:

# curl from service IP
$ curl 192.168.61.150:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

We get the same result if we curl the target port of a pod directly:

# curl from endpoint
$ curl 192.168.50.4:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

If you'd like to try out external access, use your browser to access the external IP address. Please note that the external IP address depends on which environment you're running in.

In Google Compute Engine, you can access it via the cluster IP with the proper firewall rules set:

$ curl http://<clusterIP>

In a custom environment, such as an on-premises datacenter, you can go through the IP address of a node to access it:

$ curl http://<nodeIP>:<nodePort>

You should be able to see the following page using a web browser:

See also

We have run our very first container in this section. Now:

  • To explore more of the concepts in Kubernetes, refer to Chapter 2, Walking through Kubernetes Concepts

About the Authors

  • Hideto Saito

    Hideto Saito has around 20 years of experience in the computer industry. In 1998, while working for Sun Microsystems Japan, he was impressed by Solaris OS, OPENSTEP, and Sun Ultra Enterprise 10000 (also known as StarFire). He then decided to pursue UNIX and macOS operating systems.

    In 2006, he relocated to southern California as a software engineer to develop products and services running on Linux and macOS X. He was especially renowned for his quick Objective-C code when he was drunk. He is also an enthusiast of Japanese anime, drama, and motorsports, and loves Japanese Otaku culture.

  • Hui-Chuan Chloe Lee

    Hui-Chuan Chloe Lee is a DevOps and software developer. She has worked in the software industry on a wide range of projects for over five years. As a technology enthusiast, she loves trying and learning about new technologies, which makes her life happier and more fulfilling. In her free time, she enjoys reading, traveling, and spending time with the people she loves.

  • Ke-Jou Carol Hsu

    Ke-Jou Carol Hsu has three years of experience working as a software engineer and is currently a PhD student in the area of computer systems. Not only is she involved in programming, she also enjoys getting multiple applications and machines to work perfectly together to solve big problems. In her free time, she loves movies, music, cooking, and working out.

