
You're reading from Getting Started with Kubernetes, Third Edition

Product type: Book
Published in: Oct 2018
Publisher: Packt
ISBN-13: 9781788994729
Edition: 3rd Edition
Authors (2):
Jonathan Baier

Jonathan Baier is an emerging technology leader living in Brooklyn, New York. He has had a passion for technology since an early age. When he was 14 years old, he was so interested in the family computer (an IBM PCjr) that he pored over the several hundred pages of BASIC and DOS manuals. Then, he taught himself to code a very poorly-written version of Tic-Tac-Toe. During his teenage years, he started a computer support business. Throughout his life, he has dabbled in entrepreneurship. He currently works as Senior Vice President of Cloud Engineering and Operations for Moody's corporation in New York.

Jesse White

Jesse White is a 15-year veteran and technology leader in New York City's very own Silicon Alley, where he is a pillar of the vibrant engineering ecosystem. As founder of DockerNYC and an active participant in the open source community, you can find Jesse at a number of leading industry events, including DockerCon and VelocityConf, giving talks and workshops.


Chapter 3. Working with Networking, Load Balancers, and Ingress

In this chapter, we will discuss Kubernetes' approach to cluster networking and how it differs from other approaches. We will describe the key requirements for Kubernetes networking solutions and explore why these are essential for simplifying cluster operations. We will investigate DNS in the Kubernetes cluster, dig into the Container Network Interface (CNI) and its plugin ecosystem, and take a deeper dive into services and how the Kubernetes proxy works on each node. Finishing up, we will look at a brief overview of some higher-level isolation features for multitenancy.

In this chapter, we will cover the following topics:

  • Kubernetes networking
  • Advanced services concepts
  • Service discovery
  • DNS, CNI, and ingress
  • Namespace limits and quotas

Technical requirements


You'll need a running Kubernetes cluster like the one we created in the previous chapters. You'll also need access to deploy to the cluster through the kubectl command.

The GitHub repository for this chapter can be found at https://github.com/PacktPublishing/Getting-Started-with-Kubernetes-third-edition/tree/master/Code-files/Chapter03.
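Before moving on, it's worth confirming that kubectl can actually reach your cluster. A quick sanity check (the exact output will vary with your provider and cluster size) looks like this:

kubectl cluster-info
kubectl get nodes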

Container networking


Networking is a vital concern for production-level operations. At a service level, we need a reliable way for our application components to find and communicate with each other. Introducing containers and clustering into the mix makes things more complex, as we now have multiple networking namespaces to bear in mind. Communication and discovery now become a feat that must navigate container IP space, host networking, and sometimes even multiple data center network topologies.

Kubernetes benefits here from its ancestry in the clustering tools that Google has used for the past decade. Networking is one area where Google has outpaced the competition, with one of the largest networks on the planet. Early on, Google built its own hardware switches and Software-defined Networking (SDN) to gain more control, redundancy, and efficiency in its day-to-day network operations. Many of the lessons learned from running and networking two billion containers per week have been...

Advanced services


Let's explore the IP strategy as it relates to services and communication between containers. If you recall, in the Services section of Chapter 2, Pods, Services, Replication Controllers, and Labels, you learned that Kubernetes uses kube-proxy to determine the proper pod IP address and port serving each request. Behind the scenes, kube-proxy actually uses virtual IPs and iptables to make all of this magic work.

kube-proxy has two modes: userspace and iptables. As of Kubernetes 1.2, iptables is the default mode. In both modes, kube-proxy runs on every host. Its first duty is to monitor the API on the Kubernetes master. Any update to a service will trigger an update to iptables from kube-proxy. For example, when a new service is created, a virtual IP address is chosen and a rule is set in iptables that directs its traffic to kube-proxy via a random port. Thus, we now have a way to capture service-destined traffic on this node. Since kube-proxy is running on...
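If you'd like to see these rules for yourself, here is a rough way to inspect them on a node running in iptables mode. This is only a sketch: it assumes you have a root shell on one of the nodes and that the node-js service from Chapter 2 exists in the cluster:

# List the kube-proxy chains and the rules that reference them
sudo iptables-save | grep KUBE-SERVICES
# Narrow down to the rules generated for a specific service
sudo iptables-save | grep node-js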

Service discovery


As we discussed earlier, the Kubernetes master keeps track of all service definitions and updates. Discovery can occur in one of three ways. The first two methods use Linux environment variables. There is support for the Docker link style of environment variables, but Kubernetes also has its own naming convention. Here is what our node-js service from the earlier example might look like using K8s environment variables (note that IPs will vary):

NODE_JS_PORT_80_TCP=tcp://10.0.103.215:80
NODE_JS_PORT=tcp://10.0.103.215:80
NODE_JS_PORT_80_TCP_PROTO=tcp
NODE_JS_PORT_80_TCP_PORT=80
NODE_JS_SERVICE_HOST=10.0.103.215
NODE_JS_PORT_80_TCP_ADDR=10.0.103.215
NODE_JS_SERVICE_PORT=80
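
Any container started after the service was created can read these variables directly. As a hedged sketch (the pod name below is a placeholder, node-js is the example service from Chapter 2, and the container image is assumed to ship with wget), you could reach the service from inside a running pod like this:

kubectl exec <pod-name> -- sh -c 'wget -qO- http://$NODE_JS_SERVICE_HOST:$NODE_JS_SERVICE_PORT'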

Another option for discovery is through DNS. While environment variables can be useful when DNS is not available, the approach has drawbacks. The system only creates these variables at creation time, so services that come online later will not be discovered, or will require some additional tooling to update all the system...

DNS


DNS solves the issues seen with environment variables by allowing us to reference services by their names. As services restart, scale out, or appear anew, the DNS entries are updated, ensuring that the service name always points to the latest infrastructure. DNS is set up by default in most of the supported providers. You can add DNS support to your cluster via a cluster add-on (https://kubernetes.io/docs/concepts/cluster-administration/addons/).

Note

If DNS is supported by your provider, but is not set up, you can configure the following variables in your default provider config when you create your Kubernetes cluster:

ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1

With DNS active, services can be accessed in one of two forms: either the service name itself, <service-name>, or a fully qualified name that includes the namespace, <service-name>.<namespace-name>.svc.cluster.local. In our examples...
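As a quick illustration (the node-js name is carried over from our earlier example, and busybox:1.28 is simply a convenient image that includes nslookup), you could resolve the service from a throwaway pod:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup node-js
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup node-js.default.svc.cluster.local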

Multitenancy


Kubernetes also has an additional construct for isolation at the cluster level. In most cases, you can run Kubernetes and never worry about namespaces; everything will run in the default namespace if not specified. However, in cases where you run multi-tenant communities or want broad-scale segregation and isolation of cluster resources, namespaces can be used to this end. True end-to-end multitenancy is not yet feature complete in Kubernetes, but you can get very close using RBAC, container permissions, ingress rules, and clear network policies. If you're interested in enterprise-strength multitenancy right now, Red Hat's OpenShift Origin (OO) would be a good place to learn.

Note

You can check out OO at https://github.com/openshift/origin.

To start, Kubernetes has two namespaces—default and kube-system. The kube-system namespace is used for all the system-level containers we saw in Chapter 1, Introduction to Kubernetes, in the Services running on the minions section. UI,...
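When you want to carve out your own slice of the cluster, a namespace plus a resource quota is the usual starting point. The following is a minimal sketch; the team-a namespace name and the quota values are purely illustrative:

kubectl create namespace team-a
kubectl create quota team-a-quota --namespace=team-a --hard=pods=10,requests.cpu=4,requests.memory=8Gi
kubectl get resourcequota --namespace=team-a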

A note on resource usage


As most of the examples in this book utilize GCP or AWS, it can be costly to keep everything running. It's also easy to run out of resources using the default cluster size, especially if you keep every example running. Therefore, you may want to delete older pods, replication controllers, replica sets, and services periodically. You can also destroy the cluster and recreate it using Chapter 1, Introduction to Kubernetes, as a way to lower your cloud provider bill.
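For example, a periodic cleanup of the default namespace might look like the following. Treat this as a rough sketch and double-check what you are deleting first, since the --all flag removes every matching object in the namespace:

kubectl delete pods,replicationcontrollers,replicasets,services --all --namespace=default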

Summary


In this chapter, we took a deeper look at networking and services in Kubernetes. You should now understand how networking communications are designed in K8s and feel comfortable accessing your services internally and externally. We saw how kube-proxy balances traffic both locally and across the cluster. Additionally, we explored the new Ingress resources, which give us finer control over incoming traffic. We also looked briefly at how DNS and service discovery are achieved in Kubernetes. We finished off with a quick look at namespaces and isolation for multitenancy.

Questions


  1. Give two ways in which the Docker networking approach differs from the Kubernetes networking approach.
  2. What does NAT stand for?
  3. What are the two major classes of Kubernetes networking models?
  4. Name at least two of the third-party overlay networking options available to Kubernetes.
  5. At what level (or alternatively, to what object) does Kubernetes assign IP addresses?
  6. What are the available modes for kube-proxy?
  7. What are the three types of services allowed by Kubernetes?
  8. What elements are used to define container and service ports?
  9. Name two or more types of ingress available to Kubernetes.
  10. How can you provide multitenancy for your Kubernetes cluster?

Further reading

