
Services, Load Balancing, ExternalDNS, and Global Balancing

Before systems like Kubernetes were available, scaling an application was often a manual process that, in many larger organizations, involved multiple teams and multiple procedures. To scale out a common web application, you had to add servers and then update the frontend load balancer to include them. We will discuss load balancers in this chapter, but as a quick introduction for anyone who is new to the term: a load balancer provides a single point of entry to an application. The load balancer accepts each incoming request and routes it to one of the backend servers that host the application. This is a very high-level explanation, and most load balancers offer powerful features well beyond simply routing traffic, but for the purposes of this chapter, we are only concerned with the routing features.

When you deploy an application to a Kubernetes cluster, your...

Technical requirements

This chapter has the following technical requirements:

  • An Ubuntu 18.04 or 20.04 server with a minimum of 4 GB of RAM.
  • A KinD cluster configured using the configuration from Chapter 2, Deploying Kubernetes Using KinD.

You can access the code for this chapter by going to this book's GitHub repository: https://github.com/PacktPublishing/Kubernetes---An-Enterprise-Guide-2E/tree/main/chapter4.

Exposing workloads to requests

Over the years, we have discovered that the three most commonly misunderstood concepts in Kubernetes are services, Ingress controllers, and load balancers. In order to expose your workloads, you need to understand how each object works and the options that are available to you. Let's look at these in detail.

Understanding how services work

As we mentioned in the introduction, any pod that is running a workload is assigned an IP address at pod startup. Many events will cause a deployment to restart a pod, and when the pod is restarted, it will likely receive a new IP address. Since the addresses that are assigned to pods may change, you should never target a pod's workload directly.

One of the most powerful features that Kubernetes offers is the ability to scale your deployments. When a deployment is scaled, Kubernetes will create additional pods to handle any additional resource requirements. Each pod will have an IP address, and...
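
To make this concrete, here is a minimal sketch of a ClusterIP service that selects pods by label; the names, labels, and ports are hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend        # hypothetical service name
    spec:
      type: ClusterIP           # the default type; reachable only inside the cluster
      selector:
        app: web-frontend       # matches any pod carrying this label
      ports:
        - port: 80              # port the service listens on
          targetPort: 8080      # container port that traffic is forwarded to

Because the service tracks pods through the selector rather than by IP address, clients can keep using the service's stable ClusterIP, or its DNS name, while the pods behind it are restarted, rescheduled, or scaled.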

Introduction to load balancers

In this second section, we will discuss the basics of layer 7 and layer 4 load balancers and the differences between them. To understand those differences, it's important to understand the Open Systems Interconnection (OSI) model. Understanding the different layers of the OSI model will help you understand how different solutions handle incoming requests.

Understanding the OSI model

When you hear about different solutions for exposing an application in Kubernetes, you will often hear a reference to layer 7 or layer 4 load balancing. These designations refer to where each operates in the OSI model. Each layer offers different functionality; a component running at layer 7 provides different capabilities than one running at layer 4.

To begin, let's look at a brief overview of the seven layers and a description of each. For this chapter, we are interested in two layers in particular, layer 4 and layer 7:

...

Layer 7 load balancers

Kubernetes provides layer 7 load balancers in the form of an Ingress controller. There are a number of solutions when it comes to providing Ingress to your clusters, including the following:

  • NGINX
  • Envoy
  • Traefik
  • HAProxy

Typically, a layer 7 load balancer is limited to application-level traffic such as HTTP and HTTPS. In the Kubernetes world, layer 7 load balancers are implemented as Ingress controllers that route incoming HTTP/HTTPS requests to your exposed services. We will go into detail on implementing NGINX as a Kubernetes Ingress controller in the Creating Ingress rules section.
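
As a small preview of that section, the sketch below shows what a minimal Ingress object for the NGINX Ingress controller might look like; the hostname, service name, and port are hypothetical:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-frontend-ingress
    spec:
      ingressClassName: nginx           # route through the NGINX Ingress controller
      rules:
        - host: webapp.example.com      # hypothetical hostname; the layer 7 routing key
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web-frontend  # hypothetical ClusterIP service
                    port:
                      number: 80

The Ingress controller inspects the HTTP Host header of each incoming request and forwards the request to the backend service of the matching rule, which is exactly the layer 7 behavior described above.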

Name resolution and layer 7 load balancers

To handle layer 7 traffic in a Kubernetes cluster, you deploy an Ingress controller. Ingress controllers depend on the requested hostname to route traffic to the correct service. In a legacy server deployment model, you would create a DNS entry and map it to an IP address.

Applications that are deployed on a Kubernetes cluster...
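
Until a DNS entry exists, you can test name-based routing by supplying the Host header yourself; the IP address below is hypothetical:

    curl -H "Host: webapp.example.com" http://172.18.0.100/

Alternatively, wildcard DNS services such as nip.io resolve any name of the form <anything>.<IP>.nip.io to <IP>, so a host rule for webapp.172.18.0.100.nip.io would reach the same Ingress controller without any DNS changes.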

Layer 4 load balancers

Layer 4 of the OSI model is responsible for protocols such as TCP and UDP. A load balancer running at layer 4 accepts incoming traffic based only on the IP address and port. The load balancer accepts the incoming request and, based on a set of rules, sends the traffic to the destination IP address and port.

There are lower-level networking operations in the process that are beyond the scope of this book. HAProxy has a good summary of the terminology and example configurations on their website at https://www.haproxy.com/fr/blog/loadbalancing-faq/.
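
In Kubernetes, layer 4 exposure usually takes the form of a service of type LoadBalancer (or NodePort). The sketch below, with hypothetical names and ports, exposes a raw TCP port with no awareness of the traffic's contents:

    apiVersion: v1
    kind: Service
    metadata:
      name: postgres-lb           # hypothetical service name
    spec:
      type: LoadBalancer          # requests an external IP from the layer 4 load balancer
      selector:
        app: postgres             # hypothetical pod label
      ports:
        - protocol: TCP
          port: 5432              # port exposed on the external IP
          targetPort: 5432        # container port receiving the traffic

Because routing happens at layer 4, the load balancer never parses the payload, which is why non-HTTP protocols such as database connections are exposed this way rather than through an Ingress.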

Layer 4 load balancer options

There are multiple options available to you if you want to configure a layer 4 load balancer for a Kubernetes cluster. Some of the options include the following:

  • HAProxy
  • NGINX Pro
  • Seesaw
  • F5 Networks
  • MetalLB
  • And more…

Each option provides layer 4 load balancing, but for the purpose of this book, we felt...
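
To give a sense of how little configuration a layer 4 option can require, here is a minimal sketch of a MetalLB layer 2 address pool, in the ConfigMap format used by MetalLB v0.12 and earlier; the address range is hypothetical and must come from your own network:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 172.18.200.100-172.18.200.125   # hypothetical pool of assignable IPs

Once a pool like this exists, every service of type LoadBalancer automatically receives an external IP from the range.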

Enhancing load balancers for the enterprise

In this third and final section, we will discuss some of the limitations of certain load balancer features and how we can configure a cluster to work around them. Our examples have been good for learning, but in an enterprise, nobody wants to access a workload running on the cluster using an IP address. Also, in an enterprise, you will commonly run services on multiple clusters to provide failover for your applications. So far, the options discussed can't address these two key points. In this section, we will explain how to resolve these issues so your enterprise can offer easy, name-based access to highly available workloads, including across multiple clusters.

Making service names available externally

You may have been wondering why we used IP addresses to test some of the services that we created, while we used domain names for our Ingress examples.

While a Kubernetes LoadBalancer service provides a standard IP address for a service, it does not create an external DNS name for users to connect to the service. Using IP addresses to connect to applications running on a cluster is not very practical, and manually registering a DNS name for each IP assigned by MetalLB would be impossible to maintain. So how can you provide a more cloud-like experience by adding name resolution for our LoadBalancer services?

Similar to the team that maintains KinD, there is a Kubernetes SIG working on this feature for Kubernetes, called ExternalDNS. The main project page can be found on the SIG's GitHub repository at https://github.com/kubernetes-sigs/external-dns.

At the time of writing, the ExternalDNS project supports a long list...
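
With ExternalDNS deployed and pointed at a supported DNS provider, registration is typically driven by an annotation on the service itself; the hostname below is hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend-lb
      annotations:
        # ExternalDNS watches for this annotation and creates the matching record
        external-dns.alpha.kubernetes.io/hostname: webapp.example.com
    spec:
      type: LoadBalancer
      selector:
        app: web-frontend       # hypothetical pod label
      ports:
        - port: 80
          targetPort: 8080

When the service is created, ExternalDNS reads the annotation and registers an A record that points webapp.example.com at the IP address the load balancer assigned, so users never need to know the address itself.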

Load balancing between multiple clusters

Running services in multiple clusters can be configured in multiple ways, usually requiring complex and expensive add-ons such as global load balancers from companies like F5. These are very common in the enterprise, and while many organizations implement multi-cluster access using add-ons like F5's Global Server Load Balancing (GSLB), there are low-cost or free projects, native to Kubernetes, that provide similar functionality. These projects do not replace every feature of the vendor solutions, but in many cases we do not need everything the more expensive solutions provide; we require only a small subset of their features.

A new project that has recently been released is K8GB, a CNCF sandbox project. To learn about the project, browse to the project's main page at https://www.k8gb.io.
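
K8GB is driven by a Gslb custom resource that combines an Ingress-style rule set with a balancing strategy. A minimal sketch, with hypothetical hostname, namespace, and service, might look roughly like this:

    apiVersion: k8gb.absa.oss/v1beta1
    kind: Gslb
    metadata:
      name: web-frontend-gslb
      namespace: demo
    spec:
      ingress:
        rules:
          - host: webapp.gslb.example.com   # hypothetical globally balanced hostname
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: web-frontend    # service deployed in each participating cluster
                      port:
                        number: 80
      strategy:
        type: roundRobin                    # K8GB also supports a failover strategy

Each cluster running K8GB answers DNS queries for the balanced hostname and returns only the addresses of clusters where the backing service is healthy, which is how failover across clusters is achieved without a commercial GSLB appliance.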

Since we are using KinD and a single host for our cluster, this section of the book is meant...

Summary

In this three-part chapter, you learned about exposing your workloads in Kubernetes to other cluster resources and users.

The first part of the chapter covered services and the multiple types that can be assigned. The three major service types are ClusterIP, NodePort, and LoadBalancer. Remember that the service type you select determines how your application is exposed.

In the second part, we introduced the two load balancer types, layer 4 and layer 7, each of which offers unique functionality for exposing workloads. Services are typically not the only objects used to provide access to applications running in the cluster. You will often use a ClusterIP service along with an Ingress controller to provide access to services at layer 7. Some applications may require additional communication that a layer 7 load balancer does not provide; these applications may need a layer 4 load balancer to expose their services to users. In the load balancing...

Questions

  1. How does a service know what pods should be used as endpoints for the service?
    1. By the service port
    2. By the namespace
    3. By the author
    4. By the selector label
  2. What kubectl command helps you to troubleshoot services that may not be working properly?
    1. kubectl get services <service name>
    2. kubectl get ep <service name>
    3. kubectl get pods <service name>
    4. kubectl get servers <service name>
  3. All Kubernetes distributions include support for services that use the LoadBalancer type.
    1. True
    2. False
  4. Which load balancer type supports all TCP/UDP ports and accepts traffic regardless of the packet's contents?
    1. Layer 7
    2. Cisco layer
    3. Layer 2
    4. Layer 4
  5. Without any added components, you can use multiple protocols using which of the following service types...