
Serverless Architectures with Kubernetes

By Onur Yılmaz, Sathsara Sarathchandra
About this book
Kubernetes has established itself as the standard platform for container management, orchestration, and deployment. By learning Kubernetes, you'll be able to design your own serverless architecture by implementing the function-as-a-service (FaaS) model. After an accelerated, hands-on overview of the serverless architecture and various Kubernetes concepts, you'll cover a wide range of development challenges faced by real-world developers, and explore various techniques to overcome them. You'll learn how to create production-ready Kubernetes clusters and run serverless applications on them. You'll see how Kubernetes platforms and serverless frameworks such as Kubeless, Apache OpenWhisk, and OpenFaaS provide the tooling to help you develop serverless applications on Kubernetes. You'll also learn how to select the appropriate framework for your upcoming project. By the end of this book, you'll have the skills and confidence to design your own serverless applications using the power and flexibility of Kubernetes.
Publication date: November 2019
Publisher: Packt
Pages: 474
ISBN: 9781838983277

 

1. Introduction to Serverless

Learning Objectives

By the end of this chapter, you will be able to:

  • Identify the benefits of serverless architectures
  • Create and invoke simple functions on a serverless platform
  • Create a cloud-native serverless function and package it as a container using Kubernetes
  • Create a Twitter Bot Backend application and package it in a Docker container

In this chapter, we will explain the serverless architecture, then create our first serverless function and package it as a container.

 

Introduction to Serverless

Cloud technology is in a state of constant transformation, aiming to create scalable, reliable, and robust environments. Every improvement in cloud technology seeks to improve both the end user experience and the developer experience. End users demand fast and robust applications that are reachable from everywhere in the world. At the same time, developers demand a better environment in which to design, deploy, and maintain their applications.

In the last decade, the journey of cloud technology started with cloud computing, where servers are provisioned in cloud data centers and applications are deployed on them. The transition to cloud data centers decreased costs and removed the burden of operating data centers. However, as billions of people access the internet and demand more services, scalability has become a necessity. To scale applications, developers have created smaller microservices that can scale independently of each other. Microservices are packaged into containers as the building blocks of software architectures, improving both the developer and the end user experience: they enhance maintainability for developers while offering high scalability to end users. However, the flexibility and scalability of microservices cannot keep up with enormous user demand. Today, for instance, millions of banking transactions take place daily, and millions of business-to-business requests are made to backend systems.

Finally, serverless started gaining attention as a way to create future-proof, ad hoc-scalable applications. Serverless designs focus on creating services even smaller than microservices, designed to last much longer into the future. These nanoservices, or functions, help developers create more flexible and easier-to-maintain applications. Serverless designs are also ad hoc-scalable: if you adopt a serverless design, your services naturally scale up or down with user requests. These characteristics have made serverless the latest big trend in the industry, and it is now shaping the cloud technology landscape. In this section, we will introduce serverless technology, looking at its evolution, origin, and use cases.

Before diving deeper into serverless design, let's understand the evolution of cloud technology. In bygone days, the expected process of deploying applications started with the procurement and deployment of hardware, namely servers. Following that, operating systems were installed on the servers, and then application packages were deployed. Finally, the actual code in application packages was executed to implement business requirements. These four steps are shown in Figure 1.1:

Figure 1.1: Traditional software development

Organizations started to outsource their data center operations to cloud providers to improve the scalability and utilization of servers. For instance, if you were developing an online shopping application, you first needed to buy servers, wait for their installation, operate them daily, and deal with potential problems caused by power, networking, and misconfiguration. It was difficult to predict server usage levels, and making huge investments in servers just to run applications was not feasible. Therefore, both start-ups and large enterprises started to outsource data center operations to cloud providers. This cleared away the problems related to the first step, hardware deployment, as shown in Figure 1.2:

Figure 1.2: Software development with cloud computing

With the start of virtualization in cloud computing, operating systems became virtualized so that multiple virtual machines (VMs) could run on the same bare-metal machine. This transition removed the second step, with service providers provisioning VMs, as shown in Figure 1.3. With multiple VMs running on the same hardware, the cost of running servers decreases and operational flexibility increases. In other words, the low-level concerns of software developers are cleared away, since both the hardware and the operating system are now someone else's problem:

Figure 1.3: Software development with virtualization

VMs enable running multiple instances on the same hardware. However, using VMs requires installing a complete operating system for every application. Even a basic frontend application needs its own operating system, which results in operating system management overhead and limited scalability. Application developers and modern application workloads require faster and simpler solutions, with better isolation than creating and managing VMs provides. Containerization solves this issue by running multiple instances of "containerized" applications on the same operating system. With this level of abstraction, problems related to operating systems are also removed, and containers are delivered as application packages, as illustrated in Figure 1.4. Containerization enables a microservices architecture, where software is designed as small, scalable services that interact with each other.

This architectural approach makes it possible to run modern applications such as collaborative spreadsheets in Google Drive, live streams of sports events on YouTube, video conferences on Skype, and many more:

Figure 1.4: Software development with containerization

The next architectural phenomenon, serverless, removes the burden of managing containers and focuses on running the actual code itself. The essential characteristic of serverless architecture is ad hoc scalability: applications are scaled up or down automatically when needed. They can also be scaled down to zero, which means no hardware, network, or operation costs. With serverless applications, all low-level concerns are outsourced and managed, and the focus is on the last step of traditional software development – running the code – as shown in Figure 1.5. In the following section, we will look at the origin and manifesto of serverless for a more in-depth introduction:

Figure 1.5: Software development with serverless

Serverless Origin and Manifesto

Serverless is a confusing term, since various definitions are used in conferences, books, and blogs. Although it theoretically means not having any servers, in practice it means leaving the responsibility for servers to third-party organizations. In other words, it means getting rid of server operations, not servers themselves. When you run serverless, someone else handles server procurement, installation, and operation. This decreases your costs, because you do not need to operate servers or data centers; furthermore, it lets you focus on the application logic that implements your core business function.

The first uses of the term serverless were seen in articles related to continuous integration around 2010. When it was first discussed, serverless referred to building and packaging applications on the servers of cloud providers. The dramatic increase in popularity came with the launch of Amazon Web Services (AWS) Lambda in 2014. In 2015, AWS introduced API Gateway for managing and triggering Lambda functions, providing a single entry point for multiple functions. Thus, serverless functions gained traction in 2014, and building serverless architecture applications with AWS API Gateway became possible in 2015.

However, the most definitive and complete explanation of serverless was presented in 2016, at the AWS developer conference, as the Serverless Compute Manifesto. It consists of eight strict rules that define the core ideas behind serverless architecture:

Note

Although it was discussed in various talks at the AWS Summit 2016 conference, the Serverless Compute Manifesto has no official website or documentation. A complete list of what the manifesto details can be seen in a presentation by Dr. Tim Wagner: https://www.slideshare.net/AmazonWebServices/getting-started-with-aws-lambda-and-the-serverless-cloud.

  • Functions as the building blocks: In serverless architecture, the building blocks of development, deployment, and scaling should be the functions. Each function should be deployed and scaled in isolation, independently of other functions.
  • No servers, VMs, or containers: The service provider should operate all computation abstractions for serverless functions, including servers, VMs, and containers. Users of serverless architecture should not need any further information about the underlying infrastructure.
  • No storage: Serverless applications should be designed as ephemeral workloads that have a fresh environment for every request. If they need to persist some data, they should use a remote service such as a Database as a Service (DbaaS).
  • Implicitly fault-tolerant functions: Both the serverless infrastructure and the deployed applications should be fault-tolerant in order to create a robust, scalable, and reliable application environment.
  • Scalability with the request: The underlying infrastructure, including the computation and network resources, should enable a high level of scalability. In other words, it is not an option for a serverless environment to fail to scale up when requests are rising.
  • No cost for idle time: Serverless providers should only incur costs when serverless workloads are running. If your function has not received an HTTP request for a long period, you should not pay any money for the idleness.
  • Bring Your Own Code (BYOC): Serverless architectures should enable the running of any code developed and packaged by end users. If you are a Node.js or Go developer, it should be possible for you to deploy your function in your preferred language to the serverless infrastructure.
  • Instrumentation: Logs of the functions and the metrics collected over the function calls should be available to the developers. This makes it possible to debug and solve problems related to functions. Since they are already running on remote servers, instrumentation should not create any further burden in terms of analyzing potential problems.

The original manifesto introduced some best practices and limitations; however, as cloud technology evolves, the world of serverless applications evolves. This evolution will make some rules from the manifesto obsolete and will add new rules. In the following section, use cases of serverless applications are discussed to explain how serverless is adopted in the industry.

Serverless Use Cases

Serverless applications and designs seem to be avant-garde technologies; however, they are highly adopted in the industry for reliable, robust, and scalable applications. Any traditional application that is running on VMs, Docker containers, or Kubernetes can be designed to run serverless if you want the benefits of serverless designs. Some of the well-known use cases of serverless architectures are listed here:

  • Data processing: Interpreting, analyzing, cleansing, and formatting data are essential steps in big data applications. With the scalability of serverless architectures, you can quickly filter millions of photos and count the number of people in them, for instance, without buying any pricey servers. According to a case report (https://azure.microsoft.com/en-in/blog/a-fast-serverless-big-data-pipeline-powered-by-a-single-azure-function/), it is possible to create a serverless application to detect fraudulent transactions from multiple sources with Azure Functions. To handle 8 million data processing requests, serverless platforms would be the appropriate choice, with their ad hoc scalability.
  • Webhooks: Webhooks are HTTP API calls to third-party services to deliver real-time data. Instead of having servers up and running for webhook backends, serverless infrastructures can be utilized with lower costs and less maintenance.
  • Check-out and payment: It is possible to create shopping systems as serverless applications where each core functionality is designed as an isolated component. For instance, you can integrate the Stripe API as a remote payment service and use the Shopify service for cart management in your serverless backend.
  • Real-time chat applications: Real-time chat applications integrated into Facebook Messenger, Telegram, or Slack, for instance, are very popular for handling customer operations, distributing news, tracking sports results, or just for entertainment. It is possible to create ephemeral serverless functions to respond to messages or take actions based on message content. The main advantage of serverless for real-time chat is that it can scale when many people are using it. It could also scale to zero and cost no money when there is no one using the chat application.

These use cases illustrate that serverless architectures can be used to design any modern application. It is also possible to move some parts of monolithic applications and convert them into serverless functions. If your current online shop is a single Java web application packaged as a JAR file, you can separate its business functions and convert them into serverless components. The dissolution of giant monoliths into small serverless functions helps to solve multiple problems at once. First of all, scalability will never be an issue for the serverless components of your application. For instance, if you cannot handle a high amount of payments during holidays, a serverless platform will automatically scale up the payment functions with the usage levels. Secondly, you do not need to limit yourself to the programming language of the monolith; you can develop your functions in any programming language. For instance, if your database clients are better implemented with Node.js, you can code the database operations of your online shop in Node.js.

Finally, you can reuse the logic implemented in your monolith since now it is a shared serverless service. For instance, if you separate the payment operations of your online shop and create serverless payment functions, you can reuse these payment functions in your next project. All these benefits make it appealing for start-ups as well as large enterprises to adopt serverless architectures. In the following section, serverless architectures will be discussed in more depth, looking specifically at some implementations.

Serverless is not the right choice for every workload, however; cases where it may not be preferable include:

  • Applications with high latency
  • When observability and metrics are critical for business
  • When vendor lock-in and ecosystem dependencies are an issue
 

Serverless Architecture and Function as a Service (FaaS)

Serverless is a cloud computing design where cloud providers handle the provisioning of servers. In the previous section, we discussed how operational concerns are layered and handed over. In this section, we will focus on serverless architectures and application design using serverless architecture.

In traditional software architecture, all of the components of an application are installed on servers. For instance, let's assume that you are developing an e-commerce website in Java and your product information is stored in MySQL. In this case, the frontend, backend, and database are installed on the same server. End users are expected to reach the shopping website with the IP address of the server, and thus an application server such as Apache Tomcat should be running on the server. In addition, user information and security components are also included in the package, which is installed on the server. A monolithic e-commerce application is shown in Figure 1.6, with all four parts, namely the frontend, backend, security, and database:

Figure 1.6: Traditional software architecture

Microservices architecture focuses on creating a loosely coupled and independently deployable collection of services. For the same e-commerce system, you would still have frontend, backend, database, and security components, but they would be isolated units. Furthermore, these components would be packaged as containers and would be managed by a container orchestrator such as Kubernetes. This enables the installing and scaling of components independently since they are distributed over multiple servers. In Figure 1.7, the same four components are installed on the servers and communicating with each other via Kubernetes networking:

Figure 1.7: Microservices software architecture

Microservices are deployed to the servers, which are still managed by the operations teams. With the serverless architecture, the components are converted into third-party services or functions. For instance, the security of the e-commerce website could be handled by an Authentication-as-a-Service offering such as Auth0. AWS Relational Database Service (RDS) can be used as the database of the system. The best option for the backend logic is to convert it into functions and deploy them into a serverless platform such as AWS Lambda or Google Cloud Functions. Finally, the frontend could be served by storage services such as AWS Simple Storage Service (S3) or Google Cloud Storage.

With a serverless design, it is only required to define these services for you to have scalable, robust, and managed applications running in harmony, as shown in Figure 1.8:

Note

Auth0 is a platform for providing authentication and authorization for web, mobile, and legacy applications. In short, it provides authentication and authorization as a service, where you can connect any application written in any language. Further details can be found on its official website: https://auth0.com.

Figure 1.8: Serverless software architecture

Starting from a monolith architecture and dissolving it first into microservices and then into serverless components is beneficial for multiple reasons:

  • Cost: Serverless architecture helps to decrease costs in two critical ways. The first is that the management of the servers is outsourced, and the second is that it only costs money when the serverless applications are in use.
  • Scalability: If an application is expected to grow, the current best choice is to design it as a serverless application since that removes the scalability constraints related to the infrastructure.
  • Flexibility: When the scope of deployable units is decreased, serverless provides more flexibility to innovate, choose better programming languages, and manage with smaller teams.

These dimensions, and how they vary between software architectures, are visualized in Figure 1.9:

Figure 1.9: Benefits of the transition from traditional to serverless

When you start with a traditional software development architecture, the transition to microservices increases scalability and flexibility. However, it does not directly decrease the cost of running the applications since you are still dealing with the servers. Further transition to serverless improves both scalability and flexibility while decreasing the cost. Therefore, it is essential to learn about and implement serverless architectures for future-proof applications. In the following section, the implementation of serverless architecture, namely Function as a Service (FaaS), will be presented.

Function as a Service (FaaS)

FaaS is the most popular and widely adopted implementation of serverless architecture. All major cloud providers have FaaS products, such as AWS Lambda, Google Cloud Functions, and Azure Functions. As its name implies, the unit of deployment and management in FaaS is the function. Functions in this context are no different from any other function in any other programming language. They are expected to take some arguments and return values to implement business needs. FaaS platforms handle the management of servers and make it possible to run event-driven, scalable functions. The essential properties of a FaaS offering are these:

  • Stateless: Functions are designed to be stateless and ephemeral operations where no file is saved to disk and no caches are managed. At every invocation of a function, it starts quickly with a new environment, and it is removed when it is done.
  • Event-triggered: Functions are designed to be triggered by events such as cron time expressions, HTTP requests, message queues, and database operations. For instance, it is possible to call the startConversation function via an HTTP request when a new chat is started. Likewise, it is possible to launch the syncUsers function when a new user is added to the database.
  • Scalable: Functions are designed to run as much as needed in parallel so that every incoming request is answered and every event is covered.
  • Managed: Functions are governed by their platform so that the servers and underlying infrastructure are not a concern for FaaS users.

These properties of functions are covered by cloud providers' offerings, such as AWS Lambda, Google Cloud Functions, and Azure Functions, and by on-premises offerings, such as Kubeless, Apache OpenWhisk, and OpenFaaS. Given its high popularity, the term FaaS is mostly used interchangeably with the term serverless. In the following exercise, we will create a function to handle HTTP requests and illustrate how a serverless function should be developed.

Exercise 1: Creating an HTTP Function

In this exercise, we will create an HTTP function that could form part of a serverless platform and then invoke it via an HTTP request. To execute the steps of the exercise, you will use Docker, a text editor, and a terminal.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise1.

To successfully complete the exercise, we need to ensure the following steps are executed:

  1. Create a file named function.go with the following content in your favorite text editor:
    package main
    import (
        "fmt"
        "net/http"
    )
    func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello Serverless World!")
    }

    In this file, we have created an actual function handler to respond when this function is invoked.

  2. Create a file named main.go with the following content:
    package main
    import (
        "fmt"
        "net/http"
    )
    func main() {
        fmt.Println("Starting the serverless environment..")
        http.HandleFunc("/", WelcomeServerless)
        fmt.Println("Function handlers are registered.")
        http.ListenAndServe(":8080", nil)
    }

    In this file, we have created the environment to serve this function. In general, this part is expected to be handled by the serverless platform.

  3. Start a Go development environment with the following command in your terminal:
    docker run -it --rm -p 8080:8080 -v "$(pwd)":/go/src --workdir=/go/src golang:1.12.5

    With that command, a shell prompt will start inside a Docker container for Go version 1.12.5. In addition, port 8080 of the host system is mapped to the container, and the current working directory is mapped to /go/src. You will be able to run commands inside the started Docker container:

    Figure 1.10: The Go development environment inside the container
  4. Start the function handlers with the following command in the shell prompt opened in step 3: go run *.go.

    With the start of the applications, you will see the following lines:

    Figure 1.11: The start of the function server

    These lines indicate that the main function inside the main.go file is running as expected.

  5. Open http://localhost:8080 in your browser:
    Figure 1.12: The WelcomeServerless output

    The message displayed on the web page reveals that the WelcomeServerless function is successfully invoked via the HTTP request and the response is retrieved.

  6. Press Ctrl + C to exit the function handler and then type exit to stop the container:
    Figure 1.13: Exiting the function handler and container

With this exercise, we demonstrated how we can create a simple function. In addition, the serverless environment was presented to show how functions are served and invoked. In the following section, an introduction to Kubernetes and the serverless environment is given to connect the two cloud computing phenomena.

 

Kubernetes and Serverless

Serverless and Kubernetes arrived on the cloud computing scene at about the same time, in 2014: AWS launched serverless computing with AWS Lambda, while Google open-sourced Kubernetes, building on its long and successful history in container management. Organizations started to create AWS Lambda functions for their short-lived temporary tasks, and many start-ups focused on products running on serverless infrastructure. On the other hand, Kubernetes gained dramatic adoption in the industry and became the de facto container management system. It enables running both stateless applications, such as web frontends and data analysis tools, and stateful applications, such as databases, inside containers. The containerization of applications and microservices architectures have proven to be effective for both large enterprises and start-ups.

Therefore, running microservices and containerized applications is a crucial factor for successful, scalable, and reliable cloud-native applications. Also, the following two essential elements strengthen the connection between Kubernetes and serverless architectures:

  • Vendor lock-in: Kubernetes abstracts away the cloud provider and creates a managed environment for running serverless workloads. In other words, it is not straightforward to move your AWS Lambda functions to Google Cloud Functions if you want to change providers next year. However, if you use a Kubernetes-backed serverless platform, you can move quickly between cloud providers or even on-premises systems.
  • Reuse of services: As the mainstream container management system, Kubernetes likely already runs most of the workloads in your cloud environment. It offers the opportunity to deploy serverless functions side by side with existing services, making it easier to install, connect, operate, and manage both serverless and containerized applications.

Cloud computing and deployment strategies are always evolving to create more developer-friendly environments with lower costs. Kubernetes and containerization have already won the market and the hearts of developers, so much so that cloud computing without Kubernetes is unlikely to be seen for a very long time. By providing the same benefits, serverless architectures are gaining popularity; however, this does not pose a threat to Kubernetes. On the contrary, serverless applications will make containerization more accessible, and consequently, Kubernetes will benefit. Therefore, it is essential to learn how to run serverless architectures on Kubernetes to create future-proof, cloud-native, scalable applications. In the following exercise, we will combine functions and containers and package our functions as containers.

As a quick self-check, consider which platform suits which kind of workload. Possible answers:

  • Serverless – data preparation
  • Serverless – ephemeral API operations
  • Kubernetes – databases
  • Kubernetes – server-related operations

Exercise 2: Packaging an HTTP Function as a Container

In this exercise, we will package the HTTP function from Exercise 1 as a container to be a part of a Kubernetes workload. Also, we will run the container and trigger the function via its container.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise2.

To successfully complete the exercise, we need to ensure the following steps are executed:

  1. Create a file named Dockerfile in the same folder as the files from Exercise 1:
    FROM golang:1.12.5-alpine3.9 AS builder
    ADD . .
    RUN go build *.go
    FROM alpine:3.9
    COPY --from=builder /go/function ./function
    RUN chmod +x ./function
    ENTRYPOINT ["./function"]

    In this multi-stage Dockerfile, the function is built inside the golang:1.12.5-alpine3.9 container. Then, the binary is copied into the alpine:3.9 container as the final application package.

  2. Build the Docker image with the following command in the terminal: docker build . -t hello-serverless.

    Each line of the Dockerfile is executed sequentially and, with the last step, the Docker image is built and tagged: Successfully tagged hello-serverless:latest:

    Figure 1.14: The build of the Docker container
  3. Start a Docker container from the hello-serverless image with the following command in your Terminal: docker run -it --rm -p 8080:8080 hello-serverless.

    With that command, an instance of the Docker image is started, with port 8080 of the host system mapped to the container. Furthermore, the --rm flag will remove the container when it exits. The log lines indicate that the container of the function is running as expected:

    Figure 1.15: The start of the function container
  4. Open http://localhost:8080 in your browser:
    Figure 1.16: The WelcomeServerless output

    It reveals that the WelcomeServerless function running in the container was successfully invoked via the HTTP request, and the response is retrieved.

  5. Press Ctrl + C to exit the container:
    Figure 1.17: Exiting the container

In this exercise, we saw how we can package a simple function as a container. In addition, the container was started and the function was triggered with the help of Docker's networking capabilities. In the following exercise, we will implement a parameterized function to show how to pass values to functions and return different responses.

Exercise 3: Parameterized HTTP Functions

In this exercise, we will convert the WelcomeServerless function from Exercise 2 into a parameterized HTTP function. Also, we will run the container and trigger the function via its container.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise3.

To successfully complete the exercise, we need to ensure that the following steps are executed:

  1. Change the contents of function.go from Exercise 2 to the following:
    package main
    import (
    	"fmt"
    	"net/http"
    )
    func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
        names, ok := r.URL.Query()["name"]
        if ok && len(names[0]) > 0 {
            fmt.Fprintf(w, "%s, Hello Serverless World!", names[0])
        } else {
            fmt.Fprintf(w, "Hello Serverless World!")
        }
    }

    In the new version of the WelcomeServerless function, we now take URL parameters and return responses accordingly.
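    Both branches of the handler can be verified in memory with net/http/httptest before rebuilding the image. This is a sketch for illustration, not part of the exercise files; the invoke helper is a hypothetical name:

    ```go
    package main

    import (
    	"fmt"
    	"io/ioutil"
    	"net/http"
    	"net/http/httptest"
    )

    // WelcomeServerless is the parameterized handler from function.go.
    func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
    	names, ok := r.URL.Query()["name"]
    	if ok && len(names[0]) > 0 {
    		fmt.Fprintf(w, "%s, Hello Serverless World!", names[0])
    	} else {
    		fmt.Fprintf(w, "Hello Serverless World!")
    	}
    }

    // invoke runs the handler against an in-memory request and
    // returns the response body.
    func invoke(path string) string {
    	rec := httptest.NewRecorder()
    	WelcomeServerless(rec, httptest.NewRequest(http.MethodGet, path, nil))
    	body, _ := ioutil.ReadAll(rec.Result().Body)
    	return string(body)
    }

    func main() {
    	fmt.Println(invoke("/"))          // default greeting
    	fmt.Println(invoke("/?name=Ece")) // personalized greeting
    }
    ```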

  2. Build the Docker image with the following command in your terminal: docker build . -t hello-serverless.

    Each line of the Dockerfile is executed sequentially and, with the last step, the Docker image is built and tagged: Successfully tagged hello-serverless:latest:

    Figure 1.18: The build of the Docker container
  3. Start a Docker container from the hello-serverless image with the following command in the terminal: docker run -it --rm -p 8080:8080 hello-serverless.

    With that command, the function handlers will start on port 8080 of the host system:

    Figure 1.19: The start of the function container
  4. Open http://localhost:8080 in your browser:
    Figure 1.20: The WelcomeServerless output

    It reveals the same response as in the previous exercise. If we provide URL parameters, we should get personalized Hello Serverless World messages.

  5. Change the address to http://localhost:8080?name=Ece in your browser and reload the page. We are now expecting to see a personalized Hello Serverless World message with the name provided in URL parameters:
    Figure 1.21: Personalized WelcomeServerless output
  6. Press Ctrl + C to exit the container:
    Figure 1.22: Exiting the container

In this exercise, we showed how a generic function can serve different parameters: a single deployed function returned personalized messages based on input values. In the following activity, a more complex function will be created and managed as a container to show how functions are implemented in real life.

Activity 1: Twitter Bot Backend for Bike Points in London

The aim of this activity is to create a real-life function for a Twitter bot backend. The Twitter bot will be used to search for available bike points in London and the number of available bikes in the corresponding locations. The bot will answer in a natural language form; therefore, your function will take input for the street name or landmark and output a complete human-readable sentence.

Transportation data for London is publicly available and accessible via the Transport for London (TFL) Unified API (https://api.tfl.gov.uk). You are required to use the TFL API and run your functions inside containers.
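As a starting point for FindBikes, the response-formatting part might be sketched as follows. The JSON field names used here (commonName, and the NbBikes entry inside additionalProperties) reflect the shape of TFL BikePoint responses as an assumption to verify against the live API; the describeBikes function and bikePoint type are hypothetical names, not from the activity solution:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// bikePoint models only the fields we need from a TFL BikePoint
// entry (assumed shape; check the live API response).
type bikePoint struct {
	CommonName           string `json:"commonName"`
	AdditionalProperties []struct {
		Key   string `json:"key"`
		Value string `json:"value"`
	} `json:"additionalProperties"`
}

// describeBikes turns a raw BikePoint search result into a
// human-readable sentence, covering the three response cases
// shown in Figures 1.24-1.26.
func describeBikes(data []byte) string {
	var points []bikePoint
	if err := json.Unmarshal(data, &points); err != nil || len(points) == 0 {
		return "I could not find any bike points around there."
	}
	for _, p := range points {
		for _, prop := range p.AdditionalProperties {
			if prop.Key == "NbBikes" {
				if n, err := strconv.Atoi(prop.Value); err == nil && n > 0 {
					return fmt.Sprintf("There are %d bikes available at %s.", n, p.CommonName)
				}
			}
		}
	}
	return "I found bike points, but no bikes are available right now."
}

func main() {
	sample := []byte(`[{"commonName":"Soho Square","additionalProperties":[{"key":"NbBikes","value":"7"}]}]`)
	fmt.Println(describeBikes(sample))
}
```

In the full activity, the input bytes would come from an HTTP GET to the TFL BikePoint search endpoint, and describeBikes would be called from an HTTP handler as in the earlier exercises.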

Once completed, you will have a container running for the function:

Figure 1.23: The running function inside the container

When you query it via its HTTP REST API, the function should return sentences similar to the following when bike points with available bikes are found:

Figure 1.24: Function response when bikes are available

When there are no bike points found or no bikes are available at those locations, the function will return a response similar to the following:

Figure 1.25: Function response when a bike point is located but no bike is found

The function may also provide the following response:

Figure 1.26: Function response when no bike point or bike is found

Execute the following steps to complete this activity:

  1. Create a main.go file to register function handlers, as in Exercise 1.
  2. Create a function.go file for the FindBikes function.
  3. Create a Dockerfile for building and packaging the function, as in Exercise 2.
  4. Build the container image with Docker commands.
  5. Run the container image as a Docker container and make the ports available from the host system.
  6. Test the function's HTTP endpoint with different queries.
  7. Exit the container.

    Note

    The files main.go, function.go and Dockerfile can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Activity1.

    The solution for the activity can be found on page 372.

In this activity, we built the backend of a Twitter bot. We started by defining main and FindBikes functions. Then we built and packaged this serverless backend as a Docker container. Finally, we tested it with various inputs to find the closest bike station. With this real-life example, the background operations of a serverless platform and how to write serverless functions were illustrated.

 

Summary

In this chapter, we first described the journey from traditional to serverless software development. We discussed how software development has changed over the years to create a more developer-friendly environment. Following that, we presented the origin of serverless technology and its official manifesto. Since serverless is a popular term in the industry, defining some rules helps to design better serverless applications that integrate easily into various platforms. We then listed use cases for serverless technology to illustrate how serverless architectures can be used to create any modern application.

Following an introduction to serverless, FaaS was explored as an implementation of serverless architectures. We showed how applications are designed in traditional, microservices, and serverless designs. In addition, the benefits of the transition to serverless architectures were discussed in detail.

Finally, Kubernetes and serverless technologies were discussed to show how they support each other. As the mainstream container management system, Kubernetes was presented, which involved looking at the advantages of running serverless platforms with it. Containerization and microservices are highly adopted in the industry, and therefore running serverless workloads as containers was covered, with exercises. Finally, a real-life example of functions as a backend for a Twitter bot was explored. In this activity, functions were packaged as containers to show the relationship between microservices-based, containerized, and FaaS-backed designs.

In the next chapter, we will be introducing serverless architecture in the cloud and working with cloud services.

About the Authors
  • Onur Yılmaz

    Onur Yılmaz is a senior software engineer in a multinational enterprise software company. He is a Certified Kubernetes Administrator (CKA) and works on Kubernetes and cloud management systems. He is a keen supporter of cutting-edge technologies including Docker, Kubernetes, and cloud-native applications. He has one master's and two bachelor's degrees in the engineering field.

  • Sathsara Sarathchandra

    Sathsara Sarathchandra is a DevOps engineer, experienced in building and managing Kubernetes-based production deployments both in the cloud and on-premises. He has over 8 years of experience working in several companies, ranging from small start-ups to enterprises. He is a Certified Kubernetes Administrator (CKA) and a Certified Kubernetes Application Developer (CKAD). He holds a master's degree in business administration and a bachelor's degree in computer science.
