Docker for Serverless Applications

By Chanwit Kaewkasi
About this book
Serverless applications have gained a lot of popularity among developers and are currently a buzzword in the tech market. Docker and serverless are two terms that go hand in hand. This book will start by explaining serverless and Function-as-a-Service (FaaS) concepts and why they are important. Then, it will introduce containerization and how Docker fits into the serverless ideology. It will explore the architectures and components of three major Docker-based FaaS platforms, how to deploy them, and how to use their CLIs. The book will then discuss how to set up and operate a production-grade Docker cluster. We will cover the concepts of FaaS frameworks with practical use cases, followed by deploying and orchestrating these serverless systems using Docker. Finally, we will explore advanced topics and prototypes for FaaS architectures in the last chapter. By the end of this book, you will be in a position to build and deploy your own FaaS platform using Docker.
Publication date: April 2018
Publisher: Packt
Pages: 250
ISBN: 9781788835268

 

Chapter 1. Serverless and Docker

When talking about containers, most of us already know how to pack an application into a container as a deployment unit. Docker allows us to deploy applications in its de facto standard format to virtually anywhere, ranging from our laptops and QA clusters to customer sites and public clouds, as shown in the following diagram:

Figure 1.1: Deploying a Docker container to various infrastructures and platforms
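
As a quick illustration, the same image can be pulled and run unchanged wherever a Docker Engine is available; the image name below is only a hypothetical placeholder:

$ docker pull myorg/myapp:1.0                   # hypothetical image from a registry
$ docker run -d -p 8080:8080 myorg/myapp:1.0    # identical command on a laptop, a QA node, or a cloud VM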

Running Docker containers on public clouds is considered normal these days. We already gain benefits such as starting cloud instances on demand with pay-as-you-go billing. Without waiting for hardware purchases, we can also move faster, using Agile methods and a continuous delivery pipeline to optimize our resources.

According to a Docker report, the total cost of ownership (TCO) of one of their customers was cut by 66% when they used Docker to migrate existing applications to the cloud. Not only can the TCO be dramatically reduced, but companies using Docker can also accelerate their time to market from months to days. This is a huge win.

Deploying containers to cloud infrastructures, such as AWS, Google Cloud, or Microsoft Azure, already makes things simpler. Cloud infrastructures eliminate the need for organizations to buy their own hardware and to have a dedicated team for maintaining it.

However, organizations still need people, such as architects, to take care of site reliability and scalability even when they use public cloud infrastructure. Some of these people are known as site reliability engineers (SREs).

In addition, organizations also need to take care of system-level packages and dependencies. They need to patch applications and the OS kernel themselves because the software stack is constantly changing. In many scenarios, these teams must scale their clusters up to serve unexpected peaks in load. The engineers also need to do their best to scale the clusters back down, when possible, to reduce cloud costs under the pay-as-you-go model.

Developers and engineering teams always work hard to deliver a great user experience and site availability. While doing so, over-provisioning on-demand instances, or underutilizing them, can be costly. According to an AWS white paper, https://d0.awsstatic.com/whitepapers/optimizing-enterprise-economics-serverless-architectures.pdf, as many as 85% of provisioned machines are underutilized.

Serverless computing platforms, such as AWS Lambda, Google Cloud Functions, Azure Functions, and IBM Cloud Functions, are designed to address these overprovisioning and underutilization problems.

The following topics will be covered in this chapter:

  • Serverless
  • The common architecture of a serverless FaaS
  • Serverless/FaaS use cases
  • Hello world, the FaaS/Docker way
 

What is serverless?


Try to imagine that we live in a world fully driven by intelligent software.

It would be a world where we could develop software without doing anything. Just say what kind of software we would like to run and, minutes later, it would be there somewhere on the internet, serving many users. We would pay only for the number of requests made by our users. Well, that kind of world is too unreal.

Now, let's be more realistic and think of a world where we still need to develop software ourselves, but at least we do not need to take care of any server provisioning and management. This is, for now, the best world for developers: we can deploy our applications to reach millions of users without taking care of any server, or even knowing where those servers are. The only thing we actually want is to create an application that addresses the needs of the business at scale, at an affordable price. Serverless platforms have been created to address these problems.

For developers and fast-growing businesses, serverless platforms look like a huge win. But what exactly are they?

The relationship between serverless and FaaS

The following diagram illustrates the position of event-driven programming, FaaS, and serverless FaaS, where serverless FaaS is the intersection area between FaaS and serverless:

Figure 1.2: A Venn diagram illustrating the relationship between serverless and FaaS

Serverless is a paradigm shift that frees developers from worrying about server provisioning and operations. Billing is pay-per-request. Also, many useful services are available on the public cloud for us to choose from, connect together, and use to solve business problems and get the job done.

Applications in a serverless architecture typically use third-party services for jobs such as authentication, database systems, or file storage. It is not necessary for serverless applications to use these third-party services, but architecting the application this way takes full advantage of cloud-based serverless platforms. The frontends in this kind of architecture are usually thick and powerful, such as single-page applications or mobile applications.

The execution engine for this serverless shift is a Function as a Service (FaaS) platform. A FaaS platform is a computing engine that allows us to write a simple, self-contained, single-purpose function to process or compute a task. The compute unit of a FaaS platform is a function, which is recommended to be stateless. This stateless property makes functions fully manageable and scalable by the platform.
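
Because a stateless function holds nothing between invocations, the platform can add or remove instances freely. As a small, hedged illustration, on a Swarm-based platform such as OpenFaaS (deployed later in this chapter), scaling a function is just a matter of changing its replica count; the service name func_echoit assumes the default stack used there:

$ docker service scale func_echoit=5    # run five identical instances of the stateless function
$ docker service scale func_echoit=1    # scale back down when the load drops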

A FaaS platform does not necessarily run in a serverless environment such as AWS Lambda; there are many FaaS implementations, such as OpenFaaS, the Fn project, and OpenWhisk, that allow us to deploy and run FaaS on our own hardware. If a FaaS platform runs in a serverless environment, it is called serverless FaaS. For example, OpenWhisk running locally is simply our FaaS platform, but when it runs on IBM Cloud as IBM Cloud Functions, it is a serverless FaaS.

Every FaaS platform is designed around the event-driven programming model so that it can connect efficiently to other services on the public cloud. Together, the asynchronous event model and the stateless nature of functions make serverless FaaS an ideal model for next-generation computing.

The disadvantages of serverless FaaS

But what are the drawbacks of this approach? They are as follows:

  • We basically do not own the servers. The serverless model is not suitable when we need fine-grained control over our infrastructure.
  • Serverless FaaS has many limitations, notably time limits on function execution and memory limits for each function instance. It also imposes a fixed and specific way of developing applications, which can make it hard to migrate existing systems directly to FaaS.
  • It is impossible to take full advantage of serverless platforms on private or hybrid infrastructure if we are not allowed to migrate all workloads out of the organization. One of the real benefits of serverless architectures is the convenient public services available on the cloud.

Docker to the rescue

This book discusses the balance between FaaS on our own infrastructure and serverless FaaS. We try to simplify and unify the FaaS deployment model by choosing three major FaaS platforms that allow us to deploy Docker containers as functions.

With Docker containers as deployment units (functions), Docker as a development tool, and Docker as the orchestration engine and networking layer, we can develop serverless applications and deploy them on our available hardware, on our own private cloud infrastructure, or on a hybrid cloud that mixes our hardware with the public cloud's hardware.

One of the most important points is that it is easy enough to take care of this kind of infrastructure using a small team of developers with Docker skills.

Look back at the previous Figure 1.2. If you have picked up the clues while reading this chapter, try to guess what will be discussed in this book: where should we be in that diagram? The answer is at the end of this chapter.

 

Common architecture of a serverless FaaS


Before getting into the other technical chapters, the common architecture of at least six serverless FaaS platforms, surveyed and studied during the writing of this book, is presented in the following diagram. It is a distilled overview of the existing FaaS platforms, and a recommended architecture if you want to create a new one:

Figure 1.3: A block diagram describing the common architecture for FaaS platforms

System layers

A description of the architecture from bottom to top is as follows:

  • We have some physical or virtual machines. These machines could be on a public or private cloud. Some of them may be physical boxes running behind an organization's firewall. They may be mixed together as a hybrid infrastructure.
  • The next layer is the Operating System and, of course, its kernel. We need an OS with a modern kernel that supports container isolation, such as Linux, or one that is at least compatible with runC. Windows and Windows Server 2016 have their own Hyper-V-based isolation that is compatible with Docker.
  • The next layer in the architecture is the Container Runtime (System-Level). We emphasize that it is the system-level container runtime as it is not for running FaaS functions directly. This layer is responsible for provisioning the cluster.
  • Next is the optional container orchestration engine, or Container Orchestrator, layer. This layer is typically Docker Swarm or Kubernetes. We use Docker Swarm in this book, but you may find that some of the FaaS platforms presented here do not use any kind of orchestration; Docker alone with container networking is enough for a FaaS platform to get up and running effectively.
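
For example, once Swarm mode is enabled (as we do later in this chapter), this layer can be inspected and prepared with standard Docker commands; the overlay network name below is only an example:

$ docker node ls                                                  # list the nodes that form the cluster
$ docker network create --driver overlay --attachable faas-net    # container networking for functions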

FaaS layers

Now, we will discuss the actual FaaS layers. We will go from left to right:

  • The frontmost component of the whole architecture is the FaaS Gateway. In some implementations the gateway is optional, but in many it serves HTTPS and caches some static content, such as the platform's UI. Additional gateway instances help improve throughput. The gateway is usually a stateless, HTTP-based reverse proxy, so this component is easy to scale out (a scale-out sketch follows this list).
  • The Initiator is one of the most important components of a FaaS platform. It is responsible for initiating the real invocation request to the rest of the platform. In OpenWhisk, for example, this component is called the controller. In Fn, a part of the Fn server acts as the Initiator.
  • The Message Bus is the messaging backbone of a FaaS platform. Architectures that lack this component will have difficulty properly implementing asynchronous calls, or the retry pattern that makes the platform robust. The message bus decouples initiators from executors.
  • The Executor is the component that performs the real function invocation. It connects to its own container runtime (application-level) to start the real sequence of function execution. All results and logs are written to the central log storage.
  • Log Storage is the platform's single source of truth. It should be designed to store almost everything, ranging from the function activities to the error logs of each invocation.
  • Container Runtime (application level) is a component responsible for starting the function container. We simply use Docker and its underlying engine as the runtime component in this book.
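
As noted for the gateway above, a stateless reverse proxy is straightforward to scale out. On a Swarm-based deployment this could look like the following; the service name func_gateway assumes the default OpenFaaS stack deployed later in this chapter:

$ docker service scale func_gateway=3    # run three gateway replicas behind Swarm's routing mesh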
 

Serverless/FaaS use cases


Serverless/FaaS is a generic computing model, so it is possible to implement virtually any kind of workload with this programming paradigm. Use cases for serverless/FaaS range from APIs for normal web applications, RESTful backends for mobile applications, and functions for log or video processing, to backends for WebHook-based systems and stream data processing programs:

Figure 1.4: The block diagram of the demo project

In Chapter 8, Putting Them all Together, we will discuss a system, as shown in the previous diagram, with the following use cases:

  • APIs for a WebHook-based system: In the previous diagram, you may have spotted the Backend for UI. This system allows us to define a WebHook, which will be implemented as a FaaS function using one of the frameworks discussed in a later chapter.
  • APIs to wrap around a legacy system: In the upper right-hand corner of the previous diagram, we will find a set of functions connecting to a headless Chrome (a fully functional, running Google Chrome instance). The functions there wrap a set of commands that instruct Google Chrome to work on a legacy system for us.
  • APIs as abstractions for other services: In the lower right-hand corner there are two simple blocks. The first one is a function running on a FaaS platform connecting to the second one, Mock Core Bank System, which is a more complex REST API. This part of the system demonstrates how a FaaS function could be used as an abstraction to simplify the interface of a complex system.
  • Stream data processing: We will also implement a data processing agent, an event listener, which listens to an event source (you may spot the Ethereum logo connected by a circle from the left). This agent will listen to the data stream from the source and then call a function running on a FaaS platform.
 

Hello world, the FaaS/Docker way


This book covers three major FaaS-on-Docker frameworks, so it would not be fair if I chose a specific one for the hello world program in this first chapter. I will let you pick the one you prefer.

The following is the common setup on a Linux machine. For Mac or Windows users, please skip this step and download Docker for Mac, or Docker for Windows:

$ curl -sSL https://get.docker.com | sudo sh

If you choose to go with OpenFaaS in this chapter, you can simplify this setup process by using Play with Docker (https://labs.play-with-docker.com/), which automatically installs OpenFaaS on a single-node Docker Swarm.

When we get Docker installed, just initialize Swarm to make our single-node cluster ready to run:

$ docker swarm init --advertise-addr=eth0

If the previous command fails, try changing the network interface name to match one of yours. If it still fails, just put one of the machine's IP addresses there.
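
For example, to find a suitable interface name or IP address on most Linux machines (a hedged example; interface names and addresses will differ on your machine):

$ ip addr show                                      # list network interfaces and their addresses
$ docker swarm init --advertise-addr 192.168.1.10   # fall back to an explicit IP address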

If everything is set up successfully, let's start the series of hello world programs on various FaaS platforms.

Hello OpenFaaS

We will try the echoit function to say hello world with OpenFaaS. First, clone the project from https://github.com/openfaas/faas with a depth of one to make the clone process quicker:

$ git clone --depth=1 https://github.com/openfaas/faas

Then, change directory into faas and deploy the default OpenFaaS stack using the following commands:

$ cd faas
$ docker stack deploy -c docker-compose.yml func
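
The stack takes a little while to come up. To see whether all the services are ready before invoking anything, the standard Swarm commands can be used (a quick sanity check; it is not part of the official OpenFaaS instructions):

$ docker stack services func    # wait until every service reports its desired number of replicas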

Once every service is up, we can do hello world with the curl command:

$ curl -d "hello world." -v http://localhost:8080/function/func_echoit
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> POST /function/func_echoit HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 12
> Content-Type: application/x-www-form-urlencoded
> 
* upload completely sent off: 12 out of 12 bytes
< HTTP/1.1 200 OK
< Content-Length: 12
< Content-Type: application/x-www-form-urlencoded
< Date: Fri, 23 Mar 2018 16:37:30 GMT
< X-Call-Id: 866c9294-e243-417c-827c-fe0683c652cd
< X-Duration-Seconds: 0.000886
< X-Start-Time: 1521823050543598099
< 
* Connection #0 to host localhost left intact
hello world.
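
If you also have the OpenFaaS CLI (faas-cli) installed, which is not required for this chapter, the same function can be invoked without curl; this is just an optional alternative:

$ echo -n "hello world." | faas-cli invoke func_echoit --gateway http://127.0.0.1:8080   # send the body via stdin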

After playing around with it, we can use docker stack rm to remove all the running services:

$ docker stack rm func

Hello OpenWhisk

Let's quickly move on to OpenWhisk. To do hello world with OpenWhisk, we also need the docker-compose binary. Please visit https://github.com/docker/compose/releases and follow the instructions there to install it.

With OpenWhisk, the whole stack takes a bit longer to get up and running than with OpenFaaS, but the overall commands are simpler because a hello world action is already built in.

First, clone the OpenWhisk development tool from its GitHub repository:

$ git clone --depth=1 https://github.com/apache/incubator-openwhisk-devtools devtools

Then change directory into devtools/docker-compose and manually pull the images, using the following commands:

$ cd devtools/docker-compose
$ docker-compose pull
$ docker pull openwhisk/nodejs6action

After that, just call make quick-start to perform the setup:

$ make quick-start

Wait until the OpenWhisk cluster has started. This could take up to 10 minutes.
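
While waiting, you can keep an eye on the containers as they come up; once the controller and invoker containers are running, the cluster is usually close to ready (this is a rough check, not an official readiness probe):

$ docker ps    # look for the OpenWhisk controller, invoker, and supporting containers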

After that, run the following command, make hello-world, to register and invoke the hello world action:

$ make hello-world
creating the hello.js function ...
invoking the hello-world function ... 
adding the function to whisk ...
ok: created action hello
invoking the function ...
invokation result: { "payload": "Hello, World!" }
{ "payload": "Hello, World!" }
deleting the function ...
ok: deleted action hello
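
The quick-start target typically also downloads and configures a local wsk CLI (check the devtools README if wsk is not on your PATH). If the CLI and the standard system catalog are available in your local deployment, which is an assumption here, an action can be invoked directly; the -i flag skips TLS verification for the local self-signed endpoint:

$ wsk -i action list                                                        # list the actions registered locally
$ wsk -i action invoke /whisk.system/utils/echo -p message hello --result   # call a built-in system action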

Make sure that you're on a fast internet connection. Slowness while OpenWhisk pulls its invoker and controller images often causes make quick-start to fail.

To clean up, just use the make destroy command to terminate the target:

$ make destroy

Say hello to the Fn project

This is another FaaS project covered by this book. We quickly do hello world by installing the Fn CLI, using it to start a local Fn server, creating an app, and then creating a route under the app that links to a pre-built Go function. After that, we will use the curl command to test the deployed hello world function.

Here's the standard command to install the Fn client:

$ curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sudo sh

After that, we can use the fn command. Let's start an Fn server. Use --detach to make it run in the background:

$ fn start --detach
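
If the command succeeds, it prints the ID of a new container. The server runs as a container named fnserver (the same name we use to remove it at the end of this section), so it can be checked at any time:

$ docker ps --filter name=fnserver    # the Fn server should be running and listening on port 8080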

If the server is running, we are good to go. Next, quickly create an Fn app and call it goapp:

$ fn apps create goapp

Next, we use a pre-built image, chanwit/fn_ch1:0.0.2, which is already on Docker Hub. We use the fn routes create command to link a new route to this image. This step is what actually defines a function:

$ fn routes create --image chanwit/fn_ch1:0.0.2 goapp /fn_ch1
/fn_ch1 created with chanwit/fn_ch1:0.0.2
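
Before switching to curl, the route can also be inspected and invoked with the Fn CLI itself. These subcommands come from the routes-based CLI used here and may differ in newer Fn releases:

$ fn routes list goapp     # confirm that the /fn_ch1 route exists
$ fn call goapp /fn_ch1    # invoke the function through the CLI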

OK, the route is ready. Now, we can use the curl command to call our hello world program on Fn:

$ curl -v http://localhost:8080/r/goapp/fn_ch1
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /r/goapp/fn_ch1 HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Content-Length: 26
< Content-Type: application/json; charset=utf-8
< Fn_call_id: 01C99YJXCE47WG200000000000
< Xxx-Fxlb-Wait: 383.180124ms
< Date: Fri, 23 Mar 2018 17:30:34 GMT
< 
{"message":"Hello World"}
* Connection #0 to host localhost left intact

OK, everything is working as expected for Fn. Let's remove the server now that we have finished:

$ docker rm -f fnserver
 

Exercise


At the end of every chapter, there will be a set of questions to help us review the content of the current chapter. Let's try to answer each of them without going back to the chapter's contents:

  1. What is the definition of serverless?
  2. What is the definition of FaaS?
  3. Describe the difference between FaaS and serverless.
  4. What are the roles of Docker in the world of serverless applications?
  5. What does the common architecture of FaaS look like?
  6. Try to explain why we are in the shaded area in the following diagram:

Figure 1.5: Scope of FaaS and serverless area covered by this book

 

Summary


This chapter has introduced serverless and Docker, along with the definitions of serverless and FaaS. We learned the benefits of serverless, when to use it, and when to avoid it. A serverless FaaS is a FaaS platform run by a vendor on a public cloud, while a FaaS platform may need to run in a private, hybrid, or on-premises environment. This is where we can use Docker. Docker helps us build FaaS applications and prepare the container infrastructure to run container-based functions.

We previewed the demo project that will be built step by step in later chapters. We then quickly did hello world with all three leading FaaS platforms for Docker to demonstrate how easy it is to run FaaS platforms on our own Docker cluster.

In the next chapter, we will review the concept of the container and the technologies behind it. We will also introduce Docker and its workflow, then learn about the Docker Swarm cluster and how to prepare one. Finally, we will discuss how Docker fits into the world of serverless.

About the Author
  • Chanwit Kaewkasi

    Chanwit Kaewkasi is an assistant professor at the School of Computer Engineering, Suranaree University of Technology, Thailand. He started contributing code to the Docker Swarm project in its early days, around version 0.1. Later, in 2016, he led the Swarm2K project together with contributors around the world to form the largest Docker Swarm cluster. Besides teaching and doing research in the field of software engineering, he provides consulting to several companies to help them adopt Docker, microservices, and FaaS technologies. He currently serves the Docker community as a Docker Captain.
