Chapter 17. Migrating to Docker

So far, we have focused on developing the backend and frontend of our application, and have paid little attention to our infrastructure. In the next two chapters, we will focus on creating a scalable infrastructure using Docker and Kubernetes.

So far, we’ve manually configured two Virtual Private Servers (VPSs), and deployed each of our backend APIs and client applications on them. As we continue to develop our applications on our local machine, we test each commit locally, on Travis CI, and on our own Jenkins CI server. If all tests pass, we use Git to pull changes from our centralized remote repository on GitHub and restart our application. While this approach works for simple apps with a small user base, it will not hold up for enterprise software.

Therefore, we'll begin this chapter by understanding why manual deployment should be a thing of the past, and the steps we can take towards fully automating the deployment process. Specifically, by following...

Problems with manual deployment


Here are some of the weaknesses in our current approach:

  • Lack of consistency: Most enterprise-level applications are developed by a team. Each team member is likely to use a different operating system, or to configure their machine differently from the others. This means that the environment on each team member's local machine differs from everyone else's, and, by extension, from the production servers'. Therefore, even if all our tests pass locally, there is no guarantee that they will pass in production.
  • Lack of independence: When several services running on the same machine depend on a shared library, they must all use the same version of that library.
  • Time-consuming and error-prone: Every time we want a new environment (staging/production), or the same environment in multiple locations, we need to manually provision a new VPS instance and repeat the same steps to configure users and firewalls and to install the necessary packages. This produces two problems:
    • Time-consuming: Manual...

Introduction to Docker


Docker is an open source project that provides the tools and ecosystem for developers to build and run applications inside containers.
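
If you already have Docker installed, a quick sanity check is to run the official hello-world image, which Docker pulls from Docker Hub and runs as a short-lived container:

    $ docker --version
    $ docker run hello-world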

What are containers?

Containerization is a method of virtualization. Virtualization means running a virtual instance of a computer system in a layer abstracted from the underlying hardware; it allows you to run multiple operating systems on the same physical host machine.


From the point of view of an application running inside a virtualized system, it has no knowledge of, or interaction with, the host machine, and may not even know that it is running in a virtual environment.

Containers are a type of virtual system. Each container is allocated a set amount of resources (CPU, RAM, storage). When a program is running inside a container, its processes and child processes can only manipulate the resources allocated to the container, and nothing more.
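
Docker surfaces these allocations as flags on docker run. As a minimal sketch (the alpine image and the limits are arbitrary choices for illustration):

    # Cap the container at 256 MB of RAM and half a CPU core
    $ docker run --memory=256m --cpus=0.5 alpine sleep 60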

You can view a container as an isolated environment, or sandbox, on which to run your application...

Mechanics of Docker


So, now that you understand why we need Docker and, at a high level, how to work with it, let's turn our attention to what a Docker container and image actually are.

What is a Docker container?

Docker is based on Linux Containers (LXC), a containerization technology built into Linux. LXC itself relies on two Linux kernel mechanisms: control groups and namespaces. Let's briefly examine each in turn.

Control groups

Control groups (cgroups) separate processes into groups and attach one or more subsystems to each group.

Each subsystem can restrict the resource usage of the groups it is attached to. For example, we can place our application's process into the foo cgroup, attach the memory subsystem to it, and restrict our application to using, say, 50% of the host's memory.

There are many different subsystems, each responsible for different types of resources, such as CPU, block I/O, and network bandwidth.
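
To make this concrete, here is a rough sketch of creating a cgroup by hand through the cgroups v1 filesystem interface (the foo name follows the example above; the paths assume cgroups v1 is mounted at /sys/fs/cgroup, which newer distributions may replace with cgroups v2):

    # Create the foo cgroup under the memory subsystem
    $ sudo mkdir /sys/fs/cgroup/memory/foo
    # Restrict the group to 512 MB of memory
    $ echo 536870912 | sudo tee /sys/fs/cgroup/memory/foo/memory.limit_in_bytes
    # Move the current shell (and its future children) into the group
    $ echo $$ | sudo tee /sys/fs/cgroup/memory/foo/cgroup.procs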

Namespaces

Namespaces package system resources, such as filesystems...
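
You can observe namespace isolation first-hand with the unshare utility, which starts a process inside fresh namespaces. A brief sketch (requires root):

    # Start a shell in new PID and mount namespaces; --mount-proc remounts /proc
    # so tools such as ps only see processes inside the new namespace
    $ sudo unshare --pid --fork --mount-proc sh
    # Inside the new namespace, the shell reports itself as PID 1
    $ ps aux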

Following best practices


Next, let's improve our Dockerfile by applying best practices.

Shell versus exec forms

The RUN, CMD, and ENTRYPOINT Dockerfile instructions are all used to run commands. However, there are two ways to specify the command to run:

  • Shell form, for example, RUN yarn run build: the command is run inside a new shell process, which, by default, is /bin/sh -c on Linux and cmd /S /C on Windows
  • Exec form, for example, RUN ["yarn", "run", "build"]: the command is executed directly, without a new shell process

The shell form exists to allow you to use shell processing features like variable substitution and to chain multiple commands together. However, not every command requires these features. In those cases, you should use the exec form.

When shell processing is not required, the exec form is preferred because it saves resources by running one less process (the shell process).
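
As an illustration, here is a hypothetical Dockerfile fragment that uses each form where it fits (the yarn scripts are illustrative):

    # Shell form: wrapped in /bin/sh -c, so shell features such as
    # variable substitution and && chaining are available
    RUN yarn install && yarn run build

    # Exec form: yarn is invoked directly, so no extra shell process is created
    CMD ["yarn", "run", "start"]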

We can demonstrate this by using ps, which is a Linux command-line tool that shows you a snapshot of the current processes. First, let’s enter...
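
In outline, the demonstration can be sketched with two throwaway containers (the container names and the alpine image are illustrative):

    # Shell form: PID 1 is the shell, and the actual command runs as its child
    $ docker run -d --name shell-form alpine sh -c "sleep 1000"
    $ docker exec shell-form ps
    # Exec form: the command itself runs as PID 1; no shell process appears
    $ docker run -d --name exec-form alpine sleep 1000
    $ docker exec exec-form ps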

Summary


We have now encapsulated our application’s component services into portable, self-contained Docker images, which can be run as containers. In doing so, we have improved our deployment process by making it:

  • Portable: The Docker images can be distributed just like any other file. They can also be run in any environment.
  • Predictable/Consistent: The image is self-contained and pre-built, which means it will run in the same way wherever it is deployed.
  • Automated: All instructions are specified inside a Dockerfile, meaning our computer can run them like code.

However, despite containerizing our application, we are still manually running the docker run commands. Furthermore, we are running single instances of these containers on a single server. If the server fails, our application will go down. Moreover, if we have to make an update to our application, there'll still be downtime (although now it's a shorter downtime because deployment can be automated).

Therefore, while Docker is part of the...
