Assessments

Chapter 1

Here are some sample answers to the questions presented in this chapter:

  1. The correct answers are D and E.
  2. A Docker container is to IT what a shipping container is to the transportation industry. It defines a standard on how to package goods. In this case, goods are the application(s) developers write. The suppliers (in this case, the developers) are responsible for packaging the goods into the container and making sure everything fits as expected. Once the goods are packaged into a container, it can be shipped. Since it is a standard container, the shippers can standardize their means of transportation, such as lorries, trains, or ships. The shipper doesn't really care what's in a container. Also, the loading and unloading process from one transportation means to another (for example, train to ship) can be highly standardized...

Chapter 2

Here are some sample answers to the questions presented in this chapter:

  1. docker-machine can be used for the following scenarios (a sample command session follows this list):
    1. To create a VM on various providers such as VirtualBox, Hyper-V, AWS, MS Azure, or Google Compute Engine that will serve as a Docker Host.
    2. To start, stop, or kill a previously generated VM.
    3. To SSH into a local or remote Docker Host VM created with this tool.
    4. To re-generate certificates for the secure use of a Docker Host VM.
  2. A. True. Yes, with Docker for Windows, you can develop and run Linux containers. It is also possible, but not discussed in this book, to develop and run native Windows containers with this edition of Docker for Desktop. With the macOS edition, you can only develop and run Linux containers.

 

  3. Scripts are used to automate processes and hence avoid human errors. Building, testing, sharing, and running Docker...
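
To illustrate answer 1, here is a hypothetical docker-machine session using the VirtualBox driver (the VM name my-host is just an example):

$ docker-machine create --driver virtualbox my-host   # create a new Docker host VM
$ docker-machine ssh my-host                          # SSH into the VM
$ docker-machine stop my-host                         # stop the VM
$ docker-machine regenerate-certs my-host             # re-generate its TLS certificates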

Chapter 3

Here are some sample answers to the questions presented in this chapter:

  1. The possible states of a Docker container are as follows:
    • created: A container that has been created but not started
    • restarting: A container that is in the process of being restarted
    • running: A currently running container
    • paused: A container whose processes have been paused
    • exited: A container that ran and completed
    • dead: A container that the Docker engine tried and failed to stop
  2. We can use docker container ls (or the old, shorter version, docker ps) to list all containers that are currently running on our Docker host. Note that this will NOT list the stopped containers, for which you need the extra parameter --all (or -a).
  3. To list the IDs of all containers, running or stopped, we can use docker container ls -a -q, where -q tells the command to output only the container IDs.
...
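
The following hypothetical session illustrates answers 2 and 3 (the container name test-web is just an example):

$ docker container run -d --name test-web nginx:alpine   # start a container
$ docker container ls                                     # lists only running containers
$ docker container stop test-web                          # stop the container
$ docker container ls -a                                  # now also shows the stopped container
$ docker container ls -a -q                               # prints only the container IDs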

Chapter 4

Here are some sample answers to the questions presented in this chapter:

  1. The Dockerfile could look like this:
FROM ubuntu:19.04
RUN apt-get update && \
apt-get install -y iputils-ping
CMD ping 127.0.0.1

Note that in Ubuntu, the ping tool is part of the iputils-ping package. Build the image, calling it my-pinger for example, with docker image build -t my-pinger . (the trailing dot specifies the build context).
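
To try out the image, you could run it in the background, look at the ping output, and clean up afterward (the container name pinger-test is just an example):

$ docker container run -d --name pinger-test my-pinger
$ docker container logs pinger-test
$ docker container rm -f pinger-test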

2. The Dockerfile could look like this:

FROM alpine:latest
RUN apk update && \
apk add curl

Build the image with docker image build -t my-alpine:1.0 .
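
Once built, you can verify that curl is available in the image, for example, like this:

$ docker container run --rm my-alpine:1.0 curl --version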

3. The Dockerfile for a Go application could look like this:

FROM golang:alpine
WORKDIR /app
ADD . /app
RUN cd /app && go build -o goapp
ENTRYPOINT ./goapp

You can find the full solution in the ~/fod/ch04/answer03 folder. 
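
A possible way to build and run the resulting image (the tag goapp-sample is just an example) is as follows:

$ docker image build -t goapp-sample .
$ docker container run --rm goapp-sample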

4. A Docker image...

Chapter 5

Here are some sample answers to the questions presented in this chapter:

The easiest way to experiment with volumes is to use Docker Toolbox because, when using Docker for Desktop directly, the volumes are stored inside a (somewhat hidden) Linux VM that Docker for Desktop uses transparently.
Thus, we suggest the following:

$ docker-machine create --driver virtualbox volume-test
$ docker-machine ssh volume-test

And now that you're inside a Linux VM called volume-test, you can do the following exercise:

  1. To create a named volume, run the following command:
$ docker volume create my-products
  2. To mount the volume into an Alpine container in read-only mode, execute the following command:
$ docker container run -it --rm \
-v my-products:/data:ro \
alpine /bin/sh
  3. To get the path on the host for the volume, use this command:
$ docker volume inspect my-products | grep Mountpoint

This (if you're using docker...

Chapter 6

Here are some sample answers to the questions presented in this chapter:

  1. Possible answers: a) Volume mount your source code in the container; b) use a tool that automatically restarts the app running inside the container when code changes are detected; c) configure your container for remote debugging.
  2. You can bind-mount the folder on your host that contains the source code into the container (see the sketch after this list).
  3. If you cannot cover certain scenarios easily with unit or integration tests and if the observed behavior of the application cannot be reproduced when the application runs on the host. Another scenario is a situation where you cannot run the application on the host directly due to the lack of the necessary language or framework.
  4. Once the application is running in production, we cannot easily gain access to it as developers. If the application shows unexpected behavior or even crashes, logs...
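
As a minimal sketch of answers 1 and 2, assuming a Node.js application in the current folder (the image, port, and paths are assumptions):

$ docker container run --rm -it \
    -v $(pwd):/app \
    -w /app \
    -p 3000:3000 \
    node:12-alpine \
    /bin/sh

Inside this shell, you can start the application; any change you make to the source code on the host is immediately visible inside the container.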

Chapter 7 

Here are some sample answers to the questions presented in this chapter:

  1. Pros and cons:
    • Pro: We don't need to have the particular shell, tool, or language required by the task installed on our host.
    • Pro: We can run on any Docker host, from a Raspberry Pi to a mainframe computer; the only requirement is that the host can run containers.
    • Pro: After a successful run, the tool is removed without leaving any traces on the host when the container is removed (see the example after this list).
    • Con: We need to have Docker installed on the host.
    • Con: The user needs to have a basic understanding of Docker containers.
    • Con: Use of the tool is a bit more indirect than when using it natively.
  2. Running tests in a container has the following advantages:
    • They run equally well on a developer machine and on a test or CI system.
    • It is easier to start each test run with the same initial conditions...
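
As an example for answer 1, you can use curl without installing it on the host and without leaving any traces behind (a hypothetical one-off run):

$ docker container run --rm alpine:latest \
    sh -c "apk add --no-cache curl > /dev/null && curl --version"

Once the command finishes, the container is removed thanks to the --rm flag.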

Chapter 8

Here are some sample answers to the questions presented in this chapter:

  1. You could be working on a workstation with limited resources or capabilities, or your workstation could be locked down by your company so that you are not allowed to install any software that is not officially approved. Sometimes, you might need to do proof of concepts or experiments using languages or frameworks that are not yet approved by your company (but might be in the future if the proof of concept is successful).
  2. Bind-mounting a Docker socket into a container is the recommended method when a containerized application needs to automate some container-related tasks. A typical example is an automation server such as Jenkins that you use to build, test, and deploy Docker images (see the sketch after this list).
  3. Most business applications do not need root-level authorizations to do their job. From a security...
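
As a sketch of answer 2, bind-mounting the Docker socket lets the Docker CLI inside a container talk to the Docker daemon of the host:

$ docker container run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:latest \
    docker version

The docker version command inside the container reports the version of the host's Docker daemon.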

Chapter 9

Here are some sample answers to the questions presented in this chapter:

  1. In a distributed application architecture, every piece of the software and infrastructure needs to be redundant in a production environment, where the continuous uptime of the application is mission-critical. A highly distributed application consists of many parts and the likelihood of one of the pieces failing or misbehaving increases with the number of parts. It is guaranteed that, given enough time, every part will eventually fail. To avoid outages of the application, we need redundancy in every part, be it a server, a network switch, or a service running on a cluster node in a container.

 

  2. In highly distributed, scalable, and fault-tolerant systems, individual services of the application can move around due to scaling needs or due to component failures. Thus, we cannot hardwire different...

Chapter 10

Here are some sample answers to the questions presented in this chapter:

  1. The three core elements are sandbox, endpoint, and network.
  2. Execute this command:
$ docker network create --driver bridge frontend
  3. Run this command:

$ docker container run -d --name n1 \
--network frontend -p 8080:80 nginx:alpine
$ docker container run -d --name n2 \
--network frontend -p 8081:80 nginx:alpine

Test that both NGINX instances are up and running:

$ curl -4 localhost:8080
$ curl -4 localhost:8081

You should see the welcome page of NGINX in both cases.

  4. To get the IPs of all attached containers, run this command:
$ docker network inspect frontend | grep IPv4Address

You should see something similar to the following:

"IPv4Address": "172.18.0.2/16",
"IPv4Address": "172.18.0.3/16",

To get the subnet used by the network, use the...
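
One possible way to get the subnet, although not necessarily the exact command intended here, is to filter the inspect output:

$ docker network inspect frontend | grep -i subnet

This would print something similar to the following:

"Subnet": "172.18.0.0/16",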

Chapter 11

Here are some sample answers to the questions presented in this chapter:

  1. The following code can be used to run the application in detached or daemon mode:
$ docker-compose up -d
  2. Execute the following command to display the details of the running service:
$ docker-compose ps

This should result in the following output:

Name                 Command                State    Ports
------------------------------------------------------------------------
mycontent_nginx_1    nginx -g daemon off;   Up       0.0.0.0:3000->80/tcp
  3. The following command can be used to scale up the web service:
$ docker-compose up --scale web=3

Chapter 12

Here are some sample answers to the questions presented in this chapter:

  1. A mission-critical, highly available application that is implemented as a highly distributed system of interconnected application services is just too complex to manually monitor, operate, and manage. Container orchestrators help in this regard. They automate most of the typical tasks, such as reconciling a desired state, or collecting and aggregating key metrics of the system. Humans cannot react quickly enough to make such an application elastic or self-healing. Software support is needed for this in the form of the mentioned container orchestrators.
  2. A container orchestrator frees us from mundane and cumbersome tasks such as the following:
    • Scaling services up and down
    • Load balancing requests
    • Routing requests to the desired target
    • Monitoring the health of service instances
    • Securing...

Chapter 13

Here are some sample answers to the questions presented in this chapter:

  1. The correct answer is as follows:
$ docker swarm init [--advertise-addr <IP address>]

The --advertise-addr parameter is optional and is only needed if the host has more than one IP address.

  2. On the worker node that you want to remove, execute the following command:
 $ docker swarm leave

On one of the manager nodes, execute the command $ docker node rm -f <node ID>, where <node ID> is the ID of the worker node to remove.

  3. The correct answer is as follows:
$ docker network create \
--driver overlay \
--attachable \
front-tier
  4. The correct answer is as follows:
$ docker service create --name web \
--network front-tier \
--replicas 5 \
-p 3000:80 \
nginx:alpine
  5. The correct answer is as follows:
$ docker service update --replicas 3 web
...
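
To verify the effect of the commands in answers 4 and 5, you could, for example, run the following:

$ docker service ls        # shows the web service and its current replica count
$ docker service ps web    # lists the individual tasks (container instances) of the service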

Chapter 14

Here are some sample answers to the questions presented in this chapter:

  1. Zero-downtime deployment means that a service in a distributed application is updated to a new version without the application needing to stop working. Usually, with Docker SwarmKit or Kubernetes (as we will see), this is done in a rolling fashion. A service consists of multiple instances and those are updated in batches so that the majority of the instances are up and running at all times.
  2. By default, Docker SwarmKit uses a rolling update strategy to achieve zero-downtime deployments.
  3. Containers are self-contained units of deployment. If a new version of a service is deployed and does not work as expected, we (or the system) need to only roll back to the previous version. The previous version of the service is also deployed in the form of self-contained containers. Conceptually...
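
As a hypothetical illustration of answers 2 and 3, assuming a swarm service called web that currently runs nginx:alpine, a rolling update to a different image tag and a subsequent rollback could look like this (the image tag and timings are assumptions):

$ docker service update \
    --image nginx:1.17-alpine \
    --update-parallelism 1 \
    --update-delay 10s \
    web
$ docker service rollback web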

Chapter 15

Here are some sample answers to the questions presented in this chapter:

  1. The Kubernetes master is responsible for managing the cluster. All requests to create objects, reschedule pods, manage ReplicaSets, and more happen on the master. The master does not run the application workload in a production or production-like cluster.
  2. On each worker node, we have the kubelet, the proxy, and container runtime.
  3. The answer is A. Yes. You cannot run standalone containers on a Kubernetes cluster. Pods are the atomic units of deployment in such a cluster.
  4. All containers running inside a pod share the same Linux kernel network namespace. Thus, all processes running inside those containers can communicate with each other through localhost in a similar way to how processes or applications directly running on the host can communicate with each...
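
Relating to answer 3, even a single container is always wrapped in a pod. A quick, hypothetical check:

$ kubectl run web --image=nginx:alpine --restart=Never   # creates a pod named web
$ kubectl get pods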

Chapter 16

Here are some sample answers to the questions presented in this chapter:

  1. Assuming we have Docker images in a registry for the two application services, the web API and Mongo DB, we need to do the following:
    • Define a deployment for Mongo DB using a StatefulSet; let's call this deployment db-deployment. The StatefulSet should have one replica (replicating Mongo DB is a bit more involved and is outside the scope of this book).
    • Define a Kubernetes service called db of the ClusterIP type for db-deployment.
    • Define a deployment for the web API; let's call it web-deployment. Let's scale this service to three instances.
    • Define a Kubernetes service called api of the NodePort type for web-deployment.
    • If we use secrets, then define those secrets directly in the cluster using kubectl...
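
Assuming the objects listed above are defined in manifest files (the filenames here are just examples), they could then be deployed and verified like this:

$ kubectl apply -f db-deployment.yaml     # StatefulSet for Mongo DB
$ kubectl apply -f db-service.yaml        # ClusterIP service called db
$ kubectl apply -f web-deployment.yaml    # deployment for the web API
$ kubectl apply -f api-service.yaml       # NodePort service called api
$ kubectl get statefulsets,deployments,services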

Chapter 17

Here are some sample answers to the questions presented in this chapter:

  1. We cannot do any live debugging on a production system for performance and security reasons. This includes interactive or remote debugging. Yet application services can show unexpected behavior due to code defects or other infrastructure-related issues, such as network glitches or external services that are not available. To quickly pinpoint the reason for the misbehavior or failure of a service, we need as much logging information as possible. This information should give us a clue about, and guide us to, the root cause of the error. When we instrument a service, we do exactly this: we produce as much information as is reasonable in the form of log entries and published metrics.

  2. Prometheus is a service that is used to collect functional or non-functional metrics that are provided by other...
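
As a minimal example, Prometheus itself can be run as a container; prom/prometheus is the official image and 9090 is its default port:

$ docker container run -d --name prometheus \
    -p 9090:9090 \
    prom/prometheus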

Chapter 18

Here are some sample answers to the questions presented in this chapter:

  1. To install UCP in AWS, we do the following (a sketch of the install command follows this list):
    • Create a VPC with subnets and an SG.
    • Then, provision a cluster of Linux VMs, possibly as part of an ASG. Many Linux distributions are supported, such as CentOS, RHEL, and Ubuntu.
    • Next, install Docker on each VM.
    • Finally, select one VM on which to install UCP using the docker/ucp image.
    • Once UCP is installed, join the other VMs to the cluster either as worker nodes or manager nodes.
  2. Here are a few reasons to consider a hosted Kubernetes offering:
    • You do not want to, or do not have the resources to, install and manage a Kubernetes cluster.
    • You want to concentrate on what brings value to your business, which in most cases is the applications that are supposed to run on Kubernetes and not Kubernetes itself.
    • You prefer...
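
Returning to answer 1, the UCP installer itself runs as a container from the docker/ucp image. The exact image tag and flags depend on the UCP version you install, so treat the following only as a sketch:

$ docker container run --rm -it --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:3.2.5 install \
    --host-address <node-ip> \
    --interactive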