





















































In this article by Neependra Khare, author of the book Docker Cookbook, we will see that when the Docker daemon starts, it creates a virtual Ethernet bridge named docker0. For example, we will see something like the following with the ip addr command on the system that runs the Docker daemon:
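A trimmed, illustrative example (the interface index and the exact address will vary from host to host):
$ ip addr show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet 172.17.42.1/16 scope global docker0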
As we can see, docker0 has the IP address 172.17.42.1/16. Docker randomly chooses an address and subnet from a private range defined in RFC 1918 (https://tools.ietf.org/html/rfc1918). Using this bridged interface, containers can communicate with each other and with the host system.
By default, every time Docker starts a container, it creates a pair of virtual interfaces, one end of which is attached to the host system and the other end to the created container. Let's start a container and see what happens:
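For example:
$ docker run -i -t centos /bin/bash
Inside the container, we can then run:
$ ip addr show eth0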
The end that is attached to the eth0 interface of the container gets the 172.17.0.1/16 IP address. We also see the following entry for the other end of the interface on the host system:
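On the host, the peer shows up as an automatically generated veth* interface, which we can list as follows (the generated name will differ):
$ ip addr | grep veth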
Now, let's create a few more containers and look at the docker0 bridge with the brctl command, which manages Ethernet bridges:
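For example (the bridge ID and interface names are generated, so yours will differ):
$ sudo brctl show docker0
bridge name	bridge id		STP enabled	interfaces
docker0		8000.xxxxxxxxxxxx	no		vethxxxx
						vethyyyy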
Every veth* interface binds to the docker0 bridge, which creates a virtual subnet shared between the host and every Docker container. Apart from setting up the docker0 bridge, Docker creates iptables NAT rules, so that all containers can talk to the external world by default, but not the other way around. Let's look at the NAT rules on the Docker host:
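The masquerading rule looks something like the following (output trimmed):
$ sudo iptables -t nat -L -n
Chain POSTROUTING (policy ACCEPT)
target      prot opt source          destination
MASQUERADE  all  --  172.17.0.0/16   !172.17.0.0/16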
If we try to connect to the external world from a container, we will have to go through the Docker bridge that was created by default:
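For example, from within a container, the default route points to the docker0 address (illustrative output):
$ ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.1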
When starting a container, we have a few modes to select its networking with the --net option. With the default bridge mode, the container gets attached to the docker0 bridge:
$ docker run -i -t --net=bridge centos /bin/bash
With the host mode, the container shares the network namespace of the Docker host:
$ docker run -i -t --net=host centos bash
If we then run the ip addr command within the container, we can see all the network devices attached to the host. An example of using such a configuration is to run the nginx reverse proxy within a container to serve the web applications running on the host.
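A minimal sketch of that use case, assuming the official nginx image (nginx binds directly to the host's interfaces, so no port mapping is needed):
$ docker run -d --net=host nginx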
With the container mode, a new container shares the network namespace of an existing container. Let's first start a container as follows:
$ docker run -i -t --name=centos centos bash
Now start another as follows:
$ docker run -i -t --net=container:centos ubuntu bash
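To verify, run the following inside each of the two containers and compare the output:
$ ip addr show eth0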
As we can see, both containers share the same IP address.
Containers in a Kubernetes (http://kubernetes.io/) Pod use this trick to connect with each other.
For more information about the different networking modes, visit https://docs.docker.com/articles/networking/#how-docker-networks-a-container.
From Docker 1.2 onwards, it is also possible to change /etc/hosts, /etc/hostname, and /etc/resolv.conf on a running container. However, note that these changes last only for the lifetime of the running container; if it restarts, we will have to make the changes again.
So far, we have looked at networking on a single host, but in the real world, we would like to connect multiple hosts and have a container on one host talk to a container on another host. Flannel (https://github.com/coreos/flannel), Weave (https://github.com/weaveworks/weave), Calico (http://www.projectcalico.org/getting-started/docker/), and Socketplane (http://socketplane.io/) are some solutions that offer this functionality. Socketplane joined Docker Inc. in March 2015.
The community and Docker are building a Container Network Model (CNM) with libnetwork (https://github.com/docker/libnetwork), which provides a native Go implementation for connecting containers. More information on this development can be found at http://blog.docker.com/2015/04/docker-networking-takes-a-step-in-the-right-direction-2/.
Once the container is up, we would like to access it from outside. If you have started the container with the --net=host option, then it can be accessed through the Docker host IP. With --net=none, you can attach the network interface from the public end or through other complex settings. Let's see what happens with the default bridge mode, where packets are forwarded from the host network interface to the container.
Make sure the Docker daemon is running on the host and you can connect through the Docker client.
$ docker run --expose 80 -i -d -P --name f20 fedora /bin/bash
This automatically maps any network port of the container to a random high port of the Docker host, between 49000 and 49900.
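We can see the mapping in the docker ps output; the host port is chosen at random, so yours will differ (output trimmed):
$ docker ps
CONTAINER ID   IMAGE    COMMAND      PORTS                   NAMES
...            fedora   /bin/bash    0.0.0.0:49159->80/tcp   f20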
In the PORTS section, we see 0.0.0.0:49159->80/tcp, which is of the following form:
<Host Interface>:<Host Port> -> <Container Port>/<protocol>
So, if any request comes on port 49159 from any interface on the Docker host, it will be forwarded to port 80 of the f20 container.
We can also map a specific port of the container to a specific port of the host using the -p option:
$ docker run -i -d -p 5000:22 --name centos2 centos /bin/bash
In this case, all requests coming on port 5000 from any interface on the Docker host will be forwarded to port 22 of the centos2 container.
With the default configuration, Docker sets up the firewall rule to forward the connection from the host to the container and enables IP forwarding on the Docker host:
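A trimmed, illustrative look at both (the container IP will vary):
$ sudo iptables -t nat -L -n
Chain DOCKER (2 references)
target  prot opt source      destination
DNAT    tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:5000 to:172.17.0.2:22
$ cat /proc/sys/net/ipv4/ip_forward
1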
As we can see from the preceding example, a DNAT rule has been set up to forward all traffic on port 5000 of the host to port 22 of the container.
By default, with the -p option, Docker will forward all the requests coming to any interface of the host. To bind to a specific interface, we can specify something like the following:
$ docker run -i -d -p 192.168.1.10:5000:22 --name f20 fedora /bin/bash
In this case, only requests coming to port 5000 on the interface that has the IP 192.168.1.10 on the Docker host will be forwarded to port 22 of the f20 container. To map port 22 of the container to a dynamic port of the host, we can run the following command:
$ docker run -i -d -p 192.168.1.10::22 --name f20 fedora /bin/bash
We can also bind multiple ports of the container to ports on the host as follows:
$ docker run -d -i -p 5000:22 -p 8080:80 --name f20 fedora /bin/bash
We can look up the public-facing port that is mapped to the container's port as follows:
$ docker port f20 80
0.0.0.0:8080
To look at all the network settings of a container, we can run the following command:
$ docker inspect -f "{{ .NetworkSettings }}" f20
Any uncommitted data or changes in containers get lost as soon as the containers are deleted. For example, if you have configured the Docker registry in a container and pushed some images, then as soon as the registry container is deleted, all of those images are lost if you have not committed them. Even committing is not the best practice, as we should try to keep containers as light as possible. The following are the two primary ways to manage data with Docker: data volumes and data volume containers.
Make sure that the Docker daemon is running on the host and you can connect through the Docker client.
We can create a data volume while starting a container using the -v option:
$ docker run -t -d -P -v /data --name f20 fedora /bin/bash
We can have multiple data volumes within a container, which can be created by adding -v multiple times:
$ docker run -t -d -P -v /data -v /logs --name f20 fedora /bin/bash
The VOLUME instruction can be used in a Dockerfile to add data volume as well by adding something similar to VOLUME ["/data"].
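For example, a minimal Dockerfile sketch:
FROM fedora
VOLUME ["/data"]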
We can use the inspect command to look at the data volume details of a container:
$ docker inspect -f "{{ .Config.Volumes }}" f20
$ docker inspect -f "{{ .Volumes }}" f20
We can also mount a directory from the host inside the container. If the target directory is not there within the container, it will be created:
$ docker run -i -t -v /source_on_host:/destination_on_container fedora /bin/bash
Consider the following example:
$ docker run -i -t -v /srv:/mnt/code fedora /bin/bash
This can be very useful in cases such as testing code in different environments, collecting logs in central locations, and so on. We can also map the host directory in read-only mode as follows:
$ docker run -i -t -v /srv:/mnt/code:ro fedora /bin/bash
We can also mount the entire root filesystem of the host within the container with the following command:
$ docker run -i -t -v /:/host:ro fedora /bin/bash
If the directory on the host (/srv) does not exist, then it will be created, given that you have permission to create one. Also, on a Docker host where SELinux is enabled and the Docker daemon is configured to use SELinux (docker -d --selinux-enabled), you will see a permission denied error if you try to access files on mounted volumes until you relabel them. To relabel them, use either of the following commands:
$ docker run -i -t -v /srv:/mnt/code:z fedora /bin/bash
$ docker run -i -t -v /srv:/mnt/code:Z fedora /bin/bash
Now, let's look at data volume containers. First, create a container that hosts the data volume:
$ docker run -d -v /data --name data fedora echo "data volume container"
This will just create a volume that will be mapped to a directory managed by Docker. Now, other containers can mount the volume from the data container using the --volumes-from option as follows:
$ docker run -d -i -t --volumes-from data --name client1 fedora /bin/bash
We can mount a volume from the data volume container to multiple containers:
$ docker run -d -i -t --volumes-from data --name client2 fedora /bin/bash
We can also use --volumes-from multiple times to get the data volumes from multiple containers. We can also create a chain by mounting volumes from the container that mounts from some other container.
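A sketch of such a chain (client3 is a hypothetical name):
$ docker run -d -i -t --volumes-from client1 --name client3 fedora /bin/bash
Here, client3 gets the /data volume via client1, which in turn mounts it from the data container.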
In the case of a data volume, when a host directory is not specified, Docker creates a directory within /var/lib/docker/ and then shares it with other containers.
Data volumes are also handy for running a local Docker registry. Let's start one with the host's /srv directory mounted as the registry's storage:
$ docker run -v /srv:/tmp/registry -p 5000:5000 registry
To push an image, we first tag it with the registry host and then push it.
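A sketch of the tag step, assuming a locally available fedora image:
$ docker tag fedora registry-host:5000/nkhare/f20
Then, run the push command: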
$ docker push registry-host:5000/nkhare/f20
After the image is successfully pushed, we can look at the content of the directory that we mounted within the Docker registry. In our case, we should see a directory structure as follows:
/srv/
├── images
│   ├── 3f2fed40e4b0941403cd928b6b94e0fd236dfc54656c00e456747093d10157ac
│   │   ├── ancestry
│   │   ├── _checksum
│   │   ├── json
│   │   └── layer
│   ├── 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
│   │   ├── ancestry
│   │   ├── _checksum
│   │   ├── json
│   │   └── layer
│   ├── 53263a18c28e1e54a8d7666cb835e9fa6a4b7b17385d46a7afe55bc5a7c1994c
│   │   ├── ancestry
│   │   ├── _checksum
│   │   ├── json
│   │   └── layer
│   └── fd241224e9cf32f33a7332346a4f2ea39c4d5087b76392c1ac5490bf2ec55b68
│       ├── ancestry
│       ├── _checksum
│       ├── json
│       └── layer
├── repositories
│   └── nkhare
│       └── f20
│           ├── _index_images
│           ├── json
│           ├── tag_latest
│           └── taglatest_json
With containerization, we would like to create our stack by running services in separate containers and then linking them together. Container linking creates a parent-child relationship between containers, in which the parent can see selected information about its children. Linking relies on the naming of containers.
Make sure the Docker daemon is running on the host and you can connect through the Docker client.
Let's start a server container as follows:
$ docker run -d -i -t --name centos_server centos /bin/bash
Now, let's start a client container linked to it:
$ docker run -i -t --link centos_server:server --name client fedora /bin/bash
In the preceding example, we linked the centos_server container to the client container with an alias server. By linking the two containers, an entry of the first container, which is centos_server in this case, is added to the /etc/hosts file in the client container. Also, an environment variable called SERVER_NAME is set within the client to refer to the server.
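We can verify both from within the client container; the address below is illustrative, and the SERVER_NAME value follows Docker's /<linker>/<alias> convention:
$ grep server /etc/hosts
172.17.0.3	server
$ echo $SERVER_NAME
/client/server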
Now, let's create a mysql container:
$ docker run --name mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql
Then, let's link it from a client and check the environment variables:
$ docker run -i -t --link mysql:mysql-server --name client fedora /bin/bash
We can also check the docker ps output to confirm that both the mysql and client containers are running.
If you look closely, we did not specify the -P or -p option to map ports between the two containers while starting the client container. Depending on the ports exposed by a container, Docker creates an internal secure tunnel to it from any container that links to it. To do this, Docker sets environment variables within the linker container. In the preceding case, mysql is the linked container and client is the linker container. As the mysql container exposes port 3306, we see the corresponding environment variables (MYSQL_SERVER_*) within the client container.
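For example, from within the client container (the addresses are illustrative):
$ env | grep MYSQL_SERVER
MYSQL_SERVER_NAME=/client/mysql-server
MYSQL_SERVER_PORT=tcp://172.17.0.5:3306
MYSQL_SERVER_PORT_3306_TCP=tcp://172.17.0.5:3306
MYSQL_SERVER_PORT_3306_TCP_ADDR=172.17.0.5
MYSQL_SERVER_PORT_3306_TCP_PORT=3306
MYSQL_SERVER_PORT_3306_TCP_PROTO=tcp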
As linking depends on the name of the container, if you want to reuse a name, you must delete the old container.
In this article, we learned how to connect a container with another container and with the external world. We also learned how we can share storage from the host system and from other containers.