Linux is a powerful operating system with many robust networking constructs. Much like any networking technology, they are powerful individually but become much more powerful when combined in creative ways. Docker is a great example of a tool that combines many of the individual components of the Linux network stack into a complete solution. While Docker manages most of this for you, it's still helpful to know your way around when looking at the Linux networking components that Docker uses.
In this chapter, we'll spend some time looking at these constructs individually outside of Docker. We'll learn how to make network configuration changes on Linux hosts and validate the current state of the network configuration. While this chapter is not dedicated to Docker itself, it is important to understand the primitives for later chapters, where we discuss how Docker uses these constructs to network containers.
Understanding how Linux handles networking is an integral part of understanding how Docker handles networking. In this recipe, we'll focus on Linux networking basics by learning how to define and manipulate interfaces and IP addresses on a Linux host. To demonstrate the configuration, we'll start building a lab topology in this recipe and continue it through the other recipes in this chapter.
In order to view and manipulate networking settings, you'll want to ensure that you have the iproute2 toolset installed. If it's not present on the system, it can be installed using the following command:
sudo apt-get install iproute2
In order to make network changes to the host, you'll also need root-level access.
For the purpose of demonstration in this chapter, we'll be using a simple lab topology. The initial network layout of the host looks like this:

In this case, we have three hosts, each with a single eth0 interface already defined:
net1: 10.10.10.110/24 with a default gateway of 10.10.10.1
net2: 172.16.10.2/26
net3: 172.16.10.66/26
The network configuration on most end hosts is generally limited to the IP address, the subnet mask, and the default gateway of a single interface. This is because most hosts are network endpoints offering a discrete set of services on a single IP interface. But what happens if we want to define more interfaces or manipulate the existing one? To answer that question, let's first look at a simple single-homed server such as net2 or net3 in the preceding example.
On Ubuntu hosts, all of the interface configuration is done in the /etc/network/interfaces file. Let's examine that file on the host net2:
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 172.16.10.2
netmask 255.255.255.192
We can see that this file defines two interfaces—the local loopback interface and the interface eth0. The eth0 interface defines the following information:
address: The IP address of the host's interface
netmask: The subnet mask associated with the IP interface
The information in this file will be processed each time the interface attempts to come into the up or operational state. We can validate that this configuration file was processed at system boot by checking the current IP address of the interface eth0 with the ip addr show <interface name> command:
user@net2:~$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:59:ca:ca brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.2/26 brd 172.16.10.63 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe59:caca/64 scope link
       valid_lft forever preferred_lft forever
user@net2:~$
Now that we've reviewed a single-homed configuration, let's take a look and see what it would take to configure multiple interfaces on a single host. As things stand, the net1 host is the only host that has any sort of reachability off its local subnet. This is because it has a defined default gateway pointing back to the rest of the network. In order to make net2 and net3 reachable, we need to find a way to connect them back to the rest of the network as well. To do this, let's assume that the host net1 has two additional network interfaces that we can connect directly to hosts net2 and net3:

Let's walk through how to configure additional interfaces and IP addresses on net1 to complete the topology.
The first thing we want to do is verify that we have additional interfaces available to work with on net1. To do this, we would use the ip link show command:
user@net1:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:2d:dd:79 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:2d:dd:83 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:2d:dd:8d brd ff:ff:ff:ff:ff:ff
user@net1:~$
We can see from the output that in addition to the eth0 interface, we also have interfaces eth1 and eth2 available to us. To see which interfaces have IP addresses associated with them, we can use the ip address show command:
user@net1:~$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:2d:dd:79 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.110/24 brd 10.10.10.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe2d:dd79/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:0c:29:2d:dd:83 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:0c:29:2d:dd:8d brd ff:ff:ff:ff:ff:ff
user@net1:~$
The preceding output proves that we currently only have a single IP address allocated on the interface eth0. This means that we can use the interface eth1 for connectivity to the server net2 and eth2 for connectivity to the server net3.
There are two ways we can configure these new interfaces. The first is to update the network configuration file on net1 with the relevant IP address information. Let's do that for the link facing the host net2. To configure this connectivity, simply edit the file /etc/network/interfaces and add the relevant configuration for both interfaces. The finished configuration should look like this:
# The primary network interface
auto eth0
iface eth0 inet static
address 10.10.10.110
netmask 255.255.255.0
gateway 10.10.10.1

auto eth1
iface eth1 inet static
address 172.16.10.1
netmask 255.255.255.192
Once the file is saved, you need to find a way to tell the system to reload the configuration file. One way to do this would be to reboot the system. A simpler method would be to reload the interface. For instance, we could execute the following commands to reload interface eth1:
user@net1:~$ sudo ifdown eth1 && sudo ifup eth1
ifdown: interface eth1 not configured
user@net1:~$
Note
While not required in this case, bringing the interface down and up at the same time is a good habit to get into. This ensures that you don't cut yourself off if you take down the interface you're managing the host from.
In some cases, you may find that this method of updating the interface configuration doesn't work as expected. Depending on your version of Linux, you may experience a condition where the previous IP address is not removed from the interface, causing the interface to have multiple IP addresses. To resolve this, you can manually delete the old IP address or, alternatively, reboot the host, which will prevent legacy configurations from persisting.
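For example, if a stale address were left behind on eth1 after the reload, you could remove it by hand with the ip address del subcommand. The following is a hedged sketch; the 172.16.10.5/26 address is purely hypothetical, so substitute whatever address is actually lingering on your interface:
user@net1:~$ sudo ip address del 172.16.10.5/26 dev eth1     # 172.16.10.5/26 is a hypothetical stale address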
After the commands are executed, we should be able to see that the interface eth1 is now properly addressed:
user@net1:~$ ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2d:dd:83 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.1/26 brd 172.16.10.63 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe2d:dd83/64 scope link
       valid_lft forever preferred_lft forever
user@net1:~$
To configure the interface eth2 on host net1, we'll use a different approach. Rather than relying on configuration files, we'll use the iproute2 command-line tools to update the configuration of the interface. To do this, we simply execute the following commands:
user@net1:~$ sudo ip address add 172.16.10.65/26 dev eth2
user@net1:~$ sudo ip link set eth2 up
It should be noted here that this configuration is not persistent. That is, since it's not part of a configuration file that's loaded at system initialization, this configuration will be lost on reboot. This is the same case for any network-related configuration done manually with the iproute2 or other command-line toolsets.
Note
It is best practice to configure interface information and addressing in the network configuration file. Altering interface configuration outside of the configuration file is done in these recipes for the purpose of example only.
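If you did want to make the eth2 configuration persistent as well, the stanza in /etc/network/interfaces would simply mirror the one we used for eth1. This is a hedged sketch based on the addressing used above:
auto eth2
iface eth2 inet static
address 172.16.10.65
netmask 255.255.255.192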
Up to this point, we've only modified existing interfaces by adding IP information to them. We have not actually added a new interface to any of the systems. Adding interfaces is a fairly common task, and, as later recipes will show, there are a variety of interface types that can be added. For now, let's focus on adding what Linux refers to as dummy interfaces. Dummy interfaces act like loopback interfaces in networking and describe an interface type that is always up and online. Interfaces are defined or created by using the ip link add syntax. You then specify a name and define what type of interface you are creating. For instance, let's define a dummy interface on the hosts net2 and net3:
user@net2:~$ sudo ip link add dummy0 type dummy
user@net2:~$ sudo ip address add 172.16.10.129/26 dev dummy0
user@net2:~$ sudo ip link set dummy0 up

user@net3:~$ sudo ip link add dummy0 type dummy
user@net3:~$ sudo ip address add 172.16.10.193/26 dev dummy0
user@net3:~$ sudo ip link set dummy0 up
After defining the interface, each host should be able to ping its own dummy0 interface:
user@net2:~$ ping 172.16.10.129 -c 2
PING 172.16.10.129 (172.16.10.129) 56(84) bytes of data.
64 bytes from 172.16.10.129: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 172.16.10.129: icmp_seq=2 ttl=64 time=0.031 ms
--- 172.16.10.129 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.030/0.030/0.031/0.005 ms
user@net2:~$

user@net3:~$ ping 172.16.10.193 -c 2
PING 172.16.10.193 (172.16.10.193) 56(84) bytes of data.
64 bytes from 172.16.10.193: icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from 172.16.10.193: icmp_seq=2 ttl=64 time=0.032 ms
--- 172.16.10.193 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.032/0.033/0.035/0.006 ms
user@net3:~$
Note
You might be wondering why we had to turn up the dummy0 interface if it's considered to be always up. In reality, the interface is reachable without turning it up. However, the local route for the interface will not appear in the system's routing table until the interface is turned up.
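You can see this behavior for yourself on net2 with a quick before-and-after check. This is a hedged sketch; the exact contents of your routing table will vary with your configuration:
user@net2:~$ sudo ip link set dummy0 down
user@net2:~$ ip route show     # the connected 172.16.10.128/26 route is gone
user@net2:~$ sudo ip link set dummy0 up
user@net2:~$ ip route show     # the connected 172.16.10.128/26 route returns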
Once you've defined new IP interfaces, the next step is to configure routing. In most cases, Linux host routing configuration is limited solely to specifying a host's default gateway. While that's typically as far as most need to go, a Linux host is capable of being a full-fledged router. In this recipe, we'll learn how to interrogate a Linux host's routing table as well as manually configure routes.
In order to view and manipulate networking settings, you'll want to ensure that you have the iproute2 toolset installed. If not present on the system, it can be installed by using the following command:
sudo apt-get install iproute2
In order to make network changes to the host, you'll also need root-level access. This recipe will continue the lab topology from the previous recipe. We left the topology looking like this after the previous recipe:

Despite Linux hosts being capable of routing, they do not do so by default. In order for routing to occur, we need to modify a kernel-level parameter to enable IP forwarding. We can check the current state of the setting in a couple of different ways:
By using the sysctl command:
sysctl net.ipv4.ip_forward
By querying the /proc/ filesystem directly:
more /proc/sys/net/ipv4/ip_forward
In either case, if the returned value is 1, IP forwarding is enabled. If you do not receive a 1, you'll need to enable IP forwarding in order for the Linux host to route packets through the system. You can manually enable IP forwarding by using the sysctl command or, again, by directly interacting with the /proc/ filesystem:
sudo sysctl -w net.ipv4.ip_forward=1
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
While this enables IP forwarding on the fly, this setting does not persist through a reboot. To make the setting persistent, you need to modify /etc/sysctl.conf, uncomment the line for IP forwarding, and ensure it's set to 1:
…<Additional output removed for brevity>…
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
…<Additional output removed for brevity>…
Note
You may note that we're only modifying settings related to IPv4 at this time. Don't worry; we'll cover IPv6 and Docker networking later on in Chapter 10, Leveraging IPv6.
Once we've verified that forwarding is configured, let's look at the routing table on all three lab hosts by using the ip route show command:
user@net1:~$ ip route show
default via 10.10.10.1 dev eth0
10.10.10.0/24 dev eth0 proto kernel scope link src 10.10.10.110
172.16.10.0/26 dev eth1 proto kernel scope link src 172.16.10.1
172.16.10.64/26 dev eth2 proto kernel scope link src 172.16.10.65

user@net2:~$ ip route show
172.16.10.0/26 dev eth0 proto kernel scope link src 172.16.10.2
172.16.10.128/26 dev dummy0 proto kernel scope link src 172.16.10.129

user@net3:~$ ip route show
172.16.10.64/26 dev eth0 proto kernel scope link src 172.16.10.66
172.16.10.192/26 dev dummy0 proto kernel scope link src 172.16.10.193
There are a couple of interesting items to note here. First off, we notice that the hosts have routes listed that are associated with each of their IP interfaces. Based on the subnet mask associated with the interface, the host can determine the network the interface is associated with. This route is inherent and would be said to be directly connected. Directly connected routes are how the system knows what IP destinations are directly connected versus which ones need to be forwarded to a next hop to reach a remote destination.
Second, in the last recipe, we added two additional interfaces to the host net1 to provide connectivity to hosts net2 and net3. However, this alone only allows net1 to talk to net2 and net3. If we want net2 and net3 to be reachable via the rest of the network, they'll need a default route pointing at their respective interfaces on net1. Once again, let's do this in two separate manners. On net2, we'll update the network configuration file and reload the interface, and on net3, we'll add the default route directly through the command line.
On host net2, update the file /etc/network/interfaces and add a gateway on the eth0 interface pointing at the connected interface on the host net1:
# The primary network interface
auto eth0
iface eth0 inet static
address 172.16.10.2
netmask 255.255.255.192
gateway 172.16.10.1
To activate the new configuration, we'll reload the interface:
user@net2:~$ sudo ifdown eth0 && sudo ifup eth0
Now we should be able to see the default route in the net2 host's routing table pointing out of eth0 at the net1 host's directly connected interface (172.16.10.1):
user@net2:~$ ip route show
default via 172.16.10.1 dev eth0
172.16.10.0/26 dev eth0 proto kernel scope link src 172.16.10.2
172.16.10.128/26 dev dummy0 proto kernel scope link src 172.16.10.129
user@net2:~$
On the host net3, we'll use the iproute2 toolset to modify the host's routing table dynamically. To do this, we'll execute the following command:
user@net3:~$ sudo ip route add default via 172.16.10.65
Note
Note that we use the keyword default. This represents the default gateway, or the destination of 0.0.0.0/0 in Classless Inter-Domain Routing (CIDR) notation. We could have executed the command using the 0.0.0.0/0 syntax as well.
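For reference, the equivalent command using the explicit CIDR destination would look like this (a hedged sketch of the alternative syntax, not a second command to run in addition to the one above):
user@net3:~$ sudo ip route add 0.0.0.0/0 via 172.16.10.65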
After executing the command, we'll check the routing table to make sure that we now have a default route pointing at net1 (172.16.10.65):
user@net3:~$ ip route show
default via 172.16.10.65 dev eth0
172.16.10.64/26 dev eth0 proto kernel scope link src 172.16.10.66
172.16.10.192/26 dev dummy0 proto kernel scope link src 172.16.10.193
user@net3:~$
At this point, the hosts and the rest of the network should have full network reachability to all of their physical interfaces. However, the dummy interfaces created in the previous recipe are not reachable by any hosts other than the ones they are defined on. In order to make them reachable, we're going to need to add some static routes.
The dummy interface networks are 172.16.10.128/26 and 172.16.10.192/26. Because these networks are part of the larger 172.16.10.0/24 summary, the rest of the network already knows to route to the net1 host's 10.10.10.110 interface to get to these prefixes. However, net1 currently doesn't know where those prefixes live and will, in turn, loop the traffic right back to where it came from following its default route. To solve this, we need to add two static routes on net1:

We can add these routes ad hoc through the iproute2 command-line tools or we can add them in a more persistent fashion as part of the host's network script. Let's do one of each.
To add the 172.16.10.128/26 route pointing at net2, we'll use the command-line tool:
user@net1:~$ sudo ip route add 172.16.10.128/26 via 172.16.10.2
As you can see, adding manual routes is done through the ip route add command syntax. The subnet that needs to be reached is specified along with the associated next hop address. The command takes effect immediately as the host populates the routing table instantly to reflect the change:
user@net1:~$ ip route
default via 10.10.10.1 dev eth0
10.10.10.0/24 dev eth0 proto kernel scope link src 10.10.10.110
172.16.10.0/26 dev eth1 proto kernel scope link src 172.16.10.1
172.16.10.64/26 dev eth2 proto kernel scope link src 172.16.10.65
172.16.10.128/26 via 172.16.10.2 dev eth1
user@net1:~$
If we wish to make a route persistent, we can allocate it as a post-up interface configuration. The post-up interface configurations take place directly after an interface is loaded. If we want the route 172.16.10.192/26 to be added to the host's routing table the instant eth2 comes online, we can edit the /etc/network/interfaces configuration script as follows:
auto eth2
iface eth2 inet static
address 172.16.10.65
netmask 255.255.255.192
post-up ip route add 172.16.10.192/26 via 172.16.10.66
After adding the configuration, we can reload the interface to force the configuration file to reprocess:
user@net1:~$ sudo ifdown eth2 && sudo ifup eth2
Note
In some cases, the host may not process the post-up command because we defined the address on the interface manually in an earlier recipe. Deleting the IP address before reloading the interface would resolve this issue; however, in these cases, rebooting the host is the easiest (and cleanest) course of action.
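If you'd rather not reboot, clearing the manually assigned address before reloading the interface looks like the following. Consider it a hedged sketch using the address we assigned to eth2 earlier in the chapter:
user@net1:~$ sudo ip address del 172.16.10.65/26 dev eth2
user@net1:~$ sudo ifdown eth2 && sudo ifup eth2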
And our routing table will now show both routes:
user@net1:~$ ip route
default via 10.10.10.1 dev eth0
10.10.10.0/24 dev eth0 proto kernel scope link src 10.10.10.110
172.16.10.0/26 dev eth1 proto kernel scope link src 172.16.10.1
172.16.10.64/26 dev eth2 proto kernel scope link src 172.16.10.65
172.16.10.128/26 via 172.16.10.2 dev eth1
172.16.10.192/26 via 172.16.10.66 dev eth2
user@net1:~$
To verify this is working as expected, let's do some testing from a remote workstation that's attempting to ping the dummy interface on the host net2 (172.16.10.129). Assuming the workstation is connected to a network that isn't directly attached to any of the lab hosts, the flow might look like this:

1. A workstation with an IP address of 192.168.127.55 is attempting to reach the dummy interface connected to net2 at its IP address of 172.16.10.129. The workstation sends the traffic towards its default gateway since the destination it's looking for is not directly connected.
2. The network has a route for 172.16.10.0/24 pointing at net1's eth0 interface (10.10.10.110). The destination IP address (172.16.10.129) is a member of that larger prefix, so the network forwards the workstation's traffic on to the host net1.
3. The net1 host examines the traffic, interrogates its routing table, and determines that it has a route for that prefix pointing towards net2 with a next hop of 172.16.10.2.
4. The net2 host receives the request, realizes that the dummy interface is directly connected, and attempts to send a reply back to the workstation. Not having a specific route for the destination of 192.168.127.55, the host net2 sends its reply to its default gateway, which is net1 (172.16.10.1).
5. Similarly, net1 does not have a specific route for the destination of 192.168.127.55, so it forwards the traffic back to the network via its default gateway. It is assumed that the network has reachability to return the traffic to the workstation.
In the case that we'd like to remove statically defined routes, we can do so with the ip route delete subcommand. For instance, here's an example of adding a route and then deleting it:
user@net1:~$ sudo ip route add 172.16.10.128/26 via 172.16.10.2
user@net1:~$ sudo ip route delete 172.16.10.128/26
Notice how we only need to specify the destination prefix when deleting the route, not the next hop.
Bridges in Linux are a key building block for network connectivity. Docker uses them extensively in many of its own network drivers that are included with docker-engine. Bridges have been around for a long time and are, in most cases, very similar to a physical network switch. Bridges in Linux can act like layer 2 or layer 3 bridges.
Note
Layer 2 versus layer 3
The nomenclature refers to different layers of the OSI network model. Layer 2 represents the data link layer and is associated with switching frames between hosts. Layer 3 represents the network layer and is associated with routing packets across the network. The major difference between the two is switching versus routing. A layer 2 switch is capable of sending frames between hosts on the same network but is not capable of routing them based on IP information. If you wish to route between two hosts on different networks or subnets, you'll need a layer 3 capable device that can route between the two subnets. Another way to look at this is that layer 2 switches can only deal with MAC addresses and layer 3 devices can deal with IP addresses.
By default, Linux bridges are layer 2 constructs. In this manner, they are often referred to as protocol independent. That is, any number of higher level (layer 3) protocols can run on the same bridge implementation. However, you can also assign an IP address to a bridge that turns it into a layer 3 capable networking construct. In this recipe, we'll show you how to create, manage, and inspect Linux bridges by walking through a couple of examples.
In order to view and manipulate networking settings, you'll want to ensure that you have the iproute2 toolset installed. If not present on the system, it can be installed by using the following command:
sudo apt-get install iproute2
In order to make network changes to the host, you'll also need root-level access. This recipe will continue the lab topology from the previous recipe. All of the prerequisites mentioned earlier still apply.
To demonstrate how bridges work, let's consider making a slight change to the lab topology we've been working with:

Rather than having the servers directly connect to each other via physical interfaces, we'll instead leverage bridges on the host net1 for connectivity to downstream hosts. Previously, we relied on a one-to-one mapping for connections between net1 and any other hosts. This meant that we'd need a unique subnet and IP address configuration for each physical interface. While that's certainly doable, it's not very practical. Leveraging bridge interfaces rather than standard interfaces affords us some flexibility we didn't have in the earlier configurations. We can assign a single IP address to a bridge interface and then plumb many physical connections into the same bridge. For example, a net4 host could be added to the topology and its interface on net1 could simply be added to host_bridge2. That would allow it to use the same gateway (172.16.10.65) as net3. So while the physical cabling requirement for adding hosts won't change, this does prevent us from having to define one-to-one IP address mappings for each host.
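To make that concrete, if the hypothetical net4 host were cabled to a new eth3 interface on net1, attaching it to the existing bridge would be a single command. This is a hedged sketch only; neither net4 nor eth3 exists in this lab:
user@net1:~$ sudo ip link set dev eth3 master host_bridge2     # eth3 is a hypothetical interface facing net4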
Note
From the perspective of the hosts net2 and net3, nothing will change when we reconfigure to use bridges.
Since we're changing how we define the net1 host's eth1 and eth2 interfaces, we'll start by flushing their configuration:
user@net1:~$ sudo ip address flush dev eth1
user@net1:~$ sudo ip address flush dev eth2
Flushing the interfaces simply clears any IP-related configuration off of them. The next thing we have to do is to create the bridges themselves. The syntax we use is much like what we saw in the previous recipe when we created the dummy interfaces. We use the ip link add command and specify a type of bridge:
user@net1:~$ sudo ip link add host_bridge1 type bridge
user@net1:~$ sudo ip link add host_bridge2 type bridge
After creating the bridges, we can verify that they exist by examining the available interfaces with the ip link show <interface> command:
user@net1:~$ ip link show host_bridge1
5: host_bridge1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
    link/ether f6:f1:57:72:28:a7 brd ff:ff:ff:ff:ff:ff
user@net1:~$ ip link show host_bridge2
6: host_bridge2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
    link/ether be:5e:0b:ea:4c:52 brd ff:ff:ff:ff:ff:ff
user@net1:~$
Next, we want to make them layer 3 aware, so we assign an IP address to the bridge interface. This is very similar to how we assigned IP addressing to physical interfaces in previous recipes:
user@net1:~$ sudo ip address add 172.16.10.1/26 dev host_bridge1
user@net1:~$ sudo ip address add 172.16.10.65/26 dev host_bridge2
We can verify that the IP addresses were assigned by using the ip addr show dev <interface> command:
user@net1:~$ ip addr show dev host_bridge1
5: host_bridge1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
    link/ether f6:f1:57:72:28:a7 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.1/26 scope global host_bridge1
       valid_lft forever preferred_lft forever
user@net1:~$ ip addr show dev host_bridge2
6: host_bridge2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
    link/ether be:5e:0b:ea:4c:52 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.65/26 scope global host_bridge2
       valid_lft forever preferred_lft forever
user@net1:~$
The next step is to bind the physical interfaces associated with each downstream host to the correct bridge. In our case, we want the host net2, which is connected to net1's eth1 interface, to be part of the bridge host_bridge1. Similarly, we want the host net3, which is connected to net1's eth2 interface, to be part of the bridge host_bridge2. Using the ip link set subcommand, we can define the bridges to be the masters of the physical interfaces:
user@net1:~$ sudo ip link set dev eth1 master host_bridge1
user@net1:~$ sudo ip link set dev eth2 master host_bridge2
We can verify that the interfaces were successfully bound to the bridges by using the bridge link show command.
Note
The bridge command is part of the iproute2 package and is used to validate bridge configuration.
user@net1:~$ bridge link show
3: eth1 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master host_bridge1 state forwarding priority 32 cost 4
4: eth2 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master host_bridge2 state forwarding priority 32 cost 4
user@net1:~$
Finally, we need to turn up the bridge interfaces as they are, by default, created in a down state:
user@net1:~$ sudo ip link set host_bridge1 up
user@net1:~$ sudo ip link set host_bridge2 up
Once again, we can now check the link status of the bridges to verify that they came up successfully:
user@net1:~$ ip link show host_bridge1
5: host_bridge1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 00:0c:29:2d:dd:83 brd ff:ff:ff:ff:ff:ff
user@net1:~$ ip link show host_bridge2
6: host_bridge2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 00:0c:29:2d:dd:8d brd ff:ff:ff:ff:ff:ff
user@net1:~$
At this point, you should once again be able to reach the hosts net2 and net3. However, the dummy interfaces are now unreachable. This is because the routes for the dummy interfaces were automatically withdrawn after we flushed interfaces eth1 and eth2. Removing the IP addresses from those interfaces made the next hops used to reach the dummy interfaces unreachable. It is common for a device to withdraw a route from its routing table when the next hop becomes unreachable. We can add them again rather easily:
user@net1:~$ sudo ip route add 172.16.10.128/26 via 172.16.10.2
user@net1:~$ sudo ip route add 172.16.10.192/26 via 172.16.10.66
Now that everything is working again, we can perform some extra steps to validate the configuration. Linux bridges, much like real layer 2 switches, can also keep track of the MAC addresses they receive. We can view the MAC addresses the system is aware of by using the bridge fdb show command:
user@net1:~$ bridge fdb show
…<Additional output removed for brevity>…
00:0c:29:59:ca:ca dev eth1
00:0c:29:17:f4:03 dev eth2
user@net1:~$
The two MAC addresses we see in the preceding output reference the directly connected interfaces that net1 talks to in order to get to hosts net2 and net3, as well as the subnets defined on their associated dummy0 interfaces. We can verify this by looking at the host's ARP table:
user@net1:~$ arp -a
? (10.10.10.1) at 00:21:d7:c5:f2:46 [ether] on eth0
? (172.16.10.2) at 00:0c:29:59:ca:ca [ether] on host_bridge1
? (172.16.10.66) at 00:0c:29:17:f4:03 [ether] on host_bridge2
user@net1:~$
Note
There aren't many scenarios where the old tool is better, but in the case of the bridge command-line tool, some might argue that the older brctl tool has some advantages. For one, the output is a little easier to read. In the case of learned MAC addresses, it will give you a better view into the mappings with the brctl showmacs <bridge name> command. If you want to use the older tool, you can install the bridge-utils package.
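For instance, installing the package and inspecting the learned MAC addresses on host_bridge1 might look like the following sketch (output omitted here, since the brctl table format differs from that of the bridge tool):
user@net1:~$ sudo apt-get install bridge-utils
user@net1:~$ brctl showmacs host_bridge1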
Removing interfaces from bridges can be accomplished through the ip link set subcommand. For instance, if we wanted to remove eth1 from the bridge host_bridge1, we would run this command:
sudo ip link set dev eth1 nomaster
This removes the master-slave binding between eth1 and the bridge host_bridge1. Interfaces can also be reassigned to new bridges (masters) without removing them from the bridge they are currently associated with. If we wanted to delete the bridge entirely, we could do so with this command:
sudo ip link delete dev host_bridge2
It should be noted that you do not need to remove all of the interfaces from the bridge before you delete it. Deleting the bridge will automatically remove all master bindings.
Up until this point, we've focused on physical cables to make connections between interfaces. But how would we connect two interfaces that don't have physical connections between them? For this purpose, Linux networking has an internal interface type called Virtual Ethernet (VETH) pairs. VETH interfaces are always created in pairs, making them act like a sort of virtual patch cable. VETH interfaces can also have IP addresses assigned to them, which allows them to participate in a layer 3 routing path. In this recipe, we'll examine how to define and implement VETH pairs by building off the lab topology we've used in previous recipes.
In order to view and manipulate networking settings, you'll want to ensure that you have the iproute2 toolset installed. If not present on the system, it can be installed by using the following command:
sudo apt-get install iproute2
In order to make network changes to the host, you'll also need root-level access. This recipe will continue the lab topology from the previous recipe. All of the prerequisites mentioned earlier still apply.
Let's once again modify the lab topology, so we can make use of VETH pairs:

Once again, the configuration on hosts net2 and net3 will remain unchanged. On the host net1, we're going to implement VETH pairs in two different manners.
On the connection between net1 and net2, we're going to use two different bridges and connect them together with a VETH pair. The bridge host_bridge1 will remain on net1 and maintain its IP address of 172.16.10.1. We're also going to add a new bridge named edge_bridge1. This bridge will not have an IP address assigned to it but will have net1's interface facing net2 (eth1) as a member of it. At that point, we'll use a VETH pair to connect the two bridges, allowing traffic to flow from net1 across both bridges to net2. In this case, the VETH pair will be used as a layer 2 construct.
On the connection between net1 and net3, we're going to use a VETH pair but in a slightly different fashion. We'll add a new bridge called edge_bridge2 and put the net1 host's interface facing net3 (eth2) on that bridge. Then we will provision a VETH pair and place one end on the bridge edge_bridge2. We'll then assign the IP address previously assigned to host_bridge2 to the host side of the VETH pair. In this case, the VETH pair will be used as a layer 3 construct.
Let's start on the connection between net1 and net2 by adding the new edge bridge:
user@net1:~$ sudo ip link add edge_bridge1 type bridge
Then, we'll add the interface facing net2 to edge_bridge1:
user@net1:~$ sudo ip link set dev eth1 master edge_bridge1
Next, we'll configure the VETH pair that we'll use to connect host_bridge1 and edge_bridge1. VETH pairs are always defined as a pair. Creating the interface will spawn two new objects, but they are reliant on each other. That is, if you delete one end of the VETH pair, the other end will get deleted right along with it. To define the VETH pair, we use the ip link add subcommand:
user@net1:~$ sudo ip link add host_veth1 type veth peer name edge_veth1
We can see their configuration using the ip link show subcommand:
user@net1:~$ ip link show
…<Additional output removed for brevity>…
13: edge_veth1@host_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 0a:27:83:6e:9a:c3 brd ff:ff:ff:ff:ff:ff
14: host_veth1@edge_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether c2:35:9c:f9:49:3e brd ff:ff:ff:ff:ff:ff
user@net1:~$
Note that we have two entries showing an interface for each side of the defined VETH pair. The next step is to place the ends of the VETH pair in the correct place. In the case of the connection between net1 and net2, we want one end on host_bridge1 and the other on edge_bridge1. To do this, we use the same syntax we used for assigning interfaces to bridges:
user@net1:~$ sudo ip link set host_veth1 master host_bridge1
user@net1:~$ sudo ip link set edge_veth1 master edge_bridge1
We can verify the mappings using the ip link show command:
user@net1:~$ ip link show
…<Additional output removed for brevity>…
9: edge_veth1@host_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop master edge_bridge1 state DOWN mode DEFAULT group default qlen 1000
    link/ether f2:90:99:7d:7b:e6 brd ff:ff:ff:ff:ff:ff
10: host_veth1@edge_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop master host_bridge1 state DOWN mode DEFAULT group default qlen 1000
    link/ether da:f4:b7:b3:8d:dd brd ff:ff:ff:ff:ff:ff
The last thing we need to do is bring up the interfaces associated with the connection:
user@net1:~$ sudo ip link set host_bridge1 up
user@net1:~$ sudo ip link set edge_bridge1 up
user@net1:~$ sudo ip link set host_veth1 up
user@net1:~$ sudo ip link set edge_veth1 up
To reach the dummy interface off of net2, you'll need to add the route back since it was once again lost during the reconfiguration:
user@net1:~$ sudo ip route add 172.16.10.128/26 via 172.16.10.2
At this point, we should have full reachability to net2 and its dummy0 interface through net1.
On the connection between host net1 and net3, the first thing we need to do is clean up any unused interfaces. In this case, that would be host_bridge2:
user@net1:~$ sudo ip link delete dev host_bridge2
Then, we need to add the new edge bridge (edge_bridge2) and associate net1's interface facing net3 to the bridge:
user@net1:~$ sudo ip link add edge_bridge2 type bridge
user@net1:~$ sudo ip link set dev eth2 master edge_bridge2
We'll then define the VETH pair for this connection:
user@net1:~$ sudo ip link add host_veth2 type veth peer name edge_veth2
In this case, we're going to leave the host side VETH pair unassociated from the bridges and instead assign an IP address directly to it:
user@net1:~$ sudo ip address add 172.16.10.65/25 dev host_veth2
Just like any other interface, we can see the assigned IP address by using the ip address show dev command:
user@net1:~$ ip addr show dev host_veth2
12: host_veth2@edge_veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 56:92:14:83:98:e0 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.65/25 scope global host_veth2
       valid_lft forever preferred_lft forever
    inet6 fe80::5492:14ff:fe83:98e0/64 scope link
       valid_lft forever preferred_lft forever
user@net1:~$
We will then place the other end of the VETH pair into edge_bridge2, connecting net1 to the edge bridge:
user@net1:~$ sudo ip link set edge_veth2 master edge_bridge2
And once again, we turn up all the associated interfaces:
user@net1:~$ sudo ip link set edge_bridge2 up
user@net1:~$ sudo ip link set host_veth2 up
user@net1:~$ sudo ip link set edge_veth2 up
Finally, we re-add our route to get to net3's dummy interface:
user@net1:~$ sudo ip route add 172.16.10.192/26 via 172.16.10.66
After the configuration is completed, we should once again have full reachability into the environment and all the interfaces. If there are any issues with your configuration, you should be able to diagnose them through the use of the ip link show and ip addr show commands.
If you're ever questioning what the other end of a VETH pair is, you can use the ethtool command-line tool to return the other side of the pair. For instance, assume that we create a non-named VETH pair as follows:
user@docker1:/$ sudo ip link add type veth
user@docker1:/$ ip link show
…<output removed for brevity>…
16: veth1@veth2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 12:3f:7b:8d:33:90 brd ff:ff:ff:ff:ff:ff
17: veth2@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 9e:9f:34:bc:49:73 brd ff:ff:ff:ff:ff:ff
While obvious in this example, we could use ethtool to determine the interface index or ID of one or the other side of this VETH pair:
user@docker1:/$ ethtool -S veth1
NIC statistics:
     peer_ifindex: 17
user@docker1:/$ ethtool -S veth2
NIC statistics:
     peer_ifindex: 16
user@docker1:/$
This can be a handy troubleshooting tool later on when determining the ends of a VETH pair is not as obvious as it is in these examples.
Network namespaces allow you to create isolated views of the network. A namespace has a unique routing table that can differ entirely from the default routing table on the host. In addition, you can map interfaces from the physical host into namespaces for use within the namespace. The behavior of network namespaces closely mimics that of Virtual Routing and Forwarding (VRF) instances, which are available in most modern networking hardware. In this recipe, we'll learn the basics of network namespaces. We'll walk through the process of creating the namespace and discuss how to use different types of interfaces within a network namespace. Finally, we'll show how to connect multiple namespaces together.
In order to view and manipulate networking settings, you'll want to ensure that you have the iproute2 toolset installed. If not present on the system, it can be installed using the following command:
sudo apt-get install iproute2
In order to make network changes to the host, you'll also need root-level access. This recipe will continue the lab topology from the previous recipe. All of the prerequisites mentioned earlier still apply.
The concept of network namespaces is best demonstrated through an example, so let's jump right back to the lab topology from the previous recipes:

This diagram is the same topology we used in the last recipe, with one significant difference. We have the addition of two namespaces, NS_1 and NS_2. Each namespace encompasses certain interfaces on the host net1:
NS_1:
edge_bridge1
eth1
edge_veth1
NS_2:
edge_bridge2
eth2
edge_veth2
Take note of where the boundary for the namespaces falls. In either case, the boundary falls on a physical interface (the net1 host's eth1 and eth2) or directly in the middle of a VETH pair. As we'll see shortly, VETH pairs can bridge between namespaces, making them an ideal tool for connecting network namespaces together.
To begin the reconfiguration, let's start by defining the namespaces, and then adding interfaces to them. Defining a namespace is rather straightforward. We use the ip netns add subcommand:
user@net1:~$ sudo ip netns add ns_1
user@net1:~$ sudo ip netns add ns_2
Namespaces can then be viewed by using the ip netns list command:
user@net1:~$ ip netns list
ns_2
ns_1
user@net1:~$
Once the namespaces are created, we can allocate the specific interfaces we identified as being part of each namespace. In most cases, this means telling an existing interface which namespace it belongs to. However, not all interfaces can be moved into a network namespace. Bridges, for instance, can live in network namespaces but need to be instantiated from within the namespace. To do this, we can use the ip netns exec subcommand to run the command from within the namespace. For instance, to create the edge bridges in each namespace, we would run these two commands:
user@net1:~$ sudo ip netns exec ns_1 ip link add \
edge_bridge1 type bridge
user@net1:~$ sudo ip netns exec ns_2 ip link add \
edge_bridge2 type bridge
Let's break that command into two pieces:
sudo ip netns exec ns_1: This tells the host that you want to run a command inside a specific namespace, in this case ns_1.
ip link add edge_bridge1 type bridge: As we saw in earlier recipes, we execute the command to build a bridge and give it a name, in this case, edge_bridge1.
Using this same syntax, we can now examine the network configuration of a specific namespace. For instance, we could look at the interfaces with sudo ip netns exec ns_1 ip link show:
user@net1:~$ sudo ip netns exec ns_1 ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: edge_bridge1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
    link/ether 26:43:4e:a6:30:91 brd ff:ff:ff:ff:ff:ff
user@net1:~$
As we expected, we see the bridge we instantiated inside the namespace. The other two interface types that the diagram shows in the namespace are of types that can be dynamically allocated into the namespace. To do that, we use the ip link set command:
user@net1:~$ sudo ip link set dev eth1 netns ns_1
user@net1:~$ sudo ip link set dev edge_veth1 netns ns_1
user@net1:~$ sudo ip link set dev eth2 netns ns_2
user@net1:~$ sudo ip link set dev edge_veth2 netns ns_2
Now if we look at the available host interfaces, we should note that the interfaces we moved no longer exist in the default namespace:
user@net1:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:2d:dd:79 brd ff:ff:ff:ff:ff:ff
5: host_bridge1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 56:cc:26:4c:76:f6 brd ff:ff:ff:ff:ff:ff
7: edge_bridge1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: edge_bridge2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: host_veth1@if9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast master host_bridge1 state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 56:cc:26:4c:76:f6 brd ff:ff:ff:ff:ff:ff
12: host_veth2@if11: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 2a:8b:54:81:36:31 brd ff:ff:ff:ff:ff:ff
user@net1:~$
Note
You likely noticed that edge_bridge1 and edge_bridge2 still exist in this output since we never deleted them. This is interesting because they now also exist inside the namespaces ns_1 and ns_2. It's important to point out that since the namespaces are totally isolated, even the interface names can overlap.
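You can see the overlap directly by inspecting the same interface name in both places. This is a hedged sketch; the interface index and MAC address shown for each will differ, since they are entirely separate bridges:
user@net1:~$ ip link show edge_bridge1                           # the bridge left in the default namespace
user@net1:~$ sudo ip netns exec ns_1 ip link show edge_bridge1   # the separate bridge inside ns_1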
Now that all of the interfaces are in the right namespace, all that's left to do is to apply standard bridge mapping and turn up the interfaces. Since we had to recreate the bridge interfaces in each namespace, we'll need to reattach the interfaces to each bridge. This is done just like you would normally; we just run the command within the namespace:
user@net1:~$ sudo ip netns exec ns_1 ip link set \
dev edge_veth1 master edge_bridge1
user@net1:~$ sudo ip netns exec ns_1 ip link set \
dev eth1 master edge_bridge1
user@net1:~$ sudo ip netns exec ns_2 ip link set \
dev edge_veth2 master edge_bridge2
user@net1:~$ sudo ip netns exec ns_2 ip link set \
dev eth2 master edge_bridge2
Once we have all of the interfaces in the right namespace and attached to the right bridges, all that's left is to bring them all up:
user@net1:~$ sudo ip netns exec ns_1 ip link set edge_bridge1 up
user@net1:~$ sudo ip netns exec ns_1 ip link set edge_veth1 up
user@net1:~$ sudo ip netns exec ns_1 ip link set eth1 up
user@net1:~$ sudo ip netns exec ns_2 ip link set edge_bridge2 up
user@net1:~$ sudo ip netns exec ns_2 ip link set edge_veth2 up
user@net1:~$ sudo ip netns exec ns_2 ip link set eth2 up
After the interfaces come up, we should once again have connectivity to all of the networks attached to all three hosts.
While this example only moved layer 2 type constructs into namespaces, namespaces also support layer 3 routing with unique routing table instances per namespace. For instance, if we look at the routing table of one of the namespaces, we'll see that it's completely empty:
user@net1:~$ sudo ip netns exec ns_1 ip route
user@net1:~$
This is because we don't have any interfaces with IP addresses defined in the namespace. This demonstrates that both layer 2 and layer 3 constructs are isolated within a namespace. That's one major area where network namespaces and VRF instances differ. VRF instances only account for layer 3 configuration, whereas network namespaces isolate both layer 2 and layer 3 constructs. We'll see an example of layer 3 isolation with network namespaces in Chapter 3, User-Defined Networks, when we discuss the process Docker uses for networking containers.
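As a quick illustration of that isolation (a hedged sketch that is not part of the lab configuration, using the made-up 192.168.50.0/24 network), assigning an address to an interface inside ns_1 populates only that namespace's routing table:
user@net1:~$ sudo ip netns exec ns_1 ip address add 192.168.50.1/24 dev edge_bridge1
user@net1:~$ sudo ip netns exec ns_1 ip route    # the connected 192.168.50.0/24 route now appears here
user@net1:~$ ip route                            # ...but not in the default namespace's routing table
user@net1:~$ sudo ip netns exec ns_1 ip address del 192.168.50.1/24 dev edge_bridge1    # clean up the example address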