Information technology (IT) applications are rapidly moving from dedicated infrastructure to dynamic, cloud-based infrastructure. This move to the cloud started with server virtualization, where workloads that previously ran on dedicated hardware servers run as virtual machines on a hypervisor. The adoption of cloud-based applications has accelerated due to factors such as globalization and outsourcing, where diverse teams need to collaborate in real time.
Server hardware connects to network switches using Ethernet and IP to establish network connectivity. However, as servers move from physical to virtual, the network boundary also moves from the physical network to the virtual network. Traditionally, applications, servers, and networking were tightly integrated. But modern enterprises and IT infrastructure demand flexibility in order to support complex applications.
The flexibility of cloud infrastructure requires networking to be dynamic and scalable. Software-Defined Networking (SDN) and Network Function Virtualization (NFV) play a critical role in data centers in order to deliver the flexibility and agility demanded by cloud-based applications. By providing practical management tools and abstractions that hide the underlying physical network's complexity, SDN allows operators to build complex networking capabilities on demand.
OpenStack is an open source cloud platform that helps build public and private clouds at scale. Within OpenStack, the OpenStack Networking project is named Neutron. The functionality of Neutron can be classified into core and service capabilities.
This chapter aims to provide a short introduction to OpenStack Networking. We will cover the following topics in this chapter:
Understanding traffic flows between virtual and physical networks
Neutron entities that support Layer 2 (L2) networking
Layer 3 (L3) or routing between OpenStack networks
Securing OpenStack network traffic
Advanced networking services in OpenStack
OpenStack and SDN
The terms Neutron and OpenStack Networking are used interchangeably throughout this book.
Server virtualization led to applications and workloads running as virtual machines inside physical servers. While the physical servers are still connected to physical network equipment, modern networking has pushed the boundary of the network into the virtual domain as well. Virtual switches, firewalls, and routers play a critical role in the flexibility provided by cloud infrastructure:

Figure 1: Networking components for server virtualization
The preceding diagram describes a typical virtualized server and its various networking components.
The virtual machines are connected to a Virtual Switch inside the Compute Node (or server). The traffic is secured using virtual routers and firewalls. The Compute Node is connected to a Physical Switch, which is the entry point into the physical network.
Let us now walk through different traffic flow scenarios using Figure 1 as the background. In Figure 2, traffic from one VM to another on the same Compute Node is forwarded by the Virtual Switch itself. It does not reach the physical network. You can even apply firewall rules to traffic between the two virtual machines:

Figure 2: Traffic flow between two virtual machines on the same server
Next, let us have a look at how traffic flows between virtual machines across two compute nodes. In Figure 3, the traffic comes out from the first Compute Node and then reaches the Physical Switch. The Physical Switch forwards the traffic to the second Compute Node and the Virtual Switch within the second Compute Node steers the traffic to the appropriate VM:

Figure 3: Traffic flow between two virtual machines on different servers
Finally, the following diagram depicts the traffic flow when a virtual machine sends traffic to, or receives traffic from, the Internet. The Physical Switch forwards the traffic to the Physical Router and Firewall, which is presumed to be connected to the Internet:

Figure 4: Traffic flow from a virtual machine to external network
As seen in the preceding diagrams, the physical and the virtual network components work together to provide connectivity to virtual machines and applications.
As a cloud platform, OpenStack supports multiple users grouped into tenants. One of the key requirements of a multi-tenant cloud is to provide isolation of data traffic belonging to one tenant from the rest of the tenants that use the same infrastructure. OpenStack supports different ways of achieving the isolation of network data traffic and it is the responsibility of the virtual switch on each compute node to implement the isolation.
In networking terminology, the connectivity to a physical or virtual switch is also known as Layer 2 (L2) connectivity. L2 connectivity is the most fundamental form of network connectivity needed for virtual machines. As mentioned previously, OpenStack supports core and service functionality. The L2 connectivity for virtual machines falls under the core capability of OpenStack Networking, whereas routing, firewalling, and so on fall under the service category.
The L2 connectivity in OpenStack is realized using two constructs, called network and subnet. Operators can use the OpenStack CLI or the web interface to create networks and subnets, and as virtual machines are instantiated, the operators can associate them with the appropriate networks.
A network defines the Layer 2 (L2) boundary for all the instances that are associated with it. All the virtual machines within a network are part of the same L2 broadcast domain.
The Liberty release introduced a new OpenStack command-line interface (CLI) for different services. We will use the new CLI and see how to create a network:
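A representative invocation is shown below; the exact syntax is a sketch, and the network name network1 matches the examples used later in this chapter:

    # Create a tenant network named network1 (assumes OpenStack credentials are already sourced)
    openstack network create network1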

A subnet is a range of IP addresses that are assigned to virtual machines on the associated network. OpenStack Neutron configures a DHCP server with this IP address range and, by default, starts one DHCP server instance per network.
We will now show you how to create a subnet using OpenStack CLI:
Tip
Unlike a network, for a subnet, we need to use the regular Neutron CLI command in the Liberty release.
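A representative command is shown below; the CIDR 10.10.1.0/24 is illustrative, while the names subnet1 and network1 match the examples used later in this chapter:

    # Create subnet1 on network1 with an illustrative IP address range
    neutron subnet-create network1 10.10.1.0/24 --name subnet1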

To give a complete perspective, we will create a virtual machine using the OpenStack web interface and show you how to associate a network and subnet to a virtual machine.
In your OpenStack web interface, navigate to Project | Compute | Instances:

Click on the Launch Instance action on the right-hand side, as highlighted in the preceding screenshot. In the resulting window, enter the name for your instance and how you want to boot your instance:

To associate a network and a subnet with the instance, click on the Networking tab. If you have more than one tenant network, you will be able to choose the network you want to associate with the instance. If you have exactly one network, the web interface will automatically select it:

As mentioned previously, providing isolation for tenant network traffic is a key requirement for any cloud. OpenStack Neutron uses network and subnet to define the boundaries and isolate data traffic between different tenants. Depending on Neutron configuration, the actual isolation of traffic is accomplished by the virtual switches. VLAN and VXLAN are the most common networking technologies used to isolate traffic, in addition to protocols such as GRE.
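As a rough sketch of how this is configured, the following illustrative excerpt from the ML2 plugin configuration (typically /etc/neutron/plugins/ml2/ml2_conf.ini) selects VXLAN for tenant networks; the exact drivers and ranges depend on your deployment:

    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch

    [ml2_type_vxlan]
    vni_ranges = 1:1000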
Once L2 connectivity is established, the virtual machines within one network can send or receive traffic between themselves. However, two virtual machines belonging to two different networks will not be able to communicate with each other automatically. This is done to provide privacy and isolation for tenant networks. In order to allow traffic from one network to reach another network, OpenStack Networking supports an entity called a router.
The default implementation of OpenStack uses namespaces to support L3 routing capabilities. Namespaces are Linux networking constructs that create an isolated copy of the TCP/IP network stack, from the Ethernet interfaces (L2) up through the routing tables and so on, such that each namespace is isolated from the others. In a cloud environment (especially a multi-tenant one), it is possible that users use the same IP addresses for their virtual machine instances. In order to allow overlapping IP addresses to co-exist within the same infrastructure, Neutron uses network namespaces to provide isolation between them.
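For example, on a node running the Neutron L3 and DHCP agents in the default deployment, you can list the namespaces that Neutron has created; the UUID portions shown here are placeholders:

    # List the network namespaces created by Neutron
    ip netns
    qrouter-a1b2c3d4-...    # one namespace per virtual router
    qdhcp-e5f6a7b8-...      # one namespace per network with DHCP enabled

    # Inspect the routing table inside a router namespace
    ip netns exec qrouter-a1b2c3d4-... ip route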
Operators can create routers using OpenStack CLI or web interface. They can then add more than one subnet as an interface to the router. This allows the networks associated with the router to exchange traffic with one another.
The command to create a router is as follows:
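A representative command using the Neutron CLI is shown below; the router name router1 matches the examples that follow:

    # Create a router named router1
    neutron router-create router1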

This command creates a router with the specified name.
Once a router is created, the next step is to associate one or more subnets with the router. The command to accomplish this is as follows:
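A representative command is shown below, attaching subnet1 as an interface on router1:

    # Add subnet1 as an interface on router1
    neutron router-interface-add router1 subnet1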

The subnet represented by subnet1 is now associated with the router router1. Using the OpenStack dashboard, you can view the association between a router and a subnet. Navigate to Project | Networks | Network Topology. This should display the router, router1, connected to the network, network1, to which the subnet belongs, as shown in the following screenshot:

You can hover the mouse over the router router1 to see that the subnet is indeed added as an interface and is assigned an IP address.
The security of network traffic is critical, and OpenStack supports two mechanisms to secure network traffic. Security Groups allow traffic within a tenant's network to be secured. Linux iptables on the compute nodes are used to implement OpenStack security groups.
The traffic that goes outside of a tenant's network, to another network or the Internet, is secured using the OpenStack firewall service functionality. Like routing, the firewall is implemented as a Neutron service. The firewall service also uses iptables, but the scope of the iptables rules is limited to the OpenStack router used as part of the firewall service.
The following diagram describes at a high level how iptables are used to secure network traffic:

In this network diagram, the VM instances are connected to the Virtual Switch using tap interfaces. The security group rules that allow or deny data traffic are mapped to iptables rules on the compute nodes. The iptables rules operate on these tap interfaces to ensure that traffic is allowed or blocked as per the configured rules.
In order to secure traffic going from one VM to another within a given network, we must create a security group. The command to create a security group is as follows:
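A representative command is shown below; the group name secgroup1 is illustrative:

    # Create a security group named secgroup1 (name is illustrative)
    neutron security-group-create secgroup1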

The next step is to create one or more rules within the security group. As an example, let us create a rule that allows incoming UDP traffic on port 8080 from any source IP address:
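A representative command is shown below, using the illustrative group name secgroup1 from the previous step:

    # Allow incoming UDP traffic on port 8080 from any source IP address
    neutron security-group-rule-create --direction ingress --protocol udp \
      --port-range-min 8080 --port-range-max 8080 \
      --remote-ip-prefix 0.0.0.0/0 secgroup1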

The final step is to associate this security group and its rules with a virtual machine instance. We will use the nova boot command for this:
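A representative invocation is shown below; the image, flavor, network UUID, and instance name are illustrative placeholders:

    # Boot an instance and attach the security group created earlier
    nova boot --image cirros --flavor m1.tiny \
      --nic net-id=<network1-uuid> --security-groups secgroup1 vm1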

Once the virtual machine instance has a security group associated with it, the incoming traffic will be monitored and depending upon the rules inside the security group, data traffic may be blocked or permitted to reach the virtual machine.
We have seen that security groups provide fine-grained control over what traffic is allowed to and from a virtual machine instance. Another layer of security supported by OpenStack is Firewall as a Service (FWaaS). FWaaS enforces security at the router level, whereas security groups enforce security at the virtual-machine-interface level.
The main use case of FWaaS is to protect all virtual machine instances within a network from threats and attacks originating outside the network. The threat could come from virtual machines that are part of another network in the same OpenStack cloud, or from some entity on the Internet attempting unauthorized access.
Let's now see how FWaaS is used in OpenStack. In FWaaS, a set of firewall rules is grouped into a firewall policy and then a firewall is created that implements one policy at a time. This firewall is then associated to a router.
A firewall rule can be created using the neutron firewall-rule-create command, as follows:
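A representative invocation is shown below; the rule name is illustrative:

    # Deny all ICMP traffic (rule name is illustrative)
    neutron firewall-rule-create --protocol icmp --action deny --name deny-icmp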

This rule blocks the ICMP protocol, so applications such as ping will be blocked by the firewall. The next step is to create a firewall policy. In real-world scenarios, security administrators will define several rules and consolidate them under a single policy. For example, all rules that block various types of traffic can be combined into a single policy. The command to create a firewall policy is as follows:
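A representative invocation is shown below, grouping the illustrative rule from the previous step into a policy named fw-policy1:

    # Consolidate one or more firewall rules into a policy
    neutron firewall-policy-create --firewall-rules "deny-icmp" fw-policy1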

The final step is to create a firewall and associate it with a router. The command to do this is as follows:
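A representative invocation is shown below; the firewall name is illustrative and the policy name matches the previous step:

    # Create a firewall that implements fw-policy1
    neutron firewall-create --name fw1 fw-policy1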

In the preceding command, we did not specify any routers; the default OpenStack behavior is to associate the firewall (and in turn the policy and rules) with all the routers available for that tenant. The neutron firewall-create command supports an option to pick a specific router as well.
Besides routing and firewall, OpenStack supports a few other commonly used networking technologies. Let's take a quick look at these without delving too deep into the respective commands.
Virtual machine instances created in OpenStack are used to run applications. Most applications are required to support redundancy and concurrent access. For example, a web server may be accessed by a large number of users at the same time. One of the common strategies to handle scale and redundancy is to load balance incoming requests. In this approach, a load balancer distributes incoming service requests across a pool of servers, which process the requests, thus providing higher throughput. If one of the servers in the pool fails, the load balancer removes it from the pool and subsequent service requests are distributed among the remaining servers. Users of the application use the IP address of the load balancer to access the application and are unaware of the pool of servers.
OpenStack implements load balancing using the HAProxy software and a Linux namespace.
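As a brief, illustrative sketch using the LBaaS v1 Neutron CLI (the pool and VIP names, member addresses, and subnet are placeholders), a pool of web servers can be created and exposed behind a single virtual IP as follows:

    # Create a pool, add two backend members, and expose them behind a VIP
    neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
      --protocol HTTP --subnet-id <subnet-uuid>
    neutron lb-member-create --address 10.10.1.11 --protocol-port 80 web-pool
    neutron lb-member-create --address 10.10.1.12 --protocol-port 80 web-pool
    neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
      --subnet-id <subnet-uuid> web-pool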
As mentioned previously, tenant isolation requires data traffic to be segregated and secured within an OpenStack cloud. However, there are times when external entities need to be part of the same network without removing the firewall-based security. This can be accomplished using a Virtual Private Network (VPN).
A VPN connects two endpoints on different networks over a public Internet connection, such that the endpoints appear to be directly connected to each other. VPNs also provide confidentiality and integrity of transmitted data.
Neutron provides a service plugin that enables OpenStack users to connect two networks using a VPN. The reference implementation of the VPN plugin in Neutron uses Openswan to create an IPSec-based VPN. IPSec is a suite of protocols that provides a secure connection between two endpoints by encrypting each IP packet transferred between them.
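As a rough, illustrative sketch of the VPNaaS workflow (all names are placeholders, and a matching configuration is required on the peer site as well), the basic steps are to define the IKE and IPSec policies and then create a VPN service on a router and subnet:

    # Define the IKE and IPSec policies, then create the VPN service
    neutron vpn-ikepolicy-create ikepolicy1
    neutron vpn-ipsecpolicy-create ipsecpolicy1
    neutron vpn-service-create --name vpn1 router1 subnet1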
So far in this chapter, we have seen the different networking capabilities provided by OpenStack. Let us now look at two capabilities in OpenStack that enable SDN to be leveraged effectively.
OpenStack, being an open source platform, bundles open source networking solutions as the default implementation for these networking capabilities. For example, routing is supported using namespaces, security using iptables, and load balancing using HAProxy.
Historically, these networking capabilities were implemented using customized hardware and software, most of them being proprietary solutions. These custom solutions are capable of much higher performance and are well-supported by their vendors. Hence they have a place in the OpenStack and SDN ecosystem.
From its initial releases, OpenStack has been designed for extensibility. Vendors can write their own extensions and then can easily configure OpenStack to use their extension instead of the default solutions. This allows operators to deploy the networking technology of their choice.
One of the most powerful capabilities of OpenStack is the extensive support for APIs. All OpenStack services interact with one another using well-defined RESTful APIs. This allows custom implementations and pluggable components to provide powerful enhancements for practical cloud implementation.
For example, when a network is created using the OpenStack web interface, a RESTful request is sent to the Horizon service. This, in turn, invokes a RESTful API call to validate the user using the Keystone service. Once validated, Horizon sends another RESTful API request to Neutron to actually create the network.
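As a simplified illustration, the same network-creation step can be performed directly against the Neutron REST API; the controller address, port, and token handling below are deployment-specific placeholders:

    # Create network1 via the Neutron v2.0 REST API
    curl -s -X POST http://controller:9696/v2.0/networks \
      -H "Content-Type: application/json" \
      -H "X-Auth-Token: $OS_TOKEN" \
      -d '{"network": {"name": "network1"}}'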
In the following chapter, we will see how these RESTful APIs provide support for crucial SDN capabilities in an OpenStack-based cloud.
As seen in this chapter, OpenStack supports a wide variety of networking functionality right out of the box. The importance of isolating tenant traffic and the need to allow customized solutions require OpenStack to support flexible configuration. We also highlighted some key aspects of OpenStack that will play a key role in deploying SDN in data centers, thereby supporting powerful cloud architectures and solutions.
The following chapter will introduce SDN and demonstrate how it solves some of the challenges with traditional networking. We will examine how OpenStack and SDN together provide a modern cloud networking solution.