The Software-Defined Network Paradigm

In this article by Alok Shrivastwa and Sunil Sarat, the authors of the book Learning OpenStack, before we jump straight into the architecture, installation, and basic configuration of the Neutron service, we will spend some time looking at some of the problem statements that Software-Defined Networking (SDN) tries to solve and the different approaches it uses to solve them.

The biggest challenge that cloud and multitenanted environments have posed since their inception is in the realm of networking and security. We have dealt with multitenanted networks in the past, in the context of hosted datacenter service providers, and the answer to the requirement for separation was Virtual Local Area Networks (VLANs). These worked well, as they provided isolated segments, and inter-VLAN routing almost always passed through a firewall, providing security as well.

We have come a long way since the hosted datacenter model to this new cloud world. While the basic idea of sharing resources through virtualization is no different from the old paradigm, the way we implement it has changed drastically.

As an example, if you wanted to use a hosted datacenter service in the pre-cloud world, you would put in a request, and then someone in the hosting provider's network team would create a new VLAN for you, among other things.

In the cloud world, however, services are auto-provisioned, so we could potentially have many tenants requesting resources in a day. Also, each of them may request more than one VLAN to create, say, a three-tier architecture, not to mention the implications for security.

So now to the main question: does VLAN no longer cut it? The answer is subjective; however, let's take a look at the shortcomings of the VLAN technology:

  • We can have a maximum of 4094 VLANs. Remove some administrative and pre-assigned ones, and we are left with little over 4000 VLANs. This becomes a problem if we have, say, 500 customers in our cloud and each of them uses about 10 VLANs; we can very quickly run out of IDs (see the quick arithmetic after this list).
  • VLANs need to be configured on all the devices in the Layer 2 (switching) domain for them to work. If this is not done, connectivity will break.
  • When we use VLANs, we need to use the Spanning Tree Protocol (STP) for loop protection, and thereby we lose a lot of multipathing ability (most multipath capabilities exist at Layer 3 and above, not so much in the Layer 2 network).
  • VLANs are site-specific and are generally not extended between two datacenters. In the cloud world, where we don't care where our computing resources live, we would like to have access to the same networks, say for a disaster recovery (DR) kind of scenario.
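
As a quick sanity check, here is a minimal Python sketch of the arithmetic behind the first point; the tenant count and per-tenant VLAN usage are illustrative assumptions, not figures from any particular deployment:

    # The VLAN ID is a 12-bit field: 2**12 = 4096 values.
    # IDs 0 and 4095 are reserved, leaving 4094 usable VLANs;
    # administrative and pre-assigned ones shrink that further.
    usable_vlans = 2**12 - 2        # 4094

    tenants = 500                   # hypothetical cloud tenants
    vlans_per_tenant = 10           # e.g. several isolated tiers each

    demand = tenants * vlans_per_tenant
    print(f"Demand: {demand} VLANs, available: {usable_vlans}")
    print("Exhausted!" if demand > usable_vlans else "Still fits")
    # Demand: 5000 VLANs, available: 4094 -> Exhausted!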

One of the methods that can alleviate some of the aforementioned problems is the use of an overlay network. We will then also take a look at an upcoming technology that is used as well, called OpenFlow.

What is an overlay network?

An overlay network is a network running on top of another network. The different components or nodes in this kind of network are connected using virtual links rather than physical ones.

The following diagram shows the concept of overlay networking between three datacenters connected by an ISP:

An overlay network is created on top of a physical network, and the physical network determines the multipathing and failover behavior, as the overlay is only as good as the network below it.

An overlay network generally works by encapsulating the data in a format that the underlay network understands, and it may optionally also encrypt it.
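
To make the idea concrete, here is a deliberately simplified Python sketch; the 4-byte outer header is invented purely for illustration and corresponds to no real protocol. The point is that the underlay only ever sees the outer header, while the inner payload travels through opaque and unchanged:

    import struct

    def encapsulate(inner_frame: bytes, tunnel_id: int) -> bytes:
        """Prepend a toy 4-byte outer header; the underlay routes on this."""
        outer_header = struct.pack("!I", tunnel_id)
        return outer_header + inner_frame

    def decapsulate(packet: bytes) -> tuple[int, bytes]:
        """Strip the outer header at the far tunnel endpoint."""
        (tunnel_id,) = struct.unpack("!I", packet[:4])
        return tunnel_id, packet[4:]

    # The inner frame could carry any protocol; the underlay never parses it.
    inner = b"arbitrary L2/L3 payload"
    wire = encapsulate(inner, tunnel_id=42)
    assert decapsulate(wire) == (42, inner)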

Components of an overlay network

An overlay network normally contains the following components:

  • Physical connectivity and an underlay routable/switched network
  • Overlay endpoints

Overlay endpoints are the devices that need the connectivity. It should be noted that the overlay endpoint must be able to route/switch on its own, as the underlying network normally has no idea of the protocols that are being transported by the overlay.

Overlay technologies

There are several overlay technologies that help in creating overlay networks, some more complicated than others, but all ultimately solving similar problem statements.

Let's take a brief look at a few of them and understand some of their working concepts. We will not be deep-diving into how these are configured on the network layer or into any advanced concepts.

GRE

Generic Routing Encapsulation (GRE) is possibly one of the first overlay technologies that existed. It was created to provide a standard way of carrying arbitrary network-layer (Layer 3) protocols over an IP network.

If you remember, before TCP/IP became the de facto standard for networking, there were several Layer 3 protocols, such as IPX/SPX, Banyan VINES, and AppleTalk. Service providers could definitely not run all of these natively, as it would have become a networking nightmare; hence, GRE tunneling was used.

GRE encapsulates every Layer-3 payload you throw at it inside an IP packet, setting the destination to the address of the remote tunnel endpoint, and then sends it down the wire; the endpoint at the other end performs the opposite function and unwraps the IP packet. This way, the underlay network sees the packet as a regular IP packet and routes it accordingly, and the endpoints can happily speak another protocol if they choose to.
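
As a rough illustration, the following Python sketch builds the basic 4-byte GRE header from RFC 2784 (a flags/version word followed by a protocol-type EtherType) and wraps a payload with it; a real implementation would then place this GRE PDU inside an IP packet whose protocol field is 47:

    import struct

    GRE_PROTO_IPV4 = 0x0800   # EtherType identifying the inner payload
    IP_PROTO_GRE = 47         # value for the outer IP header's protocol field

    def gre_encapsulate(inner_packet: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
        """Basic GRE per RFC 2784: no checksum, version 0."""
        flags_and_version = 0x0000              # C=0, reserved=0, ver=0
        header = struct.pack("!HH", flags_and_version, proto)
        return header + inner_packet

    # Any Layer-3 packet (here a stand-in byte string) can ride inside.
    gre_pdu = gre_encapsulate(b"...an entire inner IP packet...")
    print(gre_pdu[:4].hex())    # 00000800 -> version 0, inner protocol IPv4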

A new form of GRE, called Multipoint GRE (mGRE), evolved in later years to help with multipoint tunnel requirements.

VXLAN

Virtual Extensible LAN (VXLAN) is an advancement of the VLAN technology itself. It brings in two major enhancements, described as follows:

  • Number of VXLANs possible: Theoretically, this has been beefed up to about 16 million VXLANs in a network, thereby giving ample room for growth; in practice, devices may support somewhere between 16,000 and 32,000 VXLANs (see the sketch after this list).
  • VXLAN tunnel endpoint (VTEP): VXLAN also introduces the VTEP, which can be used to create a Layer-2 overlay network on top of Layer-3 endpoints.
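
The 16 million figure falls out of the 24-bit VXLAN Network Identifier (VNI) carried in the 8-byte VXLAN header defined in RFC 7348. The following Python sketch packs that header; the example VNI is arbitrary:

    import struct

    VXLAN_UDP_PORT = 4789   # IANA-assigned UDP destination port for VXLAN

    def vxlan_header(vni: int) -> bytes:
        """8-byte VXLAN header per RFC 7348: flags, reserved bits, 24-bit VNI."""
        if not 0 <= vni < 2**24:
            raise ValueError("VNI must fit in 24 bits")
        word1 = 0x08 << 24      # I flag set (VNI valid) + 24 reserved bits
        word2 = vni << 8        # 24-bit VNI + 8 reserved bits
        return struct.pack("!II", word1, word2)

    print(2**24)                      # 16777216 -> the "16 million" figure
    print(vxlan_header(5001).hex())   # 0800000000138900 -> VNI 5001 (0x1389)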

Underlying network considerations

Overlay technologies are supported on most devices, and where they are not, support may be just a software update away. The requirement, as we have already seen, is a routable underlay network, and the impact is determined by the properties of that underlying network. Let's take some pointers from one of the most commonly used networks: TCP/IP over Ethernet.

The Maximum Transmission Unit (MTU) is the largest Protocol Data Unit (PDU) that can normally be transmitted at a given layer. We just need to bear in mind that the usable MTU decreases a little due to the encapsulation that is needed. In the case of Ethernet, the MTU is 1500 bytes.

When using an overlay, the effective MTU, and thereby the maximum segment size (MSS), is reduced. To alleviate the problem, you may choose to use jumbo frames (up to a 9000-byte MTU). If you are unable to use these, data transfer speeds will drop, either because less data is sent in each packet (if the MTU was adjusted) or because of fragmentation (if the MTU was not adjusted).
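
As a worked example, the sketch below computes the effective inner MTU behind a VXLAN tunnel. The 50-byte figure is the commonly cited VXLAN-over-IPv4 overhead, and the MSS calculation assumes inner IPv4 and TCP headers with no options:

    ETHERNET_MTU = 1500

    # VXLAN-over-IPv4 encapsulation overhead, header by header:
    overhead = 14 + 20 + 8 + 8   # outer Ethernet + IPv4 + UDP + VXLAN = 50

    inner_mtu = ETHERNET_MTU - overhead   # 1450 bytes left for the inner frame
    inner_mss = inner_mtu - 20 - 20       # minus inner IPv4 and TCP headers
    print(f"Inner MTU: {inner_mtu}, inner TCP MSS: {inner_mss}")
    # Inner MTU: 1450, inner TCP MSS: 1410

    # With jumbo frames, the same overhead becomes negligible:
    print(9000 - overhead)                # 8950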

OpenFlow

OpenFlow is a new technology that works on the principle of separating the control and data planes of the network. In traditional networks, both planes reside in the device; with OpenFlow, the data plane still resides in the switch, while the control plane is moved to a separate controller.

OpenFlow is considered the beginning of SDN itself, as we can now define the path that packets need to follow entirely in software, without having to change any configuration on the devices themselves. Since OpenFlow, several extensions have been added to the concept of SDN, largely to make it backward compatible.

The following diagram presents how forwarding is done when an OpenFlow system is in place:

The devices talk to the controller and forward the packet details to it. The controller then tells the device the data path to follow, after which the device caches this information and uses it to forward future packets.
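
The following toy Python sketch mimics this reactive pattern; it is purely conceptual and does not speak the actual OpenFlow wire protocol. On a flow-table miss, the switch asks the controller, caches the returned rule, and then handles subsequent packets of that flow locally:

    # A toy controller policy mapping a destination address to an output port.
    # (Illustrative only; a real controller speaks OpenFlow to the switches.)
    CONTROLLER_POLICY = {"10.0.0.2": 1, "10.0.0.3": 2}

    flow_table: dict[str, int] = {}   # the switch's local flow cache

    def controller_packet_in(dst: str) -> int:
        """The controller decides the data path for an unknown flow."""
        return CONTROLLER_POLICY.get(dst, 0)   # port 0 here means drop

    def switch_forward(dst: str) -> int:
        if dst not in flow_table:              # table miss: ask the controller
            flow_table[dst] = controller_packet_in(dst)
        return flow_table[dst]                 # cached for future packets

    print(switch_forward("10.0.0.2"))   # miss: consults the controller -> 1
    print(switch_forward("10.0.0.2"))   # hit: served from the flow table -> 1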

There are several OpenFlow controllers available in the market; some popular ones are as follows:

  • OpenDaylight
  • Floodlight
  • NEC OpenFlow controller
  • Arista controller
  • Brocade Vyatta controller

Underlying network considerations

The underlying network devices must be able to understand OpenFlow itself. While in most cases this may be a software upgrade, it could potentially also mean upgrading the network hardware if the devices don't already decouple the data and control planes.

Equipped with this information, let us look at the Neutron component of OpenStack.

Summary

In this article, you learned about the problem statements that Software-Defined Networking tries to solve and the shortcomings of VLANs in cloud and multitenanted environments. You then looked at overlay networks, their components, and overlay technologies such as GRE and VXLAN, along with underlying network considerations such as the MTU. Finally, you were briefly introduced to OpenFlow, which separates the control and data planes and is considered the beginning of SDN.
