The OpenFlow Controllers

Get hands-on with the platforms and development tools used to build OpenFlow network applications


SDN controllers

The decoupled control and data plane architecture of software-defined networking (SDN), as depicted in the following figure, and of OpenFlow in particular, can be compared with an operating system and computer hardware. The OpenFlow controller (similar to the operating system) provides a programmatic interface to the OpenFlow switches (similar to the computer hardware). Using this programmatic interface, network applications, referred to as Net Apps, can be written to perform control and management tasks and to offer new functionality. The control plane in SDN, and in OpenFlow in particular, is logically centralized, so Net Apps are written as if the network were a single system.

With a reactive control model, the OpenFlow switches must consult an OpenFlow controller each time a decision must be made, such as when a new packet flow reaches an OpenFlow switch (that is, on a Packet_in event). In the case of flow-based control granularity, there will be a small performance delay while the first packet of each new flow is forwarded to the controller for a decision (for example, forward or drop), after which traffic within that flow is forwarded at line rate by the switching hardware. While the first-packet delay is negligible in many cases, it may be a concern if the central OpenFlow controller is geographically remote or if most flows are short-lived (for example, single-packet flows). An alternative proactive approach is also possible in OpenFlow: policy rules are pushed out from the controller to the switches in advance.

While this simplifies the control, management, and policy enforcement tasks, the bindings between the controller and the OpenFlow switches must be closely maintained. The first important concern of this centralized control is the scalability of the system, and the second is the placement of controllers. A recent study of several OpenFlow controller implementations (NOX-MT, Maestro, and Beacon), conducted on a large emulated network with 100,000 hosts and up to 256 switches, revealed that all of them were able to handle at least 50,000 new flow requests per second in each of the experimental scenarios. Furthermore, new OpenFlow controllers under development, such as Mc-Nettle, target powerful multicore servers and are being designed to scale up to large data center workloads (for example, 20 million flow requests per second and up to 5,000 switches).

Traditionally, in packet-switched networks, each packet contains the information required for a network switch to make an individual routing decision. However, most applications send data as a flow of many individual packets, and the control granularity in OpenFlow is at the scale of flows, not packets. When controlling individual flows, the decision made for the first packet of a flow can be applied to all subsequent packets of that flow within the data plane (the OpenFlow switches). The overhead may be further reduced by grouping flows together, such as all traffic between two hosts, and performing control decisions on the aggregated flows.

The role of the controller in the SDN approach

Multiple controllers may be used to reduce latency or to increase the scalability and fault tolerance of an OpenFlow (SDN) deployment. OpenFlow allows the connection of multiple controllers to a switch, which lets backup controllers take over in the event of a failure. Onix and HyperFlow take the idea further by maintaining a logically centralized but physically distributed control plane. This decreases the lookup overhead by enabling communication with local controllers, while still allowing applications to be written with a simplified, central view of the network. The main potential downside of this approach is maintaining consistent state in the overall distributed system: Net Apps that believe they have an accurate view of the network may act incorrectly due to inconsistency in the global network state.

Recalling the operating system analogy, an OpenFlow controller acts as a network operating system and should implement at least two interfaces: a southbound interface that allows OpenFlow switches to communicate with the controller, and a northbound interface that presents a programmable application programming interface (API) to network control and management applications (that is, Net Apps). The OpenFlow protocol itself is an early implementation of such an SDN southbound interface. External control and management systems, software, or network services may wish to extract information about the underlying network, enforce policies, or control an aspect of network behavior. Besides, a primary OpenFlow controller may need to share policy information with a backup controller, or to communicate with other controllers across multiple control domains. While the southbound interface (for example, OpenFlow or ForCES) is well defined and can be considered a de facto standard, there is no widely accepted standard for northbound interactions, and they are more likely to be implemented on a use-case basis for particular applications.

Existing implementations

Currently, there are many different OpenFlow (and SDN) controller implementations available as part of existing open source projects. In this article, we limit ourselves to NOX, POX, NodeFlow, Floodlight (which was forked from Beacon), and OpenDaylight, to present some OpenFlow controllers and the different possibilities for choosing a programming language in which to develop network applications.


NOX and POX

NOX was the first OpenFlow controller. It is written in C++ and provides an API for Python too. It has been the basis for many research and development projects in the early exploration of the OpenFlow and SDN space. NOX has two separate lines of development:

  • NOX-Classic
  • NOX, also known as new NOX

The former is the well-known line of development, which contains support for Python and C++ along with a bunch of network applications. However, this line of development is deprecated and there is no plan for further development on NOX-Classic. New NOX supports only C++. It has fewer network applications than NOX-Classic, but it is much faster and has a much cleaner codebase.

POX is a Python-only version of NOX. It can be considered a general, open source OpenFlow controller written in Python, and a platform for rapid development and prototyping of network applications. The primary target of POX is research. Since many research projects are short-lived by nature, the POX developers focus on providing the right interfaces rather than on maintaining a stable API.

NOX and POX are managed in Git source code repositories on GitHub, and cloning the Git repository is the preferred way to get them. POX branches fall into two categories: active and released. Active branches are being actively developed. Released branches are branches that, at some point, were selected as being a new version. The most recently released branch may continue to get worked on, but only in the form of bug fixes; new features always go into the active branch. You can get the latest versions of NOX and POX with the following commands:

$ git clone
$ git clone

In this section, we start with a Net App that behaves as a simple Ethernet hub. You can change it into a learning Ethernet L2 switch as homework. In that application, the switch examines each packet and learns the source-port mapping: the source MAC address is associated with the input port. If the destination of a packet is already associated with some port, the packet is sent to that port; otherwise it is flooded on all ports of the switch. The first step is to start your OpenFlow VM. Then you need to download POX into your VM:
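Before touching POX itself, the learning logic just described can be sketched with a plain dictionary (illustrative names only; the real Net App uses the POX API shown later):

```python
mac_to_port = {}   # learned source MAC -> switch port
FLOOD = -1         # stand-in for "send out of all ports"

def handle_packet(src_mac, dst_mac, in_port):
    """Learn the source's port, then forward if the destination is known."""
    mac_to_port[src_mac] = in_port          # learn source-port mapping
    return mac_to_port.get(dst_mac, FLOOD)  # known port, or flood

# h1 (port 1) pings h2 (port 2): the first packet floods, the reply teaches
# the switch where h1 is, and the next packet from h1 goes straight to port 2.
assert handle_packet("00:00:00:00:00:01", "00:00:00:00:00:02", 1) == FLOOD
assert handle_packet("00:00:00:00:00:02", "00:00:00:00:00:01", 2) == 1
assert handle_packet("00:00:00:00:00:01", "00:00:00:00:00:02", 1) == 2
```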

$ git clone
$ cd pox

Running a POX application

After getting the POX controller, you can try running a basic hub example in POX as follows:

$ ./ log.level --DEBUG misc.of_tutorial

This command line tells POX to enable verbose logging and to start the of_tutorial component, which you will be using. This of_tutorial component acts as an Ethernet hub. Now you can start the Mininet OpenFlow laboratory using the following command line:

$ sudo mn --topo single,3 --mac --switch ovsk --controller remote

The switches may take a little time to connect. When an OpenFlow switch loses its connection to a controller, it will generally increase the period between attempts to contact the controller, up to a maximum of 15 seconds. This timer is implementation specific and can be defined by the user. Since the OpenFlow switch has not connected yet, this delay may be anything between 0 and 15 seconds. If this is too long to wait, the switch can be configured to wait no more than N seconds using the --max-backoff parameter. Wait until the application indicates that the OpenFlow switch has connected. When the switch connects, POX will print something like the following:
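As a sketch, the capped-doubling reconnect behavior looks like this (the exact schedule is implementation specific; the doubling rule and starting delay here are only an assumption):

```python
def backoff_schedule(max_backoff=15, attempts=6):
    """Delays (in seconds) between reconnect attempts: double each time,
    capped at max_backoff, mirroring --max-backoff."""
    delay, delays = 1, []
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, max_backoff)
    return delays

# With the default 15-second cap the delays settle at 15:
# [1, 2, 4, 8, 15, 15]
```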

INFO:openflow.of_01:[Con 1/1] Connected to 00-00-00-00-00-01
DEBUG:samples.of_tutorial:Controlling [Con 1/1]

The first line is from the portion of POX that handles OpenFlow connections. The second line is from the tutorial component itself.

Now, we verify that the hosts can ping each other, and that all the hosts see the exact same traffic: the behavior of a hub. To do this, we will create xterms for each host and view the traffic in each. In the Mininet console, start up three xterms:

mininet> xterm h1 h2 h3

Arrange each xterm so that they're all on the screen at once. This may require reducing the height to fit on a cramped laptop screen. In the xterms for h2 and h3, run tcpdump, a utility to print the packets seen by a host:

# tcpdump -XX -n -i h2-eth0

And respectively:

# tcpdump -XX -n -i h3-eth0

In the xterm for h1, issue a ping command:

# ping -c1

The ping packets are now going up to the controller, which then floods them out of all interfaces except the sending one. You should see identical ARP and ICMP packets corresponding to the ping in both xterms running tcpdump. This is how a hub works: it sends all packets to every port on the network. Now, see what happens when you ping a non-existent host that doesn't reply. From the h1 xterm:

# ping -c1

You should see three unanswered ARP requests in the tcpdump xterms. Later, if your controller code is broken, three unanswered ARP requests are a sign that you might be accidentally dropping packets. You can close the xterms now.

In order to change the behavior of the hub to a learning switch, you have to add the learning switch functionality. Go to your SSH terminal and stop the tutorial hub controller by pressing Ctrl + C. The file you will modify is in pox/misc/; open it in your favorite editor. The current code calls act_like_hub() from the handler for packet_in messages to implement the hub behavior. You will want to switch to using the act_like_switch() function, which contains a sketch of what your final learning switch code should look like. Each time you change and save this file, make sure to restart POX, then use pings to verify the behavior of the combination of switch and controller as a:

  1. Hub.
  2. Controller-based Ethernet learning switch.
  3. Flow-accelerated learning switch.

For 2 and 3, hosts that are not the destination for a ping should display no tcpdump traffic after the initial broadcast ARP request.

Python is a dynamic, interpreted language: there is no separate compilation step, just update your code and re-run it. Python has built-in hash tables, called dictionaries, and vectors, called lists. Some of the common operations that you need for the learning switch are as follows:

  • To initialize a dictionary:

    mactable = {}

  • To add an element to a dictionary:

    mactable[0x123] = 2

  • To check for dictionary membership:

    if 0x123 in mactable: print 'element 2 is in mactable'
    if 0x123 not in mactable: print 'element 2 is not in mactable'

  • To print a debug message in POX:

    log.debug('saw new MAC!')

  • To print an error message in POX:

    log.error('unexpected packet causing system meltdown!')

  • To print all member variables and functions of an object:

    print dir(object)

  • To comment a line of code:

    # Prepend comments with a #; no // or /**/
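Put together, these dictionary operations are all the bookkeeping a learning switch needs. The following stands alone outside POX (the MAC value and port numbers are illustrative):

```python
mactable = {}                  # MAC address -> switch port
mactable[0x123] = 2            # learn: MAC 0x123 was seen on port 2

if 0x123 in mactable:
    port = mactable[0x123]     # known destination -> send to one port
else:
    port = None                # unknown destination -> flood

# dict.get() does the lookup-with-default in a single step:
assert mactable.get(0x123) == 2
assert mactable.get(0x456) is None
```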

You can find more Python resources at the following URLs.

List of built-in functions in Python:

Official Python tutorial:

In addition to the previously mentioned functions, you also need some details about the POX APIs that are useful for developing the learning switch. There is also other documentation available in the appropriate section of POX's website.

Sending OpenFlow messages with POX:

connection.send( ... ) # send an OpenFlow message to a switch

When a connection to a switch starts, a ConnectionUp event is fired. The example code creates a new Tutorial object that holds a reference to the associated Connection object. This can later be used to send commands (OpenFlow messages) to the switch.

ofp_action_output class

This is an action for use with ofp_packet_out and ofp_flow_mod. It specifies a switch port that you wish to send the packet out of. It can also take various special port numbers. An example of this would be OFPP_FLOOD, which sends the packet out on all ports except the one the packet originally arrived on. The following example creates an output action that would send packets to all ports:

out_action = of.ofp_action_output(port = of.OFPP_FLOOD)

ofp_match class

Objects of this class describe packet header fields and an input port to match on. All fields are optional; items that are not specified are wildcarded and will match anything. Some notable fields of ofp_match objects are:

  • dl_src: The data link layer (MAC) source address
  • dl_dst: The data link layer (MAC) destination address
  • in_port: The packet input switch port

Example: Create a match that matches packets arriving on port 3:

match = of.ofp_match()
match.in_port = 3

ofp_packet_out OpenFlow message

The ofp_packet_out message instructs a switch to send a packet. The packet might be constructed at the controller, or it might be one that the switch received, buffered, and forwarded to the controller (and is now referenced by a buffer_id). Notable fields are:

  • buffer_id: The buffer_id of a buffer you wish to send. Do not set if you are sending a constructed packet.
  • data: Raw bytes you wish the switch to send. Do not set if you are sending a buffered packet.
  • actions: A list of actions to apply (for this tutorial, this is just a single ofp_action_output action).
  • in_port: The port number this packet initially arrived on, if you are sending by buffer_id, otherwise OFPP_NONE.

Example: The send_packet() method of the of_tutorial component:

def send_packet (self, buffer_id, raw_data, out_port, in_port):
    """
    Sends a packet out of the specified switch port.
    If buffer_id is a valid buffer on the switch, use that.
    Otherwise, send the raw data in raw_data.
    The "in_port" is the port number that packet arrived on.
    Use OFPP_NONE if you're generating this packet.
    """
    msg = of.ofp_packet_out()
    msg.in_port = in_port
    if buffer_id != -1 and buffer_id is not None:
        # We got a buffer ID from the switch; use that
        msg.buffer_id = buffer_id
    else:
        # No buffer ID from switch -- we got the raw data
        if raw_data is None:
            # No raw_data specified -- nothing to send!
            return = raw_data
    action = of.ofp_action_output(port = out_port)
    msg.actions.append(action)
    # Send message to switch

ofp_flow_mod OpenFlow message

This instructs a switch to install a flow table entry. Flow table entries match some fields of the incoming packets, and execute a list of actions on the matching packets. The actions are the same as for ofp_packet_out, mentioned previously (and again, for the tutorial all you need is the simple ofp_action_output action). The match is described by an ofp_match object. Notable fields are:

  • idle_timeout: Number of idle seconds before the flow entry is removed. Defaults to no idle timeout.
  • hard_timeout: Number of seconds before the flow entry is removed. Defaults to no timeout.
  • actions: A list of actions to be performed on matching packets (for example, ofp_action_output).
  • priority: When using non-exact (wildcarded) matches, this specifies the priority for overlapping matches. Higher values have higher priority. Not important for exact or non-overlapping entries.
  • buffer_id: The buffer_id field of a buffer to apply the actions to immediately. Leave unspecified for none.
  • in_port: If using a buffer_id, this is the associated input port.
  • match: An ofp_match object. By default, this matches everything, so you should probably set some of its fields.

Example: Create a flow_mod that sends packets arriving on port 3 out of port 4:

fm = of.ofp_flow_mod()
fm.match.in_port = 3
fm.actions.append(of.ofp_action_output(port = 4))

For more information about OpenFlow constants, see the main OpenFlow types/enums/structs file, openflow.h, in ~/openflow/include/openflow/openflow.h. You may also wish to consult POX's OpenFlow library in pox/openflow/ and, of course, the OpenFlow 1.0 specification.

The POX packet library is used to parse packets and make each protocol field available to Python. This library can also be used to construct packets for sending. The parsing libraries are present in pox/lib/packet/.

Each protocol has a corresponding parsing file. For the first exercise, you'll only need to access the Ethernet source and destination fields. To extract the source of a packet, use the dot notation:

packet.src
The Ethernet src and dst fields are stored as pox.lib.addresses.EthAddr objects. These can easily be converted to their common string representation (str(addr) will return something like "01:ea:be:02:05:01"), or created from their common string representation (EthAddr("01:ea:be:02:05:01")). To see all members of a parsed packet object:
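Outside POX, the same conversions between raw bytes and the colon-separated string form can be sketched with two small helpers (hypothetical functions, not the EthAddr implementation):

```python
def mac_to_str(raw: bytes) -> str:
    """Format six raw bytes as the usual colon-separated MAC string."""
    return ":".join("%02x" % b for b in raw)

def str_to_mac(s: str) -> bytes:
    """Parse '01:ea:be:02:05:01' back into six raw bytes."""
    return bytes(int(part, 16) for part in s.split(":"))

assert mac_to_str(b"\x01\xea\xbe\x02\x05\x01") == "01:ea:be:02:05:01"
assert str_to_mac("01:ea:be:02:05:01") == b"\x01\xea\xbe\x02\x05\x01"
```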

print dir(packet)

Here's what you'd see for an ARP packet:

['REQUEST', 'REV_REPLY', 'REV_REQUEST', '__class__', '__delattr__', '__dict__', '__doc__',
'__format__', '__getattribute__', '__hash__', '__init__', '__len__', '__module__',
'__new__', '__nonzero__', '__reduce__', '__reduce_ex__', '__repr__',
'__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__',
'_init', 'err', 'find', 'hdr', 'hwdst', 'hwlen', 'hwsrc', 'hwtype', 'msg', 'next',
'opcode', 'pack', 'parse', 'parsed', 'payload', 'pre_hdr', 'prev', 'protodst',
'protolen', 'protosrc', 'prototype', 'raw', 'set_payload', 'unpack', 'warn']

Many fields are common to all the Python objects and can be ignored, but this can be a quick way to avoid a trip to a function's documentation.


NodeFlow

NodeFlow, developed by Gary Berger, Technical Leader in the Office of the CTO at Cisco Systems, is a minimalist OpenFlow controller written in JavaScript for Node.js. Node.js is a server-side software system designed for writing scalable Internet applications (for example, HTTP servers). It can be considered a packaged compilation of Google's V8 JavaScript engine, the libuv platform abstraction layer, and a core library, which is itself written in JavaScript. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, and well suited to data-intensive, real-time applications that run across distributed devices. Programs are written on the server side in JavaScript, using event-driven, asynchronous I/O to minimize overhead and maximize scalability; unlike most JavaScript programs, they do not run in a web browser but as server-side JavaScript applications.

NodeFlow itself is a very simple program and relies heavily on a protocol interpreter called OFLIB-NODE, written by Zoltan Lajos Kis. NodeFlow is an experimental system available on GitHub, along with a fork of the OFLIB-NODE libraries. The beauty of NodeFlow is its simplicity: you can run and understand an OpenFlow controller in fewer than 500 lines of code. Leveraging JavaScript and the high-performance V8 engine allows network architects to experiment with various SDN features without having to deal with all of the boilerplate code required to set up event-driven programming.

The NodeFlow server (that is, the OpenFlow controller) is instantiated with a simple call to net.createServer. The address and listening port are configured through a start script:

NodeFlowServer.prototype.start = function(address, port) {
    var self = this
    var socket = []
    var server = net.createServer()
    server.listen(port, address, function(err, result) {
        util.log("NodeFlow listening on:" + address + '@' + port)
        self.emit('started', { "Config": server.address() })
    })

The next step is to create a unique session ID, with which the controller can keep track of each switch connection. The event listener maintains the socket. The main event-processing loop is invoked whenever data is received on the socket channel. The stream library is utilized to buffer the data and return the decoded OpenFlow messages in a msgs object, which is passed to the _processMessage function for further processing:

server.on('connection', function(socket) {
    socket.setNoDelay(noDelay = true)
    var sessionID = socket.remoteAddress + ":" + socket.remotePort
    sessions[sessionID] = new sessionKeeper(socket)
    util.log("Connection from : " + sessionID)
    socket.on('data', function(data) {
        var msgs = switchStream.process(data);
        msgs.forEach(function(msg) {
            if (msg.hasOwnProperty('message')) {
                self._processMessage(msg, sessionID)
            } else {
                util.log('Error: Cannot parse the message.')
                console.dir(data)
            }
        })

The last part is the event handlers. Node.js EventEmitters are utilized to trigger the callbacks. These event handlers wait for a specific event to happen and then trigger its processing. NodeFlow handles two specific events: OFPT_PACKET_IN, the main event to listen on for OpenFlow PACKET_IN messages, and SENDPACKET, which simply encodes and sends out OpenFlow messages:

self.on('OFPT_PACKET_IN', function(obj) {
    var packet = decode.decodeethernet(, 0)
    nfutils.do_l2_learning(obj, packet)
    self._forward_l2_packet(obj, packet)
})
self.on('SENDPACKET', function(obj) {
    nfutils.sendPacket(obj.type, obj.packet.outmessage, obj.packet.sessionID)
})
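For readers more comfortable with Python, the same register-and-dispatch pattern can be sketched as a toy emitter (an illustration of the idiom, not Node.js or POX code):

```python
class Emitter:
    """Minimal event emitter: register callbacks by name, fire them on emit."""
    def __init__(self):
        self.handlers = {}

    def on(self, event, callback):
        # Allow several listeners per event, like Node.js EventEmitters.
        self.handlers.setdefault(event, []).append(callback)

    def emit(self, event, payload):
        for callback in self.handlers.get(event, []):
            callback(payload)

seen = []
bus = Emitter()
bus.on("OFPT_PACKET_IN", lambda obj: seen.append(obj))
bus.emit("OFPT_PACKET_IN", {"dpid": 1})
# seen is now [{'dpid': 1}]
```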

A simple Net App based on NodeFlow is a learning switch (the following do_l2_learning function). The learning switch looks up the source MAC address and, if the address is not already in the learning table, inserts it together with its source port:

do_l2_learning: function(obj, packet) {
    self = this
    var dl_src = packet.shost
    var dl_dst = packet.dhost
    var in_port = obj.message.body.in_port
    var dpid = obj.dpid
    if (dl_src == 'ff:ff:ff:ff:ff:ff') {
        return
    }
    if (!l2table.hasOwnProperty(dpid)) {
        l2table[dpid] = new Object() // create object
    }
    if (l2table[dpid].hasOwnProperty(dl_src)) {
        var dst = l2table[dpid][dl_src]
        if (dst != in_port) {
            util.log("MAC has moved from " + dst + " to " + in_port)
        } else {
            return
        }
    } else {
        util.log("learned mac " + dl_src + " port : " + in_port)
        l2table[dpid][dl_src] = in_port
    }
    if (debug) {
        console.dir(l2table)
    }
}

The complete NodeFlow server is called server.js, which can be downloaded from the NodeFlow Git repository. To run the NodeFlow controller, pass server.js to the Node.js binary (for example, node.exe on Windows):

C:\Program Files\nodejs> node server.js


Floodlight

Floodlight is a Java-based OpenFlow controller, based on the Beacon implementation, that supports both physical and virtual OpenFlow switches. Beacon is a cross-platform, modular OpenFlow controller, also implemented in Java, which supports both event-based and threaded operation; it was created by David Erickson at Stanford University. While Beacon is licensed under GPL v2, Floodlight, which was forked from it, carries an Apache license. Floodlight has been redesigned without the OSGi framework, so it can be built, run, and modified without OSGi experience. Besides, Floodlight's community currently includes a number of developers at Big Switch Networks who are actively testing and fixing bugs, and building additional tools, plugins, and features for it. The Floodlight controller is intended to be a platform for a wide variety of network applications (Net Apps). Net Apps are important, since they provide solutions to real-world networking problems. Some of Floodlight's Net Apps are:

  • The Virtual Networking Filter identifies packets that enter the network but do not match an existing flow. The application determines whether the source and destination are on the same virtual network; if so, it signals the controller to continue the flow creation. This filter is in fact a simple layer 2 (MAC) based network virtualization, which enables users to create multiple logical layer 2 networks in a single layer 2 domain.
  • The Static Flow Pusher is used to create a flow in advance of the initial packet of that flow entering the network. It is exposed via Floodlight's REST API, which allows a user to manually insert flows into an OpenFlow network.
  • The Circuit Pusher creates a flow and provisions the switches along the path to the packet's destination. The bidirectional circuit between source and destination is installed as a permanent flow entry on all switches along the route between the two devices.
  • Firewall modules give devices on the software-defined network the same protection that traditional firewalls give on a physical network. Access Control List (ACL) rules control whether a flow should be set up to a specific destination. The Firewall application has been implemented as a Floodlight module that enforces ACL rules on OpenFlow-enabled switches in the network. Packet monitoring is done using packet-in messages.
  • Floodlight can be run as a network plugin for OpenStack using Neutron, which exposes a Networking-as-a-Service (NaaS) model via a REST API that Floodlight implements. This solution has two components: a VirtualNetworkFilter module in Floodlight (which implements the Neutron API) and the Neutron RestProxy plugin that connects Floodlight to Neutron. Once a Floodlight controller is integrated into OpenStack, network engineers can dynamically provision network resources alongside other virtual and physical compute resources, improving overall flexibility and performance.
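The per-flow ACL decision made by a firewall module can be sketched as a first-match rule lookup (the rule format and addresses here are assumptions for illustration, not Floodlight's API):

```python
# Each rule: (src, dst, allow); '*' is a wildcard. First match wins.
rules = [
    ("10.0.0.1", "10.0.0.2", True),   # allow h1 -> h2
    ("*",        "10.0.0.2", False),  # block everyone else reaching h2
    ("*",        "*",        True),   # default: allow
]

def flow_allowed(src, dst):
    """Return the decision of the first rule matching this flow."""
    for rule_src, rule_dst, allow in rules:
        if rule_src in ("*", src) and rule_dst in ("*", dst):
            return allow
    return False  # no rule matched: deny by default

assert flow_allowed("10.0.0.1", "10.0.0.2") is True
assert flow_allowed("10.0.0.3", "10.0.0.2") is False
assert flow_allowed("10.0.0.3", "10.0.0.4") is True
```

A controller applying this check at flow setup time only pays the cost once per flow; every later packet of an allowed flow is matched in the switch.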

For more details and tutorials, see the Floodlight OpenFlowHub page.


OpenDaylight

OpenDaylight is a Linux Foundation collaborative project in which a community has come together to fill the need for an open, reference framework for programmability and control through an open source SDN solution. It combines open community developers, open source code, and a project governance that guarantees an open, community-driven decision-making process on business and technical issues. OpenDaylight can be a core component within any SDN architecture. Building upon an open source SDN controller enables users to reduce operational complexity, extend the lifetime of their existing network infrastructure, and enable new services and capabilities only available with SDN. The mission statement of the OpenDaylight project reads: "OpenDaylight facilitates a community-led, industry-supported open source framework, including code and architecture, to accelerate and advance a common, robust Software-Defined Networking platform".

OpenDaylight is open to anyone. Anyone can develop and contribute code, get elected to the Technical Steering Committee (TSC), get voted onto the Board, or help steer the project forward in any number of ways. OpenDaylight is composed of numerous projects. Each project has contributors, committers, and one committer elected by their peers to be the Project Lead. The initial TSC and project leads are composed of the experts who developed the code originally contributed to the project. This ensures that the community gets access to the experts most familiar with the contributed code, who can ramp up and mentor new community participants.

Special controllers

In addition to the OpenFlow controllers that we introduced in this article, there are two special-purpose OpenFlow controllers: FlowVisor and RouteFlow. The former acts as a transparent proxy between OpenFlow switches and multiple OpenFlow controllers. It is able to create network slices and can delegate control of each slice to a different OpenFlow controller. FlowVisor also isolates these slices from each other by enforcing proper policies. RouteFlow, on the other hand, provides virtualized IP routing over OpenFlow-capable hardware. RouteFlow can be considered a network application on top of OpenFlow controllers. It is composed of an OpenFlow controller application, an independent server, and a virtual network environment that reproduces the connectivity of a physical infrastructure and runs the IP routing engines. The routing engines generate the forwarding information base (FIB) into the Linux IP tables according to the configured routing protocols (for example, OSPF and BGP).
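FlowVisor's slicing can be modeled as mapping each flow to the controller that owns its slice (a toy model; the slice names, VLAN ranges, and controller addresses are made up for illustration):

```python
# Each slice claims part of the flowspace; here, slices own VLAN ranges.
slices = {
    "research":   {"vlans": range(100, 200), "controller": ""},
    "production": {"vlans": range(1, 100),   "controller": ""},
}

def controller_for(vlan):
    """Decide which slice's controller should see traffic on this VLAN."""
    for name, spec in slices.items():
        if vlan in spec["vlans"]:
            return spec["controller"]
    return None  # not owned by any slice: the proxy drops it

assert controller_for(150) == ""
assert controller_for(10) == ""
assert controller_for(300) is None
```

The real FlowVisor additionally rewrites OpenFlow messages in both directions so that each controller sees only its own slice of the network.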


Summary

In this article, the overall functionality of OpenFlow (SDN) controllers was presented, and some of the existing implementations (NOX/POX, NodeFlow, Floodlight, and OpenDaylight) were explained in more detail. A learning Ethernet switch Net App based on the POX API was also presented.
