Mastering Python Networking - Second Edition

By Eric Chou

About this book

Networks in your infrastructure set the foundation for how your application can be deployed, maintained, and serviced. Python is the ideal language for network engineers to explore tools that were previously available to systems engineers and application developers. In this second edition of Mastering Python Networking, you’ll embark on a Python-based journey to transition from a traditional network engineer to a network developer ready for the next generation of networks.

This book begins by reviewing the basics of Python and teaches you how Python can interact with both legacy and API-enabled network devices. As you make your way through the chapters, you will then learn to leverage high-level Python packages and frameworks to perform network engineering tasks for automation, monitoring, management, and enhanced security. In the concluding chapters, you will use Jenkins for continuous network integration as well as testing tools to verify your network.

By the end of this book, you will be able to perform all networking tasks with ease using Python.

Publication date:
August 2018


Review of TCP/IP Protocol Suite and Python

Welcome to the new age of network engineering. When I first started working as a network engineer 18 years ago, at the turn of the millennium, the role was distinctly different than other technical roles. Network engineers mainly possessed domain-specific knowledge to manage and operate local and wide area networks, while occasionally crossing over to systems administration, but there was no expectation to write code or understand programming concepts. This is no longer the case. Over the years, the DevOps and Software-Defined Networking (SDN) movement, among other factors, have significantly blurred the lines between network engineers, systems engineers, and developers.

The fact that you have picked up this book suggests that you might already be an adopter of network DevOps, or maybe you are considering going down that path. Maybe you have been working as a network engineer for years, just as I was, and want to know what the buzz around the Python programming language is about. Or you might already be fluent in Python but wonder what its applications are in network engineering. If you fall into any of these camps, or are simply curious about Python in the network engineering field, I believe this book is for you:

The intersection between Python and network engineering

Many books that dive into the topics of network engineering and Python have already been written. I do not intend to repeat their efforts with this book. Instead, this book assumes that you have some hands-on experience of managing networks, as well as a basic understanding of network protocols and the Python language. You do not need to be an expert in Python or network engineering, but should find that the concepts in this chapter form a general review. The rest of the chapter should set the level of expectation of the prior knowledge required to get the most out of this book. If you want to brush up on the contents of this chapter, there are lots of free or low-cost resources to bring you up to speed. I would recommend the free Khan Academy ( and the Python tutorials at:

This chapter will pay a very quick visit to the relevant networking topics. From my experience working in the field, a typical network engineer or developer might not remember the exact TCP state machine to accomplish their daily tasks (I know I don't), but they would be familiar with the basics of the OSI model, the TCP and UDP operations, different IP header fields, and other fundamental concepts.

We will also look at a high-level review of the Python language; just enough for those readers who do not code in Python on a daily basis to have ground to walk on for the rest of the book.

Specifically, we will cover the following topics:

  • An overview of the internet
  • The OSI and client-server model
  • TCP, UDP, and IP protocol suites
  • Python syntax, types, operators, and loops
  • Extending Python with functions, classes, and packages

Of course, the information presented in this chapter is not exhaustive; please do check out the references for further information.


An overview of the internet

What is the internet? This seemingly easy question might receive different answers depending on your background. The internet means different things to different people; the young, the old, students, teachers, business people, and poets could all give different answers to the question.

To a network engineer, the internet is a global computer network consisting of a web of inter-networks connecting large and small networks together. In other words, it is a network of networks without a centralized owner. Take your home network as an example. It might consist of a home Ethernet switch and a wireless access point connecting your smartphone, tablet, computers, and TV together for the devices to communicate with each other. This is your Local Area Network (LAN). When your home network needs to communicate with the outside world, it passes information from your LAN to a larger network, often appropriately named the Internet Service Provider (ISP). Your ISP often consists of edge nodes that aggregate the traffic to their core network. The core network's function is to interconnect these edge networks via a higher speed network. At special edge nodes, your ISP is connected to other ISPs to pass your traffic appropriately to your destination. The return path from your destination to your home computer, tablet, or smartphone may or may not follow the same path through all of these networks back to your device, while the source and destination remain the same.

Let's take a look at the components making up this web of networks.

Servers, hosts, and network components

Hosts are end nodes on the network that communicate with other nodes. In today's world, a host can be a traditional computer, or can be your smartphone, tablet, or TV. With the rise of the Internet of Things (IoT), the broad definition of a host can be expanded to include an IP camera, TV set-top boxes, and the ever-increasing types of sensors that we use in agriculture, farming, automobiles, and more. With the explosion of the number of hosts connected to the internet, all of them need to be addressed, routed, and managed. The demand for proper networking has never been greater.

Most of the time when we are on the internet, we make requests for services. This could be viewing a web page, sending or receiving emails, transferring files, and so on. These services are provided by servers. As the name implies, servers provide services to multiple nodes and generally have higher levels of hardware specification because of this. In a way, servers are special super-nodes on the network that provide additional capabilities to their peers. We will look at servers later on in the client-server model section.

If you think of servers and hosts as cities and towns, the network components are the roads and highways that connect them together. In fact, the term information superhighway comes to mind when describing the network components that transmit the ever increasing bits and bytes across the globe. In the OSI model that we will look at in a bit, these network components are layer one to three devices. They are layer two and three routers and switches that direct traffic, as well as layer one transports such as fiber optic cables, coaxial cables, twisted copper pairs, and some DWDM equipment, to name a few.

Collectively, hosts, servers, and network components make up the internet as we know it today.

The rise of data centers

In the last section, we looked at the different roles that servers, hosts, and network components play in the inter-network. Because of the higher hardware capacity that servers demand, they are often put together in a central location, so they can be managed more efficiently. We often refer to these locations as data centers.

Enterprise data centers

In a typical enterprise, the company generally has the need for internal tools such as emailing, document storage, sales tracking, ordering, HR tools, and a knowledge sharing intranet. These services translate into file and mail servers, database servers, and web servers. Unlike user computers, these are generally high-end computers that require a lot of power, cooling, and network connections. A byproduct of this hardware is also the amount of noise it makes. These servers are generally placed in a central location in the enterprise, called the Main Distribution Frame (MDF), to provide the necessary power feed, power redundancy, cooling, and network connectivity.

To connect to the MDF, the user's traffic is generally aggregated at a location closer to the user, sometimes called the Intermediate Distribution Frame (IDF), before being bundled up and connected to the MDF. It is not unusual for the IDF-MDF spread to follow the physical layout of the enterprise building or campus. For example, each building floor can consist of an IDF that aggregates to the MDF on another floor. If the enterprise consists of several buildings, further aggregation can be done by combining the buildings' traffic before connecting them to the enterprise data center.

Enterprise data centers generally follow a three-layer network design. These layers are access, distribution, and core. The access layer is analogous to the ports each user connects to, the IDF can be thought of as the distribution layer, while the core layer consists of the connection to the MDF and the enterprise data centers. This is, of course, a generalization of enterprise networks, as some of them will not follow the same model.

Cloud data centers

With the rise of cloud computing and software- or infrastructure-as-a-service, the data centers that cloud providers build operate at hyper-scale. Because of the number of servers they house, they generally demand a much, much higher capacity for power, cooling, and network speed and feed than any enterprise data center. Even after working on cloud provider data centers for many years, every time I visit one, I am still amazed at its scale. In fact, cloud data centers are so big and power-hungry that they are typically built close to power plants, where they can get the cheapest power rate without losing too much efficiency during the transportation of the power. Their cooling needs are so great that some providers are forced to be creative about where the data center is built, constructing it in a generally cold climate so they can simply open the doors and windows to keep the servers running at a safe temperature when needed. Any search engine can give you some of the astounding numbers when it comes to the science of building and managing cloud data centers for the likes of Amazon, Microsoft, Google, and Facebook:

Utah data center

At the cloud provider scale, the services that they need to provide are generally not cost efficient or feasible to house in a single server. They are spread between fleets of servers, sometimes across many different racks, to provide redundancy and flexibility for service owners. The latency and redundancy requirements put a tremendous amount of pressure on the network. The number of interconnections equates to an explosive growth of network equipment; this translates into the number of times this network equipment needs to be racked, provisioned, and managed. A typical network design would be a multi-stage Clos network:

Clos network

In a way, cloud data centers are where network automation becomes a necessity for speed and reliability. If we follow the traditional way of managing network devices via a Terminal and command-line interface, the number of engineering hours required would not allow the service to be available in a reasonable amount of time. This is not to mention that human repetition is error-prone, inefficient, and a terrible waste of engineering talent.

Cloud data centers are where I started my path of network automation with Python a number of years ago, and I've never looked back since.

Edge data centers

If we have sufficient computing power at the data center level, why keep anything anywhere else but at these data centers? All the connections from clients around the world can be routed back to the data center servers providing the service, and we can call it a day, right? The answer, of course, depends on the use case. The biggest limitation in routing the request and session all the way back from the client to a large data center is the latency introduced in the transport. In other words, large latency is where the network becomes a bottleneck. The latency number would never be zero: even as fast as light can travel in a vacuum, it still takes time for physical transportation. In the real world, latency would be much higher than light in a vacuum when the packet is traversing through multiple networks, and sometimes through an undersea cable, slow satellite links, 3G or 4G cellular links, or Wi-Fi connections.

The solution? Reduce the number of networks the end user traverses. Be as closely connected to the user as possible at the edge, where the user enters your network, and place enough resources at the edge location to serve the request. Let's take a minute and imagine that you are building the next generation of video streaming service. In order to increase customer satisfaction with smooth streaming, you would want to place the video server as close to the customer as possible, either inside or very near to the customer's ISP. Also, the upstream of the video server farm would be connected not just to one or two ISPs, but to all the ISPs it can reach, to reduce the hop count. All the connections would have as much bandwidth as needed to decrease latency during peak hours. This need gave rise to the edge data centers and peering exchanges of big ISPs and content providers. Even though the number of network devices there is not as high as in cloud data centers, they too can benefit from the increased reliability, security, and visibility that network automation brings.

We will cover security and visibility in later chapters of this book.


The OSI model

No network book is complete without first going over the Open System Interconnection (OSI) model. The model is a conceptual model that componentizes the telecommunication functions into different layers. The model defines seven layers, and each layer sits independently on top of another one, as long as they follow defined structures and characteristics. For example, the network layer, IP, can sit on top of different types of data link layer, such as Ethernet or Frame Relay. The OSI reference model is a good way to normalize different and diverse technologies into a common language that people can agree on. This greatly reduces the scope for parties working on individual layers and allows them to look at specific tasks in depth without worrying too much about compatibility:

OSI model

The OSI model was initially worked on in the late 1970s and was later published jointly by the International Organization for Standardization (ISO) and what's now known as the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T). It is widely accepted and commonly referred to when introducing a new topic in telecommunication.

Around the same time period as the OSI model's development, the internet was taking shape. The reference model the original designers used is often referred to as the TCP/IP model. The Transmission Control Protocol (TCP) and the Internet Protocol (IP) were the original protocol suites contained in the design. This is somewhat similar to the OSI model in the sense that it divides end-to-end data communication into abstraction layers. What is different is that the TCP/IP model combines layers 5 to 7 of the OSI model into its Application layer, while the OSI Physical and Data Link layers are combined into its Link layer:

Internet protocol suite

Both the OSI and TCP/IP models are useful for providing standards for end-to-end data communication. However, for the most part, we will refer to the TCP/IP model more, since that is what the internet was built on. We will specify the OSI model when needed, such as when we are discussing the web framework in upcoming chapters.


Client-server model

The reference models demonstrated a standard way for data to be communicated between two nodes. Of course, by now, we all know that not all nodes are created equal. Even in the ARPANET days, there were workstation nodes, and there were nodes whose purpose was to provide content to other nodes. These server nodes typically have higher hardware specifications and are managed more closely by engineers. Since these nodes provide resources and services to others, they are typically referred to as servers. Servers typically sit idle, waiting for clients to initiate requests for their resources. This model of distributed resources that are requested by clients is referred to as the client-server model.

Why is this important? If you think about it for a minute, the importance of networking is highlighted by this client-server model. Without it, there is really not a lot of need for network interconnections. It is the need to transfer bits and bytes from client to server that shines a light on the importance of network engineering. Of course, we are all aware of how the biggest network of them all, the internet, has been transforming the lives of all of us and continuing to do so.

How, you might ask, can each node determine the time, speed, source, and destination every time they need to talk to each other? This brings us to network protocols.


Network protocol suites

In the early days of computer networking, protocols were proprietary and closely controlled by the company that designed the connection method. If you were using Novell's IPX/SPX protocol in your hosts, you would not be able to communicate with Apple's AppleTalk hosts, and vice versa. These proprietary protocol suites generally have layers analogous to the OSI reference model and follow the client-server communication method. They generally work great in Local Area Networks (LANs) that are closed, without the need to communicate with the outside world. When traffic does need to move beyond the local LAN, an internetworking device, such as a router, is typically used to translate from one protocol to another. An example would be a router connecting an AppleTalk network to an IP-based network. The translation is usually not perfect, but since most communication happened within the LAN in the early days, it was acceptable.

However, as the need for inter-network communication rises beyond the LAN, the need for standardizing the network protocol suites becomes greater. The proprietary protocols eventually gave way to the standardized protocol suites of TCP, UDP, and IP, which greatly enhanced the ability of one network to talk to another. The internet, the greatest network of them all, relies on these protocols to function properly. In the next few sections, we will take a look at each of the protocol suites.

The transmission control protocol

The Transmission Control Protocol (TCP) is one of the main protocols used on the internet today. If you have opened a web page or sent an email, you have come across the TCP protocol. The protocol sits at layer 4 of the OSI model, and it is responsible for delivering data segments between two nodes in a reliable and error-checked manner. The TCP header is 160 bits (20 bytes) long, not counting options, and consists of, among other fields, source and destination ports, a sequence number, an acknowledgment number, control flags, and a checksum:

TCP header

Functions and characteristics of TCP

TCP uses stream sockets, addressed by ports, to establish host-to-host communication. The standards body, the Internet Assigned Numbers Authority (IANA), designates well-known ports to indicate certain services, such as port 80 for HTTP (web) and port 25 for SMTP (mail). The server in the client-server model typically listens on one of these well-known ports in order to receive communication requests from the client. The TCP connection is managed by the operating system via the socket, which represents the local endpoint of the connection.

The protocol operation consists of a state machine, which needs to keep track of when the connection is listening for an incoming request, when it is in an active communication session, and when it releases resources once the connection is closed. Each TCP connection goes through a series of states, such as LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT, and CLOSED.

TCP messages and data transfer

The biggest difference between TCP and the User Datagram Protocol (UDP), its close cousin on the same layer, is that TCP transmits data in an ordered and reliable fashion. Because the operation guarantees delivery, TCP is often referred to as a connection-oriented protocol. It does this by first establishing a three-way handshake to synchronize the sequence numbers between the transmitter and the receiver: SYN, SYN-ACK, and ACK.

The acknowledgment is used to keep track of subsequent segments in the conversation. Finally, at the end of the conversation, one side will send a FIN message, and the other side will ACK the FIN message as well as sending a FIN message of its own. The FIN initiator will then ACK the FIN message that it received.
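
Python's standard socket library makes these mechanics tangible. The following sketch is my own illustration, not an example from the book: it runs a tiny echo exchange over the loopback interface, where the three-way handshake happens inside connect() and accept(), and closing each socket triggers the FIN exchange. Binding to port 0 simply asks the operating system for any free port.

```python
import socket
import threading

def serve(server_sock):
    # accept() completes the three-way handshake for an incoming connection
    conn, _addr = server_sock.accept()
    data = conn.recv(1024)
    conn.sendall(data)              # echo the payload back to the client
    conn.close()                    # close() triggers the FIN exchange

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 0))       # port 0: let the OS pick a free port
server.listen(1)                    # the socket enters the LISTEN state
port = server.getsockname()[1]

t = threading.Thread(target=serve, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("", port)) # SYN, SYN-ACK, ACK
client.sendall(b"hello tcp")
reply = client.recv(1024)
client.close()
t.join()

print(reply)   # b'hello tcp'
```

The sequencing, acknowledgments, and retransmissions all happen silently in the kernel; the application only ever sees an ordered, reliable byte stream.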

As many of us who have troubleshot a TCP connection can tell you, the operation can get quite complex. One can certainly appreciate that, most of the time, the operation just happens silently in the background.

A whole book could be written about the TCP protocol; in fact, many excellent books have been written on the protocol.

As this section is a quick overview, if interested, The TCP/IP Guide ( is an excellent free resource that you can use to dig deeper into the subject.

User datagram protocol

The User Datagram Protocol (UDP) is also a core member of the internet protocol suite. Like TCP, it operates at layer 4 of the OSI model and is responsible for delivering data segments between the application and the IP layer. Unlike TCP, the header is only 64 bits, consisting only of source and destination ports, a length, and a checksum. The lightweight header makes it ideal for applications that prefer faster data delivery without setting up a session between two hosts or needing reliable data delivery. Perhaps it is hard to imagine with today's fast internet connections, but the extra header overhead made a big difference to the speed of transmission in the early days of X.21 and Frame Relay links. As important as the speed difference is, not having to maintain various connection states, as TCP does, also saves computing resources on the two endpoints:

UDP header

You might now wonder why UDP is used at all in the modern age; given the lack of reliable transmission, wouldn't we want all connections to be reliable and error-free? If you think about multimedia video streaming or Skype calling, those applications benefit from a lighter header when they just want to deliver the datagram as quickly as possible. You can also consider the fast DNS lookup process, which is based on the UDP protocol. When the address you type into the browser is translated into a computer-understandable address, the user benefits from a lightweight process, since this has to happen before even the first bit of information is delivered to you from your favorite website.
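
To see how lightweight UDP is in practice, here is a minimal sketch using only Python's standard socket library (my own illustration, not from the book): no handshake, no connection state, just an independent datagram fired at a receiver.

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("", 0))     # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connect() or handshake needed: each sendto() is a self-contained datagram
sender.sendto(b"fast and light", ("", port))

data, addr = receiver.recvfrom(1024)
print(data)   # b'fast and light'

sender.close()
receiver.close()
```

Over the loopback interface the datagram will arrive, but on a real network nothing in UDP itself guarantees delivery or ordering; that responsibility, if needed, falls to the application.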

Again, this section does not do justice to the topic of UDP, and you are encouraged to explore it through various resources if you are interested in learning more about UDP.

The internet protocol

As network engineers will tell you, they live at the Internet Protocol (IP) layer, which is layer 3 of the OSI model. Among other jobs, IP handles addressing and routing between end nodes. Addressing is probably IP's most important job. The address space is divided into two parts: the network portion and the host portion. The subnet mask indicates which part of the address is the network and which is the host by marking the network portion with 1s and the host portion with 0s. IPv4 expresses the address in dotted-decimal notation, while IPv6, its successor, uses colon-separated hexadecimal notation. The subnet mask can be written either in the same dotted notation or with a forward slash expressing the number of bits in the network portion (/24):

IPv4 header

The IPv6 header, the next generation of the IPv4 header, has a fixed portion and various extension headers:

IPv6 fixed header

The Next Header field in the fixed header section can indicate an extension header to follow, which carries additional information. The extension headers can include routing and fragment information. As much as protocol designers would like to move from IPv4 to IPv6, the internet today is still largely addressed with IPv4, with some service provider networks addressed with IPv6 internally.

The IP NAT and security

Network Address Translation (NAT) is typically used for translating a range of private IPv4 addresses into publicly routable IPv4 addresses. But it can also mean translation between IPv4 and IPv6, such as at a carrier edge, where IPv6 is used inside the network and needs to be translated to IPv4 when the packet leaves it. Sometimes, IPv6-to-IPv6 translation (NAT66) is used as well, for security reasons.

Security is a continuous process that integrates all aspects of networking, including automation and Python. This book aims to use Python to help you manage the network; security will be addressed as part of the following chapters, such as using SSHv2 instead of Telnet. We will also look at how we can use Python and other tools to gain visibility into the network.

IP routing concepts

In my opinion, IP routing is about having the intermediate devices between the two endpoints transmit the packets between them based on the IP header. For all communication via the internet, the packet will traverse various intermediate devices. As mentioned, the intermediate devices consist of routers, switches, optical gear, and various other equipment that does not examine beyond the network and transport layers. In a road trip analogy, you might travel in the United States from the city of San Diego in California to the city of Seattle in Washington. The IP source address is analogous to San Diego, and the destination IP address can be thought of as Seattle. On your road trip, you will stop by many different intermediate spots, such as Los Angeles, San Francisco, and Portland; these can be thought of as the routers and switches between the source and destination.

Why is this important? In a way, this book is about managing and optimizing these intermediate devices. In the age of mega data centers that span the size of multiple American football fields, the need for efficient, agile, reliable, and cost-effective ways to manage the network becomes a major point of competitive advantage for companies. In future chapters, we will dive into how we can use Python programming to effectively manage a network.


Python language overview

In a nutshell, this book is about making our network engineering lives easier with Python. But what is Python and why is it the language of choice of many DevOps engineers? In the words of the Python Foundation Executive Summary (

"Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level, built-in data structure, combined with dynamic typing and dynamic binding, makes it very attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect existing components together. Python's simple, easy-to-learn syntax emphasizes readability and therefore reduces the cost of program maintenance."

If you are somewhat new to programming, the object-oriented, dynamic semantics mentioned previously probably do not mean much to you. But I think we can all agree that for rapid application development, a simple, easy-to-learn syntax sounds like a good thing. Because Python is an interpreted language, there is no compilation step required, so the time to write, test, and edit Python programs is greatly reduced. For simple scripts, if your script fails, a print statement is usually all you need to debug what was going on. Using an interpreter also means that Python is easily ported to different operating systems, such as Windows and Linux, and a Python program written on one operating system can be used on another.

The object-oriented nature encourages code reuse by breaking a large program into simple reusable objects, as well as other reusable formats with functions, modules, and packages. In fact, all Python files are modules that can be reused or imported into another Python program. This makes it easy to share programs between engineers and encourages code reuse. Python also has a batteries included mantra, which means that for common tasks, you need not download any additional packages. In order to achieve this without the code being too bloated, a set of standard libraries is installed when you install the Python interpreter. For common tasks such as regular expressions, mathematical functions, and JSON decoding, all you need is the import statement, and the interpreter will make those functions available to your program. This is what I would consider one of the killer features of the Python language.

Lastly, the fact that Python code can start as a relatively small script with a few lines of code and grow into a full production system is very handy for network engineers. As many of us know, the network typically grows organically without a master plan. A language that can grow with your network in size is invaluable. You might be surprised to see that a language deemed a scripting language by many is being used for full production systems by many cutting-edge companies (organizations using Python,

If you have ever worked in an environment where you have to switch between working on different vendor platforms, such as Cisco IOS and Juniper Junos, you know how painful it is to switch between syntaxes and usage when trying to achieve the same task. With Python being flexible enough for large and small programs, there is no such context switching, because it is just Python.

For the rest of the chapter, we will take a high-level tour of the Python language for a bit of a refresher. If you are already familiar with the basics, feel free to quickly scan through it or skip the rest of the chapter.

Python versions

As many readers are already aware, Python has been going through a transition from Python 2 to Python 3 for the last few years. Python 3 was released back in 2008, over 10 years ago, and is under active development, with the most recent release being 3.7. Unfortunately, Python 3 is not backward compatible with Python 2. At the time of writing the second edition of this book, in the middle of 2018, the Python community has largely moved over to Python 3. The latest Python 2.x release, 2.7, was released back in mid-2010. Fortunately, both versions can coexist on the same machine. Personally, I use Python 2 as my default interpreter when I type python at the command prompt, and I use python3 when I need Python 3. More information about invoking the Python interpreter is given in the next section, but here is an example of invoking Python 2 and Python 3 on an Ubuntu Linux machine:

$ python
Python 2.7.12 (default, Dec 4 2017, 14:50:18)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
$ python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()

With the 2.7 release nearing its end of life, most Python frameworks now support Python 3. Python 3 also has many attractive features, such as asynchronous I/O, that we can take advantage of when we need to optimize our code. This book will use Python 3 for its code examples unless otherwise stated. We will also try to point out the differences between Python 2 and Python 3 when applicable.
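As a small, hedged sketch of the asynchronous I/O feature mentioned above (the coroutine name here is invented for illustration), a coroutine can be defined with async def and driven to completion with asyncio.run(), available since Python 3.7:

```python
import asyncio

# A coroutine that simulates a short I/O wait, then returns a value.
async def fetch_greeting():
    await asyncio.sleep(0.1)  # stand-in for real network I/O
    return "hello"

# asyncio.run() (Python 3.7+) starts the event loop and runs the
# coroutine to completion.
result = asyncio.run(fetch_greeting())
print(result)  # hello
```

We will see more substantial uses of asynchronous I/O in later chapters.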

If a particular library or framework is better suited for Python 2, such as Ansible (see the following information), it will be pointed out and we will use Python 2 instead.

At the time of writing, Ansible 2.5 and above have support for Python 3. Prior to 2.5, Python 3 support was considered a tech preview. Given the relatively new supportability, many of the community modules have yet to migrate to Python 3. For more information on Ansible and Python 3, please see the Ansible documentation.

Operating system

As mentioned, Python is cross-platform: Python programs can run on Windows, macOS, and Linux. In practice, certain care needs to be taken when you need to ensure cross-platform compatibility, such as handling the subtle difference between backslashes in Windows filenames and forward slashes elsewhere. Since this book is for DevOps, systems, and network engineers, Linux is the preferred platform for the intended audience, especially in production. The code in this book will be tested on an Ubuntu 16.04 LTS machine. I will also try my best to make sure the code runs the same on the Windows and macOS platforms.
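As one example of handling the filename-separator difference, os.path.join inserts the correct separator for whatever platform the script runs on (the filenames below are made up for illustration):

```python
import os.path

# os.path.join builds the path with the platform's own separator,
# so we never hard-code backslashes or forward slashes.
config_path = os.path.join("configs", "router1.cfg")
print(config_path)
```

On Linux and macOS this prints the path with forward slashes; on Windows it uses backslashes.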

If you are interested in the OS details, they are as follows:

$ uname -a
Linux packt-network-python 4.13.0-45-generic #50~16.04.1-Ubuntu SMP Wed May 30 11:18:27 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Running a Python program

Python programs are executed by an interpreter, which means the code is fed through this interpreter to be executed by the underlying operating system and results are displayed. There are several different implementations of the interpreter by the Python development community, such as IronPython and Jython. In this book, we will use the most common Python interpreter in use today, CPython. Whenever we mention Python in this book, we are referring to CPython unless otherwise indicated.

One way you can use Python is by taking advantage of the interactive prompt. This is useful when you want to quickly test a piece of Python code or a concept without writing a whole program. Start it by simply typing python (or python3) at the command line:

Python 3.5.2 (default, Nov 17 2016, 17:05:23)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for
more information.
>>> print("hello world")
hello world
In Python 3, the print statement is a function; therefore, it requires parentheses. In Python 2, you can omit the parentheses.

The interactive mode is one of Python's most useful features. In the interactive shell, you can type any valid statement or sequence of statements and immediately get a result back. I typically use this to explore a feature or library that I am not familiar with. Talk about instant gratification!

On Windows, if you do not get a Python shell prompt back, you might not have the program in your system search path. The latest Windows Python installation program provides a checkbox for adding Python to your system path; make sure that is checked. Or you can add the program to the path manually by going to Environment Settings.

A more common way to run a Python program, however, is to save your code in a file and run it through the interpreter afterward. This saves you from typing the same statements over and over again, as you would have to in the interactive shell. Python files are just regular text files, typically saved with the .py extension. In the *nix world, you can also add the shebang (#!) line at the top to specify the interpreter that will be used to run the file. The # character marks comments, which are not executed by the interpreter. The following file contains these statements:

# This is a comment
print("hello world")

This can be executed as follows:

$ python
hello world

Python built-in types

Python has several standard types built in to the interpreter:

  • None: The Null object
  • Numerics: int, long, float, complex, and bool (the subclass of int with a True or False value)
  • Sequences: str, list, tuple, and range
  • Mappings: dict
  • Sets: set and frozenset

The None type

The None type denotes an object with no value. The None type is returned by functions that do not explicitly return anything. The None type is also commonly used as a default function argument, so the function can detect that the caller did not pass in an actual value.
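Both uses can be sketched in a few lines (the function names and the port number below are made up for illustration):

```python
# A function with no explicit return statement gives back None.
def log_message(message):
    print(message)

result = log_message("interface up")
print(result is None)  # True

# None as a default argument value lets the function detect a
# missing argument and fall back to a default.
def connect(host, port=None):
    if port is None:  # the caller did not supply a port
        port = 22
    return (host, port)

print(connect("r1"))       # ('r1', 22)
print(connect("r1", 830))  # ('r1', 830)
```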

Numerics

Python numeric objects are basically numbers. With the exception of Booleans, the numeric types of int, long, float, and complex are all signed, meaning they can be positive or negative. A Boolean is a subclass of the integer, which can take one of two values: 1 for True and 0 for False. The rest of the numeric types are differentiated by how precisely they can represent a number; for example, in Python 2, int is a whole number with a limited range while long is a whole number with unlimited range (in Python 3, the two were merged into a single int type with unlimited range). Floats are numbers using the double-precision (64-bit) representation on the machine.
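The relationship between bool and int can be verified directly in the interpreter, as this short sketch shows:

```python
# bool is a subclass of int: True behaves as 1 and False as 0.
print(isinstance(True, int))  # True
print(True + True)            # 2

# Converting a float to an int truncates toward zero.
print(int(3.7))               # 3
```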

Sequences

Sequences are ordered sets of objects with an index of non-negative integers. In this and the next few sections, we will use the interactive interpreter to illustrate the different types. Please feel free to type along on your own computer.

Sometimes it surprises people that string is actually a sequence type. But if you look closely, strings are a series of characters put together. Strings are enclosed by either single, double, or triple quotes. Note in the following examples, the quotes have to match, and triple quotes allow the string to span different lines:

>>> a = "networking is fun"
>>> b = 'DevOps is fun too'
>>> c = """what about coding?
... super fun!"""

The other two commonly used sequence types are lists and tuples. Lists are sequences of arbitrary objects. Lists can be created by enclosing objects in square brackets. Just like strings, lists are indexed by non-negative integers starting at zero. The values of a list are retrieved by referencing the index number:

>>> vendors = ["Cisco", "Arista", "Juniper"]
>>> vendors[0]
'Cisco'
>>> vendors[1]
'Arista'
>>> vendors[2]
'Juniper'

Tuples are similar to lists, created by enclosing values in parentheses. Like lists, the values in the tuple are retrieved by referencing its index number. Unlike lists, the values cannot be modified after creation:

>>> datacenters = ("SJC1", "LAX1", "SFO1")
>>> datacenters[0]
'SJC1'
>>> datacenters[1]
'LAX1'
>>> datacenters[2]
'SFO1'
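The immutability of tuples can be demonstrated directly; attempting to assign to an element raises a TypeError:

```python
datacenters = ("SJC1", "LAX1", "SFO1")

# Tuples cannot be modified after creation, so assignment to an
# element raises TypeError.
try:
    datacenters[0] = "NYC1"
except TypeError:
    print("tuples cannot be modified after creation")
```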

Some operations are common to all sequence types, such as returning an element by index as well as slicing:

>>> a
'networking is fun'
>>> a[1]
'e'
>>> vendors
['Cisco', 'Arista', 'Juniper']
>>> vendors[1]
'Arista'
>>> datacenters
('SJC1', 'LAX1', 'SFO1')
>>> datacenters[1]
'LAX1'
>>> a[0:2]
'ne'
>>> vendors[0:2]
['Cisco', 'Arista']
>>> datacenters[0:2]
('SJC1', 'LAX1')
Remember that index starts at 0. Therefore, the index of 1 is actually the second element in the sequence.

There are also common functions that can be applied to sequence types, such as checking the number of elements and the minimum and maximum values:

>>> len(a)
17
>>> len(vendors)
3
>>> len(datacenters)
3
>>> b = [1, 2, 3, 4, 5]
>>> min(b)
1
>>> max(b)
5

It will come as no surprise that there are various methods that apply only to strings. It is worth noting that these methods do not modify the underlying string data itself and always return a new string. If you want to use the new value, you will need to catch the return value and assign it to a different variable:

>>> a
'networking is fun'
>>> a.capitalize()
'Networking is fun'
>>> a.upper()
'NETWORKING IS FUN'
>>> a
'networking is fun'
>>> b = a.upper()
>>> b
'NETWORKING IS FUN'
>>> a.split()
['networking', 'is', 'fun']
>>> a
'networking is fun'
>>> b = a.split()
>>> b
['networking', 'is', 'fun']

Here are some of the common methods for a list. The list is a very useful structure for putting multiple items together and iterating through them one at a time. For example, we can make a list of data center spine switches and apply the same access list to all of them by iterating through them one by one. Since a list's values can be modified after creation (unlike a tuple's), we can also expand and contract the existing list as the program moves along:

>>> routers = ['r1', 'r2', 'r3', 'r4', 'r5']
>>> routers.append('r6')
>>> routers
['r1', 'r2', 'r3', 'r4', 'r5', 'r6']
>>> routers.insert(2, 'r100')
>>> routers
['r1', 'r2', 'r100', 'r3', 'r4', 'r5', 'r6']
>>> routers.pop(1)
'r2'
>>> routers
['r1', 'r100', 'r3', 'r4', 'r5', 'r6']

Mappings

Python provides one mapping type, called the dictionary. The dictionary is what I think of as a poor man's database, because it contains objects that can be indexed by keys. This is often referred to as an associative array or hash table in other languages. If you have used any dictionary-like objects in other languages, you will know this is a powerful type, because you can refer to an object with a human-readable key. That key will make more sense for the poor person who has to maintain and troubleshoot the code, and that person could be you, only a few months after you wrote the code, troubleshooting at 2 a.m. A dictionary value can also be another data type, such as a list. You can create a dictionary with curly braces:

>>> datacenter1 = {'spines': ['r1', 'r2', 'r3', 'r4']}
>>> datacenter1['leafs'] = ['l1', 'l2', 'l3', 'l4']
>>> datacenter1
{'leafs': ['l1', 'l2', 'l3', 'l4'], 'spines': ['r1',
'r2', 'r3', 'r4']}
>>> datacenter1['spines']
['r1', 'r2', 'r3', 'r4']
>>> datacenter1['leafs']
['l1', 'l2', 'l3', 'l4']
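Because each value here is a list, a nested loop lets us walk every device under each role; this is a pattern we will use often (the device names are illustrative):

```python
datacenter1 = {'spines': ['r1', 'r2', 'r3', 'r4'],
               'leafs': ['l1', 'l2', 'l3', 'l4']}

# items() yields each (key, value) pair; the inner loop walks the
# list stored under that key.
for role, devices in datacenter1.items():
    for device in devices:
        print(role, device)
```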

Sets

A set is used to contain an unordered collection of objects. Unlike lists and tuples, sets are unordered and cannot be indexed by numbers. But there is one characteristic that makes sets stand out as useful: the elements of a set are never duplicated. Imagine you have a list of IPs that you need to put into an access list, but the list is full of duplicates. Think about how many lines of code you would need to loop through the list and pick out the unique items one at a time; the built-in set type lets you eliminate the duplicate entries with just one line of code. To be honest, I do not use sets that often, but when I need them, I am always very thankful they exist. Once a set or sets are created, they can be compared with each other using union, intersection, and difference:

>>> a = "hello"
>>> set(a)
{'h', 'l', 'o', 'e'}
>>> b = set([1, 1, 2, 2, 3, 3, 4, 4])
>>> b
{1, 2, 3, 4}
>>> b.add(5)
>>> b
{1, 2, 3, 4, 5}
>>> b.update(['a', 'a', 'b', 'b'])
>>> b
{1, 2, 3, 4, 5, 'b', 'a'}
>>> a = set([1, 2, 3, 4, 5])
>>> b = set([4, 5, 6, 7, 8])
>>> a.intersection(b)
{4, 5}
>>> a.union(b)
{1, 2, 3, 4, 5, 6, 7, 8}
>>> a.difference(b)
{1, 2, 3}
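The deduplication use case mentioned above really does take just one line; the IP addresses below are made up for illustration:

```python
# A list of IPs with duplicates; converting to a set drops them.
ips = ['10.0.0.1', '10.0.0.2', '10.0.0.1', '10.0.0.3', '10.0.0.2']
unique_ips = set(ips)
print(sorted(unique_ips))  # ['10.0.0.1', '10.0.0.2', '10.0.0.3']
```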

Python operators

Python has the numeric operators you would expect. Note that floor division (//, also known as truncating division) divides and rounds the result down to the nearest whole number, while true division (/) returns a floating-point result; the modulo (%) operator returns the remainder of the division:

>>> 1 + 2
3
>>> 2 - 1
1
>>> 1 * 5
5
>>> 5 / 1
5.0
>>> 5 // 2
2
>>> 5 % 2
1

There are also comparison operators. Note the double equals sign for comparison and a single equals sign for variable assignment:

>>> a = 1
>>> b = 2
>>> a == b
False
>>> a > b
False
>>> a < b
True
>>> a <= b
True

We can also use two of the common membership operators to see whether an object is in a sequence type:

>>> a = 'hello world'
>>> 'h' in a
True
>>> 'z' in a
False
>>> 'h' not in a
False
>>> 'z' not in a
True

Python control flow tools

The if, else, and elif statements control conditional code execution. As one would expect, the format of the conditional statement is as follows:

if expression:
    do something
elif expression:
    do something if this expression is true
else:
    do something if none of the expressions are true

Here is a simple example:

>>> a = 10
>>> if a > 1:
...     print("a is larger than 1")
... elif a < 1:
...     print("a is smaller than 1")
... else:
...     print("a is equal to 1")
...
a is larger than 1

The while loop will continue to execute until the condition is false, so make sure the condition can eventually become false; otherwise, the loop will spin forever (and possibly hang your process):

while expression:
    do something

>>> a = 10
>>> b = 1
>>> while b < a:
...     print(b)
...     b += 1
...
1
2
3
4
5
6
7
8
9

The for loop works with any object that supports iteration; this means all the built-in sequence types, such as lists, tuples, and strings, can be used in a for loop. The name between for and in is the iterating variable, so you can typically pick something that makes sense within the context of your code:

for i in sequence:
    do something

>>> a = [100, 200, 300, 400]
>>> for number in a:
...     print(number)
...
100
200
300
400

You can also make your own object that supports the iterator protocol, which lets you use the for loop on it directly. Constructing such an object is outside the scope of this chapter, but it is useful knowledge to have; you can read more about it in the official Python documentation.
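As a minimal sketch of what such an object looks like (the DeviceList name is invented for this example), a class only needs to implement __iter__ and __next__ to be usable in a for loop:

```python
# A minimal class implementing the iterator protocol.
class DeviceList:
    def __init__(self, devices):
        self._devices = devices
        self._index = 0

    def __iter__(self):
        # Returning self makes the object its own iterator.
        return self

    def __next__(self):
        # Raise StopIteration when the devices are exhausted.
        if self._index >= len(self._devices):
            raise StopIteration
        device = self._devices[self._index]
        self._index += 1
        return device

# The for loop calls iter() and next() for us.
for device in DeviceList(["r1", "r2", "r3"]):
    print(device)
```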

Python functions

Most of the time, when you find yourself copying and pasting pieces of code, you should break them up into self-contained functions. This practice allows for better modularity, easier maintenance, and code reuse. Python functions are defined using the def keyword, followed by the function name and the function parameters. The body of the function consists of the Python statements to be executed. At the end of the function, you can choose to return a value to the caller; if you do not specify a return value, the function returns the None object by default:

def name(parameter1, parameter2):
    return value

We will see a lot more examples of functions in the following chapters, so here is just a quick example:

>>> def subtract(a, b):
...     c = a - b
...     return c
...
>>> result = subtract(10, 5)
>>> result
5

Python classes

Python is an object-oriented programming (OOP) language. The way Python creates objects is with the class keyword. A Python object is most commonly a collection of functions (methods), variables, and attributes (properties). Once a class is defined, you can create instances of such a class. The class serves as a blueprint for subsequent instances.

The topic of OOP is outside the scope of this chapter, so here is a simple example of a router object definition:

>>> class router(object):
...     def __init__(self, name, interface_number, vendor):
...         self.name = name
...         self.interface_number = interface_number
...         self.vendor = vendor
...

Once defined, you are able to create as many instances of that class as you'd like:

>>> r1 = router("SFO1-R1", 64, "Cisco")
>>> r1.interface_number
64
>>> r1.vendor
'Cisco'
>>> r2 = router("LAX-R2", 32, "Juniper")
>>> r2.interface_number
32
>>> r2.vendor
'Juniper'

Of course, there is a lot more to Python objects and OOP. We will look at more examples in future chapters.

Python modules and packages

Any Python source file can be used as a module, and any functions and classes you define in that source file can be reused. To load the code, the file referencing the module needs to use the import keyword. Three things happen when the file is imported:

  1. The file creates a new namespace for the objects defined in the source file
  2. The caller executes all the code contained in the module
  3. The file creates a name within the caller that refers to the module being imported. The name matches the name of the module

Remember the subtract() function that you defined in the interactive shell? To reuse the function, we can put it into its own file:

def subtract(a, b):
c = a - b
return c

From the same directory as that file, you can start the Python interpreter and import the function:

Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for
more information.
>>> import subtract
>>> result = subtract.subtract(10, 5)
>>> result
5

This works because, by default, Python first searches the current directory for available modules. If you are in a different directory, you can manually add a search path location via the sys module's sys.path list. Remember the standard library that we mentioned a while back? You guessed it: those are just Python files being used as modules.
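A quick sketch of inspecting and extending the search path from within Python (the directory name below is made up for illustration):

```python
import sys

# sys.path is a plain list of directories that Python searches,
# in order, when resolving an import.
print(sys.path[0])

# Appending to the list adds a search location at runtime.
sys.path.append('/opt/my_modules')
print(sys.path[-1])
```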

Packages allow a collection of modules to be grouped together. This further organizes Python modules, providing more namespace protection and furthering reusability. A package is defined by creating a directory with the name you want to use as the namespace, then placing the module source file under that directory. In order for Python to recognize the directory as a package, just create an __init__.py file in it. Continuing with the subtract module example, if you were to create a directory called math_stuff, move the module file into it, and create an __init__.py file:

$ mkdir math_stuff
$ mv subtract.py math_stuff/
$ touch math_stuff/__init__.py
$ tree .
.
└── math_stuff
    ├── __init__.py
    └── subtract.py

To refer to the module now, you need to include the package name:

>>> from math_stuff.subtract import subtract
>>> result = subtract(10, 5)
>>> result
5

As you can see, modules and packages are great ways to organize large code files and make sharing Python code a lot easier.


Summary

In this chapter, we covered the OSI model and reviewed the TCP/IP protocol suite, including TCP, UDP, and IP. They work as the layers that handle addressing and communication negotiation between any two hosts. The protocols were designed with extensibility in mind and have remained largely unchanged from their original design. Considering the explosive growth of the internet, that is quite an accomplishment.

We also quickly reviewed the Python language, including built-in types, operators, control flows, functions, classes, modules, and packages. Python is a powerful, production-ready language that is also easy to read. This makes the language an ideal choice when it comes to network automation. Network engineers can leverage Python to start with simple scripts and gradually move on to other advanced features.

In Chapter 2, Low-Level Network Device Interactions, we will start to look at using Python to programmatically interact with network equipment.

About the Author

  • Eric Chou

    Eric Chou is a seasoned technologist with over 20 years of experience. He has worked on some of the largest networks in the industry while working at Amazon, Azure, and other Fortune 500 companies. Eric is passionate about network automation, Python, and helping companies build better security postures.

    In addition to being the author of Mastering Python Networking (Packt), he is also the co-author of Distributed Denial of Service (DDoS): Practical Detection and Defense, (O'Reilly Media).

    Eric is also the primary inventor for two U.S. patents in IP telephony. He shares his deep interest in technology through his books, classes, and blog, and contributes to some of the popular Python open source projects.

