Mastering Python Networking - Third Edition

By Eric Chou

About this book

Networks in your infrastructure set the foundation for how your applications can be deployed, maintained, and serviced. Python is the ideal language for network engineers to explore tools that were previously available to systems engineers and application developers. In Mastering Python Networking, Third Edition, you'll embark on a Python-based journey to transition from a traditional network engineer to a network developer ready for the next generation of networks.

This new edition is completely revised and updated to work with Python 3. In addition to new chapters on network data analysis with ELK stack (Elasticsearch, Logstash, Kibana, and Beats) and Azure Cloud Networking, it includes updates on using newer libraries such as pyATS and Nornir, as well as Ansible 2.8. Each chapter is updated with the latest libraries with working examples to ensure compatibility and understanding of the concepts.

Starting with a basic overview of Python, the book teaches you how it can interact with both legacy and API-enabled network devices. You will learn to leverage high-level Python packages and frameworks to perform network automation tasks, monitoring, management, and enhanced network security followed by Azure and AWS Cloud networking. Finally, you will use Jenkins for continuous integration as well as testing tools to verify your network.

Publication date: January 2020
Publisher: Packt
Pages: 576
ISBN: 9781839214677

 

Review of TCP/IP Protocol Suite and Python

Welcome to the new exciting age of network engineering! When I started working as a network engineer at the turn of the millennium, the role was distinctly different than the network engineering role of today. At the time, network engineers mainly possessed domain-specific knowledge to manage and operate local and wide area networks using the command line interface. While they might occasionally cross over the discipline wall to handle tasks normally associated with systems administration and developers, there was no explicit expectation for a network engineer to write code or understand programming concepts. This is no longer the case today.

Over the years, the DevOps and Software–Defined Networking (SDN) movement, among other factors, have significantly blurred the lines between network engineers, systems engineers, and developers.

The fact that you have picked up this book suggests that you might already be an adopter of network DevOps, or maybe you are considering going down the path of network programmability. Maybe you have been working as a network engineer for many years, just as I did, and want to know what the buzz around the Python programming language is all about. You might already be fluent in the Python programming language but wonder what its applications are in the network engineering field.

If you fall into any of these camps, or are simply curious about Python in the network engineering field, I believe this book is for you:

Figure 1: The intersection between Python and network engineering

There are many books that have already been written that dive into the topics of network engineering and Python separately. I do not intend to repeat their efforts with this book. Instead, this book assumes that you have some hands-on experience of managing networks, as well as a basic understanding of network protocols. It's helpful if you're already familiar with Python as a language, but we will cover some basics later in the chapter. You do not need to be an expert in Python or network engineering to get the most out of this book. This book intends to build on the basic foundations of network engineering and Python to help readers learn and practice various applications that can make their lives easier.

In this chapter, we will do a general review of some of the networking and Python concepts. The rest of the chapter should set the level of expectation of the prior knowledge required to get the most out of this book. If you want to brush up on the contents of this chapter, there are lots of free or low-cost resources to bring you up to speed. I would recommend the free Khan Academy (https://www.khanacademy.org/) and the Python tutorials at https://www.python.org/.

This chapter will pay a very quick visit to the relevant networking topics at a high level without going too much into the details. Judging from my experience of working in the field, a typical network engineer or developer might not remember the exact Transmission Control Protocol (TCP) state machine to accomplish their daily tasks (I know I don't), but they would be familiar with the basics of the Open Systems Interconnection (OSI) model, the TCP and User Datagram Protocol (UDP) operations, different IP header fields, and other fundamental concepts.

We will also look at a high-level review of the Python language; just enough for those readers who do not code in Python on a daily basis to have ground to walk on for the rest of the book.

Specifically, we will cover the following topics:

  • An overview of the internet
  • The OSI and client-server model
  • TCP, UDP, and IP protocol suites
  • Python syntax, types, operators, and loops
  • Extending Python with functions, classes, and packages

Of course, the information presented in this chapter is not exhaustive; please do check out the references for further information if required.

As network engineers, we are typically challenged by the scale and complexity of the networks we need to manage. They range from small home-based networks and medium-sized networks that keep a small business going, to large multinational enterprise networks that span the globe. The biggest network of them all is, of course, the Internet. Without the Internet, there would be no email, websites, APIs, streaming media, or cloud computing as we know it. Therefore, before we dive deeper into the specifics of protocols and Python, let us begin with an overview of the Internet.

 

An overview of the internet

What is the internet? This seemingly easy question might receive different answers depending on your background. The internet means different things to different people; the young, the old, students, teachers, business people, and poets could all give different answers to the same question.

To a network engineer, the internet is a global computer network consisting of a web of inter-networks connecting large and small networks together. In other words, it is a network of networks without a centralized owner. Take your home network as an example. It might consist of a device that integrates the functions of routing, Ethernet switching, and wireless access points connecting your smartphones, tablets, computers, and internet-enabled TVs together for the devices to communicate with each other. This is your local area network (LAN).

When your home network needs to communicate with the outside world, it passes information from your LAN to a larger network, often appropriately named the internet service provider (ISP). The ISP is typically thought of as a business that you pay to get online. They are able to do this by aggregating small networks into bigger networks that they maintain. Your ISP network often consists of edge nodes that aggregate the traffic to their core network. The core network's function is to interconnect these edge networks via a higher speed network.

At special edge nodes, your ISP is connected to other ISPs to pass your traffic appropriately to your destination. The return path from your destination to your home computer, tablet, or smartphone may or may not follow the same path through all of these networks back to your device, while the source and destination remain the same.

Let's take a look at the components making up this web of networks.

Servers, hosts, and network components

Hosts are end nodes on the network that communicate to other nodes. In today's world, a host can be a traditional computer, or it can be your smartphone, tablet, or TV. With the rise of the Internet of Things (IoT), the broad definition of a host can be expanded to include an Internet Protocol (IP) camera, TV set-top boxes, and the ever-increasing types of sensors that we use in agriculture, farming, automobiles, and more. With the explosion of the number of hosts connected to the internet, all of them need to be addressed, routed, and managed. The demand for proper networking has never been greater.

Most of the time when we are on the internet, we make requests for services. This could be viewing a web page, sending or receiving emails, transferring files, and so on. These services are provided by servers. As the name implies, servers provide services to multiple nodes and generally have higher levels of hardware specification. In a way, servers are special supernodes on the network that provide additional capabilities to their peers. We will look at servers later on in the client-server model section.

If you think of servers and hosts as cities and towns, the network components are the roads and highways that connect them together. In fact, the term information superhighway comes to mind when describing the network components that transmit the ever-increasing bits and bytes across the globe. In the OSI model that we will look at in a bit, these network components are layer one to three devices that sometimes venture into layer four as well. They are layer two and three routers and switches that direct traffic, as well as layer one transports such as fiber optic cables, coaxial cables, twisted copper pairs, and some Dense wavelength division multiplexing (DWDM) equipment, to name a few.

Collectively, hosts, servers, storage, and network components make up the internet as we know it today.

The rise of data centers

In the last section, we looked at the different roles that servers, hosts, and network components play in the inter-network. Because of the higher hardware capacity that servers demand, they are often put together in a central location to be managed more efficiently. We often refer to these locations as data centers.

Enterprise data centers

In a typical enterprise, the company generally has business needs for internal tools such as email, document storage, sales tracking, ordering, HR tools, and a knowledge-sharing intranet. These needs translate into file and mail servers, database servers, and web servers. Unlike user computers, these are generally high-end computers that require a lot of power, cooling, and network connections. A byproduct of the hardware is also the amount of noise it makes, which is not suitable for a normal work space. The servers are generally placed in a central location, called the main distribution frame (MDF), in the enterprise building to provide the necessary power feed, power redundancy, cooling, and network connectivity.

To connect to the MDF, the user's traffic is generally aggregated at a location closer to the user, sometimes called the intermediate distribution frame (IDF), before it is bundled up and connected to the MDF. It is not unusual for the IDF-MDF layout to follow the physical layout of the enterprise building or campus. For example, each building floor can consist of an IDF that aggregates to a centralized MDF on another floor in the same building. If the enterprise consists of several buildings, further aggregation can be done by combining the buildings' traffic before connecting them to the enterprise data center.

Enterprise data centers generally follow a three-layer network design consisting of an access layer, a distribution layer, and a core layer. Of course, as with any design, there are no hard rules and no one-size-fits-all model; the three-layer design is just a general guide. As an example, to overlay the three-layer design onto our user-IDF-MDF example earlier, the access layer is analogous to the ports each user connects to, the IDF can be thought of as the distribution layer, while the core layer consists of the connection to the MDF and the enterprise data centers. This is, of course, a generalization of enterprise networks, as some of them will not follow the same model.

Cloud data centers

With the rise of cloud computing and Software- or Infrastructure-as-a-Service (IaaS) offerings, the data centers that the cloud providers have built are enormous in scale, sometimes referred to as hyper-scale data centers. What we refer to as cloud computing is the on-demand availability of computing resources offered by the likes of Amazon, Microsoft, and Google, without the user having to directly manage the resources.

Because of the number of servers they need to house, cloud data centers generally demand a much, much higher capacity of power, cooling, and networking than any enterprise data center. Even after working on cloud provider data centers for many years, every time I visit one, I am still amazed at their scale. Just to give some examples of their sheer scale: cloud data centers are so big and power-hungry that they are typically built close to power plants, where they can get the cheapest power rate without losing too much efficiency in the transmission of the power. Their cooling needs are so high that some providers are forced to be creative about where the data center is built. Facebook, for example, has built its Lulea data center in northern Sweden (just 70 miles south of the Arctic Circle) in part to leverage the cold temperature for cooling. Any search engine can give you some of the astounding numbers when it comes to the science of building and managing cloud data centers for the likes of Amazon, Microsoft, Google, and Facebook. The Microsoft data center in West Des Moines, Iowa, for example, consists of 1.2 million square feet of facility on 200 acres of land and required the city to spend an estimated $65 million in public infrastructure upgrades.

Figure 2: Utah data center (source: https://en.wikipedia.org/wiki/Utah_Data_Center)

At the cloud provider scale, the services that they need to provide are generally not cost effective or feasible to be housed in a single server. The services are spread between a fleet of servers, sometimes across many different racks, to provide redundancy and flexibility for service owners.

The latency and redundancy requirements, as well as the physical spread of the servers, put a tremendous amount of pressure on the network. The number of interconnections required to connect the server fleets equates to an explosive growth of network equipment such as cables, switches, and routers. These requirements translate into the number of times this network equipment needs to be racked, provisioned, and managed. A typical network design would be a multi-staged Clos network:

Figure 3: Clos network

In a way, cloud data centers are where network automation becomes a necessity for speed, flexibility, and reliability. If we follow the traditional way of managing network devices via a Terminal and command line interface, the number of engineering hours required would not allow the service to be available in a reasonable amount of time. This is not to mention that human repetition is error-prone, inefficient, and a terrible waste of engineering talent. To add further complexity, there is often the need to quickly change some of the network configuration to accommodate rapidly changing business needs.

Personally, cloud data centers are where I started my path of network automation with Python a number of years ago, and I've never looked back since.

Edge data centers

If we have sufficient computing power at the data center level, why keep anything anywhere else but at these data centers? All the connections from clients around the world can be routed back to the data center servers providing the service, and we can call it a day, right? The answer is, of course, it depends on the use case. The biggest limitation in routing the request and session all the way back from the client to a large data center is the latency introduced in the transport. In other words, large latency is where the network becomes a bottleneck.

Of course, any elementary physics textbook can tell you that the network latency number will never be zero: even as fast as light can travel in a vacuum, it still takes time for physical transportation. In the real world, latency is much higher than that of light in a vacuum. Why? Because the network packet has to traverse multiple networks, sometimes through an undersea cable, slow satellite links, 3G or 4G cellular links, or Wi-Fi connections.
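
A quick back-of-the-envelope calculation in Python illustrates the point; the distance and propagation speed below are rough, assumed figures (light in fiber travels at roughly two-thirds of its speed in a vacuum):

>>> distance_km = 4000                # assumed: a rough coast-to-coast fiber run
>>> speed_in_fiber_km_per_s = 200000  # roughly 2/3 the speed of light in a vacuum
>>> round(distance_km / speed_in_fiber_km_per_s * 1000, 1)  # one-way time in ms
20.0

Even under these generous assumptions, a single one-way trip costs around 20 milliseconds before we account for any routers, queues, or last-mile links along the way.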

What if we need to reduce network latency? One solution is to reduce the number of networks the end user's requests traverse: connect to the end user as closely as possible at the edge, where the user enters your network, and place enough resources at that edge location to serve the request. This is especially common for serving media content such as music and videos.

Let's take a minute and imagine that you are building the next generation of video streaming service. In order to increase customer satisfaction with smooth streaming, you would want to place the video server as close to the customer as possible, either inside or very near to the customer's ISP. Also, for redundancy and connection speed, the upstream of the video server farm would not just be connected to one or two ISPs, but to all the ISPs it can reach, to reduce the hop count. All the connections would have as much bandwidth as needed to decrease latency during peak hours. This need gave rise to the peering exchanges and edge data centers of big ISPs and content providers. Even though the number of network devices is not as high as in cloud data centers, they too can benefit from network automation in terms of the increased reliability, flexibility, security, and visibility that network automation brings.

We will cover security (Chapter 6, Network Security with Python) and visibility (Chapter 7, Network Monitoring with Python – Part 1, and Chapter 8, Network Monitoring with Python – Part 2) in later chapters of this book. As with many complex subjects, networking manages its complexity by dividing the subject into smaller, digestible pieces; it is based on the concept of layers. Over the years, different networking models have been developed. We will take a look at two of the most important models in this book, starting with the OSI model.

 

The OSI model

No network book is complete without first going over the OSI model. The model is a conceptual model that componentizes the telecommunication functions into different layers. The model defines seven layers, and each layer sits independently on top of another one with defined structures and characteristics.

For example, in the network layer, IP is located on top of the different types of data link layers, such as Ethernet or frame relay. The OSI reference model is a good way to normalize different and diverse technologies into a set of common languages that people can agree on. This greatly reduces the scope for parties working on individual layers and allows them to look at specific tasks in depth without worrying too much about compatibility:

Figure 4: OSI model

The OSI model was initially worked on in the late 1970s and was later published jointly by the International Organization for Standardization (ISO) and what is now known as the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T). It is widely accepted and commonly referred to when introducing a new topic in telecommunication.

Around the same time period as the OSI model's development, the internet was taking shape. The reference model the original designers used is often referred to as the TCP/IP model. TCP and IP were the original protocols in the design. This is somewhat similar to the OSI model in the sense that it divides end-to-end data communication into abstraction layers.

What is different is that the TCP/IP model combines layers 5 to 7 of the OSI model into the application layer, while the physical and data link layers are combined into the link layer:

Figure 5: Internet protocol suite

Both the OSI and TCP/IP models are useful for providing standards for end-to-end data communication. However, for the most part, we will refer to the TCP/IP model more in this book, since that is what the internet was originally built on. We will refer to the OSI model when needed, such as when we are discussing web frameworks in upcoming chapters. Just as reference models describe the lower layers, there are also models that govern communication at the application level. In the modern network, the client-server model is what most applications are based on. We will take a look at the client-server model in the next section.

 

Client-server model

The client-server reference model describes a standard way for data to be communicated between two nodes. Of course, by now, we all know that not all nodes are created equal. Even in the earliest Advanced Research Projects Agency Network (ARPANET) days, there were workstation nodes, and there were server nodes whose purpose was to provide content to other nodes. These server nodes typically have higher hardware specifications and are managed more closely by engineers. Since these nodes provide resources and services to others, they are appropriately referred to as servers. Servers typically sit idle, waiting for clients to initiate requests for their resources. This model of distributed resources that are requested by clients is referred to as the client-server model.

Why is this important? If you think about it for a minute, the importance of networking is greatly highlighted by this client-server model. Without the need to transfer information between clients and servers, there is really not a lot of need for network interconnections. It is the need to transfer bits and bytes from client to server that shines a light on the importance of network engineering. Of course, we are all aware of how the biggest network of them all, the internet, has been transforming the lives of all of us and continues to do so.

You might be asking, how can each node determine the time, speed, source, and destination every time they need to talk to each other? This brings us to network protocols.

 

Network protocol suites

In the early days of computer networking, protocols were proprietary and closely controlled by the company that designed the connection method. If you were using Novell's IPX/SPX protocol on your hosts, those hosts would not be able to communicate with Apple's AppleTalk hosts, and vice versa. These proprietary protocol suites generally have layers analogous to the OSI reference model and follow the client-server communication method, but they are not compatible with each other. Proprietary protocols generally only work in LANs that are closed, without the need to communicate with the outside world. When traffic does need to move beyond the local LAN, an internetwork translation device, such as a router, is typically used to translate from one protocol to another. For example, in order to connect an AppleTalk-based network to the internet, a router would be used to connect and translate the AppleTalk protocol to an IP-based network. The additional translation is usually not perfect, but since most of the communication happened within the LAN in the early days, it was acceptable to network administrators.

However, as the need for inter-network communication grew beyond the LAN, the need for standardizing network protocol suites became greater. The proprietary protocols eventually gave way to the standardized protocol suites of TCP, UDP, and IP, which greatly enhanced the ability of one network to talk to another. The internet, the greatest network of them all, relies on these protocols to function properly. In the next few sections, we will take a look at each of these protocol suites.

The transmission control protocol

TCP is one of the main protocols used on the internet today. If you have opened a web page or sent an email, you have come across the TCP protocol. The protocol sits at layer 4 of the OSI model, and it is responsible for delivering data segments between two nodes in a reliable and error-checked manner. The TCP header is 160 bits (20 bytes) long and contains, among other fields, source and destination ports, a sequence number, an acknowledgment number, control flags, and a checksum:

Figure 6: TCP header

Functions and characteristics of TCP

TCP uses network sockets, identified by port numbers, to establish host-to-host communication. The standards body called the Internet Assigned Numbers Authority (IANA) designates well-known ports to indicate certain services, such as port 80 for HTTP (web) and port 25 for SMTP (mail). The server in the client-server model typically listens on one of these well-known ports in order to receive communication requests from the client. The TCP connection is managed by the operating system via the socket, which represents the local endpoint of the connection.

The protocol operation consists of a state machine, where the machine needs to keep track of whether it is listening for an incoming connection or in the middle of a communication session, and to release resources once the connection is closed. Each TCP connection goes through a series of states such as LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT, and CLOSED.
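
To make this a little more concrete, here is a minimal sketch (not one of the book's examples) that uses Python's standard socket module to open a TCP connection to a well-known port; the hostname is only an illustration:

import socket

# Create a TCP (stream) socket and connect to a web server on well-known port 80.
# The hostname below is just an illustration; any reachable HTTP server would do.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('www.python.org', 80))

# The operating system now tracks the connection state (ESTABLISHED) on our behalf.
print(s.getpeername())  # remote IP address and port of the connection

s.close()  # tear the connection down; it will eventually reach the CLOSED state

Notice that all of the handshake and state tracking happens inside the operating system; our code only asks for a connection and closes it.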

TCP messages and data transfer

The biggest difference between TCP and UDP, its close cousin at the same layer, is that TCP transmits data in an ordered and reliable fashion. The fact that the operation guarantees delivery is why TCP is often referred to as a connection-oriented protocol. It does this by first establishing a three-way handshake to synchronize the sequence numbers between the transmitter and the receiver: SYN, SYN-ACK, and ACK.

The acknowledgment is used to keep track of subsequent segments in the conversation. Finally, at the end of the conversation, one side will send a FIN message, and the other side will ACK the FIN message as well as sending a FIN message of its own. The FIN initiator will then ACK the FIN message that it received.

As many of us who have troubleshot a TCP connection can tell you, the operation can get quite complex. One can certainly appreciate that, most of the time, the operation just happens silently in the background.

A whole book could be written about the TCP protocol; in fact, many excellent books have been written on the protocol.

As this section is a quick overview, if interested, The TCP/IP Guide (http://www.tcpipguide.com/) is an excellent free resource that you can use to dig deeper into the subject.

The user datagram protocol

UDP is also a core member of the internet protocol suite. Like TCP, it operates at layer 4 of the OSI model and is responsible for delivering data segments between the application and the IP layer. Unlike TCP, the header is only 64 bits long and consists only of a source port, a destination port, a length, and a checksum. The lightweight header makes it ideal for applications that prefer faster data delivery without setting up a session between two hosts or needing reliable data delivery. Perhaps it's hard to imagine with today's fast internet connections, but the extra header size made a big difference to the speed of transmission in the early days of X.21 and frame relay links.

Besides the speed difference, not having to maintain the various connection states that TCP does also saves computing resources on the two endpoints:

Figure 7: UDP header

You might now wonder why UDP is used at all in the modern age; given the lack of reliable transmission, wouldn't we want all connections to be reliable and error-free? If you think about multimedia video streaming or Skype calls, those applications benefit from a lighter header when the application just wants to deliver the datagram as quickly as possible. You can also consider the fast Domain Name System (DNS) lookup process, which is based on UDP, where the trade-off between accuracy and latency usually tips to the side of low latency.

When the address you type into the browser is translated into a computer-understandable address, the user benefits from this lightweight process, since the lookup has to happen before even the first bit of information is delivered to you from your favorite website.
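
As a contrast to the TCP sketch earlier, here is a minimal, hedged example of sending a UDP datagram with the standard socket module; the destination address is a placeholder from the documentation range:

import socket

# UDP is connectionless: we create a datagram socket and send, with no handshake
# and no delivery guarantee. The address below (TEST-NET-1) is only a placeholder.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(b'hello', ('192.0.2.10', 9999))
s.close()

The entire exchange is a single datagram; if the receiver is not listening, nothing tells the sender about it.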

Again, this section does not do justice to the topic of UDP, and you are encouraged to explore the topic through various resources if you are interested in learning more about UDP.

The Wikipedia article on UDP, https://en.wikipedia.org/wiki/User_Datagram_Protocol, is a good starting point to learn more about UDP.

The internet protocol

As network engineers will tell you, we live at the IP layer, which is layer 3 of the OSI model. IP has the job of addressing and routing between end nodes, among other things. The addressing of IP is probably its most important job. The address space is divided into two parts: the network portion and the host portion. The subnet mask indicates which part of the address is the network and which part is the host, by marking the network bits with a 1 and the host bits with a 0. IPv4 expresses the address in dotted-decimal notation, for example, 192.168.0.1.

The subnet mask can be written either in dotted notation (255.255.255.0) or with a forward slash expressing the number of bits in the network portion (/24):

Figure 8: IPv4 header

The IPv6 header, the next generation of the IP header succeeding IPv4, has a fixed portion and various extension headers:

Figure 9: IPv6 header

The Next Header field in the IPv6 fixed header can indicate an extension header that follows and carries additional information, or it can identify the upper-layer protocol, such as TCP or UDP. The extension headers can include routing and fragmentation information. As much as the protocol designers would like to move from IPv4 to IPv6, the internet today is still largely addressed with IPv4, with some service provider networks addressed with IPv6 internally.
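
Python's standard library can help us reason about these addressing concepts. Here is a brief, illustrative session using the built-in ipaddress module, simply to show the network and host split discussed above:

>>> import ipaddress
>>> net = ipaddress.ip_network('192.168.0.0/24')
>>> net.netmask
IPv4Address('255.255.255.0')
>>> net.prefixlen
24
>>> ipaddress.ip_address('2001:db8::1').version
6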

IP network address translation (NAT) and network security

NAT is typically used to translate a range of private IPv4 addresses into publicly routable IPv4 addresses. It can also mean translation between IPv4 and IPv6, such as at a carrier edge where IPv6 is used inside the network and needs to be translated to IPv4 when the packet leaves the network. Sometimes, IPv6-to-IPv6 translation (NAT66) is used as well for security reasons.

Security is a continuous process that integrates all aspects of networking, including automation and Python. This book aims to use Python to help you manage the network; security will be addressed as part of the following chapters of the book, such as using Python to implement access lists, search for breaches in logs, and so on. We will also look at how we can use Python and other tools to gain visibility into the network, such as dynamically generating a graphical network topology based on network device information.

IP routing concepts

IP routing is about having the intermediate devices between the two endpoints transmit the packets between them based on the IP header. For all communication via the internet, the packet will traverse various intermediate devices. As mentioned, the intermediate devices consist of routers, switches, optical gear, and various other devices that do not examine beyond the network and transport layers. In a road trip analogy, you might travel in the United States from the city of San Diego in California to the city of Seattle in Washington. The IP source address is analogous to San Diego and the destination IP address can be thought of as Seattle. On your road trip, you will stop by many different intermediate spots, such as Los Angeles, San Francisco, and Portland; these can be thought of as the intermediary routers and switches between the source and destination.

Why is this important? In a way, this book is about managing and optimizing these intermediate devices. In the age of mega data centers that span the size of multiple American football fields, the need for efficient, agile, reliable, and cost-effective ways to manage the network becomes a major point of competitive advantage for companies. In future chapters, we will dive into how we can use Python programming to effectively manage a network.

Now that we've taken a look at network reference models and protocol suites, we're ready to dive into the Python language itself. In this chapter, we'll begin with a broad overview of Python.

 

Python language overview

In a nutshell, this book is about making our network engineering lives easier with Python. But what is Python and why is it the language of choice of many DevOps engineers? In the words of the Python Foundation Executive Summary (https://www.python.org/doc/essays/blurb/):

"Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level, built-in data structure, combined with dynamic typing and dynamic binding, makes it very attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect existing components together. Python's simple, easy-to-learn syntax emphasizes readability and therefore reduces the cost of program maintenance."

If you are somewhat new to programming, the object-oriented, dynamic semantics mentioned previously probably do not mean much to you. But I think we can all agree that rapid application development and simple, easy-to-learn syntax sound like good things. Because Python is an interpreted language, there is little to no compilation process required before execution, so the time to write, test, and edit Python programs is greatly reduced. For simple scripts, if your script fails, a print statement is usually all you need to debug what is going on.

Using the interpreter also means that Python is easily ported to different types of operating systems, such as Windows and Linux, and a Python program written on one operating system can be used on another with little or no change.

Functions, modules, and packages encourage code reuse by breaking a large program into simple reusable pieces. The object-oriented nature of Python takes this one step further by grouping the components into objects. In fact, every Python file is a module that can be reused or imported into another Python program. This makes it easy to share programs between engineers and encourages code reuse. Python also has a batteries included mantra, which means that for common tasks, you need not download any additional packages outside of the Python language itself. In order to achieve this goal without the code being too bloated, a set of Python modules, also known as the standard library, is installed when you install the Python interpreter. For common tasks such as regular expressions, mathematical functions, and JSON decoding, all you need is the import statement, and the interpreter will make those functions available to your program. This batteries included mantra is what I would consider one of the killer features of the Python language.
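
To illustrate the batteries included point, here is a small, hedged example using two of the standard library modules mentioned above (JSON decoding and regular expressions), with no extra installation required:

>>> import json
>>> json.loads('{"hostname": "r1", "vendor": "Cisco"}')
{'hostname': 'r1', 'vendor': 'Cisco'}
>>> import re
>>> re.findall(r'\d+', 'GigabitEthernet0/0/1')
['0', '0', '1']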

Lastly, the fact that Python code can start as a relatively small script with a few lines of code and grow into a full production system is very handy for network engineers. As many of us know, the network typically grows organically without a master plan. A language that can grow with your network in size is invaluable. You might be surprised to see that a language deemed a scripting language by many is used for full production systems by many cutting-edge companies (organizations using Python: https://wiki.python.org/moin/OrganizationsUsingPython).

If you have ever worked in an environment where you have to switch between working on different vendor platforms, such as Cisco IOS and Juniper Junos, you know how painful it is to switch between syntaxes and usage when trying to achieve the same task. Since Python is flexible enough for both small and large programs, there is no such dramatic context switching. It is just the same Python code from small to large!

For the rest of the chapter, we will take a high-level tour of the Python language for a bit of a refresher. If you are already familiar with the basics, feel free to quickly scan through it or skip the rest of the chapter.

Python versions

As many readers are already aware, Python has been going through a transition from Python 2 to Python 3 for the last few years. Python 3 was released back in 2008, over 10 years ago, and has been actively developed since, with the most recent release being 3.7. Unfortunately, Python 3 is not backward compatible with Python 2.

At the time of writing the third edition of this book, in late 2019, the Python community has largely moved over to Python 3. In fact, Python 2 will officially reach end-of-life on January 1st, 2020 (https://pythonclock.org/). The latest Python 2.x release, 2.7, came out in mid-2010, almost a decade ago. Fortunately, both versions can coexist on the same machine. Given that Python 2 will be end-of-life and unmaintained by the time you read this, we should all switch to Python 3. More information is given in the next section about invoking the Python interpreter, but here is an example of invoking Python 2 and Python 3 on an Ubuntu Linux machine:

$ python2
Python 2.7.15+ (default, Jul 9 2019, 16:51:35)
[GCC 7.4.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
$ python3.7
Python 3.7.4 (default, Sep 2 2019, 20:47:34)
[GCC 7.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()

With the 2.7 release being end-of-life, most Python frameworks now support Python 3. Python 3 also has lots of good features, such as asynchronous I/O that can be taken advantage of when we need to optimize our code. This book will use Python 3 for its code examples unless otherwise stated. We will also try to point out the Python 2 and Python 3 differences when applicable.

If a particular library or framework is better suited for Python 2, such as Ansible (see the following information), it will be pointed out, and we will use Python 2 instead. We should aim at using Python 3 as the default option and only use Python 2 when it is absolutely necessary.

At the time of writing, Ansible 2.8 and above support Python 3. Prior to 2.5, Python 3 support was considered a tech preview. Given the relatively recent support, many of the community modules are still in the process of migrating to Python 3. For more information on Ansible and Python 3, please see https://docs.ansible.com/ansible/2.5/dev_guide/developing_python_3.html.

Operating system

As mentioned, Python is cross-platform. Python programs can be run on Windows, macOS, and Linux. In reality, certain care needs to be taken when you need to ensure cross-platform compatibility, such as taking care of the subtle differences between backslashes in Windows filenames or activating a virtual environment on different platforms. Since this book is for DevOps, systems, and network engineers, Linux is the preferred platform for the intended audience, especially in production. The code in this book will be tested on a Linux Ubuntu 18.04 LTS machine. I will also try my best to make sure the code runs the same on the Windows and macOS platforms.

If you are interested in the OS details, they are as follows:

$ uname -a
Linux network-dev-2 4.18.0-25-generic #26~18.04.1-Ubuntu SMP Thu Jun 27 07:28:31 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Running a Python program

Python programs are executed by an interpreter, which means the code is fed through this interpreter to be executed by the underlying operating system and results are displayed. There are several different implementations of the interpreter by the Python development community, such as IronPython and Jython. In this book, we will use the most common Python interpreter in use today, CPython. Whenever we mention Python in this book, we are referring to CPython unless otherwise indicated.

One way you can use Python is by taking advantage of the interactive prompt. This is useful when you want to quickly test a piece of Python code or concept without writing a whole program.

This is typically done by simply typing in the Python keyword:

$ python3.7
Python 3.7.4 (default, Sep 2 2019, 20:47:34)
[GCC 7.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> print("hello world")
hello world

In Python 3, the print statement is a function; therefore, it requires parentheses. In Python 2, you can omit the parentheses.

The interactive mode is one of Python's most useful features. In the interactive shell, you can type any valid statement or sequence of statements and immediately get a result back. I typically use this to explore a feature or library that I am not familiar with. The interactive mode can also be used for more complex tasks such as experimenting with data structure behaviors, for example, mutable versus immutable data types. Talk about instant gratification!

On Windows, if you do not get a Python shell prompt back, you might not have the program in your system search path. The latest Windows Python installation program provides a checkbox for adding Python to your system path; make sure that is checked during installation. Or you can add the program in the path manually by going to Environment settings.

A more common way to run a Python program, however, is to save your Python file and run it via the interpreter afterward. This will save you from typing in the same statements over and over again as you have to do in the interactive shell. Python files are just regular text files that are typically saved with the .py extension. In the *nix world, you can also add the shebang (#!) line on top to specify the interpreter that will be used to run the file. The # character can be used to specify comments that will not be executed by the interpreter. The following file, helloworld.py, has the following statements:

# This is a comment
print("hello world")

This can be executed as follows:

$ python helloworld.py
hello world
$

Python built-in types

Python implements dynamic typing, sometimes called duck typing, and tries to automatically determine an object's type as you declare it. Python has several standard types built into the interpreter:

  • Numerics: int, float, complex, and bool (the subclass of int with a True or False value)
  • Sequences: str, list, tuple, and range
  • Mappings: dict
  • Sets: set and frozenset
  • None: The null object

The None type

The None type denotes an object with no value. The None type is returned by functions that do not explicitly return anything. The None type is also used in function arguments, for example, to error out if the caller does not pass in an actual value.
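
Here is a brief, illustrative interactive session (not from the book's examples) showing None being returned by a function that has no return statement:

>>> def do_nothing():
...   pass
...
>>> result = do_nothing()
>>> print(result)
None
>>> result is None
True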

Numerics

Python numeric objects are basically numbers. With the exception of Booleans, the numeric types int, float, and complex are all signed, meaning they can be positive or negative. A Boolean is a subclass of the integer, which can take one of two values: 1 for True and 0 for False. In practice, we almost always test Booleans with True or False rather than the numbers 1 and 0. The rest of the numeric types are differentiated by how precisely they can represent a number; in Python 3, int has no maximum size, while in Python 2, int is limited in range (with a separate long type for arbitrarily large whole numbers). Floats are numbers using the double-precision (64-bit) representation on the machine.
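
A few hedged interactive examples to illustrate these points:

>>> type(1), type(1.0), type(True)
(<class 'int'>, <class 'float'>, <class 'bool'>)
>>> True + True  # bool is a subclass of int
2
>>> 2 ** 100     # Python 3 integers have no maximum size
1267650600228229401496703205376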

Sequences

Sequences are ordered collections of objects indexed by non-negative integers. In this and the next few sections, we will use the interactive interpreter to illustrate the different types.

Please feel free to type along on your own computer.

Sometimes, it surprises people that string is actually a sequence type. But if you look closely, strings are a series of characters put together. Strings are enclosed by either single, double, or triple quotes.

Note in the following examples, the quotes have to match, and triple quotes allow the string to span different lines:

>>> a = "networking is fun"
>>> b = 'DevOps is fun too'
>>> c = """what about coding?
... super fun!"""
>>>

The other two commonly used sequence types are lists and tuples. Lists are sequences of arbitrary objects. Lists can be created by enclosing objects in square brackets. Just like strings, lists are indexed by non-negative integers that start at zero. The values of a list are retrieved by referencing the index number:

>>> vendors = ["Cisco", "Arista", "Juniper"]
>>> vendors[0]
'Cisco'
>>> vendors[1]
'Arista'
>>> vendors[2]
'Juniper'

Tuples are similar to lists, created by enclosing values in parentheses. Like lists, the values in the tuple are retrieved by referencing its index number. Unlike lists, the values cannot be modified after creation:

>>> datacenters = ("SJC1", "LAX1", "SFO1")
>>> datacenters[0]
'SJC1'
>>> datacenters[1]
'LAX1'
>>> datacenters[2]
'SFO1'

Some operations are common to all sequence types, such as returning an element by index as well as slicing:

>>> a
'networking is fun'
>>> a[1]
'e'
>>> vendors
['Cisco', 'Arista', 'Juniper']
>>> vendors[1]
'Arista'
>>> datacenters
('SJC1', 'LAX1', 'SFO1')
>>> datacenters[1]
'LAX1'
>>>
>>> a[0:2]
'ne'
>>> vendors[0:2]
['Cisco', 'Arista']
>>> datacenters[0:2]
('SJC1', 'LAX1')
>>>

Remember that the index starts at 0. Therefore, the index of 1 is actually the second element in the sequence.

There are also common functions that can be applied to sequence types, such as checking the number of elements and the minimum and maximum values:

>>> len(a)
17
>>> len(vendors)
3
>>> len(datacenters)
3
>>>
>>> b = [1, 2, 3, 4, 5]
>>> min(b)
1
>>> max(b)
5

It will come as no surprise that there are various methods that apply only to strings. It is worth noting that these methods do not modify the underlying string data itself and always return a new string. In short, mutable objects, for example, lists and dictionaries, can be changed after they have been created, and an immutable object, for example, strings, cannot. If you want to use the new value, you will need to catch the return value and assign it to a different variable:

>>> a
'networking is fun'
>>> a.capitalize()
'Networking is fun'
>>> a.upper()
'NETWORKING IS FUN'
>>> a
'networking is fun'
>>> b = a.upper()
>>> b
'NETWORKING IS FUN'
>>> a.split()
['networking', 'is', 'fun']
>>> a
'networking is fun'
>>> b = a.split()
>>> b
['networking', 'is', 'fun']
>>>

Here are some of the common methods for a list. The Python list data type is a very useful structure in terms of putting multiple items together and iterating through them one at a time. For example, we can make a list of data center spine switches and apply the same access list to all of them by iterating through them one by one. Since a list's value can be modified after creation (unlike tuples), we can also expand and contract the existing list as we move along the program:

>>> routers = ['r1', 'r2', 'r3', 'r4', 'r5']
>>> routers.append('r6')
>>> routers
['r1', 'r2', 'r3', 'r4', 'r5', 'r6']
>>> routers.insert(2, 'r100')
>>> routers
['r1', 'r2', 'r100', 'r3', 'r4', 'r5', 'r6']
>>> routers.pop(1)
'r2'
>>> routers
['r1', 'r100', 'r3', 'r4', 'r5', 'r6']

The Python list is great for storing data, but it can be a bit tricky at times to keep track of items when we can only reference them by position. We will take a look at Python mapping types next.

Mapping

Python provides one mapping type, called the dictionary. The dictionary is what I think of as a poor man's database because it contains objects that can be indexed by keys. This is often referred to as an associative array or hash table in other programming languages. If you have used any of the dictionary-like objects in other languages, you will know that this is a powerful type, because you can refer to the object with a human-readable key. This key, rather than just a position in a list of items, will make more sense for the poor guy who is trying to maintain and troubleshoot the code.

That guy could be you, only a few months after you wrote the code, troubleshooting it at 2 a.m. The object in the dictionary value can also be another data type, such as a list. As we have used square brackets for lists and parentheses for tuples, we use curly braces to create a dictionary:

>>> datacenter1 = {'spines': ['r1', 'r2', 'r3', 'r4']}
>>> datacenter1['leafs'] = ['l1', 'l2', 'l3', 'l4']
>>> datacenter1
{'leafs': ['l1', 'l2', 'l3', 'l4'], 'spines': ['r1',
'r2', 'r3', 'r4']}
>>> datacenter1['spines']
['r1', 'r2', 'r3', 'r4']
>>> datacenter1['leafs']
['l1', 'l2', 'l3', 'l4']

The Python dictionary is one of my favorite data containers to use in my network scripts. There are other data containers that can come in handy; set is one of them.

Sets

A set is used to contain an unordered collection of objects. Unlike lists and tuples, sets are unordered and cannot be indexed by numbers. But there is one characteristic that makes sets stand out as useful: the elements of a set are never duplicated. Imagine you have a list of IPs that you need to put in an access list. The only problem with this list of IPs is that it is full of duplicates.

Now, think about how many lines of code you would need to loop through the list of IPs to sort out unique items, one at a time. The built-in set type, however, would allow you to eliminate the duplicate entries with just one line of code. To be honest, I do not use the Python set data type that much, but when I need it, I am always very thankful it exists. Once the set or sets are created, they can be compared with each other using operations such as union, intersection, and difference:

>>> a = "hello"
# Use the built-in function set() to convert the string to a set
>>> set(a)
{'h', 'l', 'o', 'e'}
>>> b = set([1, 1, 2, 2, 3, 3, 4, 4])
>>> b
{1, 2, 3, 4}
>>> b.add(5)
>>> b
{1, 2, 3, 4, 5}
>>> b.update(['a', 'a', 'b', 'b'])
>>> b
{1, 2, 3, 4, 5, 'b', 'a'}
>>> a = set([1, 2, 3, 4, 5])
>>> b = set([4, 5, 6, 7, 8])
>>> a.intersection(b)
{4, 5}
>>> a.union(b)
{1, 2, 3, 4, 5, 6, 7, 8}
>>> a.difference(b)
{1, 2, 3}
>>>
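
Returning to the duplicate IP example mentioned above, here is a brief, illustrative session (with made-up addresses) showing how a set removes duplicates in one step:

>>> ips = ['10.0.0.1', '10.0.0.2', '10.0.0.1', '10.0.0.3', '10.0.0.2']
>>> unique_ips = set(ips)
>>> len(unique_ips)
3
>>> sorted(unique_ips)
['10.0.0.1', '10.0.0.2', '10.0.0.3']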

Now that we have taken a look at different data types, we will take a tour of Python operators next.

Python operators

Python has the numeric operators that you would expect, such as +, -, and so on; note that truncating division (//, also known as floor division) returns the integer floor of the division, while true division (/) returns a float. The modulo (%) operator returns the remainder of the division:

>>> 1 + 2
3
>>> 2 - 1
1
>>> 1 * 5
5
>>> 5 / 1 #returns float
5.0
>>> 5 // 2 # // floor division
2
>>> 5 % 2 # modulo operator
1

There are also comparison operators. Note the double equals sign for comparison and a single equals sign for variable assignment:

>>> a = 1
>>> b = 2
>>> a == b
False
>>> a > b
False
>>> a < b
True
>>> a <= b
True

We can also use two of the common membership operators to see whether an object is in a sequence type:

>>> a = 'hello world'
>>> 'h' in a
True
>>> 'z' in a
False
>>> 'h' not in a
False
>>> 'z' not in a
True

The Python operators allow us to perform simple operations efficiently. In the next section, we will take a look at how we can use control flows to repeat these operations.

Python control flow tools

The if, else, and elif statements control conditional code execution. Unlike some other programming languages, Python uses indentation to structure the blocks. As one would expect, the format of the conditional statement is as follows:

if expression:
  do something
elif expression:
  do something if the expression is met
elif expression:
  do something if the expression is met
...
else:
  statement

Here is a simple example:

>>> a = 10
>>> if a > 1:
...   print("a is larger than 1")
... elif a < 1:
...   print("a is smaller than 1")
... else:
...   print("a is equal to 1")
...
a is larger than 1
>>>

The while loop will continue to execute until the condition is False, so be careful with this one if you don't want to continue to execute (and crash your process):

while expression:
  do something
  
>>> a = 10
>>> b = 1
>>> while b < a:
...   print(b)
...   b += 1
...
1
2
3
4
5
6
7
8
9
>>>

The for loop works with any object that supports iteration; this means all the built-in sequence types, such as lists, tuples, and strings, can be used in a for loop. The letter i in the following for loop is an iterating variable, so you can typically pick something that makes sense within the context of your code:

for i in sequence:
  do something

>>> a = [100, 200, 300, 400]
>>> for number in a:
...   print(number)
...
100
200
300
400

Now that we have taken a look at Python data types, operators, and control flows, we are ready to group them together into reusable code pieces called functions.

Python functions

Most of the time, when you find yourself copying and pasting pieces of code, you should break them up into self-contained functions. This practice allows for better modularity, is easier to maintain, and allows for code reuse. Python functions are defined using the def keyword with the function name, followed by the function parameters. The body of the function consists of the Python statements that are to be executed. At the end of the function, you can choose to return a value to the function caller, or, by default, the function will return the None object if you do not specify a return value:

def name(parameter1, parameter2):
  statements
  return value

We will see a lot more examples of functions in the following chapters, so here is a quick example. In the following example, we use positional parameters, so the first argument is always matched to the first parameter in the function. Another way of referring to parameters is as keywords with default values, such as def subtract(a=10, b=5):

>>> def subtract(a, b):
...   c = a - b
...   return c
...
>>> result = subtract(10, 5)
>>> result
5
>>>
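And here is a brief, hedged illustration of the keyword-with-default form mentioned above, where any parameter not supplied by the caller falls back to its default value:

>>> def subtract(a=10, b=5):
...   return a - b
...
>>> subtract()            # both defaults are used: 10 - 5
5
>>> subtract(b=3)         # a keeps its default of 10: 10 - 3
7
>>> subtract(a=20, b=5)   # both supplied by keyword
15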

Python functions are great for grouping tasks together. Can we group different functions into a bigger piece of reusable code? Yes, we can do that via Python classes.

Python classes

Python is an object-oriented programming (OOP) language. The way Python creates objects is with the class keyword. A Python object is most commonly a collection of functions (methods), variables, and attributes (properties). Once a class is defined, you can create instances of such a class. The class serves as a blueprint for subsequent instances.

The topic of OOP is outside the scope of this chapter, so here is a simple example of a router object definition:

>>> class router(object):
...   def __init__(self, name, interface_number, vendor):
...     self.name = name
...     self.interface_number = interface_number
...     self.vendor = vendor
...
>>>

Once defined, you are able to create as many instances of that class as you'd like:

>>> r1 = router("SFO1-R1", 64, "Cisco")
>>> r1.name
'SFO1-R1'
>>> r1.interface_number
64
>>> r1.vendor
'Cisco'
>>>
>>> r2 = router("LAX-R2", 32, "Juniper")
>>> r2.name
'LAX-R2'
>>> r2.interface_number
32
>>> r2.vendor
'Juniper'
>>>

Of course, there is a lot more to Python objects and OOP. We will look at more examples in future chapters.

Python modules and packages

Any Python source file can be used as a module; in fact, every Python file is a module, and any functions and classes you define in that source file can be reused. To load the code, the file referencing the module needs to use the import keyword. Three things happen when the file is imported:

  1. The file creates a new namespace for the objects defined in the source file
  2. The caller executes all the code contained in the module
  3. The file creates a name within the caller that refers to the module being imported. The name matches the name of the module

Remember the subtract() function that you defined using the interactive shell? To reuse the function, we can put it into a file named subtract.py:

def subtract(a, b):
  c = a - b
  return c

From the same directory as subtract.py, you can start the Python interpreter and import this function:

Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import subtract
>>> result = subtract.subtract(10, 5)
>>> result
5

This works because, by default, Python will first search the current directory for available modules. Remember the standard library that we mentioned a while back? You guessed it, those are just Python files being used as modules.

If you are in a different directory, you can manually add a search path location using the sys module with sys.path.
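
For example, here is a brief, illustrative session (the directory path is hypothetical) appending a location to the module search path before the import:

>>> import sys
>>> sys.path.append('/home/user/my_modules')  # hypothetical directory containing subtract.py
>>> import subtract
>>> subtract.subtract(10, 5)
5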

Packages allow a collection of modules to be grouped together. This further organizes Python modules, providing namespace protection and improving reusability. A package is defined by creating a directory with the name you want to use as the namespace, and then placing the module source files under that directory.

In order for Python to recognize the directory as a Python package, just create an __init__.py file in it. The __init__.py file can often be an empty file. Continuing with the subtract.py example, if you were to create a directory called math_stuff and create an __init__.py file inside it:

$ mkdir math_stuff
$ touch math_stuff/__init__.py
$ tree
.
├── helloworld.py
└── math_stuff
    ├── __init__.py
    └── subtract.py
1 directory, 3 files
$

The way you will now refer to the module will need to include the package name using the dot notation, for example, math_stuff.subtract:

>>> from math_stuff.subtract import subtract
>>> result = subtract(10, 5)
>>> result
5
>>>

As you can see, modules and packages are great ways to organize large code files and make sharing Python code a lot easier.

 

Summary

In this chapter, we covered the OSI model and reviewed network protocol suites, such as TCP, UDP, and IP. They work as the layers that handle the addressing and communication negotiation between any two hosts. The protocols were designed with extensibility in mind and have largely been unchanged from their original design. Considering the explosive growth of the internet, that is quite an accomplishment.

We also quickly reviewed the Python language, including built-in types, operators, control flows, functions, classes, modules, and packages. Python is a powerful, production-ready language that is also easy to read. This makes the language an ideal choice when it comes to network automation. Network engineers can leverage Python to start with simple scripts and gradually move on to other advanced features.

In Chapter 2, Low-Level Network Device Interactions, we will start to look at using Python to programmatically interact with network equipment.

About the Author

  • Eric Chou

    Eric Chou is a seasoned technologist with over 20 years of experience. He has worked on some of the largest networks in the industry while working at Amazon, Azure, and other Fortune 500 companies. Eric is passionate about network automation, Python, and helping companies build better security postures. In addition to being the author of Mastering Python Networking (Packt), he is also the co-author of Distributed Denial of Service (DDoS): Practical Detection and Defense (O'Reilly Media). Eric is also the primary inventor of two U.S. patents in IP telephony. He shares his deep interest in technology through his books, classes, and blog, and contributes to some of the popular Python open source projects.
