
Implementing Cisco Networking Solutions

By Harpreet Singh
About this book
Most enterprises use Cisco networking equipment to design and implement their networks. However, some networks outperform those in other enterprises in terms of performance and in meeting new business demands, because they were designed with a visionary approach. The book starts by describing the various stages in the network lifecycle and covers the plan, build, and operate phases. It covers topics that will help network engineers capture requirements, choose the right technology, design and implement the network, and finally manage and operate it. It divides the overall network into its constituents depending upon functionality, and describes the technologies used and the design considerations for each functional area. The areas covered include the campus wired network, wireless access network, WAN choices, data center technologies, and security technologies. It also discusses the need to identify business-critical applications on the network, and how to prioritize these applications by deploying QoS on the network. Each topic covers the technology choices, the scenarios involved in choosing each technology, and guidelines for configuring and implementing solutions in enterprise networks.
Publication date:
September 2017
Publisher
Packt
Pages
436
ISBN
9781787121782

 

Network Building Essentials

"Alone we can do so little; together we can do so much."
- Helen Keller

Information technology (IT) has become an integral part of any modern business. This reliance on technology has enabled many successful new businesses, while even a small delay in adopting new technology, not to mention an unwillingness to adopt it, has driven many businesses around the world out of the market.

Networks are the foundation for IT, as they help connect multiple elements in the technological landscape of any organization. In this chapter, we will discuss the basic concepts that the reader will find useful in their goal of learning how to build IT networks. We will cover the following topics in this chapter:

  • The need for networking and an introduction to IT networks
  • A standard reference model for a network called the Open Systems Interconnection (OSI) model
  • The TCP/IP protocol stack
  • The various stages in building a network
 

Introduction to networks

The advent of computers has had a profound impact on society. These mechanical brains carry out most jobs today in almost every sector: medicine, education, aviation, retail, manufacturing, entertainment, communication, science and technology, research, aerospace, banking, space exploration, weather forecasting, business transactions; the list is endless.

Computers have evolved a long way from the machine that Charles Babbage invented to the machines we see today. Much of this has been made possible by advances in semiconductor technology, which have made computers sleeker, faster, and more cost-effective. However, computers would not be as useful as they are today if they were "egoist machines" that did not talk to one another, creating islands of excellence.

Businesses felt the need to leverage computing power across domains, and had a strong desire to automate the process to reduce manual dependencies. This acted as the driver for the evolution of communication networks that would enable communication between standalone computers. This ability to network computers has made them much more effective and acceptable in modern business.

As businesses evolved and became more competitive, information and communication came to be regarded as among the most important factors defining the success of an organization, and hence the channels carrying this information and communication became the lifelines of the organization. With the ever-increasing use of computers for carrying out most tasks in an organization, the flow of information between computers is becoming as important as, if not more important than, that between humans.

Early computer networks used different protocols such as DECnet, SNA, NetBIOS, and IPX to make computers communicate with each other. Although this facilitated networking, most of these protocols were proprietary, thereby limiting connectivity between machines from different vendors. Computer networking was fraught with cost inefficiencies and interoperability issues because of the lack of a standard networking protocol that could be used across all vendors. Fortunately, the success of the ARPANET and the internet gave a big impetus to the TCP/IP protocol suite, and its wide acceptance among home and enterprise users forced many vendors to implement the stack on their devices. This changed computer networking and brought it to the level of standardization and plug-and-play simplicity that exists today.

 

The OSI model and the TCP/IP stack

"A common language is a first step towards communication across cultural boundaries."
- Ethan Zuckerman

In communication, it is critical to have a common language and semantics that both parties can understand for the communication to be effective. This can be thought of as a common language when talking of human communication, and as a protocol when talking of computer networking/communications. As discussed in the previous section, with the advent of computer networking, many vendors came out with their own proprietary protocols for computers to talk to each other, leading to interoperability issues between computer systems, and networking was limited to devices from the same vendor. You can't get a person who knows only Chinese to effectively communicate with a person who knows only Russian!

International bodies involved in standardization were making efforts to evolve an open, common framework that could be used by all devices that needed to communicate with each other. These efforts led to the development of a framework called the Basic Reference Model for Open Systems Interconnection, better known as the OSI reference model. It was jointly developed by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT, abbreviated from the French Comité Consultatif International Téléphonique et Télégraphique), which later became the ITU-T.

We will broadly define the OSI model in the next section, and then dive deeper into the TCP/IP model, which will help clarify some of the concepts that might appear vague in the OSI discussion, as the OSI model is only a reference model, without any standardization of interfaces or protocols, and was developed before the TCP/IP protocols.

The OSI standard had two major components, as defined in the ISO/IEC 7498-1 standard:

  • An abstract model of networking, called the Basic Reference Model or seven-layer model
  • A set of specific protocols defined by other specifications within ISO

Basic OSI reference model

The communicating entities perform a variety of functions during the communication process: creating a message, formatting it, adding information that helps detect errors during transmission, sending the data on the physical medium, and so on.

The OSI reference model defines a layered model for interconnecting systems, with seven layers. The layered approach allows the model to group similar functions within a single layer, and provides standard interfaces allowing the various layers to talk to each other.

Figure 1 shows the seven layers of the OSI model. It is important to note that the reference model defines only the functions of each layer, and the interfaces with the adjoining layers. The OSI model neither standardizes the interfaces between the various layers within the system (subsequently standardized by other protocol standards) nor delves into the internals of the layer, as to how the functions are implemented in each layer.

The OSI model describes the communication flow between two entities as follows:

  • The layers have a strict peering relationship, which means that a layer at a particular level communicates with its peer layer on the other nodes through a peering protocol; for example, data generated at layer 3 of one node is received by layer 3 at the other node, with which it has a peering relationship.
  • The peering relationship can be between two adjacent devices, or across multiple hops. For example, the intermediate node in figure 1 has only layers 1 through 3, so the peering relationship at layer 7 is between layer 7 at the transmitting node and layer 7 at the receiving node, which are not directly connected but are multiple hops apart.
  • The data to be transmitted is composed at the application layer of the transmitting node and will be received at the application layer of the receiving node.
  • The data will flow down the OSI-layered hierarchy from layer 7 to layer 1 at the transmitting node, traverse the intermediate network, and flow up the layered hierarchy from layer 1 to layer 7 at the receiving node. This implies that within a node, the data can be handed over by a layer to its adjacent layer only. Each layer will perform its designated functions and then pass on the processed data to the next layer:
Figure 1: The OSI reference model

The high-level functions of each layer are described as follows:

Layer 1 - The physical layer

The primary function of this layer is to transmit the bit stream over the physical medium by converting it into electrical/optical impulses or radio signals. This layer provides the physical connection to the underlying medium and also provides the hardware means to activate, maintain, and de-activate physical connections between data link entities. This includes sequencing of the bit stream, identifying channels on the underlying medium, and optionally multiplexing. This should not be confused with the actual medium itself.

Some of the protocols that have a layer 1 component are Ethernet, G.703, FDDI, V.35, RJ45, RS232, SDH, DWDM, OTN, and so on.

Layer 2 - The data link layer

The data link layer acts as the driver of the physical layer and controls its functioning. The data link layer sends data to the physical layer at the transmitting end and receives data from the physical layer at the receiving node. It also provides detection and correction of errors that might have occurred during transmission/reception on the physical medium, and defines the process for flow control between the two nodes to avoid buffer overruns on either side of the data link connection. This can happen using PAUSE frames in Ethernet, and should not be confused with flow control in higher layers.

Some of the protocols that operate at the data link layer are LAPB, 802.3 Ethernet, 802.11 Wi-Fi, 802.15.4 ZigBee, X.25, the Point-to-Point Protocol (PPP), HDLC, SLIP, ATM, Frame Relay, and so on.

Layer 3 - The network layer

The basic service of the network layer is to provide the transparent transfer of datagrams between the transport layers at the two nodes. This layer is also responsible for finding the right intermediate nodes that might be required to send data to the destination node, if the destination node is not on the same network as the source node. This layer also breaks down datagrams into smaller fragments, if the underlying data link layer is not capable of handling the datagram that is offered to the network layer for transport on the network.

A fundamental concept in the OSI stack is that data should be passed to a higher layer at the receiving node exactly as it was handed over to the lower layers by the transmitting peer. As an example, the TCP layer passes TCP segments to the IP layer, and the IP layer might use the services of the lower layers, leading to packets being fragmented on the way to the destination; but when the IP layer passes the data to the TCP layer at the receiving node, the data should be in the form of the TCP segments that were handed down to the IP layer at the transmitting end. To ensure this transparent transfer of datagrams to the receiving node's TCP layer, the network layer at the receiving node reassembles all the fragments of a single datagram before handing it over to the transport layer.
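To make this concrete in TCP/IP terms, the following minimal sketch computes how a large datagram would be split into fragments that the receiving network layer reassembles before delivery to the transport layer. The sizes used here (a 20-byte IPv4 header, a 1,500-byte link MTU, and 4,000 bytes of transport-layer data) are illustrative assumptions, not values from any particular network:

```python
# A minimal sketch of IPv4-style fragmentation arithmetic (illustrative sizes).
MTU = 1500          # largest payload the underlying data link layer can carry
IP_HEADER = 20      # basic IPv4 header with no options
payload = 4000      # transport-layer bytes handed to the network layer

# Every fragment except the last must carry a payload that is a multiple of 8 bytes.
per_fragment = (MTU - IP_HEADER) // 8 * 8

offset, fragments = 0, []
while payload > 0:
    size = min(per_fragment, payload)
    payload -= size
    fragments.append({
        "fragment_offset": offset // 8,   # expressed in 8-byte units, as on the wire
        "payload_bytes": size,
        "more_fragments": payload > 0,    # set for every fragment except the last
    })
    offset += size

for frag in fragments:
    print(frag)
# -> three fragments with offsets 0, 185, and 370 (in 8-byte units)
```

The receiving network layer uses these offsets, together with the datagram's identification value, to put the fragments back in order and rebuild the original datagram before handing it up.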

The OSI model describes both connection-oriented and connectionless modes of the OSI network layer.

Connection-oriented and connectionless modes are used to describe the readiness of the communicating nodes before the process of actual data transfer between the two nodes. In the connection-oriented mode, a connection is established between the source and the destination, and a path is defined along the network through which actual data transfer would happen. A telephone call is a typical example of this mode, where you cannot talk until a connection has been established between the calling number and the called number.

In the connectionless mode of data transfer, the transmitting node just sends the data on the network without first establishing a connection, verifying whether the receiving end is ready to accept data, or even checking whether the receiving node is up. In this mode, there is no connection or path established between the source and the destination, and data generally flows in a hop-by-hop manner, with a decision being taken on the best path towards the destination at every hop. Since data is sent without any validation of the receiving node's status, there is no acknowledgement of data in a connectionless mode of data transfer. This is unlike the connection-oriented mode, where the path is defined the moment a connection is established, and all data flows along that path, with the data transfer being acknowledged between the two communicating nodes.

Since data packets in a connection-oriented mode follow a fixed path to the destination, the packets arrive in the same sequence at the receiver in which they were transmitted. On the other hand, packets in the case of a connectionless network might reach the receiver out of sequence if the packets are routed on different links on the network, as decisions are taken at every hop.

The OSI standard defined the network layer to provide both modes. However, most services were implemented in practice as connectionless at layer 3, and the connection-oriented aspects were left to layer 4. We will discuss this further during our discussion on TCP/IP.

Some of the protocols that operate at the network layer are AppleTalk, DDP, IP, IPX, CLNP, IS-IS, and so on.

Layer 4 - The transport layer

The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host via one or more networks. This layer has end-to-end significance and provides a connectionless or connection-oriented service to the session layer. This layer is responsible for connection establishment, management, and release.

The transport layer controls the reliability of a given link through end-to-end flow control, segmentation/desegmentation, and error control. This layer also multiplexes various data connections over a single network layer.

Some protocols operating at the transport layer are TCP, UDP, SCTP, NBF, and so on.

Layer 5 - The session layer

The primary purpose of the session layer is to coordinate and synchronize the dialog between the presentation layers at the two end points and to manage their data exchange. This layer establishes, manages, and terminates connections between applications. The session layer sets up, coordinates, and terminates conversations, exchanges, and dialogues between the applications at each end.

Some of the protocols operating at the session layer are sockets, NetBIOS, SAP, SOCKS, RPC, and so on.

Layer 6 - The presentation layer

The presentation layer provides a common representation of the data transferred between application entities, and provides independence from differences in data representation/syntax. This layer is also sometimes referred to as the syntax layer. The presentation layer works to transform data into the form that the application layer can accept. This layer is also responsible for encryption and decryption for the application data.

Some examples of protocols at the presentation layer are MIME, ASCII, GIF, JPEG, MPEG, MIDI, SSL, and so on.

Layer 7 - The application layer

The application layer is the topmost layer of the OSI model, and has no upper-layer protocols. The software applications that need to communicate with other systems interact directly with the OSI application layer. This layer is not to be confused with the application software, which is the program that uses the network; for example, HTTP is an application layer protocol, while Google Chrome is a software application.

The application layer provides services directly to user applications. It enables the users and software applications to access the network and provides user interfaces and support for services such as email, remote file access and transfer, shared database management, and other types of distributed information services.

Some examples of application layer protocols are HTTP, SMTP, SNMP, FTP, DNS, LDAP, Telnet, and so on.

The TCP/IP model

The Advanced Research Projects Agency Network (ARPANET), initially funded by the US Department of Defense (DoD), was an early packet-switching network and the first network to implement the TCP/IP protocol suite. ARPANET was the test bed of the TCP/IP protocol suite, which resulted in the TCP/IP model, also known as the DoD model.

The TCP/IP model is a simplified model of the OSI model and has only four broad layers instead of the seven layers of the OSI model. Figure 2 shows the comparison between the two models. As can be seen from the following figure, the TCP/IP model is a much more simplified model, where the top three layers of the OSI model have been combined into a single application layer, and the physical and data link layers have been combined into a network access layer:

Figure 2: Comparing the OSI model with TCP/IP model

Some of the major differences between the two models are as follows:

  • The functions of the application layer in the TCP/IP model include the functions of the application, presentation and session layer of the OSI model
  • The OSI session layer function of graceful close/end-to-end connection setup, management, and release is taken over by the TCP/IP transport layer (Transmission Control Protocol)
  • The network access layer combines the functions of the OSI data link and the physical layers
  • The network layer in the OSI model can be connection oriented or connectionless, while the Internet Protocol (IP) is a connectionless protocol
  • The transport layer in the OSI model is connection oriented, whereas different protocols at the transport layer in the TCP/IP model provide different types of services; for example, TCP provides a connection oriented service, while UDP provides a connectionless service

Let's explore what happens when data moves from one layer to another in the TCP/IP model, taking Figure 3 as an example. When data is given to a software application, for example, a web browser, the browser sends this data to the application layer, which adds an HTTP header to the data. This is known as application data. This application data is then passed on to the TCP layer, which adds a TCP header to it, thus creating a TCP segment. This segment is then passed on to the network layer (IP layer), where the IP header is added to the segment, creating an IP packet or IP datagram. This IP packet is then encapsulated by the data link layer, which adds a data link header and trailer, creating a frame. This frame is then transmitted onto the transmission medium as a bit stream in the form of electrical/optical/radio signals, depending upon the physical media used for communication:

Figure 3: Data flow across the TCP/IP layers
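The same layering can be sketched programmatically. The following example assumes the third-party Scapy library is installed; the addresses, ports, and payload are placeholders chosen purely for illustration:

```python
# A minimal sketch of the encapsulation in Figure 3 using Scapy (a third-party
# packet-crafting library). All addresses, ports, and data are placeholders.
from scapy.all import Ether, IP, TCP, Raw

app_data = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"      # application data
segment = TCP(sport=49152, dport=80) / Raw(load=app_data)      # + TCP header -> segment
packet = IP(src="192.0.2.10", dst="198.51.100.20") / segment   # + IP header  -> packet
frame = Ether(src="00:11:22:33:44:55",
              dst="66:77:88:99:aa:bb") / packet                # + L2 header  -> frame

frame.show()         # prints each layer of the stack, outermost header first
wire = bytes(frame)  # the byte stream that would be handed to the physical layer
```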

A simplified stack showing some protocols in the TCP/IP stack is shown in the following figure:

Figure 4: Common protocols in the TCP/IP stack

Let's delve deeper into the TCP/IP model by looking at the TCP/IP headers in some more detail.

Internet Protocol (IP)

The Internet Protocol (IP), as it is commonly known, was developed by Bob Kahn and Vinton Cerf, and is a protocol operating at layer 3 (the network layer) of the OSI model. The primary function of IP is to transfer datagrams from source to destination and provide a network transport service. As noted in the preceding section, IP as defined in the TCP/IP model operates in a connectionless mode, and is hence sometimes referred to as a "send and pray" protocol, as there is no acknowledgement/guarantee that the IP datagrams sent by the source have been received by the destination. This function is left to the upper layers of the protocol stack.

Figure 5 shows the structure and fields of an IPv4 header. The IPv4 header is defined in the IETF standard RFC 791. The header is added by the network layer to the TCP/UDP segments handed down to it. The length of the header is always a multiple of 4 bytes. The header consists of multiple fields that are outlined in the following figure.

The length of each field of the IPv4 header in bits is highlighted in Figure 5, in parentheses after the name of the field:

Figure 5: IPv4 packet format

We will now talk about the fields in brief; a short header-parsing sketch follows the list:

  • Version (4): This is a 4-bit field and is used to encode the IP version being used by the IP system. The version for the header depicted in Figure 5 is version 4. There is a newer version of IP called IP version 6, or IPv6, which has a different header format and is discussed later.
  • Header Length: This is again a 4-bit field, and encodes the length of the IP header in 4-byte words. This means that if the IPv4 header has no options, the header would be 20 bytes long, and hence would consist of five 4-byte words. Hence, the value of the header length field in the IP header would be 5. This field cannot have a value less than 5 as the fields in the first 20 bytes of the IPv4 header are mandatory.
  • DSCP: Differentiated Services Code Point (DSCP) is a 6-bit field in the IPv4 header and is used to encode the Quality of Service (QoS) required by the IP datagram on the network. This field will define if the packet will be treated as a priority packet on the network, or should be discarded if there is congestion on the network. This field was not in the original RFC for IP, but was added later by RFC 2474 to support differentiated models for QoS on IP networks. We will discuss this in detail in the chapter on QoS implementation.
  • ECN: Explicit Congestion Notification (ECN) is a 2-bit field defined by RFC 2481, and the latest standard for this at the time of writing is RFC 3168. This field is used to explicitly notify the end hosts if the intermediate devices have encountered congestion so that the end devices can slow down the traffic being sent on the network, by lowering the TCP window. This helps in managing congestion on the network even before the intermediate devices start to drop packets due to queue overruns.
  • Total Length: This is a 16-bit field that encodes the total length of the IP datagram in bytes, that is, the length of the payload (for example, a TCP segment) plus the length of the IP header. Since this is a 16-bit field, the total length of a single IP datagram can be at most 65,535 bytes (2^16 - 1). The most commonly used length for an IP datagram on the network is 1500 bytes. We will delve deeper into the impact of the IP datagram size in later chapters while discussing the impact on the WAN.
  • Identification (ID): This 16-bit value uniquely identifies an IP datagram for a given source address, destination address, and protocol, such that it does not repeat within the maximum datagram lifetime, which is set to 2 minutes by the TCP specification (RFC 793). RFC 6864 has made some changes to the original fields that are relevant only at high data rates, and in networks that undergo fragmentation. These issues will be discussed in the later chapters.
  • Flags: There are three different flags in the IPv4 header, as shown in Figure 6. Each flag is one bit in length. The flags are used when the IP layer needs to send a datagram of a length that cannot be handled by the underlying data link layer. In this case, the intermediate nodes can fragment the datagram into smaller ones, which are reassembled by the IP layer at the receiving node before being passed on to the TCP layer. The flags are meant to control the fragmentation behavior:
Figure 6: Flags in IPv4 header
    • MBZ: This stands for Must Be Zero (MBZ); this bit is always sent as 0 on the network.
    • DF: This stands for the Do Not Fragment (DF) bit, which, if set to 1, means that this packet should not be fragmented by the intermediate nodes. Such packets are discarded by an intermediate node if there is a need to fragment them, and an error message is sent to the transmitting node using the Internet Control Message Protocol (ICMP).
    • MF: This stands for the More Fragments (MF) bit, which, if set to 1, signifies that this is a fragmented packet and there are more fragments of the original datagram to follow. The last fragment, and an unfragmented packet, will have the MF bit set to 0.
  • Fragment Offset: This field is 13 bits long and is used only by fragmented packets to denote where in the original datagram the fragment belongs. The first fragment has an offset of 0, and each subsequent fragment carries an offset value equal to the length of all fragments before it in the original datagram, expressed in units of 8 bytes.
  • Time To Live (TTL): This 8-bit field is used to denote the maximum number of intermediate nodes that can process the packet at the IP layer. Each intermediate node decrements the value by 1 to ensure that the IP packet does not get caught in an infinite routing loop, going back and forth between nodes. When the field reaches zero, the packet is discarded by the node and an error message is sent to the source of the datagram as an ICMP message.
  • Protocol: This 8-bit field is used to denote which upper layer protocol is being encapsulated in the IP packet. Since the IP layer multiplexes multiple upper layer protocols, for example, UDP, TCP, OSPF, ICMP, IGMP, and so on, this field acts as a demultiplexing identifier that tells the receiving node which upper layer the payload should be handed to. The values for this field were originally defined in RFC 1700, which is now obsolete and has been replaced by an online registry. Some of the common values for the protocol field are shown in the following figure:
Figure 7: Some common IP protocol numbers
  • Header Checksum: This 16-bit field is used for checking the integrity of the received IP header. The value is calculated using an algorithm covering all the fields in the header (assuming this field to be zero for the purposes of calculating the header checksum). The value is calculated and stored in the header when the IP datagram is sent from source to destination, and at the destination the checksum is calculated again and verified against the checksum present in the header. If the values match, the header was not corrupted; otherwise, it is assumed that the datagram was received corrupted.
  • Source IP address and Destination IP address: These 32-bit fields contain the source and destination IP addresses respectively. Since the length of an IPv4 address is 32 bits, this field length was set to 32 bits. With the introduction of IPv6, which has a 128-bit address, this cannot fit in this format, and there is a different format for an IPv6 header.
  • Options: This optional, variable-length field contains certain options that can be used by the IP protocol, such as Strict Source Routing, Loose Source Routing, and Record Route, which are used for troubleshooting and by certain other protocols.
  • Padding: This is a field that is used to pad the IP header to make the IPv4 header length a multiple of 4 bytes, as the definition of the Header Length field mandates that the IPv4 header length is a multiple of 4 bytes.
  • Data: This variable-length field contains the actual payload that is encapsulated at the IP layer, and consists of the data passed by the upper layer transport protocols to the IP layer. The upper layer protocols attach their own headers as the data traverses down the protocol stack, as we saw in Figure 3: Data flow across the TCP/IP layers.
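The following standard-library sketch unpacks the fixed 20-byte part of an IPv4 header laid out as in Figure 5 and verifies the header checksum described above; any header bytes you feed it are assumed to come from a packet capture:

```python
# A minimal sketch that parses the fixed 20-byte IPv4 header and verifies the
# header checksum. Standard library only; input bytes are assumed captured.
import socket
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, folded back into 16 bits."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def parse_ipv4(header: bytes) -> dict:
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", header[:20])
    return {
        "version": ver_ihl >> 4,
        "header_bytes": (ver_ihl & 0x0F) * 4,       # IHL is counted in 4-byte words
        "dscp": tos >> 2,
        "ecn": tos & 0x3,
        "total_length": total_len,
        "identification": ident,
        "df": bool(flags_frag & 0x4000),
        "mf": bool(flags_frag & 0x2000),
        "fragment_offset_bytes": (flags_frag & 0x1FFF) * 8,
        "ttl": ttl,
        "protocol": proto,                          # 1 = ICMP, 6 = TCP, 17 = UDP
        "checksum_ok": ipv4_checksum(header[:20]) == 0,
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }
```

A header that was received intact folds to zero under the one's-complement check, which is why checksum_ok simply tests for 0.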

Transmission Control Protocol (TCP)

"The single biggest problem with communication is the illusion that it has taken place."
- George Bernard Shaw

As discussed in the previous section, IP provides a connectionless service. There is no acknowledgement mechanism at the IP layer, and IP packets are routed independently at every hop from the source to the destination. It is therefore possible that some packets sent by the transmitting node are lost on the network due to errors, or are discarded by intermediate devices due to congestion on the network. In the absence of a feedback mechanism, the receiving node would never receive those lost packets.

Further, if there are multiple paths on the network to reach the destination from the source, it is possible that packets will take different paths to reach the destination, depending upon the routing topology at a given time. This implies that packets can reach the receiving node out of sequence with respect to the sequence in which they were transmitted.

The TCP layer ensures that whatever was transmitted is correctly received. The purpose of the TCP layer is to ensure that the application layer at the receiving host sees a continuous stream of data as it was transmitted by the transmitting node, as though the two were connected through a direct wire. Since TCP provides that service to the application layer using the underlying services of the IP layer, TCP is called a connection-oriented protocol.
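As a small illustration of this connection-oriented, in-order byte-stream service, the following sketch uses Python's standard socket module on the loopback interface; the port number and payloads are arbitrary example values:

```python
# A minimal sketch of TCP's connection-oriented byte-stream service.
# The port (9998) and payloads are illustrative only.
import socket
import threading

def server(ready: threading.Event) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 9998))
        srv.listen(1)
        ready.set()                          # listening socket is now up
        conn, _ = srv.accept()               # completes the three-way handshake
        with conn:
            while chunk := conn.recv(1024):  # bytes arrive in the order they were sent
                print("server received:", chunk)

ready = threading.Event()
worker = threading.Thread(target=server, args=(ready,))
worker.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 9998))  # SYN, SYN-ACK, ACK happen under the hood
    cli.sendall(b"part 1 ")           # TCP numbers, acknowledges, and, if needed,
    cli.sendall(b"part 2")            # retransmits these bytes on our behalf

worker.join()
```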

A typical TCP segment is shown in Figure 8, where the different fields of the TCP header are shown along with their lengths in bits in parentheses. A brief description of the function of each field follows; a short header-parsing sketch appears after the list:

Figure 8: Transmission Control Protocol (TCP) segment structure
  • Source Port/Destination Port: As discussed in the earlier sections, the transport layer provides the multiplexing function of multiplexing various data connections over a single network layer. The source port and destination port fields are 16-bit identifiers that are used to distinguish the upper layer protocols. Some of the common TCP port numbers are shown in the following figure:
Figure 9: Common TCP Port Numbers
  • Sequence Number: This 32-bit field is used to number the starting byte of the payload data in this TCP segment relative to the overall data stream being transmitted as part of the TCP session.
  • Acknowledgement Number: This 32-bit field is part of the feedback mechanism to the sender and is used to acknowledge how many bytes of the stream have been received successfully and in sequence. The acknowledgement number identifies the next byte that the receiving node is expecting on this TCP session.
  • Data Offset: This 4-bit field is used to convey how far from the start of the TCP header the actual message starts. Hence, this value indicates the length of the TCP header in multiples of 32-bit words. The minimum value of this field is 5.
  • Reserved: These bits are not used and are reserved for future use.
  • Control flags: There are 9 bits reserved in the TCP header for control flags, each one bit long, as shown in Figure 10. Although these flags are carried in a fixed order from left to right, we will describe them in a different order for ease of understanding:
Figure 10: TCP control Flags
    • SYN: This 1-bit flag is used to initiate a TCP connection during the three-way handshake process.
    • FIN: This 1-bit flag is used to signify that there is no more data to be sent on this TCP connection, and can be used to terminate the TCP session.
    • RST: This 1-bit flag is used to reset the connection, typically when a segment is received that does not belong to the current session, in order to maintain synchronization of the TCP session between the two hosts.
    • PSH: Push (PSH) is a 1-bit flag that tells the TCP receiver not to wait for the buffer to be full, but to send the data gathered so far to the upper layers.
    • ACK: This 1-bit flag is used to signify that the Acknowledgement field in the header is significant.
    • URG: Urgent (URG) is also a 1-bit flag, and when set it signifies that this segment contains urgent data and that the urgent pointer defines the location of that urgent data.
    • ECE: This 1-bit flag (ECN Echo) signals that the host is capable of using Explicit Congestion Notification, as defined in the ECN section of the IP header. This flag is not a part of the original TCP specification, but was added by RFC 3168.
    • CWR: This is also a 1-bit flag added by RFC 3168. The Congestion Window Reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set.
    • NS: This 1-bit flag is defined by the experimental RFC 3540, with the primary intention that the sender can verify the correct behavior of the ECN receiver.
  • Window Size: This 16-bit field indicates the number of data octets, beginning with the one indicated in the acknowledgement field, which the sender of this segment is willing to accept. This is used to prevent buffer overruns at the receiving node.
  • Checksum: This 16-bit field is used for checking the integrity of the received TCP segment.
  • Urgent Pointer: The urgent pointer field is often set to zero and ignored, but in conjunction with the URG control flag it can be used as a data offset to identify a subset of a message that requires priority processing.
  • Options: These are used to carry additional TCP options, such as the Maximum Segment Size (MSS) that the sender of the segment is willing to accept.
  • Padding: This field is used to pad the TCP header to make the header length a multiple of 4 bytes, as the definition of the data offset field mandates that the TCP header length be a multiple of 4 bytes.
  • Data: This is the data that is being carried in the TCP segment and includes the application layer headers.
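As referenced above, here is a minimal, standard-library sketch that unpacks the fixed 20-byte TCP header laid out as in Figure 8 and decodes the control flags; the header bytes passed in are assumed to come from a packet capture:

```python
# A minimal sketch that parses the fixed 20-byte TCP header and decodes the
# control flags. Standard library only; input bytes are assumed captured.
import struct

# Bit 0 is FIN, the rightmost flag on the wire; bit 8 is NS.
TCP_FLAGS = ("FIN", "SYN", "RST", "PSH", "ACK", "URG", "ECE", "CWR", "NS")

def parse_tcp(header: bytes) -> dict:
    sport, dport, seq, ack, off_flags, window, cksum, urg_ptr = \
        struct.unpack("!HHIIHHHH", header[:20])
    flag_bits = off_flags & 0x01FF
    return {
        "source_port": sport,
        "destination_port": dport,
        "sequence": seq,
        "acknowledgement": ack,
        "header_bytes": (off_flags >> 12) * 4,   # data offset is in 32-bit words
        "flags": [name for i, name in enumerate(TCP_FLAGS) if flag_bits & (1 << i)],
        "window": window,
        "checksum": cksum,
        "urgent_pointer": urg_ptr,
    }
```

A segment whose decoded flag list is just ['SYN'] would be the first packet of the three-way handshake.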

Most of the traffic that we see on the internet today is TCP traffic. TCP ensures that application data is delivered from the source to the destination in the sequence in which it was transmitted, thus providing a connection-oriented service to the application. To this end, TCP uses acknowledgement and congestion control mechanisms based on the various header fields described earlier. At a very high level, if segments are received out of sequence at the receiver's TCP layer, the TCP layer buffers these segments and waits for the missing segments, asking the source to resend the data if required. This buffering, and the need to re-sequence segments, consumes processing resources and also introduces delay for the receiver.

We live in a world where data/information is time sensitive and loses value if delivered late. Consider seeing the previous day's newspaper at your doorstep one morning. Similarly, there are certain types of traffic that lose their value if they are delayed; this is usually voice and video traffic encapsulated in IP. Such traffic is time sensitive, and there is no point in providing acknowledgements and adding to delays. Hence, this type of traffic is carried over the User Datagram Protocol (UDP), which is a connectionless protocol and does not use any retransmission mechanism. We will explore this more during our discussions on designing and implementing QoS.

User Datagram Protocol (UDP)

UDP is a protocol that provides connectionless service to the application, and sends data to the application layer as received, without worrying about lost parts of the application data stream or some parts being received out of order. A UDP packet is shown in Figure 11:

Figure 11: UDP packet structure

Since UDP provides fewer services than TCP, the packet has fewer fields and is much simpler. The UDP datagram can be of any length that can be encapsulated in an IP packet, and it has a fixed 8-byte header. The different fields in the UDP packet are discussed as follows; a short socket sketch follows the list:

  • Source Port/Destination Port: Like TCP, UDP serves multiple applications and hence has to provide a multiplexing function to cater to the applications that want to use the services of the UDP layer. The source port/destination port fields are 16-bit identifiers that are used to distinguish the upper layer protocols. Some of the common UDP port numbers are shown in the following figure:
Figure 12: Common UDP port numbers
  • Length: This 16-bit field represents the total size of the UDP datagram, including both header and data. The value ranges from a minimum of 8 bytes (the header alone) to a theoretical maximum of 65,535 bytes.
  • Checksum: Similar to TCP, this 16-bit field is used for checking the integrity of the received UDP datagram.
  • Data: This is the data that is being carried in the UDP packet and includes the application layer headers.
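The socket sketch referenced above uses Python's standard library to show UDP's connectionless behavior on the loopback interface; the port and payload are arbitrary example values, and UDP itself provides no handshake or acknowledgement:

```python
# A minimal sketch of UDP's connectionless service. Port 9999 and the payload
# are illustrative; no connection setup or acknowledgement takes place in UDP.
import socket

# Receiver: bind to a local port and wait for datagrams.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

# Sender: no connect() or handshake is required before sending.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"time-sensitive sample payload", ("127.0.0.1", 9999))

data, addr = receiver.recvfrom(2048)   # each recvfrom() returns one whole datagram
print(f"received {len(data)} bytes from {addr}")

sender.close()
receiver.close()
```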

IP version 6

IPv6 is a newer version of the IP protocol. The current version, IPv4, has a limited number of IP addresses (2^32 addresses), and there was a need to connect many more hosts. IPv6 therefore uses a 128-bit address field, compared to the 32-bit address field in IPv4, and can have 2^128 unique IP addresses. IPv6 also provides some new features and does away with some features of the IPv4 packet, such as fragmentation by intermediate nodes and the header checksum. Figure 13 shows the various fields of the IPv6 header:

Figure 13: IPv6 header
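As a quick illustration of the larger address space, the following sketch uses Python's standard ipaddress module; 2001:db8::/32 is the prefix reserved for documentation examples:

```python
# A minimal sketch contrasting the IPv4 and IPv6 address spaces using the
# standard ipaddress module. 2001:db8::/32 is a documentation-only prefix.
import ipaddress

print(f"IPv4 addresses: 2**32  = {2 ** 32}")
print(f"IPv6 addresses: 2**128 = {2 ** 128}")

# IPv6 addresses are written as eight 16-bit hexadecimal groups; leading zeros
# and one run of all-zero groups can be compressed away.
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)  # 2001:db8::1
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
```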

We will not go into the details of IPv6 in this book, but will cover it as and when required during the discussion on design and implementation.

 

Building a network

Now that we have reviewed the basics of the networking protocols that are fundamental to building an enterprise network, let's discuss the considerations for building an IP network.

Purpose of networks

We see so many networks around us. Each network is built for a specific purpose. For example, the primary purpose of the networks we see in computer labs is to provide access to shared resources, most notably printers and data storage. The networks in a manufacturing plant are meant to carry control signals for the various plant machines connected to the network. Military and defense networks have a totally different purpose.

Since the networks are supposed to deliver different services to the end users, the design of the network will be different, and will be defined by the characteristics of the services to a large extent. Hence, the starting point for planning a network is to define the services that the network will offer, so that the network can be built accordingly.

Once the network is built and starts offering services to the end users, it needs to be operated, and changes need to be made to it on a day-to-day basis. Operations include monitoring the network for critical network parameters, and taking corrective action in case of network incidents such as outages or performance degradations. The changes might also include adding new services, or deleting or modifying existing services on the network. Network operations depend upon the way the network is designed and the services it is running. For example, for a network that is not built with adequate redundancy, the operations approach has to be very different from that for a network with enough resilience built in.

These concepts have been widely described and used in frameworks for network architectures, for example, the Services, Network, Operations (SNO) approach, or the Service-Oriented Network Architecture (SONA) framework proposed by Cisco.

Network lifecycle

As discussed in the previous section, the network is built for a specific purpose. Operating the network involves making changes to the network parameters, and sometimes design, to meet new business/application requirements, and finally the network is either replaced by a new design or incorporates a new technology. Since the network is dynamic, it is important to have a systematic approach based on the different phases of the network. Different approaches have been proposed by different vendors, but almost all of them are essentially overlapping and similar. In this section, we will cover Cisco's PPDIOO approach for network lifecycle, as it is the most comprehensive approach, and is a superset of other approaches within the scope of the network lifecycle. PPDIOO is an acronym for Prepare, Plan, Design, Implement, Operate, and Optimize.

Other forms of the lifecycle approach that are simplified versions of the PPDIOO approach are the Plan, Build, Manage (PBM) approach where some of the stages of the PPDIOO approach are combined into the three phases of the PBM approach.

Advantages of network lifecycle approach

The network lifecycle approach provides several key benefits aside from keeping the design process organized. Some of the benefits of using a structured lifecycle approach to network design are as follows:

  • Lowering the total cost of network ownership: Businesses have always used the total cost of ownership (TCO) approach to make decisions. With IT becoming more and more relevant to business today, IT decisions have to follow a TCO approach rather than a pure capital expenditure (CAPEX) approach. This means that the operational expenses (OPEX) incurred while running and maintaining the network are also an important factor in the overall network approach. The network lifecycle approach helps in lowering the TCO by:
    • Identifying, evaluating, and choosing the right technology options.
    • Developing and documenting a design that is aligned with the business/service requirements.
    • Building implementation plans that can minimize the risk of implementation, thus avoiding cost and time overruns.
    • Planning for the operations as an integral part of network design so as to improve the operational efficiency by choosing the right set of tools, and operational skills required.
  • Increasing service uptime: Downtime and outages are the most dreaded terms in network operations, as they cause service disruption, resulting in loss of revenue and goodwill. A network lifecycle approach can help reduce downtime by:
    • Identifying the network elements that need to be highly available for service availability, and designing the network for redundancy of such elements.
    • Planning the operational skills required for the network, and ensuring that the Network Operations Center (NOC) staff has the right skills.
  • Improving business agility: As businesses are faced with dynamic market trends, IT needs to be able to support business quickly and efficiently. This agility means that the network should have the ability to make changes to the way existing services are delivered on the network, or the ability to quickly add new services to the network based on the business requirements. A lifecycle approach helps provide this agility to the network by:
    • Capturing the business and technology requirements and their dependencies.
    • Developing detailed designs for each service at a block level and at a configuration level such that new services can be added without impacting the existing services.
    • Defining the horizontal and vertical scaling options for the network design during the design phase itself so that capacity can be quickly added to the network when required by the applications.
    • Creating operational run books and bringing in operational efficiencies through the proper use of tools and the right resource skills.

The PPDIOO approach consists of six phases, which are depicted in the following diagram and described in the subsequent sections:

Figure 14: The PPDIOO approach

Prepare phase

“Planning is bringing the future into the present so that you can do something about it now.”
- Alan Lakein

It is said, with reference to the OSI layers, that there is an eighth layer riding above the application layer: the business layer, as this is the layer that defines which applications are to be used on the network. The prepare phase tries to capture this business layer and the technological requirements it places on the underlying network infrastructure.

The prepare and the plan phases of the network lifecycle talk about the future and then hand it over to the next phases, which are concerned with how to build the present network so that it can meet the future requirements.

The prepare phase involves establishing the organizational requirements from a business perspective, and developing an appropriate technology strategy. The following are some examples of questions to be answered in this stage:

  • What is the vision of the company?
  • What are the business goals of the company today, and what are the anticipated goals and IT requirements for the future?
  • What is the cloud strategy for the organization?
  • Would the organization want to own the network assets and build a data center, or just host the applications in an outsourced data center?
  • What will be the model for data center outsourcing: Infrastructure as a Service, Platform as a Service, or Software as a Service?
  • What is the communications strategy of the company? Would the company want to move to cloud-based models for its internal communications?
  • What will be the WAN strategy for the network? Would the WAN links be owned or on a shared network?
  • What is the Operations strategy for the organization?

The end goal of this stage is to develop a network strategy by comparing the different options and to propose an architectural view that identifies the various technologies to be used in the network, and the interdependencies between the various technologies. This phase also covers a lot of financial analysis, and building business cases for the decisions as all decisions have to be backed by sound financial reasoning.

By preparing for the network rollout in this manner, the company has a fair view of the budgetary requirements for the project in terms of time, money, and resources, and a long-term roadmap that can be leveraged as the business requirements change with time.

Most of these decisions are made by senior management and have usually already been taken by the time a network is being designed and implemented. Hence, we will not delve any further into these aspects in this book, but will focus in subsequent chapters on the implications of the various technologies and how they impact the operational and business models.

Plan phase

This is the phase where the job of the actual network architect starts. This phase involves getting the right stakeholders together and documenting the network requirements with respect to the network goals, services, user needs, and so on. The plan phase involves identifying the sites, classifying them, and evaluating the existing network infrastructure, if any, to understand whether the existing assets can be reused or redesigned for the new network. This phase also involves finalizing the hardware requirements for the network infrastructure devices.

Some of the questions that need to be answered at this stage are as follows:

  • Who are the users of this network and what is the level of segmentation required between the various users?
  • What are the services required by each group of users?
  • Where are the users located?
  • What is the hardware required to meet the user requirements?
  • Where will the new hardware be installed?
  • What are the power and space requirements at the locations?
  • What existing services/networks, if any, need to be integrated with or replaced by the new network?
  • Where would the network be connected to the internet?
  • What is the current security state of the company?
  • What operational skills will be required to design/implement and operate the network?

Two important documents that are created during this phase are the Customer Requirement Document (CRD) that contains the detailed technical specifications of the network to be built, and the Site Requirement Specification (SRS) document that contains the physical, electrical, and environmental specifications for each site, where the equipment will be deployed. Site audits are done based upon the SRS documents to ensure that the sites are ready for the equipment to be installed, and any gaps/corrective action required is identified.

We will cover some of these topics in Chapter 2, Networks for Digital Enterprises, where we will describe the network requirements for a modern enterprise.

Design phase

By the end of the planning phase, the business requirements have been drilled down into technical requirements. Now is the time to convert the technical requirements into the actual protocol-level details that will ensure that the network delivers the technical, functional, and performance requirements it is being designed for. Some of the most technical decisions are made in this phase of the network lifecycle, such as:

  • What should be the physical topology of the network?
  • What should be the logical topology of the network?
  • How should we plan for redundancy at the node level, site level, and at a service level?
  • What should be the IP addressing schema for the network?
  • What protocols should run on the network?
  • How do we prioritize the different types of applications on the network?
  • How do we segment the users on the network?
  • How do we ensure security of the network devices?
  • What management protocols should be run on the network?
  • How would the different services be deployed on the network?
  • How would we ensure that adding a new service does not impact any existing service?
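
As a simple illustration of how an IP addressing schema can be worked out, the following Python sketch carves an assumed corporate supernet into per-site blocks and then into management and user subnets, using only the standard library's ipaddress module. The supernet, site names, and prefix sizes are assumptions made purely for this example and are not recommendations from this chapter.

import ipaddress

# Assumed corporate supernet and per-site allocation sizes (illustrative only).
CORPORATE_SUPERNET = ipaddress.ip_network("10.0.0.0/16")
SITES = ["HQ", "Branch-1", "Branch-2"]

# Carve one /20 block per site out of the /16 supernet.
site_blocks = CORPORATE_SUPERNET.subnets(new_prefix=20)

plan = {}
for site, block in zip(SITES, site_blocks):
    # Split each site block into /24 subnets: the first /24 is reserved
    # for management, and the next four are reserved for user VLANs.
    subnets = list(block.subnets(new_prefix=24))
    plan[site] = {
        "site_block": block,
        "management": subnets[0],
        "user_vlans": subnets[1:5],
    }

for site, entry in plan.items():
    print(f"{site}: block {entry['site_block']}, management {entry['management']}")
    for idx, subnet in enumerate(entry["user_vlans"], start=1):
        print(f"  user subnet {idx}: {subnet}")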

It is in the design phase of the network lifecycle that the High-level design (HLD) and Low-level design (LLD) documents are produced. The high-level design describes the network design at a protocol level, while the low-level design describes how to implement that design on the network devices, arriving at configuration templates. These design documents detail the design to meet the requirements of availability, reliability, flexibility, security, scalability, and performance.

The detailed design can also help in chalking out the day-to-day operational activities and network management processes, thereby simplifying network operations and helping to reduce OPEX and TCO. The design phase is also the phase when the design is validated on a staged network in the lab and configuration templates are fine-tuned.
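
To make the idea of a configuration template more concrete, here is a minimal sketch of how such a template might be captured so that it can be reused and validated in the lab. The commands, placeholder names, and values are generic illustrations chosen for this sketch, not configuration guidance from this book.

# A minimal, illustrative access-switch template with placeholders.
# The placeholder names (hostname, user_vlan, mgmt_vlan, mgmt_ip, mgmt_mask)
# are assumptions made for this sketch, not a standard defined in the text.
ACCESS_SWITCH_TEMPLATE = """hostname {hostname}
!
vlan {user_vlan}
 name USERS
!
interface Vlan{mgmt_vlan}
 description Management
 ip address {mgmt_ip} {mgmt_mask}
 no shutdown
!
"""

if __name__ == "__main__":
    # Render one example device to show how the placeholders resolve.
    print(ACCESS_SWITCH_TEMPLATE.format(
        hostname="LAB-SW-01",
        user_vlan=10,
        mgmt_vlan=99,
        mgmt_ip="10.0.0.2",
        mgmt_mask="255.255.255.0",
    ))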

Another important activity in the design phase is to define the test cases that will be executed on the network to ensure that the network is built as designed. The test case document is generally called an Acceptance Test Plan (ATP) or a Network Ready for Use (NRFU) test plan. Having a documented test plan, down to the details of how to execute the tests and what commands to run to validate the network implementation, is crucial to ensure that the network will run as per the required specifications. A typical NRFU has two parts: one covering the test cases that can be carried out on a standalone basis at each site, and the other covering the end-to-end service testing across the entire network. The NRFU document can also include additional parts specific to network integration or service migration if the new network has to be integrated with any existing assets, or if any existing services need to be migrated onto the network being built.
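
As a hedged illustration of how standalone NRFU test cases can be captured in an executable form, the sketch below runs a small set of reachability checks and records a pass/fail verdict for each. The test names, target addresses, and pass criteria are assumptions invented for the example, not test cases prescribed by this book.

import subprocess

# Illustrative NRFU test cases; the targets below are assumptions for the sketch.
TEST_CASES = [
    {"name": "Site router loopback reachable", "target": "10.0.0.1"},
    {"name": "Data centre gateway reachable", "target": "10.1.0.1"},
    {"name": "Internet DNS reachable", "target": "8.8.8.8"},
]

def ping(host, count=3):
    """Return True if the host answers at least one echo (uses 'ping -c' as on Linux/macOS)."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def run_nrfu(cases):
    """Execute each test case and return (name, verdict) pairs for the ATP/NRFU report."""
    return [(case["name"], "PASS" if ping(case["target"]) else "FAIL") for case in cases]

if __name__ == "__main__":
    for name, verdict in run_nrfu(TEST_CASES):
        print(f"{verdict}: {name}")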

We will cover these activities, such as choosing the right protocols and building the configuration templates based on those protocol choices, in Chapters 3 to 9.

Implement phase

In the implement phase, the goal is to integrate devices and new capabilities in accordance with the design, without compromising network availability or performance. This is the phase where the actual implementation of the design starts, including deploying the network equipment and configuring it.

Site audits are reviewed, and the actual installation of the devices, including rack-and-stack and power-on testing, is done in this stage. Some of the documents required for this stage include detailed installation documents for each type of equipment and the test process for each device type. Further, Network Implementation Plan (NIP) documents are created; these are detailed documents for each site that is a part of the network. They lay down the list of equipment to be deployed at the site, the rack layouts, port connectivity diagrams, and the actual device configurations, along with the IP addressing and other variable parameters for each device at the site. The configurations are derived from the configuration templates that were created and tested in the design phase. The NIP becomes the reference document for the implementation engineer, who has to simply download the configuration onto the new devices and conduct the tests that are specific to the site.
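
To illustrate how the per-site variables recorded in a NIP might be combined with a design-phase template to produce ready-to-load device configurations, here is a minimal sketch. The site data, placeholder names, template contents, and file layout are all assumptions made for the example.

from pathlib import Path

# Illustrative per-site variables, as they might be recorded in a NIP (assumed values).
SITE_DEVICES = [
    {"hostname": "BR1-SW-01", "mgmt_ip": "10.0.16.2", "mgmt_mask": "255.255.255.0", "user_vlan": 10},
    {"hostname": "BR1-SW-02", "mgmt_ip": "10.0.16.3", "mgmt_mask": "255.255.255.0", "user_vlan": 10},
]

# A compact template; in practice this would be the template validated in the design phase.
TEMPLATE = """hostname {hostname}
!
vlan {user_vlan}
 name USERS
!
interface Vlan99
 ip address {mgmt_ip} {mgmt_mask}
 no shutdown
!
"""

def generate_configs(devices, output_dir="configs"):
    """Render one configuration file per device so the field engineer can load it as-is."""
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    for device in devices:
        (out / f"{device['hostname']}.cfg").write_text(TEMPLATE.format(**device))

if __name__ == "__main__":
    generate_configs(SITE_DEVICES)

Running the sketch would produce one .cfg file per device, which mirrors the way a NIP lets the implementation engineer work from pre-built, site-specific configurations.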

Once all sites are up and ready, and the WAN connectivity is established, end-to-end service testing is conducted based on the NRFU test cases.

Any migration of existing services, or any integration with the existing network infrastructure, is also carried out in this phase, and its success is validated against the test cases defined in the NRFU document.

Operate phase

The operate phase of the network is when the actual users start using the network, and the operations staff starts monitoring the network and the services delivered on it. It is important to have a multifaceted approach to network operations that covers the domains of people, processes, and tools. The primary goal in the network operations phase is to maintain network and service uptime at minimal cost. This can be done only if the organization has the right skills in the resources tasked with network operations, and a structured process for the day-to-day activities, so that tasks are not dependent on individuals and every person carries out a job in the same way as the others would. The tools aspect is essential to improving efficiency, as mundane and routine tasks can be automated, thereby allowing the NOC resources to focus on actual problems and reducing the chances of manual error.
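
As a small example of the kind of routine check that lends itself to automation, the sketch below evaluates collected interface utilization samples against a threshold and flags candidates for investigation. The sample data, field names, and threshold are invented for the illustration; in a real NOC the samples would come from the monitoring tools in use.

# Illustrative utilization samples, as they might be exported by a monitoring tool.
# All devices, interfaces, and values below are assumptions for this sketch.
SAMPLES = [
    {"device": "BR1-SW-01", "interface": "Gi1/0/48", "utilization_pct": 91.5},
    {"device": "BR1-SW-02", "interface": "Gi1/0/48", "utilization_pct": 34.0},
    {"device": "HQ-CORE-01", "interface": "Te1/1/1", "utilization_pct": 78.2},
]

UTILIZATION_THRESHOLD_PCT = 80.0  # assumed alerting threshold

def flag_hot_interfaces(samples, threshold=UTILIZATION_THRESHOLD_PCT):
    """Return the samples that breach the threshold, worst first."""
    hot = [s for s in samples if s["utilization_pct"] >= threshold]
    return sorted(hot, key=lambda s: s["utilization_pct"], reverse=True)

if __name__ == "__main__":
    for sample in flag_hot_interfaces(SAMPLES):
        print(f"INVESTIGATE: {sample['device']} {sample['interface']} "
              f"at {sample['utilization_pct']:.1f}% utilization")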

It is important to add at this point that nearly 67% of the IT budget is operational, and only 33% of the expense is of a capital nature. Since operations account for such a large share of the expense, any process improvements that increase availability or bring down cost are of great value to an organization.

Network operations involve monitoring the vital parameters of the network to increase service availability, improve service quality, mitigate outages, and watch the performance of the network devices for any early signs of potential outages or security issues. We will cover operations in more detail in Chapter 10, A Systematic Approach to Network Operations, and Chapter 11, Basic Troubleshooting Skills and Techniques, later in this book.

Optimize phase

One needs to constantly evolve and improve to maintain a competitive advantage. A continual/continuous improvement process (CIP) is an ongoing effort to improve products, services, or processes. It is this desire of the organization to continuously evolve and improve that is addressed in the optimization phase.

This process is closely tied to the operate phase, as the results of that phase are analyzed to detect recurring problems and to see whether there are any design discrepancies, or enhancements that can be made to the network, to improve service availability or performance. The goal of the optimize phase is to identify and resolve problems before they start to manifest themselves on the network and result in service disruptions.

The optimize phase of the network lifecycle can lead to changes in the network design to meet the service specifications with respect to functionality, performance, security, availability, and so on. In such cases, the network engineers go back to the drawing board to evaluate new alternatives and propose new approaches to meet the changing needs. Some changes might need only minor tweaks to the design, and can therefore be handled by going back to the design phase of the network lifecycle, followed by implementation and testing. However, optimization can also be triggered by the maturing of a new technology in the market; this requires a much broader scope, and the organization needs to start at the plan phase and follow the complete cycle all over again. Whatever the case, it is important to document the reasons for the change and the proposed solution, and then follow them up with the rigor of the complete lifecycle so that the network remains capable of meeting new requirements in a sustainable manner.

 

Summary

We discussed the basics of networking in this chapter, delving deep into the TCP and IP protocols, as these are fundamental protocols that will enable all services in a modern enterprise.

We also looked at the network lifecycle approach and the various stages a network goes through as it is built, with emphasis on the multiple activities required to build a network.

We will apply these concepts in the next chapters to discuss the various parts of an IT network and the design considerations for each of them. This will cover the campus local area network, wireless networks, wide area networks, and the data center. We will discuss the characteristics of the network as a whole in Chapter 2, Networks for Digital Enterprises, and Chapter 3, Components of the Enterprise Network, and delve deeper into the different domains in the later chapters.

About the Author
  • Harpreet Singh

    Harpreet Singh has more than 20 years of experience in the data domain and has designed and implemented networks and solutions across technologies ranging from X.25, Frame Relay, and ATM to TCP/IP and MPLS-based networks. Harpreet is a gold medalist and earned his bachelor of engineering degree before completing his postgraduate diploma in business administration. He has been a part of the faculty at the Advanced Level Telecom Training Center, a premier institute under the UNDP program for the training of telecom officers, where he conducted training on data networks, including technologies such as X.25, Frame Relay, ATM, Siemens switches, and IP/MPLS networks. Harpreet has been a part of the core team for multiple pan-India network rollouts ranging from plain IP to Carrier Ethernet and MPLS. He has been involved with all major service providers in India. He was the network architect for the first pan-India IP network in 1997, the first MPLS network rollout in India in 2002, and the largest MetroE deployment in the world at the time in 2004. He was the technical director for the largest ever mobile backhaul IP network based on an IP/MPLS network. He is currently a technology consultant at Cisco Systems, engaged in large and complex cross-technology projects for strategic customers, advising them on network design, operations, and digital transformations. Harpreet has been a speaker at forums such as APRICOT, IETE, and other international conferences. He can be reached at harpreet_singh_2000@yahoo.com.
