Networking Performance Design

by Prasenjit Sarkar | August 2013

In this article created by Prasenjit Sarkar, the author of vSphere High Performance Cookbook, we will cover the tasks related to networking performance design. You will learn about the following aspects of networking performance design:

  • Designing a network for load balancing and failover for vSphere Standard Switch
  • Designing a network for load balancing and failover for vSphere Distributed Switch
  • What to know when offloading checksum
  • Selecting the correct virtual network adapter
  • Improving performance through VMDirectPath I/O
  • Improving performance through NetQueue
  • Improving network performance using the SplitRx mode for multicast traffic
  • Designing a multi-NIC vMotion


Device and I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware. Software-based I/O virtualization and management, in contrast to a direct pass-through to the hardware, enables a rich set of features and simplified management. With networking, virtual NICs and virtual switches create virtual networks between virtual machines running on the same host, without the network traffic consuming bandwidth on the physical network.

NIC teaming consists of multiple physical NICs and provides failover and load balancing for virtual machines. Virtual machines can be seamlessly relocated to different systems by using VMware vMotion, while keeping their existing MAC addresses and running state. The key to effective I/O virtualization is to preserve these virtualization benefits while keeping the added CPU overhead to a minimum.

The hypervisor virtualizes the physical hardware and presents each virtual machine with a standardized set of virtual devices. These virtual devices effectively emulate well-known hardware and translate the virtual machine requests to the system hardware. This standardization on consistent device drivers also helps with virtual machine standardization and portability across platforms, because all virtual machines are configured to run on the same virtual hardware, regardless of the physical hardware in the system. In this article we will discuss the following:

  • Describe various network performance problems
  • Discuss the causes of network performance problems
  • Propose solutions to correct network performance problems

Designing a network for load balancing and failover for vSphere Standard Switch

The load balancing and failover policies that are chosen for the infrastructure can have an impact on the overall design. Using NIC teaming we can group several physical network adapters attached to a vSwitch. This grouping enables load balancing between the different physical NICs and provides fault tolerance if a card or link failure occurs.

Network adapter teaming offers a number of load balancing and load distribution options. Load balancing here is load distribution based on the number of connections, not on network traffic. In most cases, load is managed only for the outgoing traffic, and balancing is based on three different policies:

  • Route based on the originating virtual switch port ID (default)
  • Route based on the source MAC hash
  • Route based on IP hash

Also, we have two network failure detection options and those are:

  • Link status only
  • Beacon probing

Getting ready

To step through this recipe, you will need one or more running ESXi hosts, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

To change the load balancing policy, select the right one for your environment, and select the appropriate failover policy, follow these steps (a command-line equivalent is sketched after the steps):

  1. Open up your VMware vSphere Client.
  2. Log in to the vCenter Server.
  3. On the left-hand side, choose any ESXi server, and then choose the Configuration tab from the right-hand pane.
  4. Click on the Networking section and select the vSwitch for which you want to change the load balancing and failover settings.
  5. You may also override these settings at the port group level.

  6. Click on Properties.
  7. Select the vSwitch and click on Edit.
  8. Go to the NIC Teaming tab.
  9. Select one of the available policies from the Load Balancing drop-down menu.
  10. Select one of the available policies on the Network Failover Detection drop-down menu.
  11. Click on OK to make it effective.
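
The same policy can also be applied from the command line. The following is a minimal sketch using esxcli from an SSH session on the host; it assumes a standard switch named vSwitch0, and the portid and link values correspond to the default policies described in this recipe:

    # Set the load balancing and failure detection policy on vSwitch0
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid --failure-detection=link
    # Verify the resulting teaming policy
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

Valid values for --load-balancing are portid, mac, iphash, and explicit, matching the policies discussed in this recipe; --failure-detection accepts link or beacon.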

How it works...

Route based on the originating virtual switch port ID (default)

In this configuration, load balancing is based on the number of physical network cards and the number of virtual ports used. With this configuration policy, a virtual network card connected to a vSwitch port will always use the same physical network card. If a physical network card fails, the virtual network card is redirected to another physical network card.

You typically do not see the individual ports on a vSwitch. However, each vNIC that gets connected to a vSwitch is implicitly using a particular port on the vSwitch. (It's just that there's no reason to ever configure which port, because that is always done automatically.)

It does a reasonable job of balancing your egress uplinks for the traffic leaving an ESXi host as long as all the virtual machines using these uplinks have similar usage patterns.

It is important to note that port allocation occurs only when a VM is started or when a failover occurs. Balancing is done based on a port's occupation rate at the time the VM starts up: which pNIC a VM uses is determined when the VM powers on, based on which vSwitch ports are occupied at that moment. For example, if you started 20 VMs in a row on a vSwitch with two pNICs, the odd-numbered VMs would use the left pNIC and the even-numbered VMs the right pNIC, and that assignment would persist even if you shut down all the even-numbered VMs; the left pNIC would carry all the remaining VMs and the right pNIC would carry none. It can also happen that two heavily loaded VMs are connected to the same pNIC, so the load is not truly balanced.

This is the simplest policy, and choosing the simplest policy that meets your requirements usually keeps operations simple as well.

When speaking of this policy, it is important to understand that if, for example, a team is created with two 1 Gbps cards and one VM consumes more than one card's capacity, a performance problem will arise: traffic greater than 1 Gbps will not spill over to the other card, and the VMs sharing the same uplink as the VM consuming all the resources will be affected. Likewise, if two VMs each want to use 600 Mbps and both happen to land on the first pNIC, that pNIC cannot meet the 1.2 Gbps demand no matter how idle the second pNIC is.

Route based on source MAC hash

The principle is the same as the default policy, but it is based on the MAC addresses rather than the port IDs. Depending on how the MAC hash resolves, this policy may still place several vNICs on the same physical uplink.

For the MAC hash, VMware assigns ports differently. The assignment is not based on the dynamically changing port (after a power off and power on, a VM usually gets a different vSwitch port), but on the fixed MAC address. As a result, a VM is always assigned to the same physical NIC unless the configuration is changed. With the port ID policy, the VM could end up on a different pNIC after a reboot or vMotion.

If you have two ESXi servers with the same configuration, the VM will stay on the same pNIC number even after a vMotion. But again, one pNIC may be congested while the others sit idle, so there is no real load balancing.

Route based on IP hash

The limitation of the two previously-discussed policies is that a given virtual NIC will always use the same physical network card for all its traffic. IP hash-based load balancing uses the source and destination of the IP address to determine which physical network card to use. Using this algorithm, a VM can communicate through several different physical network cards based on its destination. This option requires configuration of the physical switch's ports to EtherChannel. Because the physical switch is configured similarly, this option is the only one that also provides inbound load distribution, where the distribution is not necessarily balanced.

There are some limitations and reasons why this policy is not commonly used. These reasons are described as follows:

  • The route based on IP hash load balancing option involves added complexity and configuration support from the upstream switches. Link Aggregation Control Protocol (LACP) or EtherChannel is required for this algorithm to be used; note that LACP is not supported on a vSphere Standard Switch, so a static EtherChannel must be used there.
  • For IP hash to be an effective algorithm for load balancing there must be many IP sources and destinations. This is not a common practice for IP storage networks, where a single VMkernel port is used to access a single IP address on a storage device.

A given vNIC will always send its traffic to a particular destination (for example, google.com) through the same pNIC, though traffic to another destination (for example, bing.com) might go through another pNIC.
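
To illustrate how the hash determines the uplink, the following shell sketch approximates the commonly documented calculation (an XOR of the source and destination IP addresses, modulo the number of uplinks). The addresses and uplink count are placeholders, and the exact implementation inside ESXi may differ:

    # Convert a dotted-quad address to a 32-bit integer
    ip_to_int() {
      local IFS=.
      set -- $1
      echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    }

    SRC=$(ip_to_int 192.168.1.10)    # source VM address (example)
    DST=$(ip_to_int 10.0.0.25)       # destination address (example)
    UPLINKS=2                        # number of active uplinks in the team

    # The uplink index chosen for this source/destination pair
    echo $(( (SRC ^ DST) % UPLINKS ))

Because the pair of addresses fully determines the uplink, many different source/destination pairs are needed before traffic spreads evenly across the team.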

So, in a nutshell, due to the added complexity, the upstream dependency on advanced switch configuration, and the management overhead, this configuration is rarely used in production environments. Note that the dependency goes both ways: if you use IP hash, the pSwitch must be configured with LACP or EtherChannel, and if you use LACP or EtherChannel, the load balancing algorithm must be IP hash. This is because with link aggregation, inbound traffic to the VM can arrive on either pNIC, and the vSwitch must be prepared to deliver it to the VM; only IP hash will do that (the other policies drop inbound traffic for a VM that arrives on a pNIC the VM does not use).

There are only two network failure detection options:

Link status only

The link status option enables the detection of failures related to the physical network's cables and switch. However, be aware that configuration issues are not detected. This option also cannot detect the link state problems with upstream switches; it works only with the first hop switch from the host.

Beacon probing

The beacon probing option allows the detection of failures unseen by the link status option by sending Ethernet broadcast frames through all the network cards. These network frames allow the vSwitch to detect faulty configurations or upstream switch failures and to force a failover if ports are blocked. When using an inverted U physical network topology in conjunction with a dual-NIC server, it is recommended to enable link state tracking or a similar network feature in order to avoid traffic black holes.

According to VMware's best practices, it is recommended to have at least three NICs before activating this functionality. However, if IP hash is going to be used, beacon probing should not be used for network failure detection, in order to avoid an ambiguous state caused by the limitation that a packet cannot hairpin back out of the port on which it was received. Beacon probing works by sending out and listening to beacon probes from the NICs in a team; if there are two NICs, each NIC sends out a probe and the other NIC receives it. Because an EtherChannel is considered one link, this does not function properly, as the NIC uplinks are not logically separate. If beacon probing is used in that configuration, it can result in MAC address flapping errors and interrupted network connectivity.

Designing a network for load balancing and failover for vSphere Distributed Switch

The load balancing and failover policies that are chosen for the infrastructure can have an impact on the overall design. Using NIC teaming, we can group several physical network adapters attached to a vSwitch. This grouping enables load balancing between the different physical NICs and provides fault tolerance if a card or link failure occurs.

The vSphere Distributed Switch offers a load balancing option that actually takes the network workload into account when choosing the physical uplink: route based on physical NIC load, also called Load Based Teaming (LBT). We recommend this load balancing option over the others when using a distributed switch. The benefits of using this load balancing policy are as follows:

  • It is the only load balancing option that actually considers NIC load when choosing uplinks.
  • It does not require upstream switch configuration dependencies like the route based on IP hash algorithm does.
  • When the route based on physical NIC load is combined with the network I/O control, a truly dynamic traffic distribution is achieved.

Getting ready

To step through this recipe, you will need one or more running ESXi Servers, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

To change the load balancing policy, select the right one for your environment, and select the appropriate failover policy, follow these steps:

  1. Open up your VMware vSphere Client.
  2. Log in to the vCenter Server.
  3. Navigate to Networking on the home screen.
  4. Navigate to a Distributed Port group and right click and select Edit Settings.
  5. Click on the Teaming and Failover section.
  6. From the Load Balancing drop-down menu, select Route Based on physical NIC load as the load balancing policy.
  7. Choose the appropriate network failover detection policy from the drop-down menu.
  8. Click on OK and your settings will be effective.

How it works...

Load based teaming, also known as route based on physical NIC load, maps vNICs to pNICs and remaps a vNIC-to-pNIC affiliation if the load exceeds specific thresholds on a pNIC. LBT uses the originating port ID algorithm for the initial port assignment, which results in the first vNIC being affiliated with the first pNIC, the second vNIC with the second pNIC, and so on. Once the initial placement is complete after the VM is powered on, LBT examines both the inbound and outbound traffic on each of the pNICs and redistributes the load if there is congestion.

LBT triggers a remapping when the average utilization of a pNIC exceeds 75 percent over a period of 30 seconds; the 30-second interval is used to avoid MAC flapping issues. If STP is in use, you should enable PortFast on the upstream switch ports. VMware recommends LBT over IP hash when you use vSphere Distributed Switch, as it does not require any special or additional settings in the upstream switch layer, which reduces unnecessary operational complexity. LBT maps a vNIC to a pNIC and then distributes the load across all the available uplinks, unlike IP hash, which only maps vNICs to pNICs but does not redistribute load. With IP hash it may happen that, while a high network I/O VM is sending traffic through pNIC0, another VM is mapped to the same pNIC and has to share it; LBT would detect the congestion and move one of the vNICs to a less loaded uplink.
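
The teaming policy of a distributed port group is managed through vCenter rather than esxcli, but you can still observe how traffic is spread across the uplinks while LBT is active. The following is a minimal sketch from the ESXi shell; the output path is an arbitrary example:

    # Capture two 10-second samples of all counters, including per-vmnic
    # transmit/receive rates, in esxtop batch mode
    esxtop -b -d 10 -n 2 > /tmp/esxtop-net.csv
    # Interactively, run esxtop and press 'n' to switch to the network view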

What to know when offloading checksum

VMware takes advantage of many of the performance features from modern network adaptors.

In this section we are going to discuss two of them:

  • TCP checksum offload
  • TCP segmentation offload

Getting ready

To step through this recipe, you will need a running ESXi Server and an SSH client (such as PuTTY). No other prerequisites are required.

How to do it...

The list of network adapter features that are enabled on your NIC can be found in the file /etc/vmware/esx.conf on your ESXi Server. Look for the lines that start with /net/vswitch.

However, do not change the default NIC's driver settings unless you have a valid reason to do so. A good practice is to follow any configuration recommendations that are specified by the hardware vendor. Carry out the following steps in order to check the settings:

  1. Open up your SSH Client and connect to your ESXi host.
  2. Open the file /etc/vmware/esx.conf.
  3. Look for the lines that start with /net/vswitch.
  4. Review the output, which lists the vSwitch and uplink entries recorded for the host (a minimal grep sketch follows these steps).
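
As a quick check from the ESXi shell, the following sketch simply filters the relevant entries out of esx.conf:

    # Show the vSwitch-related entries recorded in esx.conf
    grep "^/net/vswitch" /etc/vmware/esx.conf | head -20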

How it works...

A TCP message must be broken down into Ethernet frames. The maximum size of each frame is the maximum transmission unit (MTU); the default MTU is 1500 bytes. The process of breaking messages into frames is called segmentation.

Modern NIC adapters have the ability to perform checksum calculations natively. TCP checksums are used to determine the validity of transmitted or received network packets based on error correcting code. These calculations are traditionally performed by the host's CPU. By offloading these calculations to the network adapters, the CPU is freed up to perform other tasks. As a result, the system as a whole runs better. TCP segmentation offload (TSO) allows a TCP/IP stack from the guest OS inside the VM to emit large frames (up to 64KB) even though the MTU of the interface is smaller.

Earlier operating systems used the CPU to perform segmentation. Modern NICs try to optimize this TCP segmentation by using a larger segment size as well as offloading work from the CPU to the NIC hardware. ESXi utilizes this concept to provide a virtual NIC with TSO support, without requiring specialized network hardware. A quick way to check the TSO-related host setting from the command line is sketched after the following list.

  • With TSO, instead of processing many small MTU frames during transmission, the system can send fewer, larger virtual MTU frames.
  • TSO improves performance for the TCP network traffic coming from a virtual machine and for network traffic sent out of the server.
  • TSO is supported at the virtual machine level and in the VMkernel TCP/IP stack.
  • TSO is enabled on the VMkernel interface by default. If TSO becomes disabled for a particular VMkernel interface, the only way to enable TSO is to delete that VMkernel interface and recreate it with TSO enabled.
  • TSO is used in the guest when the VMXNET 2 (or later) network adapter is installed. To enable TSO at the virtual machine level, you must replace the existing VMXNET or flexible virtual network adapter with a VMXNET 2 (or later) adapter. This replacement might result in a change in the MAC address of the virtual network adapter.
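
As a rough check of whether hardware TSO is in use at the host level, the following sketch queries the /Net/UseHwTSO advanced setting with esxcli (assuming this option is present on your ESXi build; a value of 1 means hardware TSO is enabled):

    # Query the hardware TSO advanced setting on the host
    esxcli system settings advanced list -o /Net/UseHwTSO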

Selecting the correct virtual network adapter

When you configure a virtual machine, you can add NICs and specify the adapter type. The types of network adapters that are available depend on the following factors:

  • The version of the virtual machine, which depends on which host created it or most recently updated it.
  • Whether or not the virtual machine has been updated to the latest version for the current host.
  • The guest operating system.

The following virtual NIC types are supported:

  • Vlance
  • VMXNET
  • Flexible
  • E1000
  • Enhanced VMXNET (VMXNET 2)
  • VMXNET 3

If you want to know more about these network adapter types then refer to the following KB article:

http://kb.vmware.com/kb/1001805

Getting ready

To step through this recipe, you will need one or more running ESXi Servers, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

There are two ways to choose a particular virtual network adapter: while creating a new VM, or while adding a new network adapter to an existing VM.

To choose a network adaptor while creating a new VM, follow these steps:

  1. Open vSphere Client.
  2. Log in to the vCenter Server.
  3. Click on the File menu, and navigate to New | Virtual Machine.
  4. Go through the wizard until you reach the step where you create network connections. Here you need to choose how many network adaptors you need, which port group you want them to connect to, and an adaptor type.

To choose an adaptor type while adding a new network interface to an existing VM, follow these steps:

  1. Open vSphere Client.
  2. Log in to the vCenter Server.
  3. Navigate to VMs and Templates on your home screen.
  4. Select an existing VM where you want to add a new network adaptor, right click and select Edit Settings.
  5. Click on the Add button.
  6. Select Ethernet Adaptor.
  7. Select the Adaptor type and select the network where you want this adaptor to connect.
  8. Click on Next and then click on Finish

How it works...

Among the supported virtual network adaptor types, VMXNET is the paravirtualized device driver for virtual networking. The VMXNET driver implements an idealized network interface that passes the network traffic from the virtual machine to the physical cards with minimal overhead. The three versions of VMXNET are VMXNET, VMXNET 2 (Enhanced VMXNET), and VMXNET 3.

The VMXNET driver improves the performance through a number of optimizations as follows:

  • Shares a ring buffer between the virtual machine and the VMkernel, and uses zero copy, which in turn saves CPU cycles. Zero copy improves performance by having the virtual machines and the VMkernel share a buffer, reducing the internal copy operations between buffers to free up CPU cycles.
  • Takes advantage of transmission packet coalescing to reduce address space switching.
  • Batches packets and issues a single interrupt, rather than issuing multiple interrupts. This improves efficiency, but in some cases with slow packet-sending rates, it could hurt throughput while waiting to get enough packets to actually send.
  • Offloads the TCP checksum calculation to the network hardware rather than using the CPU resources of the virtual machine monitor. Use VMXNET 3 if you can, or otherwise the most recent model available, and install VMware Tools where possible. For certain unusual types of network traffic the generally best model is not always optimal; if you see poor network performance, experiment with other vNIC types to find the one that performs best.
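
The adapter type ends up recorded in the virtual machine's .vmx file. The snippet below is a minimal illustration of what such an entry can look like for the first adapter; ethernet0 and the port group name are placeholder examples, and editing the .vmx by hand is normally unnecessary because the vSphere Client manages these entries:

    ethernet0.present = "TRUE"
    ethernet0.virtualDev = "vmxnet3"        # adapter type, for example e1000 or vmxnet3
    ethernet0.networkName = "VM Network"    # port group this adapter connects to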

Improving performance through VMDirectPath I/O

VMware vSphere DirectPath I/O leverages Intel VT-d and AMD-Vi hardware support to allow guest operating systems to directly access hardware devices. In the case of networking, vSphere DirectPath I/O allows the virtual machine to access a physical NIC directly rather than using an emulated device or a paravirtualized device. An example of an emulated device is the E1000 virtual NIC, and examples of paravirtualized devices are the VMXNET and VMXNET 3 virtual network adapters. vSphere DirectPath I/O provides only limited increases in throughput, but it reduces the CPU cost of networking-intensive workloads.

vSphere DirectPath I/O is not compatible with certain core virtualization features. However, when you run ESXi on certain vendor configurations, vSphere DirectPath I/O for networking is compatible with the following:

  • vSphere vMotion
  • Hot adding and removing of virtual devices, suspend and resume
  • VMware vSphere® high availability
  • VMware vSphere® Distributed Resource Scheduler (DRS)
  • Snapshots

Typical virtual machines and their workloads do not require the use of vSphere DirectPath I/O. However, for workloads that are networking intensive and do not need the core virtualization features just mentioned, vSphere DirectPath I/O might be useful to reduce CPU usage and/or latency. Another potential use case of this technology is passing through a network card to a guest when the network card is not supported by the hypervisor.

Getting ready

To step through this recipe, you will need one or more running ESXi Servers whose hardware supports Intel VT-d or AMD-Vi, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

To configure VMDirectPath I/O direct PCI device connections for virtual machines, follow these steps:

  1. Open vSphere Client.
  2. Log in to the vCenter Server.
  3. On your Home screen, select Hosts and Clusters.
  4. Select an ESXi host from the inventory of VMware vSphere Client.
  5. In the Configuration tab, click on Advanced Settings. The Passthrough Configuration page lists all the available passthrough devices.
  6. Click on Configure Passthrough.

    Now you will see a list of devices.

  7. Select the devices and click on OK.
  8. When the devices are selected, they are marked with an orange icon. Reboot the system for the change to take effect. After rebooting, the devices are marked with a green icon and are enabled.
  9. To configure a PCI device on a virtual machine:
    1. From the inventory in vSphere Client, right-click on the virtual machine and choose Edit Settings. Please note that the VM must be powered off to complete this operation.
    2. Click on the Hardware tab.
    3. Click on Add.
    4. Choose the PCI device.
    5. Click on Next.

When the device is assigned, the virtual machine must have a memory reservation for the full configured memory size.
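
If you prefer the command line, the candidate devices can also be listed from the ESXi shell; the following is a small sketch, and the output includes the vendor and device IDs that the Configure Passthrough dialog works with:

    # List the PCI devices present in the host
    esxcli hardware pci list | more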

Improving performance through NetQueue

NetQueue is a performance technology, supported by VMware, that improves performance in virtualized environments that use 10 GigE adapters. NetQueue takes advantage of the multiple queue capability that newer physical network adapters have. Multiple queues allow I/O processing to be spread across multiple CPUs in a multiprocessor system, so while one packet is being processed on one CPU, another packet can be processed on another CPU at the same time.

Getting ready

To step through this recipe, you will need one or more running ESXi Servers and a working installation of vSphere CLI. No other prerequisites are required.

How to do it...

NetQueue is enabled by default. Disabling or enabling NetQueue on a host is done by using the VMware vSphere Command-Line Interface (vCLI).

To enable and disable this feature, you should perform the following activity:

  1. Open vSphere CLI.
  2. Enable the kernel setting by running the following command:

    esxcli system settings kernel set --setting="netNetqueueEnabled" --value="TRUE"

  3. Use the VMware vSphere CLI to configure the NIC driver to use NetQueue. The following command assumes that you are using the s2io driver:

    ~ # esxcli system module parameters set -m s2io -p "intr_type=2 rx_ring_num=8"

  4. Once you have set the parameters, use the following command to list the parameters and their options:

    ~ # esxcli system module parameters list -m s2io | more

  5. Reboot the host.

If you want to disable the NetQueue feature for any reason, follow these steps:

  1. Open vSphere CLI.
  2. Disable the kernel setting by running the following command:

    esxcli system settings kernel set --setting="netNetqueueEnabled" --value="FALSE"

  3. Now stop the NIC driver from using NetQueue by clearing its parameters with the following command:

    ~ # esxcli system module parameters set -m s2io -p "intr_type= rx_ring_num="

  4. List the parameters again to confirm that the values have been cleared:

    ~ # esxcli system module parameters list -m s2io | more

  5. Reboot the host.

How it works...

NetQueue can use multiple transmit queues to parallelize access that would otherwise be serialized by the device driver. Multiple transmit queues can also be used to provide service guarantees: a separate, prioritized queue can be dedicated to a particular type of network traffic.

NetQueue monitors the load of the virtual machines as they are receiving packets and can assign queues to critical virtual machines. All other virtual machines use the default queue.
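
After the reboot, a quick way to confirm that the kernel setting took effect is to filter the kernel settings list (a minimal sketch; the setting name follows the commands used in this recipe):

    # Confirm the NetQueue kernel setting on the host
    esxcli system settings kernel list | grep -i netqueue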

Improving network performance using the SplitRx mode for multicast traffic

Multicast is an efficient way of disseminating information and communicating over the network. Instead of sending a separate packet to every receiver, the sender sends one packet, which is then distributed to every receiver that has subscribed to this multicast. With multicast traffic, multiple receivers can reside on a single ESXi host; because those receivers share the same host, the packet replication for them is carried out in the hypervisor rather than on the physical network.

SplitRx mode uses multiple physical CPUs in an ESXi host to process network packets received in a single network queue. Because the replicated copies of a packet do not have to cross the physical network, this provides a scalable and efficient platform for multicast receivers. SplitRx mode improves throughput and CPU efficiency for multicast traffic workloads.

Only the VMXNET 3 network adapter supports SplitRx mode. This feature is disabled by default in vSphere 5.0; in vSphere 5.1 it is enabled by default.

SplitRx mode is individually configured for each virtual NIC.

Getting ready

To step through this recipe, you will need one or more running ESXi Servers, a couple of running virtual machines, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

SplitRx mode can be enabled or disabled globally for an ESXi host using the following steps:

  1. Open vSphere Client.
  2. Log in to the vCenter Server.
  3. Select Hosts and Clusters on the home screen.
  4. Select the ESXi host you wish to change.
  5. Navigate to the Configuration tab.
  6. Click on Advanced Settings in the Software pane.
  7. Click on the Net section of the left-hand side tree.
  8. Find NetSplitRxMode.
  9. Click on the value to be changed and configure it as you wish.

The possible values of NetSplitRxMode are as follows:

NetSplitRxMode = "0"

This value disables SplitRx mode for the ESXi host.

NetSplitRxMode = "1"

This value (the default) enables SplitRx mode for the ESXi host.

The change will take effect immediately and does not require the ESXi host to be restarted.
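
The same host-wide switch can also be flipped from the command line. The following is a minimal sketch using the esxcli advanced settings namespace, assuming the option path is /Net/NetSplitRxMode on your build:

    # Check the current host-wide SplitRx setting
    esxcli system settings advanced list -o /Net/NetSplitRxMode
    # Enable it (1) or disable it (0) for the host
    esxcli system settings advanced set -o /Net/NetSplitRxMode -i 1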

The SplitRx mode feature can also be configured individually for each virtual NIC using the ethernetX.emuRxMode variable in each virtual machine's .vmx file (where X is replaced with the network adapter's ID).

The possible values for this variable are:

ethernetX.emuRxMode = "0"

This value disables SplitRx mode for ethernetX.

ethernetX.emuRxMode = "1"

This value enables SplitRx mode for ethernetX.
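
As an illustration, the entry for the first virtual NIC of a VM would look like the following in its .vmx file (ethernet0 is just an example adapter ID; the change can also be made through the Configuration Parameters dialog, as described next):

    ethernet0.emuRxMode = "1"    # enable SplitRx mode for this vNIC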

So, if you want to change this value on individual VMs through vSphere Client, follow these steps:

  1. Select the virtual machine that you wish to change, and then click on Edit virtual machine settings.
  2. Go to the Options tab.
  3. Navigate to General, and then click on Configuration Parameters.
  4. Look for ethernetX.emuRxMode (where X is the number of the desired NIC). If the variable isn't present, click on Add Row and enter it as a new variable.
  5. Click on the value to be changed and configure it as you wish.
  6. The change will not take effect until the virtual machine has been restarted.

How it works...

SplitRx mode uses multiple physical CPUs to process network packets received in a single network queue. This feature can significantly improve the network performance for certain workloads.

These workloads include:

  • Multiple virtual machines on one ESXi host, all receiving multicast traffic from the same source.
  • Traffic via the DVFilter API between two virtual machines on the same ESXi host.

vSphere 5.1 automatically enables this feature for a VMXNET 3 virtual network adapter (the only adapter type on which it is supported) when it detects that a single network queue on a physical NIC is both heavily utilized and servicing more than eight clients (that is, virtual machines or the vmknic) that have evenly distributed loads.

Designing a multi-NIC vMotion

Before the release of VMware vSphere 5, designing a vMotion network was relatively straightforward. vMotion in VMware vSphere 5.0 is able to leverage multiple NICs.

In vSphere 5.x vMotion balances the operations across all available NICs. It does this for a single vMotion operation and for multiple concurrent vMotion operations. By using multiple NICs it reduces the duration of a vMotion operation.

Getting ready

To step through this recipe, you will need one or more running ESXi Servers, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it…

To create a multi-NIC vMotion network, follow these steps:

  1. Open up vSphere Client.
  2. Log in to the vCenter Server.
  3. Navigate to the Network section.
  4. Select the distributed switch, right click on it, and then select New Port Group.
  5. Provide a name, for example vMotion-01, and confirm it is the correct distributed switch.
  6. Enter the VLAN type and specify the VLAN number.
  7. Accept the default values for the other settings for now and select Next.
  8. Review the settings and click on Finish.

Once you are done, it will create a vMotion port group, but you need to change the load balancing and failover configuration.

  1. Select distributed port group vMotion-01 in the left side of your screen, right-click and select Edit settings.
  2. Go to Teaming and Failover and move the second dvUplink down to mark it as a Standby uplink. Verify that load balancing is set to Route based on originating virtual port.
  3. Click on OK.

Repeat the instructions to create a second distributed port group, vMotion-02, but use the VLAN ID that matches the IP subnet of the second VMkernel NIC.

Go to Teaming and Failover and configure the uplinks in an alternate order, ensuring that the second vMotion VMkernel NIC is using dvUplink2.

Now once you are done with creating two different distributed port groups, you need to create two vMotion VMK interfaces and tag them to each of these port groups, as follows:

  1. Select the first host in the cluster, go to Configure, and then click on Networking.
  2. Click on vSphere Distributed Switch.
  3. Now click on Manage Virtual Adaptor.
  4. Click on Add.
  5. Navigate to New Virtual Adaptor.
  6. Select the VMkernel and go to Next.
  7. Now select the port group where you want it to connect to.
  8. Select the checkbox Use this Virtual Adaptor for vMotion and click on Next.
  9. Specify the IP address there and click on Next.
  10. Review the configuration and click on Finish.
  11. Create the second vMotion enabled VMkernel NIC. Configure it identically, except:
    1. Select the second vMotion port group.
    2. Enter the IP address corresponding to the VLAN ID on distributed port group vMotion-02.

Now you have a ready multi-NIC vMotion Network.
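
For reference, a vMotion VMkernel NIC can also be created from the ESXi shell. The following is a hedged sketch only: the dvSwitch name, dvPort ID, interface name, and addresses are placeholders, and the vim-cmd call assumes your host supports enabling vMotion on a vmknic this way.

    # Create a VMkernel interface bound to a free port on the distributed switch
    esxcli network ip interface add --interface-name=vmk2 --dvs-name=dvSwitch0 --dvport-id=10
    # Assign it a static address on the vMotion-02 subnet
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.20.12 --netmask=255.255.255.0 --type=static
    # Mark the new interface for vMotion traffic
    vim-cmd hostsvc/vmotion/vnic_set vmk2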

Summary

In this article, we have covered the tasks related to networking performance design and have learned about its various aspects.


About the Author


Prasenjit Sarkar

Prasenjit Sarkar (@stretchcloud) is a senior member of technical staff at VMware Service Provider Cloud R&D, where he provides architectural oversight and technical guidance for designing, implementing, and testing VMware's Cloud datacenters. He is an author, R&D guy, and a blogger focusing on virtualization, Cloud computing, storage, networking, and other enterprise technologies. He has more than 10 years of expert knowledge in R&D, professional services, alliances, solution engineering, consulting, and technical sales with expertise in architecting and deploying virtualization solutions and rolling out new technologies and solution initiatives. His primary focus is on VMware vSphere Infrastructure and Public Cloud using VMware vCloud Suite. His aim is to own the entire life cycle of a VMware based IaaS (SDDC), especially vSphere, vCloud Director, vShield Manager, and vCenter Operations. He was one of the VMware vExperts of 2012 and is well known for his acclaimed virtualization blog http://stretch-cloud.info. He holds certifications from VMware, Cisco, Citrix, Red Hat, Microsoft, IBM, HP, and Exin. Prior to joining VMware, he served other fine organizations (such as Capgemini, HP, and GE) as a solution architect and infrastructure architect.
