VMware Performance and Capacity Management - Second Edition

4.7 (3 reviews total)
By Iwan 'e1' Rahabok

About this book

Performance management and capacity management are the two top issues enterprise IT faces when virtualizing. Until the first edition of this book, there was no in-depth coverage that tackled these issues systematically. The second edition expands on the first, adding new material and reorganizing the book into three logical parts.

The first part provides the technical foundation of SDDC Management. It explains the difference between a software-defined data center and a classic physical data center, and how it impacts both architecture and operations. From this strategic view, it zooms into the most common challenges—performance management and capacity management. It introduces a new concept called Performance SLA and also a new way of doing capacity management.

The next part provides the actual solution that you can implement in your environment. It puts the theories together and provides real-life examples created together with customers. It explains the reasons behind each dashboard, so that you understand why it is required and what problem it solves.

The last part acts as a reference section. It provides a complete reference to vSphere and vRealize Operations counters, explaining their dependencies and providing practical guidance on the values you should expect in a healthy environment.

Publication date:
March 2016


Chapter 1. VM – It Is Not What You Think!

In this chapter, we will dive into why a seemingly simple technology, a virtualized x86 machine, has huge ramifications for the IT industry. In fact, it is turning a lot of things upside down and breaking down silos that have existed for decades in large IT organizations. We will cover the following topics:

  • Why virtualization is not what we think it is

  • Virtualization versus partitioning

  • A comparison between a physical server and Virtual Machine


Our journey into the virtual world

Virtual Machines, or simply, VMs—who doesn't know what they are? Even a business user who has never seen one knows what it is. It is just a physical server, virtualized—nothing more.

Wise men say that small leaks sink the ship. I think that's a good way to explain why IT departments that manage physical servers well struggle when the same servers are virtualized.

We can also use the Pareto principle (80/20 rule): 80 percent of a VM is identical to a physical server. But it's the 20 percent of differences that hits you. We will highlight some of this 20 percent portion, focusing on areas that impact data center management.

The change caused by virtualization is much larger than the changes brought about by previous technologies. In the past two or more decades, we transitioned from mainframes to the client/server model and then to the web-based model; these are commonly agreed upon as the main evolutions in IT architecture. However, all of these were merely technological changes. They changed the architecture, yes, but they did not change operations in a fundamental way. Neither shift was described as a "journey"; no one spoke of a journey to the client/server model. With virtualization, however, we talk about the virtualization journey, because the changes are massive and involve a lot of people.

Gartner correctly predicted the impact of virtualization back in 2007 (http://www.gartner.com/newsroom/id/505040). More than 8 years later, we are still in the midst of the journey, which shows how pervasive the change is.

Notice how Gartner talks about a change in culture. So, virtualization has a cultural impact too. In fact, if your virtualization journey is not fast enough, look at your organization's structure and culture. Have you broken the silos? Do you empower your people to take risks and do things that have never been done before? Are you willing to flatten the organizational chart?


The silos that have served you well are likely your number one barrier to a hybrid cloud.

So why exactly is virtualization causing such a fundamental shift? To understand this, we need to go back to the basics of what virtualization actually is. It is pretty common for chief information officers (CIOs) to have a misconception about it.

Take a look at the following comments. Have you seen them in your organization?

  • VM is just a physical machine that has been virtualized. Even VMware says the Guest OS is not aware it's virtualized and it does not run differently.

  • It is still about monitoring CPU, RAM, disk, network, and other resources—no difference.

  • It is a technological change. Our management process does not have to change.

  • All of these VMs must still feed into our main enterprise IT management system. This is how we have run our business for decades, and it works.

If only life were that simple; we would all be 100-percent virtualized and have no headaches! Virtualization has been around for years, and yet most organizations have not mastered it. The proof of mastery is completing the journey and reaching the highest level of the virtualization maturity model.


Not all virtualizations are equal

There are plenty of misconceptions about virtualization, especially among IT folks who have not worked with it. CIOs who have not felt the strategic impact of virtualization (be it a good or bad experience) tend to carry these misconceptions. Although a VM looks similar to a physical system from the outside, it is completely re-architected under the hood.

So let's take a look at the first misconception: what exactly is virtualization?

Because it is an industry trend, virtualization is often generalized to include other technologies that are not actually virtualization. This is a typical strategy of IT vendors with similar technology. A popular technology often branded as virtualization is hardware partitioning; once it is parked under the virtualization umbrella, both are expected to be managed in the same way. Since the two are actually different, customers who try to manage both with a single piece of management software struggle to do it well.

Partitioning and virtualization are two different architectures in computer engineering, resulting in major differences in functionality. They are shown in the following diagram:

Virtualization versus partitioning

With partitioning, there is no hypervisor that virtualizes the underlying hardware. There is no software layer separating the VM and the physical motherboard. There is, in fact, no VM. This is why some technical manuals about partitioning technology do not even use the term "VM". They use the terms "domain", "partition", or "container" instead.

There are two variants of partitioning technology, hardware-level and OS-level partitioning, which are covered in the following bullet points:

  • In hardware-level partitioning, each partition runs directly on the hardware. It is not virtualized. This is why it is more scalable and has less of a performance hit. Because it is not virtualized, it has to have an awareness of the underlying hardware. As a result, it is not fully portable. You cannot move the partition from one hardware model to another. The hardware has to be built for the purpose of supporting that specific version of the partition. The partitioned OS still needs all the hardware drivers and will not work on other hardware if the compatibility matrix does not match. As a result, even the version of the OS matters, as it is just like the physical server.

  • In OS-level partitioning, there is a parent OS that runs directly on the server motherboard. This OS then creates an OS partition, where another "OS" can run. I use double quotes as it is not exactly the full OS that runs inside that partition. The OS has to be modified and qualified to be able to run as a Zone or Container. Because of this, application compatibility is affected. This is different in a VM, where there is no application compatibility issue because the hypervisor is transparent to the Guest OS.

Hardware partitioning

We covered the difference from an engineering point of view. Does it translate into different data center architectures and operations? We will focus on hardware partitioning first, as it differs fundamentally from software partitioning, and the use cases for the two are also different; software partitioning is typically used for cloud-native applications.

With that, let's do a comparison between hardware partitioning and virtualization. Let's take availability as a start.

With virtualization, all VMs are protected by vSphere High Availability (vSphere HA): 100 percent protection, achieved without any VM awareness. Nothing needs to be done at the VM layer; no shared disk, quorum disk, or heartbeat network is required to protect a VM with basic HA.

With hardware partitioning, protection has to be configured manually, one by one for each Logical Partition (LPAR) or Logical Domain (LDOM). The underlying platform does not provide it.

With virtualization, you can even go beyond five nines, that is, 99.999 percent, and move to 100 percent with vSphere Fault Tolerance. This is not possible in the partitioning approach as there is no hypervisor that replays CPU instructions. Also, because it is virtualized and transparent to the VM, you can turn on and off the Fault Tolerance capability on demand. Fault Tolerance is fully defined in the software.
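To put the availability figures in perspective, here is a quick back-of-the-envelope calculation, in plain Python with nothing vSphere-specific, of how much downtime per year each level of "nines" still allows. Five nines still permits a few minutes of outage a year, which is the gap Fault Tolerance aims to close:

```python
# Allowed downtime per year at a given availability level.
# 99.999% ("five nines") still permits roughly 5 minutes of outage a year.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability_percent: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines:>7}% -> {downtime_minutes_per_year(nines):8.2f} min/year")
```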

Another area of difference between partitioning and virtualization is Disaster Recovery (DR). With partitioning technology, the DR site requires another instance to protect the production instance. It is a different instance, with its own OS image, hostname, and IP address. Yes, we can perform a Storage Area Network (SAN) boot, but that means another Logical Unit Number (LUN) to manage, zone, replicate, and so on. This approach to DR does not scale to thousands of servers; to make it scale, it has to be simpler.

Compared to partitioning, virtualization takes a different approach. The entire VM fits inside a folder; it becomes like a document and we migrate the entire folder as if it were one object. This is what vSphere Replication in vSphere or Site Recovery Manager does. It performs a replication per VM; there is no need to configure SAN boot. The entire DR exercise, which can cover thousands of virtual servers, is completely automated and has audit logs automatically generated. Many large enterprises have automated their DR with virtualization. There is probably no company that has automated DR for their entire LPAR, LDOM, or container.

In the previous paragraph, we're not implying LUN-based or hardware-based replication to be inferior solutions. We're merely driving the point that virtualization enables you to do things differently.
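Because the entire VM fits inside a folder, per-VM replication really is just copying that folder as one object. The following sketch builds a mock VM directory and "replicates" it; the file names mirror the typical contents of a VM folder (a .vmx configuration file, .vmdk disk files, and an .nvram BIOS state file), and all paths and names are made up for illustration:

```python
# Sketch: a VM is just a folder of files, so per-VM replication
# amounts to copying that folder to the DR site as a single object.
import shutil
import tempfile
from pathlib import Path

# Typical contents of a VM folder (illustrative names)
VM_FILES = ["app01.vmx", "app01.vmdk", "app01-flat.vmdk", "app01.nvram"]

def replicate_vm(vm_dir: Path, dr_site: Path) -> Path:
    """Copy the whole VM folder to the DR location as one unit."""
    target = dr_site / vm_dir.name
    shutil.copytree(vm_dir, target)
    return target

# Build a mock VM folder and "replicate" it.
root = Path(tempfile.mkdtemp())
vm_dir = root / "app01"
vm_dir.mkdir()
for name in VM_FILES:
    (vm_dir / name).write_text("placeholder")

copy = replicate_vm(vm_dir, root / "dr-site")
print(sorted(p.name for p in copy.iterdir()))
```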

We're also not saying that hardware partitioning is an inferior technology. Every technology has its advantages and disadvantages and addresses different use cases. Before I joined VMware, I was a Sun Microsystems sales engineer for 5 years, so I'm aware of the benefit of UNIX partitioning. This book is merely trying to dispel the misunderstanding that hardware partitioning equals virtualization.

OS partitioning

We've covered the differences between hardware partitioning and virtualization.

Let's switch gears to software partitioning. In 2016, the adoption of Linux containers will continue its rapid rise. You can actually use both containers and virtualization, and they complement each other in some use cases. There are two main approaches to deploying containers:

  • Running them directly on bare metal

  • Running them inside a Virtual Machine

As both technologies evolve, the gap gets wider. As a result, managing a software partition is different from managing a VM, and securing a container is different from securing a VM. Be careful when opting for a management solution that claims to manage both; you will probably end up with the lowest common denominator. This is one reason why VMware is working on vSphere Integrated Containers and the Photon platform. That, however, is a separate topic by itself!


Virtual Machine – it is not what you think!

A VM is not just a physical server that has been virtualized. Yes, there is a Physical-to-Virtual (P2V) process; however, once it is virtualized, it takes on a new shape. This shape has many new and changed properties, and some old properties are no longer applicable or available. My apologies if the following is not the best analogy:


We P2V the soul, not the body.

On the surface, a VM looks like a physical server. So, let's actually look at VM properties. The following screenshot shows a VM's settings in vSphere 5.5. It looks familiar as it has a CPU, memory, hard disk, network adapter, and so on. However, look at it closely. Do you see any properties that you don't see in a physical server?

VM properties in vSphere 5.5

Let's highlight some of the virtual server properties that do not exist in a physical server. I'll focus on the properties that have an impact on management, as management is the topic of this book.

At the top of the dialog box, there are four tabs:

  • Virtual Hardware

  • VM Options

  • SDRS Rules

  • vApp Options

The Virtual Hardware tab is the only tab that has similar properties to a physical server. The other three tabs do not have their equivalent physical server counterparts. For example, SDRS Rules pertains to Storage DRS. It means that the VM storage can be automatically moved by vCenter. Its location in the data center is not static. This includes the drive where the OS resides (the C:\ drive in Windows systems). This directly impacts your server management tool. It has to have awareness of Storage DRS and can no longer assume that a VM is always located in the same datastore or Logical Unit Number (LUN). Compare this with a physical server. Its OS typically resides on a local disk, which is part of the physical server. You don't want your physical server's OS drive being moved around in a data center, do you?

In the Virtual Hardware tab, notice the New device option at the bottom of the screen. Yes, you can add devices, some of them on the fly, while an OS such as Windows or Linux is running. All of the VM's devices are defined in software. This is a major difference from a physical server, where the physical hardware defines the devices and you cannot change them. With virtualization, you can have a VM with five sockets on an ESXi host with two sockets: Windows or Linux sees five CPUs even though the underlying ESXi host has only two physical CPUs.

Your server management tool needs to be aware of this and recognize that the new Configuration Management Database (CMDB) is vCenter. vCenter is certainly not a CMDB product. We're only saying that in a situation when there is a conflict between vCenter and a CMDB product, the one you trust is vCenter. In a Software-Defined Data Center (SDDC), the need for a CMDB is further reduced.

The following screenshot shows a bit more detail. Look at the CPU device. Again, what do you see that does not exist in a physical server?

VM CPU and network properties in vSphere 5.5

Let's highlight some of the options.

Look at the Reservation, Limit, and Shares options under CPU. None of them exist in a physical server, as a physical server is standalone by default. It does not share any resource on the motherboard (such as CPU or RAM) with another server. With these three levers, you can perform Quality of Service (QoS) on a virtual data center. So, QoS is actually built into the platform. This has an impact on management, as the platform is able to do some of the management by itself. There is no need to get another console to do what the platform provides you out of the box.
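To make the three levers concrete, here is a small, self-contained model of proportional-share allocation. To be clear, this is not VMware's actual scheduler; it is just an illustration of the idea, under the assumption that reservations are granted first, the remaining capacity is split in proportion to shares, and no VM exceeds its limit. All VM names and numbers are made up:

```python
# Toy model of Reservation / Limit / Shares on one host.
# Not VMware's scheduler: just the proportional-share idea.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    reservation: int  # MHz guaranteed
    limit: int        # MHz cap
    shares: int       # relative weight for spare capacity

def allocate(host_mhz: int, vms: list[VM]) -> dict[str, float]:
    """Grant each VM its reservation, then split what's left by shares,
    never letting a VM exceed its limit."""
    alloc = {vm.name: float(vm.reservation) for vm in vms}
    spare = host_mhz - sum(alloc.values())
    active = [vm for vm in vms if alloc[vm.name] < vm.limit]
    while spare > 1e-9 and active:
        total_shares = sum(vm.shares for vm in active)
        next_active, distributed = [], 0.0
        for vm in active:
            grant = spare * vm.shares / total_shares
            take = min(grant, vm.limit - alloc[vm.name])  # respect the limit
            alloc[vm.name] += take
            distributed += take
            if alloc[vm.name] < vm.limit - 1e-9:
                next_active.append(vm)
        spare -= distributed
        active = next_active
        if distributed <= 1e-9:
            break
    return alloc

vms = [
    VM("db",  reservation=2000, limit=6000, shares=2000),
    VM("web", reservation=0,    limit=2000, shares=1000),
    VM("dev", reservation=0,    limit=8000, shares=500),
]
print(allocate(10000, vms))
```

Under contention, "db" is guaranteed its reservation and then takes the lion's share of the remainder until it hits its limit, at which point the leftover flows to the lower-share VMs. This redistribution is why the three levers together act as QoS.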

Other properties in the previous screenshot, such as Hardware virtualization, Performance counters, HT Sharing, and CPU/MMU Virtualization, also do not exist in a physical server. It is beyond the scope of this book to explain every feature, and there are many blogs and technical papers freely available on the Internet that explain them. Two of my favorites are http://blogs.vmware.com/performance/ and http://www.vmware.com/vmtn/resources/.

The next screenshot shows the VM Options tab. Again, which properties do you see that do not exist in a physical server?

VM Options in vSphere 5.5

I'd like to highlight a few of the properties present in the VM Options tab. VMware Tools is a key component: it provides drivers and improves manageability. It is not present on a physical server. A physical server has drivers, but none of them come from VMware. A VM is different: its motherboard (a virtual motherboard, naturally) is defined and supplied by VMware, so the drivers are supplied by VMware too, and VMware Tools is the mechanism for supplying them. VMware Tools comes in different versions, so it is now something you need to be aware of and manage.

We've just covered a few VM properties from the VM settings dialog box. There are literally hundreds of properties in VMs that do not exist in physical systems. Even the same properties are implemented differently. For example, although vSphere supports N_Port ID Virtualization (NPIV), the Guest OS does not see the World Wide Name (WWN). This means that data center management tools have to be aware of the specific implementation of vSphere. And these properties change with every vSphere release. Notice the line right at the bottom of the screenshot. It says Compatibility: ESXi 5.5 and later (VM version 10). This is your VM motherboard. It has a dependency on the ESXi version and yes, this becomes another new thing to manage too.
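The dependency between the VM's compatibility level and the ESXi version can be captured in a simple lookup. The mapping below reflects the commonly documented minimum ESXi releases for each virtual hardware version of that era; treat it as illustrative and verify against VMware's compatibility documentation for your release:

```python
# Sketch: VM hardware ("compatibility") version vs. minimum ESXi release.
# Illustrative mapping for the vSphere 5.x/6.0 era.

MIN_ESXI_FOR_HW = {
    8: "5.0",   # vmx-08
    9: "5.1",   # vmx-09
    10: "5.5",  # vmx-10, the version shown in the screenshot
    11: "6.0",  # vmx-11
}

def host_supports_vm(host_esxi: str, vm_hw_version: int) -> bool:
    """True if the host's ESXi release can run the VM's hardware version."""
    minimum = MIN_ESXI_FOR_HW[vm_hw_version]
    return tuple(map(int, host_esxi.split("."))) >= tuple(map(int, minimum.split(".")))

print(host_supports_vm("5.5", 10))  # a vmx-10 VM needs ESXi 5.5 or later
print(host_supports_vm("5.1", 10))  # an older host cannot run it
```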

Every vSphere release typically adds new properties too, making a VM more manageable than a physical machine and differentiating a VM further from a physical server.


Physical server versus Virtual Machine

Hopefully, I've driven home the point that a VM is different from a physical server. I'll now list the differences from a management point of view. The following table shows the differences that impact how you manage your infrastructure. Let's begin with the core properties:


BIOS

  • Physical server: Every brand and model has a unique BIOS. Even the same model (for example, HP DL 380 Generation 9) can have multiple BIOS versions. The BIOS needs updates and management, often requiring physical access to the data center and downtime.

  • Virtual Machine: The BIOS is standardized. There is only one type, the VMware motherboard, and it is independent of the underlying hardware. The VM BIOS needs far fewer updates and far less management; the inventory management system no longer needs a BIOS management module.

Virtual hardware

  • Physical server: Not applicable.

  • Virtual Machine: This is a new layer below the BIOS. It needs an update after every vSphere release, and a data center management system needs to be aware of it, as this requires deep knowledge of vSphere. For example, to upgrade the virtual hardware, the VM has to be powered off.

Drivers

  • Physical server: Many drivers are loaded and bundled with the OS, and you often need to get the latest drivers from the respective hardware vendors. All of these drivers need to be managed. This can be a complex operation, as they vary from model to model and brand to brand. The management tool needs rich functionality, such as the ability to check compatibility, roll out drivers, and roll them back if there is an issue.

  • Virtual Machine: Relatively few drivers are loaded with the Guest OS; some are replaced by the ones provided by VMware Tools. Even with NPIV, the VM does not need an FC HBA driver. VMware Tools itself needs to be managed, with vCenter being the most common management tool.
How do all these differences impact the hardware upgrade process? Let's take a look:

Physical server

  • Downtime is required. The upgrade is done offline and is complex.

  • OS reinstallation and updates are often required, making a hardware upgrade a complex project. Sometimes, an upgrade is not even possible without upgrading the application.

Virtual Machine

  • The upgrade is done online and is simple. Virtualization decouples the application from its hardware dependencies.

  • A VM can be upgraded from 5-year-old hardware to brand-new hardware, moving from a local SCSI disk to 10 Gigabit Fibre Channel over Ethernet (FCoE) and from a dual-core to an 18-core CPU. So yes, MS-DOS can run on 10 Gigabit Ethernet, accessing SSD storage via the PCIe lane. You just migrate to the new hardware with vMotion. The operation is drastically simplified as a result.

In the preceding table, we compared the core properties of a physical server with a VM. Every server needs storage, so let's compare their storage properties:

SAN connectivity

  • Physical server: Servers connected to a SAN see the SAN and the FC fabric. They need HBA drivers, FC PCI cards, and multipathing software.

  • Virtual Machine: No VM is connected to the FC fabric or the SAN; the VM sees only a local disk. Even with N_Port ID Virtualization (NPIV) and physical Raw Device Mapping (RDM), the VM does not send FC frames. Multipathing is provided by vSphere, transparent to the VM.

Local disk redundancy

  • Physical server: An advanced file system or volume manager is normally needed to build a Redundant Array of Inexpensive Disks (RAID) from local disks.

  • Virtual Machine: There is no need to RAID the local disk. It is one virtual disk, not two. Availability is provided at the hardware layer.

Backup

  • Physical server: A backup agent and a backup LAN are required in most cases.

  • Virtual Machine: These are not needed in most cases, as backup is done via VMware's vStorage APIs for Data Protection (VADP), which back up and restore vSphere VMs. An agent is only required for application-level backup.

There's a big difference in storage. How about network and security? Let's see:

NIC teaming

  • Physical server: NIC teaming is common and typically requires two cables per server.

  • Virtual Machine: NIC teaming is provided by ESXi. The VM is not aware of it and sees only one vNIC.

VLAN

  • Physical server: The Guest OS is VLAN-aware; the VLAN is configured inside the OS, and moving to another VLAN requires reconfiguration.

  • Virtual Machine: The VLAN is generally provided by vSphere, not configured inside the Guest OS, so the VM can be moved from one VLAN to another with no downtime. With network virtualization, the VM moves from a VLAN to a VXLAN.

Antivirus agent

  • Physical server: The AV agent is installed in the Guest OS and can be seen by an attacker.

  • Virtual Machine: The AV agent runs on the ESXi host as a VM (one per ESXi host) and cannot be seen by an attacker from inside the Guest OS.

AV resource consumption

  • Physical server: The AV agent consumes OS resources, and AV signature updates cause high storage usage.

  • Virtual Machine: The AV agent consumes minimal Guest OS resources, as the work is offloaded to the ESXi agent VM. Signature updates do not require high Input/Output Operations Per Second (IOPS) inside the Guest OS, and the total IOPS at the ESXi host level is lower because updates are not performed per VM.

Finally, let's take a look at the impact on management. As can be seen here, even the way we manage a server changes once it is converted into a VM:


Monitoring approach

  • Physical server: An agent is commonly deployed, and it is typical for a server to have multiple agents. In-Guest counters are accurate, as the OS sees the physical hardware. A physical server averages around 5 percent CPU utilization thanks to multicore chips, so there is little need to monitor it closely.

  • Virtual Machine: An agent is typically not deployed, although certain areas, such as application and Guest OS monitoring, are still best served by an agent. The key in-Guest counters are not accurate, as the Guest OS does not see the physical hardware. A VM averages around 50 percent CPU utilization because it is rightsized, roughly 10 times higher than a physical server, so there is a need to monitor it closely, especially when physical resources are oversubscribed. Capacity management becomes a discipline in itself.

Availability approach

  • Physical server: HA is provided by clusterware, such as Windows Server Failover Clustering (WSFC) and Veritas Cluster Server (VCS), which tends to be complex and expensive. Cloning a physical server is a complex task and requires the boot drive to be on the SAN or LAN, which is not typical. A snapshot is rarely made, due to cost and complexity; only very large IT departments perform physical server snapshots.

  • Virtual Machine: HA is a built-in core component of vSphere. From what I see, most clustered physical servers end up as just a single VM, since vSphere HA is good enough. Cloning can be done easily, even live; the drawback is that clones become a new area of management. Snapshots can be made easily too; in fact, one is taken every time as part of the backup process. Snapshots also become a new area of management.

Company asset

  • Physical server: The physical server is a company asset with book value in the accounting system. It needs proper asset management, as components vary among servers. An annual stock-take process is required.

  • Virtual Machine: A VM is not an asset, as it has no accounting value. It is like a document: technically, a folder with files in it. A stock-take process is no longer required, as a VM cannot exist outside vSphere.



I hope you enjoyed the comparison and found it useful. We covered, to a great extent, the impact caused by virtualization and the changes it introduces. We started by clarifying that virtualization is a different technology compared to partitioning. We then explained that once a physical server is converted into a Virtual Machine, it takes on a different form and has radically different properties. The changes range from the core property of the server itself to how we manage it.

The changes create a ripple effect in the bigger picture. The entire data center changes once we virtualize it, and this is the topic of our next chapter.

About the Author

  • Iwan 'e1' Rahabok

    Iwan 'e1' Rahabok was the first VMware SE for strategic accounts in ASEAN. Joining VMware in 2008 from Sun Microsystems, he has seen how enterprises adopt virtualization and cloud computing, reaping the benefits while overcoming the challenges. It is a journey that is very much ongoing, and this book reflects a subset of that undertaking. Iwan was one of the first in the world to achieve the VCAP-DCD certification and has since helped others achieve the same through his participation in the community. He started the VMware user community in ASEAN, and today the group is one of the largest VMware communities on Facebook. Iwan has been a member of the VMware CTO Ambassadors program since 2014, representing the Asia Pacific region at the global level and representing the product team and CTO office to Asia Pacific customers. He has been a vExpert since 2013 and helps others achieve this global recognition for their contributions to the VMware community. After graduating from Bond University, Australia, Iwan moved to Singapore in 1994, where he has lived ever since.

    Browse publications by this author

Latest Reviews

Iwan makes the topic understandable without attending any formal training. I've been able to start optimizing our vRealize Operations cluster to make things more visible to our management team.
seems to be good solid information on capacity management