In this chapter, we will dive into why a seemingly simple technology, a virtualized x86 machine, has huge ramifications for the IT industry. In fact, it is turning a lot of things upside down and breaking down silos that have existed for decades in large IT organizations. We will cover the following topics:
Why virtualization is not what we think it is
Virtualization versus partitioning
A comparison between a physical server and Virtual Machine
Virtual Machines, or simply, VMs—who doesn't know what they are? Even a business user who has never seen one knows what it is. It is just a physical server, virtualized—nothing more.
Wise men say that small leaks sink the ship. I think that's a good way to explain why IT departments that manage physical servers well struggle when the same servers are virtualized.
We can also use the Pareto principle (80/20 rule): 80 percent of a VM is identical to a physical server, but it's the 20 percent that differs that hits you. We will highlight parts of that 20 percent, focusing on the areas that impact data center management.
The change caused by virtualization is much larger than the changes brought about by previous technologies. In the past two or more decades, we transitioned from mainframes to the client/server-based model and then to the web-based model. These are commonly agreed upon as the main evolutions in IT architecture. However, all of these were just technological changes. They changed the architecture, yes, but they did not change operations in a fundamental way. Neither the client/server shift nor the web shift was described as a "journey"; there was no journey to the client/server-based model. With virtualization, however, we talk about the virtualization journey. It is a journey because the changes are massive and involve a lot of people.
Gartner correctly predicted the impact of virtualization in 2007 (http://www.gartner.com/newsroom/id/505040). More than eight years later, we are still in the midst of the journey. To show how pervasive the change is, here is a summary of the Gartner article:

Notice how Gartner talks about a change in culture. So, virtualization has a cultural impact too. In fact, if your virtualization journey is not fast enough, look at your organization's structure and culture. Have you broken the silos? Do you empower your people to take risks and do things that have never been done before? Are you willing to flatten the organizational chart?
So why exactly is virtualization causing such a fundamental shift? To understand this, we need to go back to the basics, which is exactly what virtualization is. It's pretty common that chief information officers (CIOs) have a misconception about what it is.
Take a look at the following comments. Have you seen them in your organization?
VM is just a physical machine that has been virtualized. Even VMware says the Guest OS is not aware it's virtualized and it does not run differently.
It is still about monitoring CPU, RAM, disk, network, and other resources—no difference.
It is a technological change. Our management process does not have to change.
All of these VMs must still feed into our main enterprise IT management system. This is how we have run our business for decades, and it works.
If only life were that simple; we would all be 100-percent virtualized and have no headaches! Virtualization has been around for years, and yet most organizations have not mastered it. The proof of mastery is completing the journey and reaching the highest level of the virtualization maturity model.
There are plenty of misconceptions about the topic of virtualization, especially among IT folks who are not familiar with virtualization. CIOs who have not felt the strategic impact of virtualization (be it a good or bad experience) tend to carry these misconceptions. Although virtualization looks similar to a physical system from the outside, it is completely re-architected under the hood.
So let's take a look at the first misconception: what exactly is virtualization?
Because it is an industry trend, virtualization is often generalized to include other technologies that are not actually virtualized. This is a typical strategy of IT vendors that have similar technology. A popular technology often branded under virtualization is hardware partitioning; once it is parked under the umbrella of virtualization, both are expected to be managed in the same way. Since the two are actually different, customers who try to manage both with a single piece of management software struggle to do it well.
Partitioning and virtualization are two different architectures in computer engineering, resulting in major differences between their functionalities. They are shown in the following screenshot:

Virtualization versus partitioning
With partitioning, there is no hypervisor that virtualizes the underlying hardware. There is no software layer separating the VM and the physical motherboard. There is, in fact, no VM. This is why some technical manuals about partitioning technology do not even use the term "VM". They use the terms "domain", "partition", or "container" instead.
There are two variants of partitioning technology, hardware-level and OS-level partitioning, which are covered in the following bullet points:
In hardware-level partitioning, each partition runs directly on the hardware. It is not virtualized. This is why it is more scalable and incurs less of a performance hit. Because it is not virtualized, it has to have an awareness of the underlying hardware; as a result, it is not fully portable. You cannot move the partition from one hardware model to another. The hardware has to be built for the purpose of supporting that specific version of the partition. The partitioned OS still needs all the hardware drivers and will not work on other hardware if the compatibility matrix does not match. As a result, even the version of the OS matters, just as it does on a physical server.
In OS-level partitioning, there is a parent OS that runs directly on the server motherboard. This OS then creates an OS partition, where another "OS" can run. I use double quotes as it is not exactly the full OS that runs inside that partition. The OS has to be modified and qualified to be able to run as a Zone or Container. Because of this, application compatibility is affected. This is different in a VM, where there is no application compatibility issue because the hypervisor is transparent to the Guest OS.
We covered the difference from an engineering point of view. However, does it translate into different data center architectures and operations? We will focus on hardware partitioning, as there are fundamental differences between hardware partitioning and software partitioning, and the use cases for the two are also different. Software partitioning is typically used for cloud-native applications.
With that, let's do a comparison between hardware partitioning and virtualization. Let's take availability as a start.
With virtualization, all VMs are protected by vSphere High Availability (vSphere HA). That is 100 percent protection, achieved without any VM awareness. Nothing needs to be done at the VM layer: no shared or quorum disk and no heartbeat network is required to protect a VM with basic HA.
With hardware partitioning, protection has to be configured manually, one by one for each Logical Partition (LPAR) or Logical Domain (LDOM). The underlying platform does not provide it.
With virtualization, you can even go beyond five nines, that is, 99.999 percent, and move to 100 percent with vSphere Fault Tolerance. This is not possible in the partitioning approach as there is no hypervisor that replays CPU instructions. Also, because it is virtualized and transparent to the VM, you can turn on and off the Fault Tolerance capability on demand. Fault Tolerance is fully defined in the software.
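Because both HA and FT are defined in software, a management tool can query their state straight from vCenter rather than inspecting each OS instance. The following is a minimal pyVmomi (Python) sketch of such a check; the vCenter address, credentials, and the unverified SSL context are placeholder, lab-only assumptions:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab only: skip certificate validation. Use proper certificates in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Is vSphere HA enabled on each cluster?
clusters = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in clusters.view:
    print(cluster.name, "HA enabled:", cluster.configuration.dasConfig.enabled)

# Per-VM Fault Tolerance state (notConfigured, disabled, enabled, running, and so on)
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vms.view:
    print(vm.name, vm.runtime.faultToleranceState)

Disconnect(si)
```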
Another area of difference between partitioning and virtualization is Disaster Recovery (DR). With partitioning technology, the DR site requires another instance to protect the production instance. It is a different instance, with its own OS image, hostname, and IP address. Yes, we can perform a Storage Area Network (SAN) boot, but that means another Logical Unit Number (LUN) is required to manage, zone, replicate, and so on. DR is not scalable to thousands of servers. To make it scalable, it has to be simpler.
Compared to partitioning, virtualization takes a different approach. The entire VM fits inside a folder; it becomes like a document, and we migrate the entire folder as if it were one object. This is what vSphere Replication or Site Recovery Manager does: it performs replication per VM, and there is no need to configure SAN boot. The entire DR exercise, which can cover thousands of virtual servers, is completely automated, with audit logs generated automatically. Many large enterprises have automated their DR with virtualization. There is probably no company that has automated DR for their entire LPAR, LDOM, or container estate.
In the previous paragraph, we're not implying that LUN-based or hardware-based replication is an inferior solution. We're merely driving home the point that virtualization enables you to do things differently.
We're also not saying that hardware partitioning is an inferior technology. Every technology has its advantages and disadvantages and addresses different use cases. Before I joined VMware, I was a Sun Microsystems sales engineer for 5 years, so I'm aware of the benefit of UNIX partitioning. This book is merely trying to dispel the misunderstanding that hardware partitioning equals virtualization.
We've covered the differences between hardware partitioning and virtualization.
Let's switch gears to software partitioning. In 2016, the adoption of Linux containers will continue its rapid rise. You can actually use both containers and virtualization, and they complement each other in some use cases. There are two main approaches to deploying containers:
Running them directly on bare metal
Running them inside a Virtual Machine
As both technologies evolve, the gap between them gets wider. As a result, managing a software partition is different from managing a VM, and securing a container is different from securing a VM. Be careful when opting for a management solution that claims to manage both; you will probably end up with the lowest common denominator. This is one reason why VMware is working on vSphere Integrated Containers and the Photon platform. Now that's a separate topic by itself!
A VM is not just a physical server that has been virtualized. Yes, there is a Physical-to-Virtual (P2V) process; however, once it is virtualized, it takes on a new shape. This shape has many new and changed properties, and some old properties are no longer applicable or available.
On the surface, a VM looks like a physical server. So, let's actually look at VM properties. The following screenshot shows a VM's settings in vSphere 5.5. It looks familiar as it has a CPU, memory, hard disk, network adapter, and so on. However, look at it closely. Do you see any properties that you don't see in a physical server?

VM properties in vSphere 5.5
Let's highlight some of the virtual server properties that do not exist in a physical server. I'll focus on the properties that have an impact on management, as management is the topic of this book.
At the top of the dialog box, there are four tabs:
Virtual Hardware
VM Options
SDRS Rules
vApp Options
The Virtual Hardware tab is the only tab that has properties similar to those of a physical server. The other three tabs have no physical server counterparts. For example, SDRS Rules pertains to Storage DRS: it means that the VM's storage can be automatically moved by vCenter, so its location in the data center is not static. This includes the drive where the OS resides (the C:\ drive in Windows systems). This directly impacts your server management tool. It has to be aware of Storage DRS and can no longer assume that a VM is always located in the same datastore or LUN. Compare this with a physical server, whose OS typically resides on a local disk that is part of the physical server. You don't want your physical server's OS drive being moved around the data center, do you?
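To illustrate the point, a management tool can no longer hard-code a VM's location; it has to ask vCenter where the VM currently lives. A minimal pyVmomi sketch follows; the vCenter address and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vms.view:
    # Storage DRS may move these at any time, so always query, never cache.
    datastores = [ds.name for ds in vm.datastore]
    print(vm.name, datastores, vm.config.files.vmPathName)

Disconnect(si)
```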
In the Virtual Hardware tab, notice the New device option at the bottom of the screen. Yes, you can add devices, some of them on the fly, while an OS such as Windows or Linux is running. All of the VM's devices are defined in software. This is a major difference compared to a physical server, where the devices are defined by the physical hardware and cannot be changed. With virtualization, you can have a VM with five sockets on an ESXi host that has two sockets: Windows or Linux sees five CPUs, even though the underlying ESXi host only has two physical CPUs.
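Because the virtual hardware is defined in software, changing it is an API call rather than a trip to the data center. Here is a hedged pyVmomi sketch; the VM name "app01" and the vCenter details are placeholders, and hot-adding a vCPU to a running VM only works if CPU hot add is enabled for that VM (otherwise it must be powered off first):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vms.view if v.name == "app01")  # placeholder VM name

# Add one vCPU purely in software. On a running VM this requires
# vm.config.cpuHotAddEnabled == True; otherwise power the VM off first.
spec = vim.vm.ConfigSpec(numCPUs=vm.config.hardware.numCPU + 1)
WaitForTask(vm.ReconfigVM_Task(spec=spec))

Disconnect(si)
```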
Your server management tool needs to be aware of this and recognize that the new Configuration Management Database (CMDB) is vCenter. vCenter is certainly not a CMDB product. We're only saying that in a situation when there is a conflict between vCenter and a CMDB product, the one you trust is vCenter. In a Software-Defined Data Center (SDDC), the need for a CMDB is further reduced.
The following screenshot shows a bit more detail. Look at the CPU device. Again, what do you see that does not exist in a physical server?

VM CPU and network properties in vSphere 5.5
Let's highlight some of the options.
Look at the Reservation, Limit, and Shares options under CPU. None of them exist in a physical server, as a physical server is standalone by default. It does not share any resource on the motherboard (such as CPU or RAM) with another server. With these three levers, you can perform Quality of Service (QoS) on a virtual data center. So, QoS is actually built into the platform. This has an impact on management, as the platform is able to do some of the management by itself. There is no need to get another console to do what the platform provides you out of the box.
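These three levers are simply VM attributes, so QoS can be read and adjusted programmatically. A minimal pyVmomi sketch under the same placeholder assumptions (vCenter address, credentials, and the VM name "app01"); the reservation and limit values are examples, not recommendations:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vms.view if v.name == "app01")  # placeholder VM name

# Current CPU QoS settings: values are in MHz; a limit of -1 means unlimited.
cpu = vm.config.cpuAllocation
print(cpu.reservation, cpu.limit, cpu.shares.level, cpu.shares.shares)

# Example values only: guarantee 1000 MHz, cap at 2000 MHz, custom share value.
alloc = vim.ResourceAllocationInfo(
    reservation=1000, limit=2000,
    shares=vim.SharesInfo(level="custom", shares=2000))
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(cpuAllocation=alloc)))

Disconnect(si)
```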
Other properties in the previous screenshot, such as Hardware virtualization, Performance counters, HT Sharing, and CPU/MMU Virtualization, also do not exist in a physical server. It is beyond the scope of this book to explain every feature, and there are many blogs and technical papers freely available on the Internet that explain them. Two of my favorites are http://blogs.vmware.com/performance/ and http://www.vmware.com/vmtn/resources/.
The next screenshot shows the VM Options tab. Again, which properties do you see that do not exist in a physical server?

VM Options in vSphere 5.5
I'd like to highlight a few of the properties in the VM Options tab. VMware Tools is a key component: it provides drivers and improves manageability. It is not present in a physical server. A physical server has drivers, but none of them come from VMware. A VM is different: its motherboard (a virtual motherboard, naturally) is defined and supplied by VMware, so the drivers are supplied by VMware too, and VMware Tools is the mechanism for supplying them. VMware Tools comes in different versions, so it is now something you need to be aware of and manage.
We've just covered a few VM properties from the VM settings dialog box. There are literally hundreds of properties in VMs that do not exist in physical systems. Even the same properties are implemented differently. For example, although vSphere supports N_Port ID Virtualization (NPIV), the Guest OS does not see the World Wide Name (WWN). This means that data center management tools have to be aware of the specific implementation of vSphere. And these properties change with every vSphere release. Notice the line right at the bottom of the screenshot. It says Compatibility: ESXi 5.5 and later (VM version 10). This is your VM motherboard. It has a dependency on the ESXi version and yes, this becomes another new thing to manage too.
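Since both the VMware Tools version and the VM hardware (compatibility) version now need to be tracked, a quick inventory report can be pulled from vCenter. A minimal pyVmomi sketch, again with placeholder connection details:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vms.view:
    print(vm.name,
          "HW:", vm.config.version,          # for example, vmx-10
          "Tools:", vm.guest.toolsVersion,
          vm.guest.toolsRunningStatus)

Disconnect(si)
```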
Every vSphere release typically adds new properties too, making a VM more manageable than a physical machine and differentiating a VM further from a physical server.
Hopefully, I've driven home the point that a VM is different from a physical server. I'll now list the differences from a management point of view. The following table shows the differences that impact how you manage your infrastructure. Let's begin with the core properties:
| Properties | Physical server | Virtual Machine |
|---|---|---|
| BIOS | Every brand and model has a unique BIOS. Even the same model (for example, HP DL 380 Generation 9) can have multiple BIOS versions. The BIOS needs updates and management, often with physical access to a data center. This requires downtime. | This is standardized in a VM. There is only one type, which is the VMware motherboard. This is independent of the ESXi motherboard. The VM BIOS needs far fewer updates and less management. The inventory management system no longer needs the BIOS management module. |
| Virtual HW | Not applicable. | This is a new layer below the BIOS. It needs an update after every vSphere release. A data center management system needs to be aware of this, as it requires a deep knowledge of vSphere. For example, to upgrade the virtual hardware, the VM has to be in the powered-off state. |
| Drivers | Many drivers are loaded and bundled with the OS. Often, you need to get the latest drivers from their respective hardware vendors. All these drivers need to be managed. This can be a complex operation, as they vary from model to model and brand to brand. The management tool needs rich functionality, such as being able to check compatibility, roll out drivers, roll them back if there is an issue, and so on. | Relatively fewer drivers are loaded with the Guest OS; some drivers are replaced by the ones provided by VMware Tools. Even with NPIV, the VM does not need the FC HBA driver. VMware Tools needs to be managed, with vCenter being the most common management tool. |
How do all these differences impact the hardware upgrade process? Let's take a look:
In the preceding table, we compared the core properties of a physical server with a VM. Every server needs storage, so let's compare their storage properties:
There's a big difference in storage. How about network and security? Let's see:
Finally, let's take a look at the impact on management. As can be seen here, even the way we manage a server changes once it is converted into a VM:
I hope you enjoyed the comparison and found it useful. We covered, to a great extent, the impact caused by virtualization and the changes it introduces. We started by clarifying that virtualization is a different technology compared to partitioning. We then explained that once a physical server is converted into a Virtual Machine, it takes on a different form and has radically different properties. The changes range from the core property of the server itself to how we manage it.
The changes create a ripple effect in the bigger picture. The entire data center changes once we virtualize it, and this is the topic of our next chapter.