Understanding the Need to Move to a Cloud Platform
Welcome! If you have any connection to delivering software at scale in the enterprise, we hope this book will be beneficial to you. In this first chapter, we’ll establish the context in which a product such as VMware Tanzu could emerge. We’ll do this with a walk-through of the evolution of enterprise software over the last 20 years, covering the major milestones, inflection points, and problems that surfaced along the way.
Then, once we have established the necessary context, we can give a 10,000-foot overview of Tanzu, covering its tools, benefits, and features. Finally, we’ll set you up for success by covering the technical prerequisites you should have in place if you want to engage with any of the hands-on content in this book.
In this chapter, we will cover the following:
- The challenges of running a software supply chain
- The emergence of the cloud and containers
- Kubernetes
- Outcome-driven approach
- The need for VMware Tanzu
The challenges of running a software supply chain
VMware Tanzu is a modular software application platform that runs natively on multiple clouds and is geared toward important business outcomes such as developer productivity, operator efficiency, and security by default. If you are looking for a hands-on detailed treatment of VMware Tanzu, you won’t be disappointed.
However, before diving into the platform’s components, it may help to understand some history and background. If you’re reading this, there’s a good chance you participate in the coding, designing, architecting, operating, monitoring, or managing of software. However, you may not have considered that you are participating in a supply chain.
According to Adam Hayes in his Investopedia article, The Supply Chain: From Raw Materials to Order Fulfillment, a supply chain “refers to the network of organizations, people, activities, information and resources involved in delivering a product or service to a consumer.”
When a piece of software makes the journey from a developer’s workstation to an end user, that’s as much of a supply chain as when Red Bull and ramen noodles make the trek from raw ingredients to a production facility to a warehouse to the neighborhood grocery store.
Every supply chain has its own set of challenges, and software supply chains are no exception. Most software written today consists of libraries and frameworks containing millions of lines of open source software developed by people who are essentially anonymous and whose motivations are not entirely clear.
Much of that software changes hands many times as it moves from an open source repository to the developer, to source control, to building and packaging, to testing, to staging, and finally, to running in production. Furthermore, the infrastructure on which that software runs is often open source as well, with a worldwide community of hackers working to identify vulnerabilities in the operating systems, network protocol implementations, and utilities that make up the dial tone that your software runs on. This ecosystem presents an enormous surface area for bad things to happen.
For further reading on real-world examples of what can go wrong with software supply chains, I’d recommend a quick search of the web for the 2020 SolarWinds incident or the 2021 emergence of Log4Shell (CVE-2021-44228). The authors of this book, in their capacity as Tanzu solution engineers, have seen first-hand the impact software supply chain issues can have across the financial, government, telecom, retail, and entertainment sectors.
The emergence of the cloud and containers
Happily, the tech industry has begun to coalesce around some key technologies that have come a long way in addressing some of these concerns. The first is what is known colloquially as The Cloud. Enterprises with a big technology footprint began to realize that managing data centers and infrastructure was not the main focus of their business and could be done more efficiently by third parties for which those tasks were a core competency.
Rather than staff their own data centers and manage their own hardware, they could rent expertly managed and maintained technology infrastructure, and they could scale their capacity up and back down based on real-time business needs. This was a game-changer on many levels. One positive outcome of this shift was a universal raising of the bar with regard to vulnerable and out-of-date software on the internet. As vulnerabilities emerged, the cloud providers could make the right thing to do the easiest thing to do. Managed databases that handle their own operating system updates and object storage that is publicly inaccessible by default are two examples that come immediately to mind. Another outcome was a dramatic increase in deployment velocity as infrastructure management was taken off developers’ plates.
As The Cloud became ubiquitous in the world of enterprise software and infrastructure capacity became a commodity, issues that were previously obscured by infrastructure concerns could take center stage. Developers could have something running perfectly in their development environment, only to have it fall over in production. This problem became so common that it earned its own subgenre of programmer humor: “it works on my machine.”
Another of these issues was unused capacity. It had become so easy to stand up new server infrastructure that app teams were standing up large (and expensive) fleets only to have them run nearly idle most of the time.
Containers
That brings us to the subject of containers. Many application teams would argue that they needed their own fleet of servers because they had a unique set of dependencies that needed to be installed on those servers, and those dependencies conflicted with the libraries and utilities required by other apps that might share those servers. It was the happy confluence of two technical streams that solved this problem, allowing applications with vastly different sets of dependencies to run side by side on the same server without even being aware of one another.
Container runtimes
The first stream was the concept of control groups (cgroups) and kernel namespaces. These abstractions, built into the Linux kernel, gave a process guarantees about how much memory and processor capacity it would have available to it, as well as the illusion of its own process space, its own networking stack, and its own root filesystem, among other things.
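To make these primitives concrete, here is a minimal sketch using standard Linux tools. It assumes a cgroup-v2 system, root privileges, and a pre-populated root filesystem at /srv/alpine-rootfs (a hypothetical path):

```bash
# Create a cgroup capping memory at 512 MiB and CPU at half a core
# (assumes cgroup v2 mounted at /sys/fs/cgroup)
sudo mkdir /sys/fs/cgroup/demo
echo 536870912 | sudo tee /sys/fs/cgroup/demo/memory.max
echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max  # 50ms of CPU per 100ms period

# Move the current shell into the cgroup, then launch a shell with its own
# PID, network, and mount namespaces and its own root filesystem
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs
sudo unshare --pid --net --mount --fork chroot /srv/alpine-rootfs /bin/sh
```

The shell that starts sees itself as PID 1, has an isolated network stack, and sees the Alpine filesystem as its root: the same primitives every container runtime builds on.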
Container packaging and distribution
The second was an API by which you could package up an entire Linux root filesystem, complete with its own unique dependencies, store it efficiently, unpack it on an arbitrary server, and run it in isolation from other processes running with their own root filesystems.
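This packaging-and-distribution workflow was popularized by Docker and later standardized as the OCI image and distribution specifications. As a sketch, it looks like the following; the registry address, image name, and app.py file are hypothetical placeholders:

```bash
# Describe an image: a base root filesystem plus app-specific dependencies
cat > Dockerfile <<'EOF'
FROM alpine:3.18
RUN apk add --no-cache python3
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
EOF

docker build -t registry.example.com/team/app:1.0 .  # package the filesystem as layers
docker push registry.example.com/team/app:1.0        # store it efficiently in a registry
docker run --rm registry.example.com/team/app:1.0    # unpack and run it on any server
```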
When combined, developers found that they could stand up a fleet of servers and run many heterogeneous applications safely on that single fleet, thereby using their cloud infrastructure much more efficiently.
Then, just as the move to the cloud exposed a new set of problems that would lead to the evolution of a container ecosystem, those containers created a new set of problems, which we’ll cover in the next section about Kubernetes.
Kubernetes
When containers caught on, they took off in a big way, but they were not the be-all and end-all solution developers had hoped for. A container runtime on a server often required big trade-offs between flexibility and security. Because the container runtime needed to work closely with the Linux kernel, users often required elevated permissions just to run their containers. Furthermore, there were multiple ways to run containers on a server, some of which were tightly coupled to specific cloud providers. Finally, while container runtimes let developers start up their applications, they varied widely in their support for concerns such as persistent storage and networking, which often required manual configuration and customization.
These were the problems that Joe Beda, Craig McLuckie, and Brendan Burns at Google were trying to solve when they built Kubernetes. Rather than just a means of running containerized applications on a server, Kubernetes evolved into what Google Distinguished Developer Advocate Kelsey Hightower called “a platform for building platforms.” Kubernetes offered many benefits over running containers directly on a server:
- It provided a single, flexible, declarative API for describing the desired state of a running application – for example, 9 instances, each using 1 gigabyte of RAM and 500 millicores of CPU, spread evenly across 3 availability zones (a sketch of such a manifest follows this list)
- It handled running the instances across an elastic fleet of servers complete with all the necessary networking and resource management
- It provided a declarative way to expose cloud-provider-specific implementations of networking and persistent storage to container workloads
- It provided a framework for custom APIs such that any arbitrary object could be managed by Kubernetes
- It shipped with developer-oriented abstractions such as Deployments, StatefulSets, ConfigMaps, and Secrets, which handled many common use cases
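To make the first point concrete, here is a rough sketch of what such a declaration looks like as a Kubernetes Deployment manifest; the image name is a hypothetical placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 9                    # desired number of instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/team/web:1.0  # placeholder image
        resources:
          requests:
            memory: 1Gi          # 1 gigabyte of RAM per instance
            cpu: 500m            # 500 millicores of CPU per instance
      topologySpreadConstraints:
      - maxSkew: 1               # spread instances evenly...
        topologyKey: topology.kubernetes.io/zone  # ...across availability zones
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: web
```

Rather than a script of imperative steps, this is a statement of desired state; Kubernetes continually reconciles the cluster toward it.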
Many of us thought that perhaps Kubernetes was the technological advance that would finally solve all of our problems, but just as with each previous technology iteration, the solution to a particular set of problems simply exposes a new generation of problems.
As companies with large teams of developers began to onboard onto Kubernetes, these problems became increasingly pronounced. Here are some examples:
- Technology sprawl took hold, with each team solving the same problem differently
- Teams had their own ops tooling and processes, making it difficult to scale operations across applications
- Enforcing best practices involved synchronous human-bound processes that slowed developer velocity
- Each cloud provider’s flavor of Kubernetes was slightly different, making multi-cloud and hybrid-cloud deployments difficult
- Many of the core components of a Kubernetes deployment – container images, for example – simply carried existing problems forward, allowing developers to deploy vulnerable software much more quickly and widely than before, actually making the problem worse
- Entire teams had to be spun up just to manage developer tooling and try to enforce some homogeneity across a wide portfolio of applications
- Running multiple different applications on a single Kubernetes cluster required significant operator effort and investment
Alas, Kubernetes was not the panacea we had hoped it would be; rather, it was just another iteration of technology that moves the industry forward by solving one set of problems but inevitably surfacing a new set of problems. This is where the Tanzu team at VMware comes into the picture.
Outcome-driven approach
The Tanzu team at VMware came into existence just as Kubernetes was hitting its stride in the enterprise. VMware was poised for leadership in the space with its acquisition of Heptio, which brought deep Kubernetes knowledge and two of Kubernetes’ original creators. VMware also acquired a well-honed philosophy of software delivery through its acquisition of Pivotal. The Tanzu team continues to deliver a thoughtful and nuanced Kubernetes-based application platform focused on the business outcomes that matter most to customers.
There is no doubt that many mature Tanzu customers were facing some of the problems with Kubernetes mentioned in the last section, but they were also focused on some key outcomes, such as the following:
- Maximizing developer velocity, productivity, and impact
- Maximizing operator efficiency
- Operating seamlessly across the data center and multiple cloud providers
- Making software secure by default
These were the outcomes Tanzu set out to achieve for customers, and in the process, they would take on many of the issues people were running into with Kubernetes.
The Tanzu portfolio takes an outcome-driven approach to delivering an opinionated, Kubernetes-based, cloud-native application platform optimized for operator efficiency, developer productivity, seamless multi-cloud operation, and application security. That is the platform we’ll cover in this book.
The need for VMware Tanzu
Companies with a large development footprint that were looking at or actively using Kubernetes faced two sources of resistance. First, they faced a myriad of problems running Kubernetes at scale across multiple teams. These are the problems listed at the end of the Kubernetes section of this chapter. Second, these teams were under pressure to use technology to deliver meaningful outcomes. This meant developers needed to be operating at their full potential with minimum friction, and operators needed to be able to scale across multiple teams with a unified set of tools. This is the outcome-driven approach described in the previous section. VMware Tanzu is a portfolio of tools geared specifically toward addressing both sets of needs for software teams.
This diagram highlights where VMware Tanzu fits both as the next iteration of software platforms on Kubernetes, as well as the integrated toolkit enabling world-class outcomes from software development, operations, and security engineers:

Figure 1.1 – VMware Tanzu in context
Now that we’ve established the context that prompted the creation of VMware Tanzu, we’re ready to describe the product itself.
Features, tools, benefits, and applications of VMware Tanzu
VMware Tanzu is a combination of open source, proprietary, and Software as a Service (SaaS) offerings that work together to enable the outcomes that are important to software teams. These tools are tightly integrated and give developers a single toolset geared toward delivering important outcomes and running Kubernetes at scale. The tools and applications fall into three broad groups.
Build and develop
Tools that enable developers to efficiently and reliably develop and build software go in this group. This includes Application Accelerator for VMware Tanzu, VMware Tanzu Build Service, VMware Application Catalog, and API portal for VMware Tanzu.
Run
This group contains the tools and applications to efficiently deliver and run applications on an ongoing basis. It includes Harbor, Tanzu Kubernetes Grid, and Tanzu Application Platform.
Manage
The final group contains tools for the management of applications and the platform itself. It includes Tanzu Mission Control, VMware Aria Operations for Applications, and Tanzu Service Mesh.
Prerequisites
Now that we’ve laid out the why and the what of VMware Tanzu, we’re ready to get our hands dirty solving some real-world problems. This book is geared toward software professionals, and there are some tools and concepts it assumes you know about. Don’t worry if you’re not strong across all of these areas, as each chapter will walk you through step by step.
The Linux console and tools
You can follow along with most chapters in this book using a Windows machine, but experience dictates that things work much more smoothly if you use a Mac or a Linux workstation. There are numerous options available for Windows users, including virtual machines, dual-booting into Linux, or working from a cloud-based virtual machine. This book assumes you are comfortable with navigating a filesystem, finding and viewing files, and editing them with an editor such as Vim or nano.
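As a quick self-check, you should be at home with commands along these lines (the directory and file names are arbitrary examples):

```bash
mkdir -p ~/scratch && cd ~/scratch   # navigate the filesystem
echo "hello" > notes.txt             # create a file
cat notes.txt                        # view its contents
grep -r "hello" .                    # find text within files
vim notes.txt                        # edit it (or: nano notes.txt)
```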
Docker
This book is heavily geared toward containers. The primary way to interact with the APIs that build and run containers is the Docker CLI. You will need both a Docker daemon and the Docker CLI to work through some of the chapters in this book, and you should be comfortable listing container images as well as running containers.
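For a sense of that baseline, you should be comfortable with commands like these; nginx:alpine is simply a convenient public example image:

```bash
docker pull nginx:alpine                 # fetch an image from a registry
docker images                            # list local images
docker run -d -p 8080:80 nginx:alpine    # run a container, publishing a port
docker ps                                # list running containers
```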
Kubernetes
Kubernetes is at the core of the Tanzu portfolio. This book assumes you can stand up a Kubernetes cluster locally or on a public cloud provider. It also assumes that you are comfortable using the kubectl CLI to interact with a Kubernetes cluster. Finally, you should be able to read Kubernetes YAML manifests.
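In practice, that baseline looks something like the following; deployment.yaml and the Deployment named web are hypothetical stand-ins:

```bash
kubectl get nodes                 # confirm the cluster is up and reachable
kubectl apply -f deployment.yaml  # apply a YAML manifest
kubectl get pods                  # inspect the resulting workloads
kubectl logs deployment/web       # read logs from a Deployment's pods
```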
Workstation requirements and public cloud resources
Some of the tools discussed in this book can run locally on your workstation, while others are better suited to the public cloud. Still others require only a web browser and a pre-existing Kubernetes cluster.
If you want to run Tanzu Kubernetes Grid locally, a minimum of 32 gigabytes of RAM is strongly recommended. You may find that other tools, such as Tanzu Application Platform or Harbor, run best on a Kubernetes cluster provided by a public cloud provider. I highly recommend using the providers’ built-in budgeting tools to make sure that you don’t rack up an unexpected bill.
Now that you know what you’ll need, I encourage you to find a topic of interest and dive in. All Packt books are organized with self-contained chapters so you can work through the book front-to-back or jump straight to the topics that interest you.
Summary
In this chapter, we gave a brief history of enterprise software and how its evolution set the stage for VMware Tanzu. Then, we gave a high-level overview of the Tanzu product itself. And finally, we covered some technical requirements to set you up for success in the hands-on exercises.
With all that in place, you’re free to jump to any chapter that piques your interest; they’re all self-contained and can be read on their own. Or, if you’re going in order, we’ll start at the beginning of the software development process. Application Accelerator for VMware Tanzu is a tool for bootstrapping a new application with all of the enterprise standards and approved libraries built in, giving you a more uniform portfolio of apps and preventing software teams from repeating past mistakes. Regardless of your approach to the book, we hope you enjoy the material, and we wish you great success putting it into practice.