In this first chapter, I want to talk about cloud computing. What exactly is the cloud?
Starting with a brief history of virtualization, I want to explain how the transformation of physical hardware into components built in software made it possible to go beyond the borders of the classic data center in many ways.
After that, I'll explain the different terminology used in cloud technology.
Here is a list of topics covered in this chapter:
- Virtualization of compute, network, and storage
- Software Defined Networking, storage, and the data center
- Service-oriented architecture (SOA)
- Cloud services
- Cloud types
If you are starting in a new area of expertise in Information Technology (IT), most of the time you'll start by studying the concepts and the architecture, and sooner or later you'll start playing around to get familiar with the topic.
However, in cloud computing, it really helps if you understand not only the concept and the architecture, but also where it comes from. I don't want to give you a history lesson, but I want to show you that inventions and ideas from the past are still in use in modern cloud environments. This will give you a better understanding of what the cloud is and how to use it within your organization.
In the early 1970s, IBM was working on an early form of virtualization: each user had their own separate operating system, while still sharing the overall resources of the underlying system.
The main reasons to develop this system were the ability to assign resources based on application needs and the extra security and reliability it provided: if one virtual machine crashes, the other virtual machines keep running without any problem. Nowadays, this type of virtualization has evolved into container virtualization!
Fast forward to 2001, when another type of virtualization, called hardware virtualization, was introduced by companies such as VMware. In their product, VMware Workstation, they added a layer on top of an existing operating system that provided a set of standard hardware, built in software instead of physical elements, to run a virtual machine. This layer became known as a hypervisor. Later on, they built their own operating system specialized in running virtual machines: VMware ESX.
In 2008, Microsoft entered the hardware-virtualization market with the Hyper-V product, as an optional component of Windows 2008.
Hardware virtualization is all about separating software from hardware, breaking the traditional boundaries between the two. The hypervisor is responsible for mapping virtual resources onto physical resources.
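To make the idea of mapping virtual resources onto physical resources concrete, here is a minimal sketch, assuming a toy model in which a host simply bookkeeps CPUs and memory. All class and method names are invented for illustration; real hypervisors such as ESX or Hyper-V schedule these resources dynamically and far more cleverly.

```python
# Toy model only: a "host" that maps virtual CPUs and memory requested by
# virtual machines onto its pool of physical resources.

class Host:
    def __init__(self, cpus, mem_gb):
        self.free_cpus = cpus
        self.free_mem_gb = mem_gb
        self.vms = []

    def place(self, vm_name, vcpus, mem_gb):
        """Map a VM's virtual resources onto this host's physical resources."""
        if vcpus > self.free_cpus or mem_gb > self.free_mem_gb:
            return False  # not enough physical resources left
        self.free_cpus -= vcpus
        self.free_mem_gb -= mem_gb
        self.vms.append(vm_name)
        return True

host = Host(cpus=16, mem_gb=64)
print(host.place("vm1", vcpus=4, mem_gb=16))   # True
print(host.place("vm2", vcpus=14, mem_gb=16))  # False: only 12 CPUs free
print(host.vms)                                # ['vm1']
```

The point of the sketch is the separation: the virtual machines only ever see their virtual CPUs and memory, while the placement decision stays in the hypervisor layer.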
This type of virtualization was the enabler for a revolution in data centers:
- Because of the standard set of hardware, every virtual machine can run everywhere
- Because virtual machines are isolated from each other, there is no problem if a virtual machine crashes
- Because a virtual machine is just a set of files, you have new possibilities for backup, moving virtual machines, and so on
- New options for high availability (HA), such as the migration of running virtual machines
- New deployment options, for example, working with templates
- New options in central management, orchestration, and automation, because it's all software
- Isolation, reservation, and limiting of resources where needed, sharing resources where possible
Of course, if you can transform hardware into software for compute, it's only a matter of time before someone realizes you can do the same for network and storage.
For networking, it all started with the concept of virtual switches. Like every other form of hardware virtualization, a virtual switch is nothing more than a network switch built in software instead of hardware.
In 2004, development started on Software Defined Networking (SDN) to decouple the control plane from the data plane. In 2008, the first real switch implementation to achieve this goal using the OpenFlow protocol was built at Stanford University.
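The decoupling of control plane and data plane can be sketched in a few lines, under the assumption of a drastically simplified model: the controller centrally decides where traffic should go and installs flow rules, while the switch only matches packets against its flow table. OpenFlow works on this principle, though the real protocol is far richer; all names below are invented.

```python
# Toy SDN model: control plane (Controller) computes and installs rules,
# data plane (Switch) only does flow-table lookups.

class Switch:
    def __init__(self):
        self.flow_table = {}            # match (dst address) -> out port

    def install_flow(self, dst, port):  # invoked by the controller
        self.flow_table[dst] = port

    def forward(self, packet):          # pure data-plane lookup
        return self.flow_table.get(packet["dst"], "send-to-controller")

class Controller:
    def __init__(self, topology):
        self.topology = topology        # dst -> port, known centrally

    def packet_in(self, switch, packet):
        """Handle a packet the switch couldn't match: compute and push a rule."""
        port = self.topology[packet["dst"]]
        switch.install_flow(packet["dst"], port)
        return port

sw = Switch()
ctrl = Controller({"10.0.0.2": 2})
pkt = {"dst": "10.0.0.2"}
print(sw.forward(pkt))    # send-to-controller (no rule installed yet)
ctrl.packet_in(sw, pkt)   # controller pushes a flow rule
print(sw.forward(pkt))    # 2 -- now handled entirely in the data plane
```

Because all forwarding logic lives in the controller, the advantages listed below (central management, traffic isolation, traffic shaping) become software problems rather than per-device configuration problems.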
Using SDN, you have similar advantages as in compute virtualization:
- Central management, automation, and orchestration
- More granular security by traffic isolation and providing firewall and security policies
- Shaping and controlling data traffic
- New options available for HA and scalability
In 2009, Software-Defined Storage (SDS) development started at several companies, such as Scality and Cleversafe. Again, it's about abstraction: decoupling services (logical volumes and so on) from the physical storage elements.
If you have a look at the concepts of SDS, some vendors added a new feature to the already existing advantages of virtualization. You can attach a policy to a virtual machine, defining the options you want: for instance, replication of data or a limit on the number of IOPS. This is transparent to the administrator; there is communication between the hypervisor and the storage layer to provide the functionality. Later on, this concept was also adopted by some SDN vendors.
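A policy-driven provisioning step like the one just described could be sketched as follows. This is a hypothetical model, not any vendor's API: the policy names (`replicas`, `iops_limit`) and the placement logic are invented to show the idea of a policy being applied when a VM's disk is created.

```python
# Hypothetical sketch: a storage policy attached to a VM is applied by the
# storage layer at provisioning time.

from dataclasses import dataclass

@dataclass
class StoragePolicy:
    replicas: int        # how many copies of the data to keep
    iops_limit: int      # cap on I/O operations per second

def provision_disk(vm_name, size_gb, policy, nodes):
    """Place `policy.replicas` copies of the disk on distinct storage nodes."""
    if policy.replicas > len(nodes):
        raise ValueError("not enough storage nodes for the requested replicas")
    return {
        "vm": vm_name,
        "size_gb": size_gb,
        "iops_limit": policy.iops_limit,
        "placed_on": nodes[:policy.replicas],
    }

gold = StoragePolicy(replicas=3, iops_limit=500)
disk = provision_disk("vm1", 100, gold, ["node-a", "node-b", "node-c", "node-d"])
print(disk["placed_on"])   # ['node-a', 'node-b', 'node-c']
```

The administrator only declares the policy; where the replicas end up and how the IOPS cap is enforced is left to the storage layer, which is exactly the abstraction SDS provides.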
You can actually see that virtualization slowly changed to a more service-oriented way of thinking.
If you can virtualize every component of the physical data center, you have a Software-Defined Datacenter (SDDC). The virtualization of networking, storage, and compute made it possible to go beyond the limits of a single piece of hardware. By abstracting software from hardware, SDDC makes it possible to go beyond the borders of the physical data center.
In the SDDC environment, everything is virtualized and often fully automated by the software. It totally changes the traditional concept of data centers. It doesn't really matter where the service is hosted or how long it's available (24-7 or on demand), and there are possibilities to monitor the service, maybe even add options such as automatic reporting and billing, which all make the end user happy.
SDDC is not the same as the cloud, not even a private cloud running in your data center, but you can argue that, for instance, Microsoft Azure is a full-scale implementation of SDDC. Azure is by definition software-defined.
In the same period that hardware virtualization became mainstream in the data center and the development of SDN and SDS started, something new was coming in the world of software development and implementation for web-based applications: service-oriented architecture (SOA). Its key properties are:
- Minimal services that can talk to each other, using a protocol such as SOAP. Together they deliver a complete web-based application.
- The location of a service doesn't matter; a service must be aware of the presence of the other services, and that's about it.
- A service is a sort of black box; the end user doesn't need to know what's inside the box.
- Every service can be replaced.
For the end user, it doesn't matter where the application lives or that it consists of several smaller services. In a way, it's similar to virtualization: what seems to be one physical resource, for instance, a storage LUN, can actually include several physical resources (storage devices) in multiple locations.
The power of virtualization combined with SOA gives you even more options in scalability, reliability, and availability.
There are many similarities between the SOA model and SDDC, but there is a difference: SOA is about interaction between different services; SDDC is more about the delivery of services to the end user.
The modern implementation of SOA is microservices, provided by cloud environments such as Azure, running standalone or running in virtualization containers such as Docker.
Here's that magic word: cloud. It's not easy to find out exactly what it means. One way to describe it is that you want to provide a service that:
- Is always available, or available on-demand
- Can be managed by self-service
- Is able to scale up/down, and so is elastic
- Offers rapid deployment
- Can be fully automated and orchestrated
On top of that, you want monitoring and new types of billing options: most of the time, you only pay for what you use.
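Two of the properties above, elasticity and pay-per-use billing, can be sketched together. The thresholds, the scaling rule, and the hourly price below are invented numbers purely for illustration; real cloud autoscaling and metering are far more sophisticated.

```python
# Illustrative sketch of elasticity plus pay-per-use billing.

def scale(instances, load_per_instance_pct):
    """Toy autoscaling rule: add capacity above 80% load, remove below 20%."""
    if load_per_instance_pct > 80:
        return instances + 1
    if load_per_instance_pct < 20 and instances > 1:
        return instances - 1
    return instances

def bill(instance_hours, price_per_hour=0.05):
    """Pay only for what you use: charge per consumed instance-hour."""
    return round(instance_hours * price_per_hour, 2)

n = 2
n = scale(n, 90)       # busy: scale out to 3
n = scale(n, 10)       # quiet again: scale back in to 2
print(n)               # 2
print(bill(n * 24))    # 2.4 -- two instances for a day at $0.05/hour
```

The appeal of this model is that capacity follows demand automatically, and the bill follows capacity, instead of paying for idle hardware around the clock.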
Cloud technology is about the delivery of a service via the internet, in order to give an organization access to resources such as software, storage, network, and other types of IT infrastructure and components.
The cloud can offer you many service types, here are the most important ones:
- Infrastructure as a service (IaaS): A platform to host your virtual machines
- Platform as a service (PaaS): A platform to develop, build, and run your applications, without the complexity of building and running your own infrastructure
- Software as a service (SaaS): Using an application running in the cloud, such as Office 365
There are several cloud implementations possible:
- Public cloud: Running all the services at a service provider. Microsoft's Azure is an implementation of this type.
- Private cloud: Running your own cloud in your data center. Microsoft recently developed a special version of Azure for this: Azure Stack.
- Hybrid cloud: A combination of a public and private cloud. One example is combining the power of Azure and Azure Stack, but you can also think about new disaster recovery options or moving services from your data center to the cloud and back if more resources are temporarily needed.
The choice for one of these implementations depends on several factors, to name a few:
- Costs: Hosting your services in the cloud can be more expensive than hosting them locally, depending on resource usage. On the other hand, it can be cheaper; for example, you don't need to implement complex and costly availability options yourself.
- Legal restrictions: Sometimes you are not allowed to host data in a public cloud.
- Internet connectivity: There are still countries where the necessary bandwidth or even the stability of the connection is a problem.
- Complexity: Hybrid environments in particular can be difficult to manage; support for applications and user management can be challenging.
Now that you know more about virtualization and cloud computing, it's time to introduce you to the Microsoft implementation of the cloud: Azure.
Starting again with some history, in this chapter, you'll find out about the technology behind Azure and that Azure can be a very good solution for your organization.
In 2002, Microsoft started a project called Whitehorse, to streamline the development, deployment, and implementation of an application within an SOA model. In this project, there was a focus on delivering small prebuilt web applications and the ability to transform them into a service. This project died silently around 2006.
Many of the lessons learned in this project and the appearance of Amazon Web Services (AWS) were the drivers for Microsoft to start a project called RedDog in 2006.
After a while, Microsoft added three other development teams to this project:
- .NET Services: Services for developers using the SOA model. .NET Services offered Service Bus as a secure, standards-based messaging infrastructure.
- Live Services and Live Mesh: A SaaS project to enable PCs and other devices to communicate with each other through the internet.
- SQL Services: A SaaS project to deliver Microsoft SQL through the internet.
In 2008, Microsoft announced the start of Azure, and with its public release in 2010, Azure was ready to deliver IaaS and PaaS solutions. The name RedDog survived for a while: the classic portal was also known as RedDog Front-End (RDFE).
Nowadays, Azure is the Microsoft solution for the public cloud, delivering all kinds of services, such as virtual machines, Web and Mobile Apps, Active Directory, and databases.
It's still growing in features, customers, and availability. Azure is available in more than 36 regions, which is very important for scalability, performance, and redundancy.
Having this many regions also helps with compliance with legal rules and security/privacy policies. Microsoft uses the same Online Services Terms (http://www.microsoftvolumelicensing.com/) for all their online services, such as Office 365, which include provisions such as the EU Standard Contractual Clauses. Information and documents regarding security, privacy, and compliance are available via Microsoft's Trust Center: https://www.microsoft.com/en-us/TrustCenter.
Microsoft Azure is running on a customized, stripped-down, and hardened version of Hyper-V, also known as the Azure Hypervisor.
On top of this hypervisor, there is a cloud layer. This layer, or fabric, is a cluster of many hosts in Microsoft's data centers and is responsible for the deployment, management, and health of the infrastructure.
This layer is managed by the fabric controller, which is responsible for resource management, scalability, reliability, and availability.
This layer also provides the management interface via an API, built on REST, HTTP, and XML. Another way to interact with the fabric controller is provided by the Azure Portal and software such as the Azure CLI via the Azure Resource Manager.
These user-interfacing services will communicate through resource providers to the fabric:
- Compute Resource Provider
- Network Resource Provider
- Storage Resource Provider
These resource providers will create the needed services, such as a virtual machine.
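The dispatching described above, from a management request down to the responsible resource provider, can be sketched conceptually. This mimics the layering of the Azure Resource Manager model, not its actual implementation; all function names and messages are invented.

```python
# Conceptual sketch: route a deployment request to the matching resource
# provider, which creates the requested service.

def compute_provider(spec):
    return f"virtual machine '{spec['name']}' created"

def network_provider(spec):
    return f"virtual network '{spec['name']}' created"

def storage_provider(spec):
    return f"storage account '{spec['name']}' created"

RESOURCE_PROVIDERS = {
    "compute": compute_provider,
    "network": network_provider,
    "storage": storage_provider,
}

def handle_request(resource_type, spec):
    """Route a deployment request to the responsible resource provider."""
    provider = RESOURCE_PROVIDERS.get(resource_type)
    if provider is None:
        raise ValueError(f"no resource provider for '{resource_type}'")
    return provider(spec)

print(handle_request("compute", {"name": "vm1"}))
# virtual machine 'vm1' created
```

The user-facing tools (portal, CLI, API) never talk to the fabric directly; they only submit requests, and the provider layer decides how the resource is actually realized.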
Azure can deliver IaaS: it's easy to deploy virtual machines, manually or automated, and use these virtual machines to develop, test, and host your applications. There are many extra services available to make your life as a system engineer easier, such as backup and restore options, adding storage, and availability options. For web applications, it's even possible to deliver the service without creating a virtual machine!
Of course, Azure can also be used for PaaS solutions; like IaaS, PaaS includes all the components for your infrastructure but adds support for the complete life cycle of your cloud applications: building, testing, deploying, managing, and updating. Precoded application components are available as well; you can save time by combining these components with your code into the service you want to deliver. Containers can be another part of your PaaS solution; the Azure Container Service simplifies the deployment, management, and operation of containers using Kubernetes or another orchestrator, such as Mesos.
If you are a company or organization that wants to host a SaaS solution in Azure, this is possible using AppSource. You can even provide integration with other Microsoft products, such as Office 365 and Dynamics.
In 2017, Microsoft announced Azure Stack. You can now run Azure in your own data center, or in the data center of a service provider of your choice, to provide IaaS and PaaS. It gives you the power of Azure in terms of scalability and availability without worrying about the configuration: you only need to add more physical resources when needed. And if you want, you can use it in a hybrid solution with the public Azure for disaster recovery, or for consistent workloads across both cloud and on-premises deployments.
Azure Stack is not the only option for hybrid environments. You can, for instance, connect your local Active Directory with Azure Active Directory, or use Azure Active Directory applications to provide single sign-on (SSO) to both local and hosted web applications.
In 2009, before Azure went public, Microsoft started adding support for open source frameworks, such as PHP, and in 2012, added support for Linux virtual machines, due to requests from many customers.
At that time, Microsoft was not a big friend of the open source community, and it's fair to say that they really didn't like the Linux operating system. This changed around 2014, when Satya Nadella succeeded Steve Ballmer as CEO of Microsoft. In October of that year, he even announced at a Microsoft Conference in San Francisco that Microsoft loves Linux!
Since that time, Azure has grown into a very open-source-friendly environment:
- It offers a platform for many open source solutions, such as Linux instances, container technology, and application/development frameworks.
- Integration with open source solutions by providing open and compatible APIs. For instance, the Cosmos DB service offers a MongoDB-compatible API.
- Documentation, SDKs, and examples are all open source and available on GitHub: https://github.com/Azure.
- Microsoft is working together with open source projects and vendors and is also a major contributor of code to many open source projects.
In 2016, Microsoft entered the Linux Foundation organization as a Platinum member to confirm their steadily increasing interest and engagement in open source development.
In October 2017, Microsoft said that more than 40% of all virtual machines in Azure are running the Linux Operating System and Azure is running many containerized workloads. Besides that, the microservices are all using open source programming languages and interfaces.
Microsoft is very serious about open source technology: PowerShell is open source, as are many other products. Not every Microsoft product in Azure is open source, but you can at least install and run Microsoft SQL Server on Linux.
In this chapter, we discussed the history of virtualization, the concept of the cloud, and explained the terminology used in cloud environments.
Some people think that Microsoft entered the world of the cloud a little late, but actually they started researching and developing techniques back in 2006, and many parts of that work survived in Azure. Some of those projects died because they were too early; many people were skeptical about the cloud in those days.
We also covered the architecture of the Azure cloud and the services that Azure can offer your organization.
In the last part of this chapter, I showed you that Azure is a very open-source-friendly environment and that Microsoft puts in a lot of effort to make Azure an open, standard cloud solution with interoperability in mind.
In the next chapter, we'll start using Azure and learn how to deploy and use Linux in Azure.
- What components in your physical data center can be transformed into software?
- What is the difference between container virtualization and hardware virtualization?
- If you want to host an application in the cloud, which service type is the best solution?
- Let's say one of your applications needs strict privacy policies. Is it still a good idea to use cloud technology for your organization?
- Why are there so many regions available in Azure?
- What is the purpose of Azure Active Directory?
If you want to learn more about Hyper-V and how you can use Azure together with Hyper-V for site recovery and protection of your workloads, check out Windows Server 2016 Hyper-V Cookbook, Second Edition by Packt.
There are many nice technical articles about the history of virtualization, cloud computing, and their relationship. One I really want to mention is about the Formal Discussion on Relationship between Virtualization and Cloud Computing (ISBN 978-1-4244-9110-0).
Don't forget to visit the Microsoft website and GitHub repository as mentioned in this chapter!