Chapter 1: Exploring the Microsoft Azure Cloud
People often get confused due to the ambiguity surrounding the term cloud computing. Here, we are not referring to cloud storage solutions such as OneDrive, Dropbox, and so on. Instead, we are referring to actual computing solutions that are used by organizations, companies, or even individuals.
Microsoft Azure (previously known as Windows Azure) is Microsoft's public cloud computing platform. It offers a wide range of cloud services, including compute, analytics, storage, networking, and more. If you go through the list of services offered by Azure, you'll see that you can work with practically anything—from virtual machines to artificial intelligence and machine learning.
Starting with a brief history of virtualization, we will explain how the transformation of physical hardware into virtualized hardware made it possible to go beyond the borders of classic datacenters in many ways.
After that, we'll explain the different terminology used in cloud technology.
Here is the list of key topics that we'll cover:
- Virtualization of compute, network, and storage
- Cloud services
- Cloud types
Fundamentals of Cloud Computing
When you first start learning a new subject in Information Technology (IT), you'll usually begin by studying the underlying concepts (that is, the theory). You'll then familiarize yourself with the architecture, and sooner or later you'll start playing around and getting hands-on to see how it works in practice.
However, in cloud computing, it really helps if you not only understand the concepts and the architecture but also where it comes from. We don't want to give you a history lesson, but we want to show you that inventions and ideas from the past are still in use in modern cloud environments. This will give you a better understanding of what the cloud is and how to use it within your organization.
The following are the key fundamentals of cloud computing:
- Software-Defined Datacenter (SDDC)
- Service-Oriented Architecture (SOA)
- Cloud services
- Cloud types
Let's take a look at each of these and understand what these terms refer to.
In computing, virtualization refers to the creation of a virtual form of a device or a resource, such as a server, storage device, network, or even an operating system. The concept of virtualization came into the picture when IBM developed its time-sharing solutions in the late 1960s and early 1970s. Time-sharing refers to the sharing of computer resources between a large group of users, increasing the productivity of users and eliminating the need to purchase a computer for each user. This was the beginning of a revolution in computer technology: organizations could significantly reduce the cost of purchasing new computers and instead make use of the under-utilized computer resources they already had.
Nowadays, this type of virtualization has evolved into container-based virtualization. Virtual machines have their own operating system, which is virtualized on top of a physical server; on the other hand, containers on one machine (either physical or virtual) all share the same underlying operating system. We will talk more about containers in Chapter 9, Container Virtualization in Azure.
Fast-forward to 2001, when another type of virtualization, called hardware virtualization, was introduced by companies such as VMware. In their product, VMware Workstation, they added a layer on top of an existing operating system that presented a standard set of virtual hardware, instead of the physical elements, to each virtual machine. This layer became known as a hypervisor. Later, VMware built its own operating system, which specialized in running virtual machines: VMware ESXi (formerly known as ESX).
In 2008, Microsoft entered the hardware-virtualization market with the Hyper-V product, as an optional component of Windows Server 2008.
Hardware virtualization is all about separating software from hardware, breaking the traditional boundaries between the two. A hypervisor is responsible for mapping virtual resources onto physical resources.
This type of virtualization was the enabler for a revolution in datacenters:
- Because of the standard set of hardware, every virtual machine can run on any physical machine where the hypervisor is installed.
- Since virtual machines are isolated from each other, if a particular virtual machine crashes, it will not affect any other virtual machine running on the same hypervisor.
- Because a virtual machine is just a set of files, you have new possibilities for backup, moving virtual machines, and so on.
- New options became available to improve the availability of workloads, with high availability (HA), and the possibility to migrate a virtual machine, even if it's still running.
- New deployment options also became available, for example, working with templates.
- There were also new options for central management, orchestration, and automation because it's all software defined.
- Resources can be isolated, reserved, or limited where needed, and shared where possible.
Of course, if you can transform hardware into software for compute, it's only a matter of time before someone realizes you can do the same for network and storage.
For networking, it all started with the concept of virtual switches. Like every other form of hardware virtualization, it is nothing more than building a network switch in software instead of hardware.
The Internet Engineering Task Force (IETF) started working on a project called Forwarding and Control Element Separation (ForCES), a proposed standard interface to decouple the control plane and the data plane. In 2008, the first real implementation of such a switch was achieved at Stanford University, using the OpenFlow protocol. Software-Defined Networking (SDN) became commonly associated with the OpenFlow protocol.
Using SDN, you have similar advantages as in compute virtualization:
- Central management, automation, and orchestration
- More granular security through traffic isolation and providing firewall and security policies
- Shaping and controlling data traffic
- New options available for HA and scalability
In 2009, Software-Defined Storage (SDS) development started at several companies, such as Scality and Cleversafe. Again, it's about abstraction: decoupling services (logical volumes and so on) from physical storage elements.
Looking into the concepts of SDS, some vendors added a new feature on top of the existing advantages of virtualization: you can attach a policy to a virtual machine, defining the options you want, for instance, replication of data or a limit on the number of Input/Output Operations Per Second (IOPS). This is transparent to the administrator; the hypervisor communicates with the storage layer to provide the functionality. Later on, this concept was also adopted by some SDN vendors.
You can actually see that virtualization slowly changed the management of different datacenter layers into a more service-oriented approach.
If you can virtualize every component of a physical datacenter, you have an SDDC. The virtualization of networking, storage, and compute functions made it possible to go further than the limits of one piece of hardware. SDDC makes it possible, by abstracting the software from the hardware, to go beyond the borders of a physical datacenter.
In an SDDC environment, everything is virtualized and often fully automated by the software. It totally changes the traditional concept of datacenters. It doesn't really matter where the service is hosted or how long it's available (24-7 or on demand). Also, there are possibilities to monitor the service, perhaps even adding options such as automatic reporting and billing, which all make the end user happy.
SDDC is not the same as the cloud, not even a private cloud running in your datacenter, but you could argue that, for instance, Microsoft Azure is a full-scale implementation of SDDC—Azure is, by definition, software-defined.
In the same period that hardware virtualization became mainstream in datacenters and the development of SDN and SDS started, something new appeared in the world of software development for web-based applications: SOA, which offers several benefits. Here are some of the key points:
- Minimal services that can talk to each other, using a protocol such as Simple Object Access Protocol (SOAP). Together, they deliver a complete web-based application.
- The location of the service doesn't matter; the service must be aware of the presence of the other service, and that's about it.
- A service is a sort of black box; the end user doesn't need to know what's inside the box.
- Every service can be replaced by another service.
For the end user, it doesn't matter where the application lives or that it consists of several smaller services. In a way, it's like virtualization: what seems to be one physical resource, for instance, a storage LUN (Logical Unit Number) could actually include several physical resources (storage devices) in multiple locations. As mentioned earlier, if one service is aware of the presence of another service (it could be in another location), they'll act together and deliver the application. Many websites that we interact with daily are based on SOA.
The power of virtualization combined with SOA gives you even more options in terms of scalability, reliability, and availability.
There are many similarities between the SOA model and SDDC, but there is a difference: SOA is about the interaction between different services; SDDC is more about the delivery of services to the end user.
The modern implementation of SOA is microservices, provided by cloud environments such as Azure, running standalone or running in virtualization containers such as Docker.
Here's that magic word: cloud. A cloud service is any service available to organizations, companies, or users provided by a cloud solution or computing provider such as Microsoft Azure. Cloud services are appropriate if you want to provide a service that:
- Is highly available and always on demand.
- Can be managed via self-service.
- Has scalability, which enables a user to scale up (using more powerful hardware) or scale out (adding additional nodes).
- Has elasticity – the ability to dynamically expand or shrink the number of resources based on business requirements.
- Offers rapid deployment.
- Can be fully automated and orchestrated.
On top of that, there are cloud services for monitoring your resources and new types of billing options: most of the time, you only pay for what you use.
Cloud technology is about the delivery of a service via the internet, in order to give an organization access to resources such as software, storage, network, and other types of IT infrastructure and components.
The cloud can offer you many service types. Here are the most important ones:
- Infrastructure as a Service (IaaS): A platform to host your virtual machines. Virtual machines deployed in Azure are a good example of this.
- Platform as a Service (PaaS): A platform to develop, build, and run your applications, without the complexity of building and running your own infrastructure. For example, there is Azure App Service, where you can push your code and Azure will host the infrastructure for you.
- Software as a Service (SaaS): Ready-to-go applications, running in the cloud, such as Office 365.
Even though the aforementioned are the key pillars of cloud services, you might also hear about FaaS (Function as a Service), CaaS (Containers as a Service), SECaaS (Security as a Service), and so on, as the number of service offerings in the cloud increases day by day. Azure Functions would be an example of FaaS, Azure Container Service of CaaS, and Azure Active Directory of SECaaS.
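As a rough illustration of how the IaaS and PaaS models differ in practice, here is a sketch using the Azure CLI. This is not a complete walkthrough: it assumes you are logged in with an Azure subscription, and the resource group, VM, and app names are hypothetical examples.

```shell
# IaaS: deploy a Linux virtual machine -- you manage the OS and
# everything above it. (Names and location are hypothetical examples.)
az group create --name myResourceGroup --location westeurope
az vm create \
  --resource-group myResourceGroup \
  --name myLinuxVM \
  --image UbuntuLTS \
  --generate-ssh-keys

# PaaS: push your code to Azure App Service -- Azure hosts and
# manages the underlying infrastructure for you.
az webapp up --name myWebApp --resource-group myResourceGroup
```

With IaaS, you are still responsible for patching and maintaining the operating system inside the VM; with PaaS, that responsibility shifts to Azure, and you only deal with your application code.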
Cloud services can be classified based on their location or based on the platform the service is hosted on. As mentioned in the previous section, based on the platform, we can classify cloud offerings as IaaS, PaaS, SaaS, and so on; however, based on location, we can classify cloud as:
- Public cloud: All services are hosted by a service provider. Microsoft's Azure is an implementation of this type.
- Private cloud: Your own cloud in your datacenter. Microsoft recently developed a special version of Azure for this: Azure Stack.
- Hybrid cloud: A combination of a public and private cloud. One example is combining the power of Azure and Azure Stack, but you can also think about new disaster recovery options or moving services from your datacenter to the cloud and back if more resources are temporarily needed.
- Community cloud: A community cloud is where multiple organizations work on the same shared platform, provided that they have similar objectives or goals.
Choosing one of these cloud implementations depends on several factors; to name just a few:
- Costs: Hosting your services in the cloud can be more expensive than hosting them locally, depending on resource usage. On the other hand, it can be cheaper; for example, you don't need to implement complex and costly availability options.
- Legal restrictions: Some organizations would not be able to use the public cloud. For example, the US Government has its own Azure offering called Azure Government. Likewise, Germany and China have their own Azure offerings.
- Internet connectivity: There are still countries where the necessary bandwidth or even the stability of the connection is a problem.
- Complexity: Hybrid cloud environments, in particular, can be difficult to manage; support for applications and user management can be challenging.
Understanding the Microsoft Azure Cloud
Now that you know more about virtualization and cloud computing, it's time to introduce you to the Microsoft implementation of the cloud: Azure.
Starting again with some history, in this section, you'll find out about the technology behind Azure and that Azure can be a very good solution for your organization.
A Brief History of the Microsoft Azure Cloud
In 2002, Microsoft started a project called Whitehorse to streamline the development, deployment, and implementation of an application within an SOA model. In this project, there was a focus on delivering small, prebuilt web applications and the ability to transform them into services. This project died silently around 2006.
Many of the lessons learned in that project and the appearance of Amazon Web Services (AWS) were the drivers for Microsoft, in 2006, to start a project called RedDog.
After a while, Microsoft added three other development teams to this project:
- .NET Services: Services for developers using the SOA model. .NET Services offered Service Bus as a secure, standards-based messaging infrastructure.
- Live Services and Live Mesh: A SaaS project to enable PCs and other devices to communicate with each other through the internet.
- SQL Services: A SaaS project to deliver Microsoft SQL through the internet.
In 2008, Microsoft announced the start of Azure, and with its public release in 2010, Azure was ready to deliver IaaS and PaaS solutions. The name RedDog survived for a while: the classic portal was also known as RedDog Front-End (RDFE). The classic portal was based on the Service Management Model. On the other hand, the Azure portal is based on Azure Resource Manager (ARM). These two portals are based on two different APIs.
Nowadays, Azure is one of three Microsoft clouds (the others are Office 365 and Xbox) for delivering different kinds of services, such as virtual machines, web and mobile apps, Active Directory, databases, and so on.
It's still growing in terms of the number of features, customers, and availability. Azure is available in more than 54 regions. This is very important for scalability, performance, and redundancy.
Having these many regions also helps compliance with laws and security/privacy policies. Information and documents regarding security, privacy, and compliance are available via Microsoft's Trust Center: https://www.microsoft.com/en-us/TrustCenter.
Microsoft Azure runs on a customized, stripped-down, and hardened version of Hyper-V, also known as the Azure Hypervisor.
On top of this hypervisor, there is a cloud layer. This layer, or fabric, is a cluster of many hosts hosted in Microsoft's datacenter and is responsible for the deployment, management, and health of the infrastructure.
This cloud layer is managed by the fabric controller, which is responsible for resource management, scalability, reliability, and availability.
This layer also provides the management interface via an API built on REST, HTTP, and XML. Another way to interact with the fabric controller is provided by the Azure portal and software such as the Azure CLI via Azure Resource Manager.
The following is a pictorial representation of the architecture of Azure:
Figure 1.1: Azure architecture
These user-interfacing services (the Azure portal, PowerShell, the Azure CLI, and the API) communicate with the fabric through resource providers. For example, if you want to create, delete, or update a compute resource, you interact with the Microsoft.Compute resource provider, also known as the Compute Resource Provider (CRP). Likewise, network resources are handled by the Network Resource Provider (NRP), or Microsoft.Network, and storage resources by the Storage Resource Provider (SRP), or Microsoft.Storage.
These resource providers will create the required services, such as a virtual machine.
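To make the routing through resource providers concrete, here is a minimal sketch of how an Azure Resource Manager REST request URL is composed. The resource type in the path (Microsoft.Compute/virtualMachines) is what tells ARM which resource provider should handle the request; the subscription ID, resource group, and VM name below are hypothetical placeholders.

```shell
# Hypothetical identifiers -- replace with your own values.
SUBSCRIPTION="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="myResourceGroup"
VM_NAME="myLinuxVM"

# An ARM request URL always follows the pattern:
# /subscriptions/{id}/resourceGroups/{group}/providers/{provider}/{type}/{name}
URL="https://management.azure.com/subscriptions/${SUBSCRIPTION}"
URL="${URL}/resourceGroups/${RESOURCE_GROUP}"
URL="${URL}/providers/Microsoft.Compute/virtualMachines/${VM_NAME}"

echo "${URL}"
```

Whether you use the portal, PowerShell, or the Azure CLI, every request ultimately resolves to a URL of this shape, which is why all of those tools behave consistently: they all talk to the same resource providers through the same API.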
Azure in Your Organization
Azure can deliver IaaS: it's easy to deploy virtual machines, manually or automated, and use those virtual machines to develop, test, and host your applications. There are many extra services available to make your life as a system engineer easier, such as backup and restore options, adding storage, and availability options. For web applications, it's even possible to deliver the service without creating a virtual machine!
Of course, Azure can also be used for PaaS solutions; like IaaS, PaaS includes all of the components of your infrastructure, but adds support for the complete life cycle of your cloud applications: building, testing, deploying, managing, and updating. There are pre-defined application components available as well; you can save time by combining these components with your code into the service you want to deliver. Containers can be another part of your PaaS solution. Azure Container Service simplifies the deployment, management, and operation of containers using Kubernetes or another orchestrator, such as Mesos.
If you are a company or organization that wants to host a SaaS solution in Azure, this is possible using AppSource. You can even provide integration with other Microsoft products, such as Office 365 and Dynamics.
In 2017, Microsoft announced Azure Stack. You can now run Azure in your own datacenter, or in a datacenter from a service provider of your choice, to provide IaaS and PaaS. It gives you the power of Azure in terms of scalability and availability, without you having to worry about the configuration; you only need to add more physical resources if needed. And if you want, you can use it in a hybrid solution with public Azure for disaster recovery, or for running consistent workloads in both cloud and on-premises deployments.
Azure Stack is not the only option for hybrid environments. You can, for instance, connect your local Active Directory with Azure Active Directory, or use Azure Active Directory to provide Single Sign-On (SSO) to both local and hosted web applications.
Azure and Open Source
In 2009, even before Azure went public, Microsoft started adding support for open-source frameworks, such as PHP, and in 2012, Microsoft added support for Linux virtual machines, due to requests from many customers.
At that time, Microsoft was not a big friend of the open-source community, and it's fair to say that they really didn't like the Linux operating system. This changed around 2014, when Satya Nadella succeeded Steve Ballmer as CEO of Microsoft. In October of that year, he even announced at a Microsoft conference in San Francisco that Microsoft loves Linux!
Since that time, Azure has grown into a very open source–friendly environment:
- It offers a platform for many open-source solutions, such as Linux instances, container technology, and application/development frameworks.
- It offers integration with open-source solutions by providing open and compatible APIs. For instance, the Cosmos DB service offers a MongoDB-compatible API.
- The documentation, Software Development Kits (SDKs), and examples are all open source and available on GitHub: https://github.com/Azure.
- Microsoft is working together with open-source projects and vendors and is also a major contributor of code to many open-source projects.
In 2016, Microsoft entered the Linux Foundation organization as a Platinum member to confirm their steadily increasing interest and engagement in open-source development.
In October 2017, Microsoft said that more than 40% of all virtual machines in Azure were running the Linux operating system, and that Azure was running many containerized workloads. Looking at more recent statistics, that number has risen to more than 60%. Besides that, many microservices use open-source programming languages and interfaces.
Microsoft is very serious about open-source technology: PowerShell, among many other products, is now open source. Not every Microsoft product in Azure is open source, but you can at least install and run Microsoft SQL Server on Linux, or get a container image for it.
Summary
In this chapter, we discussed the history of virtualization and the concept of the cloud, and we explained the terminology used in cloud environments.
Some people think that Microsoft was a little bit late entering the cloud world, but actually, they started researching and developing techniques in 2006, and many parts of that work survived in Azure. Some of the projects died, because they were too early, and many people were skeptical about the cloud in those days.
We also covered the architecture of the Azure cloud and the services that Azure can offer your organization.
In the last part of this chapter, we saw that Azure is a very open source–friendly environment and that Microsoft puts in a lot of effort to make Azure an open, standard cloud solution with interoperability in mind.
In the next chapter, we'll start using Azure and learn how to deploy and use Linux in Azure.
Questions
- What components in your physical datacenter can be transformed into software?
- What is the difference between container virtualization and hardware virtualization?
- If you want to host an application in the cloud, which service type is the best solution?
- Let's say one of your applications needs strict privacy policies. Is it still a good idea to use cloud technology for your organization?
- Why are there so many regions available in Azure?
- What is the purpose of Azure Active Directory?
Further Reading
If you want to learn more about Hyper-V and how you can use Azure together with Hyper-V for site recovery and the protection of your workloads, check out Windows Server 2016 Hyper-V Cookbook, Second Edition by Packt Publishing.
There are many nice technical articles about the history of virtualization, cloud computing, and their relationship. One we really want to mention is Formal Discussion on Relationship between Virtualization and Cloud Computing (ISBN 978-1-4244-9110-0).
Don't forget to visit the Microsoft website and GitHub repository as mentioned in this chapter!