Azure for Architects - Third Edition

By Ritesh Modi, Jack Lee, Rithin Skaria

About this book

Thanks to its support for high availability, scalability, security, performance, and disaster recovery, Azure has been widely adopted to create and deploy different types of applications with ease. Updated for the latest developments, this third edition of Azure for Architects helps you get to grips with the core concepts of designing serverless architecture, including containers, Kubernetes deployments, and big data solutions.

You'll learn how to architect solutions such as serverless functions, you'll discover deployment patterns for containers and Kubernetes, and you'll explore large-scale big data processing using Spark and Databricks. As you advance, you'll implement DevOps using Azure DevOps, work with intelligent solutions using Azure Cognitive Services, and integrate security, high availability, and scalability into each solution. Finally, you'll delve into Azure security concepts such as OAuth, OpenID Connect, and managed identities.

By the end of this book, you'll have gained the confidence to design intelligent Azure solutions based on containers and serverless functions.

Publication date: July 2020
Publisher: Packt
Pages: 698
ISBN: 9781839215865

 

1. Getting started with Azure

Every few years, a technological innovation emerges that permanently changes the entire landscape and ecosystem around it. If we go back in time, the 1970s and 1980s were the era of mainframes. These mainframes were massive, often occupying large rooms, and were responsible for almost all computing work. Since the technology was difficult to procure and time-consuming to use, many enterprises had to place orders for mainframes a month in advance before they could have an operational mainframe set up.

Then, the early 1990s witnessed a boom in demand for personal computing and the internet. As a result, computers became much smaller in size and comparatively easy to procure for the general public. Consistent innovations on the personal computing and internet fronts eventually changed the entire computer industry. Many people had desktop computers that were capable of running multiple programs and connecting to the internet. The rise of the internet also propagated the rise of client-server deployments. Now there could be centralized servers hosting applications, and services could be reached by anyone who had a connection to the internet anywhere on the globe. This was also a time when server technology gained prominence; Windows NT was released during this time and was soon followed by Windows 2000 and Windows 2003 at the turn of the century.

The most remarkable innovation of the 2000s was the rise and adoption of portable devices, especially smartphones, and with these came a plethora of apps. Apps could connect to centralized servers on the internet and carry out business as usual. Users were no longer dependent on browsers to do this work; all servers were either self-hosted or hosted using a service provider, such as an internet service provider (ISP).

Users did not have much control over their servers. Multiple customers and their deployments were part of the same server, even without customers knowing about it.

However, something else happened in the middle and latter parts of the first decade of the 2000s. This was the rise of cloud computing, and it again rewrote the entire landscape of the IT industry. Initially, adoption was slow, and people approached it with caution, either because the cloud was in its infancy and still had to mature, or because people had various negative notions about what it was.

To gain a better understanding of the disruptive technology, we will cover the following topics in this chapter:

  • Cloud computing
  • Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)
  • Understanding Azure
  • Azure Resource Manager (ARM)
  • Virtualization, containers, and Docker
  • Interacting with the intelligent cloud
 

Cloud computing

Today, cloud computing is one of the most promising upcoming technologies, and enterprises, no matter how big or small, are adopting it as a part of their IT strategy. It is difficult these days to have any meaningful conversation about an IT strategy without including cloud computing in the overall solution discussions.

Cloud computing, or simply the cloud in layman's terms, refers to the availability of resources on the internet. These resources are made available to users on the internet as services. For example, storage is available on-demand through the internet for users to store their files, documents, and more. Here, storage is a service that is offered by a cloud provider.

A cloud provider is an enterprise or consortium of companies that provides cloud services to other enterprises and consumers. They host and manage these services on behalf of the user. They are responsible for enabling and maintaining the health of services. There are large datacenters across the globe that have been opened by cloud providers to cater to the IT demands of users.

Cloud resources include hosted, on-demand infrastructure services, such as compute, networking, and storage. This flavor of the cloud is known as IaaS.

The advantages of cloud computing

Cloud adoption is at an all-time high and is growing because of several advantages, such as these:

  • Pay-as-you-go model: Customers do not need to purchase hardware and software for cloud resources. There is no capital expenditure for using a cloud resource; customers simply pay for the time that they use or reserve a resource.
  • Global access: Cloud resources are available globally through the internet. Customers can access their resources on-demand from anywhere.
  • Unlimited resources: The scaling capability of cloud technology is unlimited; customers can provision as many resources as they want, without any constraints. This is also known as unlimited scalability.
  • Managed services: The cloud provider provides numerous services that are managed by them for customers. This takes away any technical and financial burden from the customer.

Why cloud computing?

To understand the need for cloud computing, we must understand the industry's perspective.

Flexibility and agility

Instead of creating a large monolithic application using a big-bang deployment approach, today, applications comprise smaller services built using the microservices paradigm. Microservices help to create services in an independent and autonomous manner that can evolve in isolation without bringing the entire application down. They offer a great deal of flexibility and agility in bringing changes to production in a faster and better way. Many microservices come together to create an application and provide integrated solutions for customers. These microservices should be discoverable and have well-defined endpoints for integration. The number of integrations with the microservices approach is very high compared to traditional monolithic applications, and these integrations add complexity to both the development and deployment of applications.

Speed, standardization, and consistency

It follows that the methodology for deployments should also undergo changes to adapt to the needs of these services, that is, frequent changes and frequent deployments. For frequent changes and deployments, it is important to use processes that help in bringing about these changes in a predictable and consistent manner. Automated agile processes should be used such that smaller changes can be deployed and tested in isolation.

Staying relevant

Finally, deployment targets should be redefined. Not only should deployment targets be easily creatable within seconds, but also the environment built should be consistent across versions, with appropriate binaries, runtimes, frameworks, and configuration. Virtual machines were used with monolithic applications but microservices need more agility, flexibility, and a more lightweight option than virtual machines. Container technology is the preferred mechanism for deployment targets for these services, and we will cover more about that later in this chapter.

Scalability

Some important tenets of using microservices are that they have unlimited scaling capability in isolation, global high availability, and disaster recovery with near-zero recovery point and recovery time objectives. These qualities of microservices necessitate infrastructure that can scale in an unlimited fashion; there should not be any resource constraints. While this is the case, it is also important that an organization does not pay up front for resources that it does not utilize.

Cost-effectiveness

Paying for resources that are being consumed and using them optimally by increasing and decreasing the resource counts and capacity automatically is the fundamental tenet of cloud computing. These emerging application requirements demand the cloud as the preferred platform to scale easily, be highly available, be disaster-resistant, bring in changes easily, and achieve predictable and consistent automated deployments in a cost-effective manner.

Deployment paradigms in Azure

There are three different deployment patterns that are available in Azure; they are as follows:

  • IaaS
  • PaaS
  • SaaS

The difference between these three deployment patterns is the level of control that customers exercise over their resources. Figure 1.1 displays the different levels of control within each of these deployment patterns:

Figure 1.1: Cloud services—IaaS, PaaS, and SaaS

It is clear from Figure 1.1 that customers have more control when using IaaS deployments, and this level of control continually decreases as we progress from PaaS to SaaS deployments.

IaaS

IaaS is a type of deployment model that allows customers to provision their own infrastructure on Azure. Azure provides several infrastructure resources and customers can provision them on-demand. Customers are responsible for maintaining and governing their own infrastructure, while Azure ensures the maintenance of the physical infrastructure on which these virtual infrastructure resources are hosted. Under this approach, customers are required to perform active management and operations of their environments in Azure.

PaaS

PaaS takes away infrastructure deployment and control from the customer. This is a higher-level abstraction compared to IaaS. In this approach, customers bring their own application, code, and data, and deploy them on the Azure-provided platform. These platforms are managed and governed by Azure and customers are solely responsible for their applications. Customers perform activities related to their application deployment only. This model provides faster and easier options for the deployment of applications compared to IaaS.

SaaS

SaaS is a higher-level abstraction compared to PaaS. In this approach, software and its services are available for customer consumption. Customers only bring their data into these services—they do not have any control over these services. Now that we have a basic understanding of service types in Azure, let's get into the details of Azure and understand it from the ground up.

 

Understanding Azure

Azure provides all the benefits of the cloud while remaining open and flexible. Azure supports a wide variety of operating systems, languages, tools, platforms, utilities, and frameworks. For example, it supports Linux and Windows, SQL Server, MySQL, and PostgreSQL. It supports most popular programming languages, including C#, Python, Java, Node.js, and Bash. It supports NoSQL databases, such as MongoDB and Cosmos DB, and it also supports continuous integration tools, such as Jenkins and Azure DevOps Services (formerly Visual Studio Team Services (VSTS)). The whole idea behind this ecosystem is to enable customers to have the freedom to choose their own language, platform, operating system, database, storage, and tools and utilities. Customers should not be constrained from a technology perspective; instead, they should be able to build and focus on their business solution, and Azure provides them with a world-class technology stack that they can use.

Azure is very much compatible with the customer's choice of technology stack. For example, Azure supports all popular (open-source and commercial) database environments. Azure provides Azure SQL, MySQL, and Postgres PaaS services. It provides the Hadoop ecosystem and offers HDInsight, a 100% Apache Hadoop–based PaaS. It also provides a Hadoop on Linux virtual machine (VM) implementation for customers who prefer the IaaS approach. Azure also provides the Redis Cache service and supports other popular database environments, such as Cassandra, Couchbase, and Oracle as an IaaS implementation.

The number of services is increasing by the day in Azure and the most up-to-date list of services can be found at https://azure.microsoft.com/services.

Azure also provides a unique cloud computing paradigm known as the hybrid cloud. The hybrid cloud refers to a deployment strategy in which a subset of services is deployed on a public cloud, while other services are deployed on an on-premises private cloud or datacenter. There is a virtual private network (VPN) connection between the public and private clouds. Azure offers customers the flexibility to divide and deploy their workload on both the public cloud and an on-premises datacenter.

Azure has datacenters across the globe and combines these datacenters into regions. Each region has multiple datacenters to ensure that recovery from disasters is quick and efficient. At the time of writing, there are 58 regions across the globe. This provides customers with the flexibility to deploy their services in their choice of location. They can also combine these regions to deploy a solution that is disaster-resistant and deployed near their customer base.
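As a quick way to see which regions are available to a subscription, the Az PowerShell module (introduced later in this chapter) can list them. This is a minimal sketch and assumes the module is installed and a session is already signed in:

# List all regions available to the current subscription
Get-AzLocation | Select-Object Location, DisplayName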

Note

In China and Germany, the Azure Cloud Services are separate for general use and for governmental use. This means that the cloud services are maintained in separate datacenters.

 

Azure as an intelligent cloud

Azure provides infrastructure and services to ingest billions of transactions using hyper-scale processing. It provides petabytes of storage for data, and it provides a host of interconnected services that can pass data among themselves. With such capabilities in place, data can be processed to generate meaningful knowledge and insights. There are multiple types of insights that can be generated through data analysis, which are as follows:

  • Descriptive: This type of analysis provides details about what is happening or has happened in the past.
  • Predictive: This type of analysis provides details about what is going to happen in the future.
  • Prescriptive: This type of analysis provides details about what should be done to either enhance or prevent current or future events.
  • Cognitive: This type of analysis actually executes actions that are determined by prescriptive analytics in an automated manner.

While deriving insights from data is good, it is equally important to act on them. Azure provides a rich platform to ingest large volumes of data, process and transform it, store and generate insights from it, and display it on real-time dashboards. It is also possible to take action on these insights automatically. These services are available to every customer of Azure and they provide a rich ecosystem in which customers can create solutions. Enterprises are creating numerous applications and services that are completely disrupting industries because of the easy availability of these intelligent services from Azure, which are combined to create meaningful value for end customers. Azure ensures that services that are commercially not viable to implement for small and medium companies can now be readily consumed and deployed in a few minutes.

 

Azure Resource Manager

Azure Resource Manager (ARM) is the technology platform and orchestration service from Microsoft that ties up all the components that were discussed earlier. It brings Azure's resource providers, resources, and resource groups together to form a cohesive cloud platform. It makes Azure services available as subscriptions, resource types available to resource groups, and resources and resource APIs accessible to the portal and other clients, and it authenticates access to these resources. It also enables features such as tagging, authentication, role-based access control (RBAC), resource locking, and policy enforcement for subscriptions and their resource groups. It also provides deployment and management features using the Azure portal, Azure PowerShell, and command-line interface (CLI) tools.

The ARM architecture

The architecture of ARM and its components is shown in Figure 1.2. As we can see, an Azure Subscription comprises multiple resource groups. Each resource group contains resource instances that are created from resource types that are available in the resource provider:

Figure 1.2: The ARM architecture

Why ARM?

Prior to ARM, the framework used by Azure was known as Azure Service Manager (ASM). A brief introduction to ASM will give us a clearer understanding of the emergence of ARM and the slow and steady deprecation of ASM.

Limitations of ASM

ASM has inherent constraints. For example, ASM deployments are slow and blocking—operations are blocked if an earlier operation is already in progress. Some of the limitations of ASM are as follows:

  • Parallelism: Parallelism is a challenge in ASM. It is not possible to execute multiple transactions successfully in parallel. The operations in ASM are linear and so they are executed one after another. If multiple transactions are executed at the same time, there will either be parallel operation errors or the transactions will get blocked.
  • Resources: Resources in ASM are provisioned and managed in isolation of each other; there is no relation between ASM resources. Grouping services and resources or configuring them together is not possible.
  • Cloud services: Cloud services are the units of deployment in ASM. They are reliant on affinity groups and are not scalable due to their design and architecture.

Granular and discrete roles and permissions cannot be assigned to resources in ASM. Customers are either service administrators or co-administrators in the subscription. They either get full control over resources or do not have access to them at all. ASM provides no deployment support. Either deployments are done manually, or we need to resort to writing procedural scripts in .NET or PowerShell. ASM APIs are not consistent between resources.

ARM advantages

ARM provides distinct advantages and benefits over ASM, which are as follows:

  • Grouping: ARM allows the grouping of resources together in a logical container. These resources can be managed together and go through a common life cycle as a group. This makes it easier to identify related and dependent resources.
  • Common life cycles: Resources in a group have the same life cycle. These resources can evolve and be managed together as a unit.
  • RBAC: Granular roles and permissions can be assigned to resources providing discrete access to customers. Customers can also have only those rights that are assigned to them.
  • Deployment support: ARM provides deployment support in terms of templates, enabling DevOps and infrastructure as code (IaC). These deployments are faster, consistent, and predictable.
  • Superior technology: The cost and billing of resources can be managed as a unit. Each resource group can provide its usage and cost information.
  • Manageability: ARM provides advanced features, such as security, monitoring, auditing, and tagging, for better manageability of resources. Resources can be queried based on tags. Tags also provide cost and billing information for resources that are tagged similarly.
  • Migration: Migration and updating resources is easier within and across resource groups.

ARM concepts

With ARM, everything in Azure is a resource. Examples of resources are VMs, network interfaces, public IP addresses, storage accounts, and virtual networks. ARM is based on concepts that are related to resource providers and resource consumers. Azure provides resources and services through multiple resource providers that are consumed and deployed in groups.

Resource providers

These are services that are responsible for providing resource types through ARM. The top-level concept in ARM is the resource provider. These providers are containers for resource types; resource types are grouped into resource providers, which are responsible for deploying and managing resources. For example, the virtual machine resource type, Microsoft.Compute/virtualMachines, is provided by the Microsoft.Compute resource provider. Representational state transfer (REST) API operations are versioned to distinguish between them, and the version naming is based on the dates on which they are released by Microsoft. A related resource provider must be available to a subscription before its resources can be deployed, and not all resource providers are available to a subscription out of the box. If a resource is not available to a subscription, we need to check whether the required resource provider is available in the target region; if it is, the customer can explicitly register it with the subscription.
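As a sketch of how provider registration looks in practice with Azure PowerShell (the provider namespace used here is only an example):

# Check the registration state of a resource provider in the subscription
Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance |
    Select-Object ProviderNamespace, RegistrationState -Unique
# Register the provider so that its resource types can be deployed
Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance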

Resource types

Resource types are the actual resource specifications defining a resource's public API interface and implementation. They implement the workings and operations supported by the resource. Similar to resource providers, resource types also evolve over time in terms of their internal implementation, and there are multiple versions of their schemas and public API interfaces. The version names are based on the dates on which they are released by Microsoft as a preview or for general availability (GA). Resource types become available to a subscription after their resource provider is registered with it. Also, not every resource type is available in every Azure region. The availability of a resource depends on the availability and registration of its resource provider in an Azure region, and that region must support the API version needed for provisioning it.
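A small sketch of inspecting the resource types and API versions exposed by a provider (Microsoft.Compute is used here as an example) with Azure PowerShell:

# List the API versions supported by the virtualMachines resource type
((Get-AzResourceProvider -ProviderNamespace Microsoft.Compute).ResourceTypes |
    Where-Object { $_.ResourceTypeName -eq 'virtualMachines' }).ApiVersions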

Resource groups

Resource groups are units of deployment in ARM. They are containers grouping multiple resource instances in a security and management boundary. A resource group is uniquely named in a subscription. Resources can be provisioned in different Azure regions and yet belong to the same resource group. Resource groups provide additional services to all the resources within them, such as metadata services like tagging, which enables the categorization of resources; the policy-based management of resources; RBAC; and the protection of resources from accidental deletion or updates. As mentioned before, they form a security boundary, and users who don't have access to a resource group cannot access the resources contained within it. Every resource instance needs to be part of a resource group; otherwise, it cannot be deployed.
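Creating a resource group is typically the first deployment step. Here is a minimal sketch with Azure PowerShell; the name, location, and tag values are placeholders:

# Create a resource group in a chosen region with a couple of tags
New-AzResourceGroup -Name 'rg-book-demo' -Location 'eastus' `
    -Tag @{ Department = 'Finance'; Environment = 'Dev' }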

Resources and resource instances

Resources are instances of resource types. An instance can be unique globally or at the resource group level; the uniqueness is defined by both the name of the resource and its type. If we compare this with object-oriented programming constructs, resource instances can be seen as objects and resource types can be seen as classes. Services are consumed through the operations that are supported and implemented by resource instances. The resource type defines properties, some of which are mandatory and others optional; each instance must configure the mandatory properties during provisioning. Instances inherit the security and access configuration from their parent resource group, although these inherited permissions and role assignments can be overridden for each resource. A resource can be locked in such a way that some of its operations are blocked and not made available to roles, users, and groups even though they have access to it. Resources can be tagged for easy discoverability and manageability.
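Resource instances in a group can be listed and inspected generically. A small sketch, assuming the resource group from the earlier example exists:

# List every resource instance in a resource group, with its type and region
Get-AzResource -ResourceGroupName 'rg-book-demo' |
    Select-Object Name, ResourceType, Location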

ARM features

Here are some of the main features that are provided by ARM:

  • RBAC: Azure Active Directory (Azure AD) authenticates users to provide access to subscriptions, resource groups, and resources. ARM implements OAuth and RBAC within the platform, enabling authorization and access control for resources, resource groups, and subscriptions based on roles assigned to a user or group. A permission defines access to the operations in a resource. These permissions can allow or deny access to the resource. A role definition is a collection of these permissions. Roles map Azure AD users and groups to particular permissions. Roles are subsequently assigned to a scope; this can be an individual, a collection of resources, a resource group, or the subscription. The Azure AD identities (users, groups, and service principals) that are added to a role gain access to the resource according to the permissions defined in the role. ARM provides multiple out-of-the-box roles. It provides system roles, such as the owner, contributor, and reader. It also provides resource-based roles, such as SQL DB contributor and VM contributor. ARM also allows the creation of custom roles.
  • Tags: Tags are name-value pairs that add additional information and metadata to resources. Both resources and resource groups can be tagged with multiple tags. Tags help in the categorization of resources for better discoverability and manageability. Resources can be quickly searched for and easily identified. Billing and cost information can also be fetched for resources that have the same tags. While this feature is provided by ARM, an IT administrator defines its usage and taxonomy with regard to resources and resource groups. Taxonomy and tags, for example, can relate to departments, resource usage, location, projects, or any other criteria that are deemed fit from a cost, usage, billing, or search perspective. These tags can then be applied to resources. Tags that are defined at the resource group level are not inherited by their resources.
  • Policies: Another security feature that is provided by ARM is custom policies. Custom policies can be created to control access to resources. Policies are defined as conventions and rules, and they must be adhered to while interacting with resources and resource groups. The policy definition contains an explicit denial of actions on resources or access to resources. By default, every access is allowed if it is not mentioned in the policy definition. These policy definitions are assigned to the resource, resource group, and subscription scope. It is important to note that these policies are not replacements or substitutes for RBAC. In fact, they complement and work together with RBAC. Policies are evaluated after a user is authenticated by Azure AD and authorized by the RBAC service. ARM provides a JSON-based policy definition language for defining policies. Some examples of policy definitions are that a policy must tag every provisioned resource, and resources can only be provisioned to specific Azure regions.
  • Locks: Subscriptions, resource groups, and resources can be locked to prevent accidental deletions or updates by an authenticated user. Locks applied at higher levels flow downstream to child resources; for example, locks applied at the subscription level lock every resource group and the resources within it (see the sketch after this list).
  • Multi-region: Azure provides multiple regions for provisioning and hosting resources. ARM allows resources to be provisioned at different locations while still residing within the same resource group. A resource group can contain resources from different regions.
  • Idempotent: This feature ensures predictability, standardization, and consistency in resource deployment by ensuring that every deployment will result in the same state of resources and configuration, no matter the number of times it is executed.
  • Extensible: ARM provides an extensible architecture to allow the creation and plugging in of new resource providers and resource types on the platform.
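To make a few of these features concrete, here is a minimal sketch that assigns a role, applies tags, and adds a lock with Azure PowerShell; the user, group, and resource names are placeholders:

# Grant the Reader role on a resource group to a user (RBAC)
New-AzRoleAssignment -SignInName 'user@contoso.com' -RoleDefinitionName 'Reader' `
    -ResourceGroupName 'rg-book-demo'
# Apply tags to the resource group for discoverability and billing
Set-AzResourceGroup -Name 'rg-book-demo' -Tag @{ Project = 'Azure101'; CostCenter = 'CC01' }
# Prevent accidental deletion of the resource group and its resources
New-AzResourceLock -LockName 'no-delete' -LockLevel CanNotDelete `
    -ResourceGroupName 'rg-book-demo' -Force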
 

Virtualization

Virtualization was a breakthrough innovation that completely changed the way that physical servers were looked at. It refers to the abstraction of a physical object into a logical object.

The virtualization of physical servers led to virtual servers known as VMs. These VMs consume and share the physical CPU, memory, storage, and other hardware of the physical server on which they are hosted. This enables the faster and easier provisioning of application environments on-demand, providing high availability and scalability with reduced cost. One physical server is enough to host multiple VMs, with each VM containing its own operating system and hosting services on it.

There was no longer any need to buy additional physical servers for deploying new applications and services. The existing physical servers were sufficient to host more VMs. Furthermore, as part of rationalization, many physical servers were consolidated into a few with the help of virtualization.

Each VM contains the entire operating system, and each VM is completely isolated from other VMs, including the physical host. Although a VM uses the hardware that is provided by the host physical server, it has full control over its assigned resources and its environment. These VMs can be hosted on a network just like a physical server, with their own identity.

Azure can create Linux and Windows VMs in a few minutes. Microsoft provides its own images, along with images from its partners and the community; users can also provide their own images. VMs are created using these images.
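As a hedged sketch of how quickly a VM can be created with Azure PowerShell (the resource group, VM name, and image alias are placeholders; the simplified parameter set shown here fills in networking defaults):

# Create a Windows Server VM from a marketplace image alias; prompts for admin credentials
New-AzVM -ResourceGroupName 'rg-book-demo' -Name 'vm-demo01' -Location 'eastus' `
    -Image 'Win2019Datacenter' -Credential (Get-Credential)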

 

Containers

Containers are also a virtualization technology; however, they do not virtualize a server. Instead, a container is operating system–level virtualization. What this means is that containers share the operating system kernel (which is provided by the host) among themselves along with the host. Multiple containers running on a host (physical or virtual) share the host operating system kernel. Containers ensure that they reuse the host kernel instead of each having a dedicated kernel to themselves.

Containers are completely isolated from their host and from other containers running on the host. Windows containers use Windows storage filter drivers and session isolation to isolate operating system services such as the file system, registry, processes, and networks. The same is true for Linux containers running on Linux hosts; they use Linux namespaces, control groups (cgroups), and union file systems to virtualize the host operating system.

The container appears as if it has a completely new and untouched operating system and resources. This arrangement provides lots of benefits, such as the following:

  • Containers are fast to provision, taking far less time than virtual machines, because most of the operating system services in a container are provided by the host operating system.
  • Containers are lightweight and require fewer computing resources than VMs. The operating system resource overhead is no longer required with containers.
  • Containers are much smaller than VMs.
  • Containers can help solve problems related to managing multiple application dependencies in an intuitive, automated, and simple manner.
  • Containers provide infrastructure in order to define all application dependencies in a single place.

Containers are an inherent feature of Windows Server 2016 and Windows 10; however, they are managed and accessed using a Docker client and a Docker daemon. Containers can be created on Azure with a Windows Server 2016 SKU as an image. Each container has a single main process that must be running for the container to exist. A container will stop when this process ends. Additionally, a container can either run in interactive mode or in detached mode like a service:

Figure 1.3: Container architecture

Figure 1.3 shows all the technical layers that enable containers. The bottom-most layer provides the core infrastructure in terms of network, storage, load balancers, and network cards. At the top of the infrastructure is the compute layer, consisting of either a physical server or both physical and virtual servers on top of a physical server. This layer contains the operating system with the ability to host containers. The operating system provides the execution driver that the layers above use to call the kernel code and objects to execute containers. Microsoft created the Host Compute Service Shim (HCSShim) for managing and creating containers and uses Windows storage filter drivers for image and file management.

Container environment isolation is enabled for the Windows session. Windows Server 2016 and Nano Server provide the operating system, enable the container features, and execute the user-level Docker client and Docker Engine. Docker Engine uses the services of HCSShim, storage filter drivers, and sessions to spawn multiple containers on the server, with each containing a service, application, or database.

 

Docker

Docker provides management features to Windows containers. It comprises the following two executables:

  • The Docker daemon
  • The Docker client

The Docker daemon is the workhorse for managing containers. It is a Windows service responsible for managing all activities on the host that are related to containers. The Docker client interacts with the Docker daemon and is responsible for capturing inputs and sending them across to the Docker daemon. The Docker daemon provides the runtime, libraries, graph drivers, and engine to create, manage, and monitor containers and images on the host server. It also has the ability to create custom images that are used for building and shipping applications to multiple environments.
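As a brief sketch of the client talking to the daemon (the image shown is an example Windows Server Core image; any image available to the host works), illustrating the interactive and detached modes mentioned earlier:

# Interactive mode: attach a console to the container's main process
docker run -it --rm mcr.microsoft.com/windows/servercore:ltsc2019 cmd
# Detached mode: run the container in the background like a service, then list it
docker run -d --name demo mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost
docker ps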

 

Interacting with the intelligent cloud

Azure provides multiple ways to connect, automate, and interact with the intelligent cloud. All these methods require users to be authenticated with valid credentials before they can be used. The different ways to connect to Azure are as follows:

  • The Azure portal
  • PowerShell
  • The Azure CLI
  • The Azure REST API

The Azure portal

The Azure portal is a great place to get started. With the Azure portal, users can log in and start creating and managing Azure resources manually. The portal provides an intuitive and user-friendly user interface through the browser. The Azure portal provides an easy way to navigate to resources using blades. The blades display all the properties of a resource, including its logs, cost, relationship with other resources, tags, security options, and more. An entire cloud deployment can be managed through the portal.

PowerShell

PowerShell is an object-based command-line shell and scripting language that is used for the administration, configuration, and management of infrastructure and environments. It is built on top of .NET Framework and provides automation capabilities. PowerShell has truly become a first-class citizen among IT administrators and automation developers for managing and controlling the Windows environment. Today, almost every Windows environment and many Linux environments can be managed by PowerShell. In fact, almost every aspect of Azure can also be managed by PowerShell. Azure provides rich support for PowerShell. It provides a PowerShell module for each resource provider containing hundreds of cmdlets. Users can use these cmdlets in their scripts to automate interaction with Azure. The Azure PowerShell module is available through the web platform installer and through the PowerShell Gallery. Windows Server 2016 and Windows 10 provide package management and PowerShellGet modules for the quick and easy downloading and installation of PowerShell modules from the PowerShell Gallery. The PowerShellGet module provides the Install-Module cmdlet for downloading and installing modules on the system.

Installing a module is a simple act of copying the module files to well-defined module locations, which can be done as follows:

# Import PowerShellGet, which provides the module installation cmdlets
Import-Module PowerShellGet
# Download and install the Az module from the PowerShell Gallery
Install-Module -Name Az -Verbose

The Import-Module command imports a module and its related functions into the current execution scope, and Install-Module downloads and installs modules on the system.
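Once the module is installed, a session can be authenticated and used to query resources. A minimal sketch:

# Sign in to Azure and establish a subscription context
Connect-AzAccount
# List the resource groups in the current subscription
Get-AzResourceGroup | Select-Object ResourceGroupName, Location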

The Azure CLI

Azure also provides Azure CLI 2.0, which can be deployed on Linux, Windows, and macOS operating systems. Azure CLI 2.0 is Azure's new command-line utility for managing Azure resources. Azure CLI 2.0 is optimized for managing and administering Azure resources from the command line, and for building automation scripts that work against ARM. The CLI can be used to execute commands using the Bash shell or the Windows command line. The Azure CLI is very popular among non-Windows users as it allows you to talk to Azure on Linux and macOS. The steps for installing Azure CLI 2.0 are available at https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest.
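The same commands work whether they are run from Bash or from a PowerShell session. A minimal sketch, where the group name and region are placeholders:

# Sign in and create a resource group from the command line
az login
az group create --name rg-cli-demo --location eastus
# List resource groups in a readable table
az group list --output table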

The Azure REST API

All Azure resources are exposed to users through REST endpoints. REST APIs are service endpoints that implement HTTP operations (or methods) by providing create, retrieve, update, or delete (CRUD) access to the service's resources. Users can consume these APIs to create and manage resources. In fact, the CLI and PowerShell mechanisms use these REST APIs internally to interact with resources on Azure.
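For experimentation, newer versions of the Az PowerShell module also ship a thin wrapper around these endpoints. A minimal sketch, assuming an authenticated session (the API version shown is one of several valid values):

# Call the ARM REST API directly to list the subscriptions visible to the signed-in identity
Invoke-AzRestMethod -Method GET -Path '/subscriptions?api-version=2020-01-01'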

ARM templates

In an earlier section, we looked at deployment features, such as multi-region, extensible, and idempotent deployments, that are provided by ARM. ARM templates are the primary means of provisioning resources in ARM, and they provide implementation support for ARM's deployment features.

ARM templates provide a declarative model through which resources, their configuration, scripts, and extensions are specified. ARM templates are based on the JavaScript Object Notation (JSON) format. They use JSON syntax and conventions to declare and configure resources. JSON files are text-based, user-friendly, and easily readable files.

They can be stored in a source code repository and have version control. They are also a means to represent IaC that can be used to provision resources in an Azure resource group again and again, predictably and uniformly. A template needs a resource group for deployment. It can only be deployed to a resource group, and the resource group should exist before executing a template deployment. A template is not capable of creating a resource group.

Templates provide the flexibility to be generic and modular in their design and implementation. Templates provide the ability to accept parameters from users, declare internal variables, define dependencies between resources, link resources within the same resource group or different resource groups, and execute other templates. They also provide scripting language type expressions and functions that make them dynamic and customizable at runtime.
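As a minimal sketch of this declarative model, using Azure PowerShell: the template is expressed here as a hashtable that mirrors the JSON structure, and the storage account name is a placeholder that must be globally unique:

# A minimal template: schema, content version, and a single storage account resource
$template = @{
    '$schema'      = 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
    contentVersion = '1.0.0.0'
    resources      = @(
        @{
            type       = 'Microsoft.Storage/storageAccounts'
            apiVersion = '2019-06-01'
            name       = 'stbookdemo001'
            location   = '[resourceGroup().location]'
            sku        = @{ name = 'Standard_LRS' }
            kind       = 'StorageV2'
        }
    )
}
# Deploy the template declaratively into an existing resource group
New-AzResourceGroupDeployment -ResourceGroupName 'rg-book-demo' -TemplateObject $template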

Deployments

PowerShell allows the following two modes for the deployment of templates:

  • Incremental: Incremental deployment adds resources declared in the template that don't exist in the resource group, leaves unchanged any resources in the resource group that are not part of the template definition, and leaves unchanged resources that exist in both the template and the resource group in the same configuration state.
  • Complete: Complete deployment, on the other hand, adds resources declared in the template to the resource group, deletes resources from the resource group that are not declared in the template, and leaves unchanged resources that exist in both the resource group and the template in the same configuration state (see the sketch after this list).
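A short sketch of selecting the mode at deployment time (the template file path is a placeholder):

# Default incremental deployment: existing resources not in the template are left untouched
New-AzResourceGroupDeployment -ResourceGroupName 'rg-book-demo' -TemplateFile '.\azuredeploy.json' -Mode Incremental
# Complete deployment: resources not declared in the template are removed from the group
New-AzResourceGroupDeployment -ResourceGroupName 'rg-book-demo' -TemplateFile '.\azuredeploy.json' -Mode Complete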
 

Summary

The cloud is a relatively new paradigm and is still in its nascent stage. There will be a lot of innovation and capabilities added over time. Azure is one of the top cloud providers today and it provides rich capabilities through IaaS, PaaS, SaaS, and hybrid deployments. In fact, Azure Stack, Microsoft's implementation of the private cloud, makes the same features available in a private datacenter as on the public cloud, and the two connect and work seamlessly and transparently together.

It is very easy to get started with Azure, but developers and architects can also fall into a trap if they do not design and architect their solutions appropriately. This book is an attempt to provide guidance and directions for architecting solutions the right way, using appropriate services and resources. Every service on Azure is a resource. It is important to understand how these resources are organized and managed in Azure. This chapter provided context around ARM and resource groups, which are the core frameworks that provide the building blocks for resources. ARM offers a set of services to resources that help provide uniformity, standardization, and consistency in managing them. The services, such as RBAC, tags, policies, and locks, are available to every resource provider and resource. Azure also provides rich automation features to automate and interact with resources. Tools such as PowerShell, ARM templates, and the Azure CLI can be incorporated as part of release pipelines, continuous deployment, and delivery. Users can connect to Azure from heterogeneous environments using these automation tools.

The next chapter will discuss some of the important architectural concerns that help to solve common cloud-based deployment problems and ensure applications are secure, available, scalable, and maintainable in the long run.

About the Authors

  • Ritesh Modi

    Ritesh Modi is a former Microsoft senior technology evangelist. He has been recognized as a Microsoft Regional Director for his contributions to Microsoft products, services, and communities. He is a cloud architect, a published author, a speaker, and a leader who is popular for his contributions to datacenters, Azure, Kubernetes, blockchain, cognitive services, DevOps, artificial intelligence, and automation. He is the author of eight books. Ritesh has spoken at numerous national and international conferences and is a published author for MSDN magazine. He has more than a decade of experience in building and deploying enterprise solutions for customers, and has more than 25 technical certifications. His hobbies are writing books, playing with his daughter, watching movies, and learning new technologies. He currently lives in Hyderabad, India. You can follow him on Twitter at @automationnext.

  • Jack Lee

    Jack Lee is a senior Azure certified consultant and an Azure practice lead with a passion for software development, cloud, and DevOps innovations. Jack has been recognized as a Microsoft MVP for his contributions to the tech community. He has presented at various user groups and conferences, including the Global Azure Bootcamp at Microsoft Canada. Jack is an experienced mentor and judge at hackathons and is also the president of a user group that focuses on Azure, DevOps, and software development. He is the co-author of Cloud Analytics with Microsoft Azure, published by Packt Publishing. You can follow Jack on Twitter at @jlee_consulting.

  • Rithin Skaria

    Rithin Skaria is an open source evangelist with over 7 years of experience of managing open source workloads in Azure, AWS, and OpenStack. He is currently working for Microsoft and is a part of several open source community activities conducted within Microsoft. He is a Microsoft Certified Trainer, Linux Foundation Certified Engineer and Administrator, Kubernetes Application Developer and Administrator, and also a Certified OpenStack Administrator. When it comes to Azure, he has four certifications (solution architecture, Azure administration, DevOps, and security), and he is also certified in Office 365 administration. He has played a vital role in several open source deployments, and the administration and migration of these workloads to the cloud. He also co-authored Linux Administration on Azure, published by Packt Publishing. Connect with him on LinkedIn at @rithin-skaria.
