This book will help you plan, build, run, and develop your own Azure-based datacenter running Azure Stack technology. The goal is that the technology in your datacenter will be 100 percent consistent with Azure, which brings flexibility and elasticity to your IT infrastructure.
We will learn about:
- Cloud basics
- The Microsoft Azure Stack
- Core management services
- Using Azure Stack
- Migrating services to Azure Stack
Regarding the technical requirements of today's IT, the cloud is always a part of the general IT strategy. This holds regardless of the region in which a company operates or the sector of the economy it belongs to; 99.9 percent of all companies already have cloud technology somewhere in their environment.
The key question for a lot of CIOs is: "To what extent do we allow cloud services, and what does that mean for our infrastructure?" So, it's a matter of compliance, allowance, and willingness.
The top 10 most important questions for a CIO to prepare for the cloud are as follows:
- Are we allowed to save our data in the cloud?
- What classification of data can be saved in the cloud?
- How flexible are we regarding the cloud?
- Do we have the knowledge to work with cloud technology?
- How does our current IT setup and infrastructure fit into the cloud's requirements?
- Is our current infrastructure already prepared for the cloud?
- Are we already working with a cloud-ready infrastructure?
- Is our internet bandwidth good enough?
- What does the cloud mean to my employees?
- Which technology should we choose?
The definition of the term cloud is not simple, but we need to differentiate between the following:
- Private cloud: This is a highly dynamic IT infrastructure based on a virtualization technology that is flexible and scalable. The resources are kept in a privately owned datacenter, either in your company or at a service provider of your choice.
- Public cloud: This is a shared offering of IT infrastructure services that are provided via the internet.
- Hybrid cloud: This is a mixture of a private and a public cloud. Depending on compliance and other security regulations, services that are allowed to run in a public datacenter are deployed there, while services that must remain inside the company run in the private datacenter. The goal is to run both on the same technology, providing the agility, flexibility, and scalability to move services between public and private datacenters.
In general, there are some big players within the cloud market (for example, Amazon Web Services, Google, Azure, and even Alibaba). If a company is quite Microsoft-minded from the infrastructure point of view, they should have a look at the Microsoft Azure datacenters. Microsoft started in 2008 with their first datacenter, and today, they invest a billion dollars every month in Azure.
As of today, there are about 34 official datacenters around the world that form Microsoft Azure, besides the ones that Microsoft does not talk about (for example, US Government Azure). There are some dedicated datacenters, such as the German Azure cloud, that do not have connectivity to Azure worldwide. Due to compliance requirements, these boundaries need to exist, but the technology of each Azure datacenter is the same, although the services offered may vary.
The following map gives an overview of the locations (so-called regions) in Azure as of today and provides an idea of which ones will be coming soon:
When Microsoft started their public cloud, they decided that there must be a private cloud stack too, especially to prepare customers' infrastructures to run in Azure sometime in the future.
The first private cloud solution was the System Center suite, with System Center Orchestrator, Service Provider Foundation (SPF), and Service Manager as the self-service portal solution. Later on, Microsoft launched the Windows Azure Pack for Windows Server. Today, Windows Azure Pack is a product focused on the private cloud: it provides a self-service portal (the well-known old Azure portal, code-named Red Dog frontend) and uses the System Center suite as its underlying technology:
In May 2015, Microsoft formally announced a new solution that brings Azure to your datacenter. This solution was named Microsoft Azure Stack. To put it in one sentence: Azure Stack is the same technology, with the same APIs and portal, as public Azure, but you can run it in your own datacenter or in that of your service provider. With Azure Stack, System Center is completely gone, because everything now works the way it does in Azure, and in Azure there is no System Center at all. This is the primary focus of this book.
The following diagram gives a current overview of the technical design of Azure Stack compared with Azure:
The one and only difference between Microsoft Azure Stack and Microsoft Azure is the cloud infrastructure. In Azure, thousands of servers are part of the solution; with Azure Stack, the number is slightly smaller. That's why there is the cloud-inspired infrastructure based on Windows Server, Hyper-V, and Azure technologies as the underlying technology stack. There is no System Center product in this stack anymore. This does not mean that it cannot be used (for example, SCOM for on-premises monitoring), but Azure Stack itself provides all the required functionality.
For stability and functionality, Microsoft decided to provide Azure Stack as a so-called integrated system, so it will arrive at your door with the hardware stack included. The customer buys Azure Stack as a complete technology stack. At the general availability (GA) stage, the hardware OEMs are HPE, Dell EMC, and Lenovo. In addition to this, there will be a one-host development toolkit available for download that can be run as a proof-of-concept solution on any type of hardware, as long as it meets the hardware requirements.
Looking at the technical design a bit more in depth, there are some components that we need to dive deeper into:
The general basis of Azure Stack is Windows Server 2016 technology, which builds the cloud-inspired infrastructure:
- Storage Spaces Direct (S2D)
- Nano Server
- Azure Resource Manager (ARM)
Storage Spaces and Scale-Out File Server were technologies introduced with Windows Server 2012. The initial versions went through a bad phase, with a lack of stability and issues with the underlying hardware. The general concept was a shared storage setup using JBODs controlled by Windows Server 2012 Storage Spaces servers, with a magic Scale-Out File Server cluster acting as the single point of contact for storage:
With Windows Server 2016, the design is quite different and the concept relies on a shared-nothing model, even with local attached storage:
This is the storage design Azure Stack has come up with as one of its main pillars.
With Windows Server 2012, Microsoft introduced Software-Defined Networking (SDN) and the NVGRE technology. Hyper-V Network Virtualization supports Network Virtualization using Generic Routing Encapsulation (NVGRE) as the mechanism to virtualize IP addresses. With NVGRE, the virtual machine's packet is encapsulated inside another packet:
VxLAN comes as the new SDNv2 protocol; it is RFC compliant and is supported by most network hardware vendors by default. The Virtual eXtensible Local Area Network (VxLAN) RFC 7348 protocol has been widely adopted in the marketplace, with support from vendors such as Cisco, Brocade, Arista, Dell, and HP. The VxLAN protocol uses UDP as the transport:
Nano Server offers a minimal-footprint, headless version of Windows Server 2016. It completely excludes the graphical user interface, which makes it quite small and easy to handle regarding updates and security fixes, but it does not provide the GUI many Windows Server customers expect.
The magical Azure Resource Manager is a 1:1 bit-for-bit copy of ARM from Azure, so it has the same update frequency and the same features that are available in Azure.
ARM is a consistent management layer that saves resources, dependencies, inputs, and outputs as an idempotent deployment in a JSON file called an ARM template. This template defines the shape of a deployment, be it VMs, databases, websites, or anything else. The goal is that once a template is designed, it can run on any Azure-based cloud platform, including Azure Stack. ARM provides cloud consistency at the finest granularity; the only differences between the clouds are the region the template is being deployed to and the corresponding REST endpoints.
ARM not only provides a template for a logical combination of resources within Azure, it also manages subscriptions and role-based access control (RBAC) and defines the gallery, metric, and usage data, too. This means quite simply, that everything that needs to be done with Azure resources should be done with ARM.
Azure Resource Manager does not just describe a single virtual machine; it can set up anything from one resource to a whole set of resources that together form a specific service. ARM templates can even be nested, which means they can depend on each other.
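For illustration, deploying a template is the same cmdlet call on any Azure-based cloud; only the environment you are logged into and the target region differ. A minimal sketch using the AzureRM cmdlets (the resource group, location, and file names are placeholders):

```powershell
# Deploy an ARM template into a resource group; switching between Azure
# and Azure Stack changes only the environment/region, not the cmdlets.
New-AzureRmResourceGroup -Name 'demoRG' -Location 'local'
New-AzureRmResourceGroupDeployment -ResourceGroupName 'demoRG' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json'
```

The same template and parameter files can be reused unchanged against public Azure by logging into the corresponding environment first.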
When working with ARM, you should know the following vocabulary:
- Resource: A resource is a manageable item available in Azure.
- Resource group: A resource group is the container of resources that fit together within a service.
- Resource provider: A resource provider is a service that can be consumed within Azure.
- Resource manager template: A resource manager template is the definition of a specific service.
- Declarative syntax: Declarative syntax means that the template does not define the steps to set up a resource; it only defines the desired result, and the resource itself is responsible for setting up and configuring itself to match that definition.
To create your own ARM templates, you need to fulfill the following minimum requirements:
- A text editor of your choice
- Visual Studio Community edition
- Azure SDK
Visual Studio Community edition is available for free on the internet. After setting these things up, you can start it and define your own templates:
Setting up a simple blank template looks like this:
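A blank ARM template is simply a JSON skeleton with the standard top-level sections:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```

Resources are then added to the `resources` array, while `parameters` and `variables` keep the template reusable across environments.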
There are different ways to get a template that you can modify to fit your needs:
- Visual Studio templates
- Quick-start templates on GitHub
- Azure ARM templates
You can export the ARM template directly from the Azure portal if the resource has already been deployed. After clicking on View template, the following opens up:
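If you prefer scripting over portal clicks, the AzureRM module offers a cmdlet for the same task; a hedged sketch (the resource group name and output path are placeholders):

```powershell
# Export the template of an already deployed resource group to a JSON file
# ('demoRG' is a placeholder name).
Export-AzureRmResourceGroup -ResourceGroupName 'demoRG' -Path '.\demoRG-template.json'
```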
For further reading on ARM basics, the Getting started with Azure Resource Manager document is a good place to begin: http://aka.ms/GettingStartedWithARM.
In the previous section, we talked about ARM and ARM templates, which define resources but cannot describe how a VM looks inside, which software needs to be installed, or how that installation should be done. This is why we need to look at VM extensions. VM extensions define what should happen after the ARM deployment has finished. In general, an extension can be any kind of script. The best practice is to use PowerShell and its extension called Desired State Configuration (DSC).
DSC defines, quite similarly to ARM, how software needs to be installed and configured. The great thing about this concept is that it also monitors whether the desired state of a virtual machine changes (for example, because an administrator uninstalls or reconfigures something). If it does, DSC makes sure within minutes that the original state is fulfilled again, rolling the machine back to the desired state:
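A minimal DSC configuration sketch, using the built-in WindowsFeature resource (the configuration name and output path are illustrative), looks like this:

```powershell
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'   # if an admin removes IIS, DSC reinstalls it
        }
    }
}

# Compile the configuration to a MOF file and apply it to the node
WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'
Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose
```

The Local Configuration Manager on the node periodically compares the actual state against this definition and corrects any drift.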
When Azure Stack is deployed, the following VMs are brought up on the Hyper-V hosts:
As of GA, Azure Stack consists of 13 VMs that each have their function in making Azure Stack work. All of them are Core server instances configured with static resources (up to 8 GB of RAM and 4 vCPUs each). In multi-node environments, most of these VMs are redundant and load balanced using the Software Load Balancer (SLB).
A resource provider adds features and functionality to the Azure Stack using a predefined structure, API set, design, and VM.
The ACS01 VM hosts the Azure Stack storage provider service and is responsible for one of the most important resource providers. As the underlying storage technology is Storage Spaces Direct, this VM manages it.
If a tenant creates a new resource, they add a storage account to a resource group. The storage account then manages the different storage service types on the physical hosts, such as BLOB, page, table, SOFS, ReFS cluster shared volumes, virtual disks, and Storage Spaces Direct. In addition, the storage account is the place to set up security: it is possible to grant both temporary (token-based) and long-term (key-based) access to storage.
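A hedged sketch of this flow from the tenant's side, using the AzureRM cmdlets (resource group name, account name, and location are placeholders; the `-Type` parameter reflects the AzureRM module versions of that era):

```powershell
# Create a resource group, then add a storage account to it; the storage
# resource provider maps the account onto the underlying S2D storage.
New-AzureRmResourceGroup -Name 'tenantRG' -Location 'local'
New-AzureRmStorageAccount -ResourceGroupName 'tenantRG' -Name 'tenantstore01' `
    -Location 'local' -Type 'Standard_LRS'
```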
When it comes to the roles for storage management, Azure Stack provides the following three levels:
- Storage Tenant Administrator (consumer of storage services).
- Storage Developer (developer of cloud-storage-based apps).
- Storage Service Provider (provides storage services for tenants on a shared infrastructure and can be divided into two separate roles):
- Storage Fabric Administrator (responsible for fabric storage lifecycle)
- Storage Service Administrator (responsible for cloud storage lifecycle)
The Azure consistent storage can be managed with:
- REST APIs
- PowerShell commandlets
- A modern UI
- Other tools (scripts and third-party tools)
Storage always needs to be a part of the tenant offer. It's one of the necessary pillars to providing resources within the Azure Stack.
The ADFS01 VM provides the technical basis for Active Directory Federation Services (ADFS or AD FS), which provides an authentication and authorization model for Azure Stack. Specifically, if a deployment does not rely on Azure AD, there needs to be a way to authenticate and authorize users from other Active Directory domains.
This VM is most important in disconnected scenarios, because it is the ADFS target for internal Active Directory domains that are connected as identity providers.
The SQL01 VM provides the SQL services for Azure Stack. A lot of services need to store data (for example, offers, tenant plans, and ARM templates), and this is where it is stored. Compared to other products in the past (such as Windows Azure Pack (WAP)), there is no high load on this service, because it stores only internal data for infrastructure roles.
The BGPNAT01 VM provides NAT and VPN access based on the BGP routing protocol, which is the default for Azure, too. As a tenant is able to deploy a VPN device in its Azure Stack-based cloud and connect it to another on-premises or off-premises networking environment, all traffic goes through this VM. This VM does not exist in multi-node deployments, where it is replaced by the top-of-rack (TOR) switch, which takes over this role and requires the following features:
- Border Gateway Protocol (BGP): This is the internet protocol for connecting autonomous systems and allows communication.
- Data Center Bridging (DCB): This is a technology for the Ethernet LAN communication protocol, especially for datacenter environments, for example, for clustering and SAN. It consists of the following subsets:
- Enhanced Transmission Selection (ETS): This provides a framework for assigning bandwidth priorities to frames.
- Priority Flow Control (PFC): This provides a link-level flow-control technology for each frame priority.
- Switch Independent Teaming (SIT): This is a teaming mode introduced with Windows Server 2012. The teaming configuration works with any Ethernet switch, even non-intelligent switches, because the operating system is responsible for the teaming logic.
CA01 runs the certificate authority services for deploying and controlling certificates for authentication within Azure Stack. As all communication is secured using certificates, this service is mandatory and needs to work properly. Each certificate is rotated every 30 days, completely within the Azure Stack management environment.
As the complete Azure Stack environment runs in a dedicated Active Directory domain, this VM is the source for all Azure Stack internal authentications and authorizations. As there is no other domain controller available, it is responsible for the Flexible Single Master Operation (FSMO) roles and global cataloging, too. It provides the Microsoft Graph resource provider, which is a REST endpoint to Active Directory. Finally, it is the VM running the DHCP and DNS services for the Azure Stack environment.
In case of an issue with Azure Stack itself (the so-called break-the-cloud scenario), it may be necessary to receive support from Microsoft. For this purpose, the MAS-ERCS01 VM provides the possibility to connect to an Azure Stack deployment externally using Just Enough Administration (JEA) and Just in Time Administration (JIT).
The MAS-Gwy01 VM is responsible for site-to-site VPN connections of tenant networks, providing connectivity between them. It is one of the most important VMs for tenant connectivity.
NC01 is responsible for the network controller services. The network controller is based on the SDN capabilities of Windows Server 2016. It is the central control plane for all networking, provides network fault tolerance, and is the magic key to bringing your own address space for IP addressing (both VxLAN and NVGRE are supported, but VxLAN is the preferred one).
Azure Stack uses virtual IP addressing for the following services:
- Azure Resource Manager
- The portal UI (whether admin or tenant)
- ADFS and Graph API
- Key vault
- Site-to-site endpoints
The network controller (or network resource provider) makes sure that all communication goes its predefined way, taking security, priority, high availability, and flexibility into account.
In addition, it is responsible for all VMs that are part of the networking stack of Azure Stack:
SLB01 is the VM responsible for all load balancing. With the former product, Azure Pack, there was no real load balancer available, as Windows load balancing always had issues with network devices on the tenant side. The only solution back then was adding a third-party load balancer.
With Microsoft Azure, a software load balancer was always present, and SLB01 is the same one running in public Azure; it has just been brought to Azure Stack. It is responsible for tenant load balancing, but also provides high availability for the Azure Stack infrastructure services. As expected, providing the SLB to Azure Stack cloud instances means deploying the corresponding ARM template. The underlying technology is hash-based load balancing. By default, a 5-tuple hash is used, containing the following:
- Source IP
- Source port
- Destination IP
- Destination port
- Protocol type
Stickiness is only provided within one transport session, and packets of a TCP or UDP session will always be forwarded to the same instance behind the load balancer. The following chart shows an overview of the hash-based traffic distribution:
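To illustrate the principle (this is a sketch of hash-based distribution in general, not the actual SLB implementation, and all names are hypothetical), a hash over the 5-tuple deterministically picks one backend instance, which is why packets of the same session always land on the same instance:

```powershell
function Select-BackendInstance {
    param(
        [string]$SourceIP, [int]$SourcePort,
        [string]$DestinationIP, [int]$DestinationPort,
        [string]$Protocol,
        [string[]]$Instances
    )
    # Build the 5-tuple, hash it, and map the hash onto an instance index
    $tuple = "$SourceIP|$SourcePort|$DestinationIP|$DestinationPort|$Protocol"
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $hash  = $md5.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($tuple))
    $index = [System.BitConverter]::ToUInt32($hash, 0) % $Instances.Count
    return $Instances[$index]   # same 5-tuple always yields the same instance
}

Select-BackendInstance -SourceIP '10.0.0.5' -SourcePort 50000 `
    -DestinationIP '10.0.0.100' -DestinationPort 443 -Protocol 'TCP' `
    -Instances @('web01', 'web02', 'web03')
```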
WASP01 is responsible for the Azure Stack tenant portal and runs the Azure Resource Manager services for it.
The VM named WAS01 runs the Azure Stack administrative portal, which you already know from Azure (codenamed Ibiza). In addition, your Azure Resource Manager instance runs on this VM.
ARM is the instance responsible for the design of the services provided in your Azure Stack instance. ARM makes sure that resources are deployed the way you designed your templates and that they keep running that way throughout their lifecycle.
The XRP01 VM is responsible for the core resource providers: compute, storage, and network. It holds the registration of these providers and knows how they interact with each other; therefore, it can be called the heart of Azure Stack.
As you have seen, these VMs provide the management environment of Azure Stack, and each is available in only one instance. But we all know that scalability means deploying more instances of each service, and as there is already a built-in software load balancer, the product itself is designed for scale. Another way to scale is to implement a second Azure Stack integrated system in your environment, providing a setup familiar to Azure users. So there are indeed two ways to scale, and the good question is which scale unit we need: if we need more performance, scaling out with more VMs providing the same services is a good choice. The other option is to scale with a second region, which provides geo-redundancy.
As the Azure Stack integrated system is a set of VMs, we need to talk about what happens when the entire environment is restarted. By default, each VM goes into saved state when the environment is shut down. In general, this should not be a problem, because when the environment restarts, the VMs recover from the saved state, too. However, if any VMs are delayed in starting, the environment may run into issues, and since a complete restart is not always a good idea, the following boot order is best practice. Between each step, there should be a delay of 60 seconds. The AD domain controller itself should boot with the host machine:
The shutdown sequence is the other way round.
If you want to make it easy for yourself, set up a PowerShell script for shutting down and restarting the VMs. Thanks to Daniel Neumann (TSP, Microsoft Germany), there is a good script available on his blog at http://www.danielstechblog.info/shutdown-and-startup-order-for-the-microsoft-azure-stack-tp2-vms/.
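The core of such a script can be sketched with the standard Hyper-V cmdlets; note that the VM names and their order below are illustrative placeholders, not the authoritative boot order:

```powershell
# Start the infrastructure VMs in a fixed order, waiting 60 seconds
# between each; shutting down uses the same list in reverse.
$bootOrder = @('MAS-NC01', 'MAS-SLB01', 'MAS-SQL01', 'MAS-WAS01', 'MAS-Xrp01')

foreach ($name in $bootOrder) {
    Start-VM -Name $name -ErrorAction Stop
    Start-Sleep -Seconds 60
}

# Reverse the list for the shutdown sequence
[array]::Reverse($bootOrder)
foreach ($name in $bootOrder) {
    Stop-VM -Name $name -Force
}
```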
The main concept of Azure Stack's extensibility is that each extension is a web service called a resource provider. This concept makes the product itself quite easy to maintain and extend:
Regarding the logical design of Azure Stack shown in the preceding diagram, there are the following resource providers in the product today. The three main ones are as follows:
- Storage Resource Provider (SRP)
- Compute Resource Provider (CRP)
- Network Resource Provider (NRP)
We need to differentiate between them and the following additional resource providers:
- Fabric Resource Provider (FRP)
- Health Resource Provider (HRP)
- Update Resource Provider (URP)
Finally, we have the third-party resource providers. To make sure that each third-party resource provider acts as intended, there is a REST API, and Microsoft certifies these providers for Azure Stack.
By default, Azure Stack provides some core management services that everybody already knows from Azure. They are as follows:
- The authorization management service
- Subscriptions, Plans, and Offers
Azure Stack authorization leverages the Azure authorization management service. For general availability, there are three different authentication designs, so there is a good chance that a design is available that works for most companies.
The Azure authorization management service works based on Azure Active Directory (Azure AD), which is a multi-tenant, cloud-based identity-management service.
This means that each Azure Stack environment needs proper internet connectivity; otherwise, no authentication is possible. This makes life quite easy, but service providers and hosters (and even some medium and larger companies) often do not allow communication from their internal infrastructure-management environment to the internet (public Azure) for authentication. This security requirement makes creating a Proof of Concept (POC) less easy than before.
Starting with TP3, there is support for Active Directory Federation Services. This service provides single sign-on (SSO) and secure remote access for web applications hosted on-premises. In addition, it ensures that authentication is possible even if the connection to Azure AD is unavailable for a certain amount of time.
Another concept you may already know from Azure and even from Azure Pack is the concept of subscriptions, plans, and offers. This makes it quite easy for administrators to provide access to cloud services:
A plan is a product described by predefined services from Azure Stack, for example, an Infrastructure as a Service plan or a Website plan. Best practice is to include the quality of the service in the plan, too. This means we could define bronze, silver, gold, or platinum service levels, for example, with different storage IOPS ranging from slow to high-end storage sitting on SSD drives.
An offer is a set of plans, or a part of one, with a price attached. So it can best be described as a product itself.
A subscription ties it all together: a dedicated user is given access to the cloud service with a username and password, linked to an offer with its predefined plans. A subscription is set up by logging into the portal and creating a new subscription, which then has to be linked to an offer by an administrator with the appropriate permissions.
The event service is an essential service for Azure Stack and provides information about a deployment, whether it is running properly or whether there are issues. So, in general, it is a kind of event log for an Azure Stack resource. Like all Azure Stack services, it has its own API, so you may collect the data in ways other than the portal (using PowerShell or other programming languages).
Once a resource is up and running in Azure Stack, the monitoring service kicks in and provides general vital information. This is not a reason to disable the existing monitoring solutions for your environments: those solutions provide an overall status of the resource itself and all services being provided by it (for example, a VM providing email or database services), so it is more than worth it to keep them running, too. The monitoring features of Azure Stack itself will be described later in this book.
Finally, everybody needs to make money and be profitable with a cloud solution. This is why we need a billing model. The basis for any billing model is usage data, which provides information about which resource is running, for how long, and how heavily it is used. This data is saved in SQL and forms the basis for your billing. A good way to report on usage data is Power BI, Microsoft's business analytics service, which gives you a rich overview of the data. The billing possibilities will be described later in this book, too:
In addition to rich reporting, this data can be exported to CSV and reused in the customer's billing tool, charging its customers in a more or less semi-automated way.
If you need more features and functionality, there are third-party resource providers available that offer a more comprehensive but easy-to-use usage reporting feature, which may fit customer needs better. However, this also means investing money and resources, because in general, you need a dedicated server to provide the business logic of such a tool.
Azure Stack provides the same features for connectivity as Azure itself, so we have the following:
- Azure Stack Portal
- PowerShell commandlets
- Azure CLI
Depending on what you need, you should use one or more of them to work with Azure Stack.
The most common way to use Azure Stack is the portal. This is the UI, and it provides more than 95 percent of all features, including RBAC. Depending on whether you are an administrator or a generic user, different features are available, but you always use the same portal.
The portal looks like this after a new installation:
As you can see, the portal looks almost exactly like Azure's and provides the same usability. It is quite easy and intuitive from the end user's perspective. In general, you do not need to train your users; they can just start with the same experience they hopefully already have from Azure.
The second way to communicate with Azure Stack is PowerShell. With the wide range of PowerShell commandlets (cmdlets), everything is possible. From an administration point of view, PowerShell is usually the better choice, because scripts are reusable and repeatable, and each script is documentation in itself.
The steps to enable PowerShell are as follows:
- Check for installed PowerShell modules:
Get-Module -ListAvailable
- Install the NuGet package provider (the minimum version shown here is the one documented for Azure Stack at the time of writing):
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
- Verify the installation status:
Get-Command -Module AzureRM.AzureStackAdmin
- Now you can start using the AzureRM PowerShell commands. Connecting to Azure Stack using PowerShell should look like this:
$AADUserName = '[email protected]'
$AADPassword = 'YourAADPassword' | ConvertTo-SecureString -Force -AsPlainText
$AADCredential = New-Object PSCredential($AADUserName, $AADPassword)
$AADTenantID = "YourAADDomain"
Add-AzureRmEnvironment -Name "Azure Stack" `
    -ActiveDirectoryEndpoint ("https://login.windows.net/$AADTenantID/") `
    -ActiveDirectoryServiceEndpointResourceId "https://azurestack.local-api/" `
    -ResourceManagerEndpoint ("https://api.azurestack.local/") `
    -GalleryEndpoint ("https://gallery.azurestack.local:30016/") `
    -GraphEndpoint "https://graph.windows.net/"
$env = Get-AzureRmEnvironment 'Azure Stack'
Add-AzureRmAccount -Environment $env -Credential $AADCredential -Verbose
Get-AzureRmSubscription -SubscriptionName "youroffer" | Select-AzureRmSubscription
Get-AzureRmResource
Simple, isn't it?
The third option to connect to and work with Azure Stack is the API, which again is the same for Azure Stack as for Azure.
Instructions to install the Azure Software Development Kit (SDK) can be found at https://azure.microsoft.com/en-us/downloads/.
The next step is the big choice: which SDK platform should be used. The following ones are available:
From a development perspective, nearly everything is possible. In general, the most popular tool is Microsoft Visual Studio, and the most popular development language is .NET.
Coding against Azure Stack is always a development project. In general, you do not use it for daily tasks; you use it for integrating Azure Stack into existing web shops or other solutions. In the end, it is always a make-or-buy decision.
Creating your own custom portal always means a huge investment and an ongoing process of supporting each update installed on Azure Stack itself. Each service you would like to offer to your customers with the custom solution needs to be developed. This means that the developers need to understand the API, the way Azure Stack works, and how to code against this solution. From real-world project experience, I know a custom portal is possible using the APIs, but the question should rather be whether it is worth it, taking into account the amount of money that needs to be spent in development hours and manpower.
Finally, the Azure Stack command-line interface is a toolset for managing Azure Stack. It is available for Windows, Mac, and Linux.
As Azure Stack is a solution in a box, the first question when talking about tools for Azure Stack is: "Where do I have to install them to be supported?" The answer is MAS-CON01, because it is the management VM. A wide variety of tools are provided that help with the administration of Azure Stack.
- Visual Studio: Visual Studio, including the Azure SDK, is a must have for creating and modifying ARM templates. You can download it from https://www.visualstudio.com.
- AzCopy: AzCopy is a command-line utility for copying data to and from Azure BLOB, file, and table storage with optimal performance. You can copy data from one object to another within or between storage accounts. As Azure Stack behaves in the same way, you can just use the same EXE for running it against itself.
- Azure storage emulator: The Microsoft Azure storage emulator provides a local environment that emulates the Azure storage services for development purposes. This tool is suitable for testing an application against storage services locally without connecting to Azure or Azure Stack.
- Azure storage explorer: If you need a solution to connect to and browse Azure storage, Azure storage explorer is available for various OSes.
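Of the tools above, AzCopy is the easiest to sketch. The account names, container names, and endpoints below are placeholders, not real values; the actual copy command is shown as a comment because it needs your storage account keys:

```shell
# Placeholder blob endpoints on an Azure Stack deployment (assumptions, not real accounts)
SRC="https://srcaccount.blob.local.azurestack.external/mycontainer"
DST="https://dstaccount.blob.local.azurestack.external/mycontainer"

# The actual copy would be run with the storage account keys supplied, for example:
#   AzCopy /Source:$SRC /Dest:$DST /SourceKey:<key1> /DestKey:<key2> /S
echo "AzCopy /Source:$SRC /Dest:$DST /S"
```

Because Azure Stack exposes the same storage API surface as Azure, the same invocation style works for Azure-to-Azure-Stack copies as well.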
If you are running virtual machines today, you are already using a cloud-based technology, although we did not call it cloud back then. Basically, this is the idea of a private cloud. If you are running Azure Pack today, you are quite close to Azure Stack from a process point of view, but not from a technology one. There is a solution called connectors for Azure Pack that gives you one portal UI for both cloud solutions. This means that the customer can manage everything from the Azure Stack portal, although the services run in Azure Pack as a legacy solution.
Basically, there is no dedicated migration path to Azure Stack. However, solving this is quite easy, because every tool that migrates services to Azure can be used to migrate services to Azure Stack, too.
The Azure Website Migration Assistant provides a high-level readiness assessment for existing websites. The resulting report outlines the sites that are ready to move, the elements that may need changes, and any unsupported features. If everything is prepared properly, the tool creates the websites and associated databases automatically and synchronizes the content.
You can learn more about it at https://azure.microsoft.com/en-us/downloads/migration-assistant/.
For virtual machines, there are two tools available:
- Virtual Machines Readiness Assessment
- Virtual Machines Optimization Assessment
The Virtual Machines Readiness Assessment tool will automatically inspect your environment and provide you with a checklist and detailed report on steps for migrating the environment to the cloud.
The download location is https://azure.microsoft.com/en-us/downloads/vm-readiness-assessment/.
If you run the tool, you will get a detailed output report.
The Virtual Machines Optimization Assessment tool starts with a questionnaire about your deployment, then performs automated data collection and analysis of your Azure VMs. It generates a custom report with ten prioritized recommendations across six focus areas, including security and compliance, performance and scalability, and availability and business continuity.
The download location is https://azure.microsoft.com/en-us/downloads/vm-optimization-assessment/.
Azure Stack provides a real Azure experience in your datacenter. The UI, the administrative tools, and even third-party solutions should work just as they do in Azure. By design, Azure Stack is a very small instance of Azure with some technical modifications, especially regarding the compute, storage, and network resource providers. These modifications give you a means to start small, think big, and deploy large when migrating services to public Azure at some point in the future, if needed.
The most important tool for planning, describing, defining, and deploying Azure Stack services is Azure Resource Manager (ARM), just like in Azure. It lets you define your services once and deploy them many times. From a business perspective, this means a better TCO and lower administrative costs.
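To make the "define once, deploy many times" idea concrete, the sketch below writes out a minimal, empty ARM template skeleton and checks that it is well-formed JSON. The resource content is deliberately omitted; in practice, you would fill in the `resources` array per service:

```shell
# Write a minimal ARM template skeleton (structure only; no resources defined yet)
cat > template.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
EOF

# Validate that the skeleton is well-formed JSON before deploying it
python3 -m json.tool template.json > /dev/null && echo "template.json is valid JSON"
```

The same template file can then be deployed repeatedly, for example with `New-AzureRmResourceGroupDeployment` in PowerShell or `az group deployment create` in the CLI of that era, which is exactly where the TCO advantage comes from.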
Azure Stack itself will be available as an integrated system (for production use). The minimum number of hosts is four and the maximum is 20 for version 1. For the development toolkit or setting up lab environments, there is a single host deployment available based on very basic hardware requirements. For general availability, the hardware OEMs are Dell EMC, HPE, and Lenovo; Cisco and other OEMs will follow soon.
Setting up Azure Stack is a straightforward process driven by PowerShell. The deployment is divided into two main phases: data collection and deployment. Both phases can easily be resumed if they run into issues, so an error does not force you to restart the deployment from scratch.
In the following chapter, you will learn how to plan the deployment of Azure Stack and what you should think about before starting.