About this book

Cloud technologies have now reached a level where even the most critical business systems can run on them. For most organizations SAP is the key business system. If SAP is unavailable for any reason then potentially your business stops. Because of this, it is understandable that you will be concerned whether such a critical system can run in the public cloud. However, the days when you truly ran your IT system on-premises have long since gone. Most organizations have been getting rid of their own data centres and are increasingly moving to co-location facilities. In this context the public cloud is nothing more than an additional virtual data centre connected to your existing network.

There are typically two main reasons why you may consider migrating SAP to Azure: you need to replace the infrastructure that is currently running SAP, or you want to migrate SAP to a new database. Depending on your goal, different migration paths are available. You can decide either to migrate the current workload to Azure as-is, or to combine the move with a change of database and execute both activities as a single step. SAP on Azure Implementation Guide covers the main migration options to lead you through migrating your SAP data to Azure simply and successfully.

Publication date:
February 2020


Why Azure for business-critical systems?

Many Information Technology (IT) executives, be they Chief Information Officer (CIO), Chief Technology Officer (CTO), or Chief Digital Officer (CDO), are under pressure from their business to consider cloud computing. In many sectors organizations are seeing increased competition from new entrants to their market, whose IT systems show a level of agility and flexibility with which they simply cannot compete. At the same time, they are concerned about whether they can really move their business-critical systems into the cloud; can the cloud offer the security, scalability, and availability that their business requires?

The answer to that is simple: yes it can. Like all new technologies, the cloud has taken time to mature, but it has now reached a level where even the most critical business systems can run in the cloud. As examples, the first Software as a Service (SaaS) offerings were made available to customers in late 1999, which is 20 years ago, with the first Platform as a Service (PaaS) offerings arriving as long ago as 2002. The first Infrastructure as a Service (IaaS) offerings including compute and storage were released in 2002, and Microsoft itself announced the Windows Azure platform in October 2008, which became commercially available in February 2010, and now offers over 600 services across SaaS, PaaS, and IaaS.

For most organizations SAP is the key business system. If SAP is unavailable for any reason then potentially your business stops, or at least your ability to respond to your customers' needs is impacted. Because of this, it is understandable that you will be concerned whether such a critical system can run in the public cloud. However, the days when you truly ran your IT system on-premises have long since gone. Most organizations have been getting rid of their own data centres and are increasingly moving to co-location facilities.

Additionally, in many cases the management and operation of this IT has been outsourced to third parties. In this context the public cloud is nothing more than one or more additional virtual data centers connected to your existing wide area network.

So why should you move to the cloud? In March 2019 Forrester published their report The Total Economic Impact™ of Microsoft Azure for SAP. This report identified the following quantified benefits:

  • Avoided cost of on-premises hardware of $7.2M
  • Faster time-to-market for SAP releases worth $3.3M
  • Avoided cost of overprovisioned hardware of $3.1M
  • Avoided cost of reallocation of staff required to manage SAP infrastructure worth $1.2M
  • Avoided cost of physical data center space valued at $1.1M
Graph showing Azure costs versus benefits. Payback is reached after 9 months.

With overall benefits of $15.9M over three years versus costs of $7.9M, moving to Azure provides a net present value (NPV) of $8.0M and a return on investment (ROI) of 102%.
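As a sanity check on those headline figures, the ROI arithmetic can be reproduced directly (the reported 102% presumably reflects rounding or risk adjustment in Forrester's underlying figures, so the simple ratio lands slightly lower):

```python
# Reproduce the simple ROI arithmetic from the Forrester figures.
# ROI = (total benefits - total costs) / total costs.
benefits_musd = 15.9   # total quantified benefits over three years, $M
costs_musd = 7.9       # total costs over three years, $M

net_benefit_musd = benefits_musd - costs_musd
roi_pct = 100 * net_benefit_musd / costs_musd

print(f"Net benefit: ${net_benefit_musd:.1f}M")   # $8.0M
print(f"ROI: {roi_pct:.0f}%")                     # 101%, close to the reported 102%
```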

For some, cost savings are one of the key drivers for moving SAP to Azure, while for others, aspects such as faster time to market are as important, if not more so. For many organizations, rather than driving innovation and change IT has in fact struggled to keep pace. By embracing Azure your IT department can become far more responsive and not only keep pace with the demand for change but become leaders in driving forward new innovations with data-driven insights. We will explore this in more detail later in this book.

Whatever your reasons for considering moving SAP to Azure, our purpose is to address your concerns and demonstrate that running SAP on Azure is not only possible but actually highly desirable. You should be reassured that you are not alone. At the time of writing in September 2019, over 800 organizations are at various stages of their journey to SAP on Azure. Some are fully productive, some with parts of their estate productive, and others are in various stages of migration. Of these, more than 80 are deploying SAP S/4HANA either as a conversion from ECC or as a net new implementation. Importantly, one of the largest customers for SAP on Azure is in fact Microsoft itself, who completed their migration to Azure in February 2018 and are now embarking on their journey to S/4HANA.

While this book focuses on SAP-supplied software and applications, it is important to remember that many SAP estates also include a number of third-party applications that are an integral part of the overall SAP application estate. These will also need to be moved. In general, the same approach can be taken to moving third-party applications as for SAP applications themselves. You do need to check that these applications are supported in Azure, and whether there are any specific version, operating system, or database management system requirements to run in Azure. Most software vendors are less prescriptive than SAP, so moving to Azure is less likely to be a problem.

One final point to make is that this book only provides a high-level overview of each topic. Further information is publicly available on Microsoft websites such as docs.microsoft.com, with more detailed descriptions, best practices, and how-to guides. As Microsoft Azure is a constantly evolving platform, it is important to regularly check the documentation to find out about the latest features and how they can be used to run SAP on Azure.


Customer stories

Before delving into the details as to why Azure is the best cloud platform for all your workloads, let's start by looking at some of the existing customers that have already moved some or all of their SAP estate to Azure.

Carlsberg Group has been brewing since 1847 and is now the world's fourth largest beer manufacturer with 150 brands in more than 100 countries, and with net revenues in 2018 of DKK 62 billion (USD 9.4 billion). Some of Carlsberg's growth has come from acquisition, and this created IT challenges as each acquired company had its own IT systems. As part of a wide-scale corporate strategy (called SAIL'22), Carlsberg is embracing the cloud and digital technologies to drive product innovation and better experiences for customers.

Carlsberg's existing SAP estate was mostly running on IBM Power/AIX servers with an older version of the IBM DB2 database, except for SAP Business Warehouse, which was running on SAP HANA on SUSE Linux. As part of the migration to Azure, Carlsberg migrated the AIX/DB2 SAP systems to Windows Server 2016 and Microsoft SQL Server 2016, and BW on HANA to Azure Large Instances. The solution is designed for both high availability (HA) and disaster recovery (DR) with failover to a secondary region. This whole migration was completed in six months.

In addition, Carlsberg has implemented the Carlsberg Analytics Platform (CAP) in Azure, which utilizes Azure Data Factory (ADF), Azure Data Lake Store Gen2 (ADLS Gen2), Azure Databricks, Azure SQL Data Warehouse, and Microsoft Power BI. CAP provides a unified platform for data analytics with the aim of allowing business analysts to gain new and improved insights from the structured and unstructured data that is now available.

As Sarah Haywood, Chief Technology Officer and Vice President of Technology of Carlsberg Group says: "We're seeing a huge benefit in terms of scalability. We build out resources in the cloud as we need them, in a way that would have been impossible in a physical datacenter. We also trust in the security of Azure, which is important for any business site."

For Daimler AG the challenge was to implement a new procurement system (NPS) to help the company transform its procurement services by providing greater transparency in contracts and unifying procurement processes across different business units and geographies. NPS was required to replace a legacy procurement system developed by Daimler in the 1990s that had become difficult to refresh, with the IT team only able to release new features a couple of times a year.

The new solution is based on SAP Supplier Relationship Management (SRM) on HANA with SAP S/4HANA and the Icertis Contract Management (ICM) platform. Daimler had already used Azure to deliver connected car, truck, and van projects to outfit its vehicles with Internet of Things (IoT) intelligence and remote monitoring capabilities, while ICM is natively architected on Microsoft Azure. As part of this solution SAP HANA runs on multiple large Azure M-series virtual machines.

Using Azure, Daimler was able to transform a key operational system months faster than it would have by using traditional on-premises methods. "To launch a project of this magnitude previously would have required up to 12 months just to acquire the necessary hardware," says Dr. Stephan Stathel, Operations Lead for New Procurement System and Team Lead for the Build2Run Team at Daimler AG. "In Azure, we had the complete hardware set up in 12 weeks, which allowed development to start much sooner. We went live with NPS in just three months, which is unheard of by historic Daimler standards."

Coke One North America (CONA) is a platform that provides each of the 12 largest Coca-Cola Company bottling partners in North America with the tools they need to collaborate as one company. CONA Services LLC manages the solution specifically to support bottlers in North America. For CONA Services, one of the biggest challenges was scale: a successful migration would make theirs the largest SAP HANA instance running on Azure at that time. The aim was to migrate an SAP Business Warehouse (BW) on HANA system to SAP HANA on Azure Large Instances in a scale-out 7+1 node configuration: 1 master node, 6 worker nodes, and 1 standby node. The SAP HANA database has a size of more than 12 TB on disk, with 28 TB of total memory across the active nodes.

Working with their partner and Microsoft, CONA Services was able to complete the migration in just seven months, from initial planning to full production. The entire CONA platform now runs on Azure, making it easily accessible and scalable for bottlers and distributors. "We get cost value with Azure right now, and we're starting to clearly see increased performance," says Brett Findley, Chief Services Officer at CONA Services. "Plus, we now have a base we can build on to provide greater capabilities in analytics and machine learning—so the results will only get better." The new CONA Azure platform handles roughly 160,000 orders a day, representing $21 billion in annual net sales value. The company's bottlers use it to help them improve operations, speak the same technical language, and thrive in the digital age of bottling.

Hopefully these customer stories will reassure you that when you choose to move SAP to Azure you will not be the first. These are just some examples from the more than 800 customers (and that figure is growing) that have either moved to Azure or are in the process of moving.


Why is Azure the best cloud platform for all SAP workloads?

For most customers considering moving SAP to Azure there will be a number of key considerations:

  • Security: Will my data be secure in Azure?
  • Scalability: Will Azure have the performance and scalability to run my critical SAP workloads?
  • Availability: Will Azure deliver the service levels that my business requires?
  • Disaster Recovery: Will Azure deliver the business continuity my organization requires?
  • Cloud Adoption: What do I need to do to move to Azure?
  • Automation: How can I utilize cloud capabilities such as automation to be more agile?
  • Insights and innovation: How can I leverage cloud-native services to enhance SAP?
  • Microsoft and SAP Partnership: How closely are Microsoft and SAP aligned?
  • Responsibilities: How do my responsibilities change when moving to Azure?

In this section we will look at each of these topics in turn. If you are already running other workloads in Azure then it is likely that you will already have addressed some of these topics, in which case please skip over those and jump to the topics that remain a concern.

Azure compliance and security

Compliance and security are normally the two main concerns for organizations when considering moving to the cloud; will I be able to comply with legal and industry-specific requirements, and will my data be safe? Because of this, these were also two of the main concerns for Microsoft when developing Azure.

When it comes to compliance, Microsoft has the greatest number of compliance certifications of any public cloud, providing customers with the assurance they require to run their IT systems in Azure. Microsoft Azure has more than 50 compliance certifications specific to global regions and countries, including the US, the European Union, Germany, Japan, the United Kingdom, India, and China. In addition, Azure has more than 35 compliance offerings specific to the needs of key industries, including health, government, finance, education, manufacturing, and media. Full details are available on the Microsoft Trust Center.

When it comes to security there are still a lot of common misconceptions. The cloud does not mean that all your applications and data need to be exposed to the public internet; on the contrary, most customers running applications in Azure will access them through private networks, either via Virtual Private Networks (VPNs) or, for business-critical applications such as SAP, by connecting their Wide Area Network (WAN), such as a Multiprotocol Label Switching (MPLS) network, to Azure using ExpressRoute. Essentially you can connect to systems and data in Azure in the same way that you connect to them today, whether running on-premises, in a colocation (Colo) data center, in a managed hosting environment, or in a private cloud.

Of course, you can provide public internet access to your systems in Azure when required, in a similar way to how you would when not in the cloud. You will typically create a Demilitarized Zone (DMZ) in Azure, protected by firewalls, and only expose those external-facing services that need to be accessed from untrusted networks such as the internet. Many customers, when starting to migrate workloads to Azure, will in fact use their existing DMZ to provide external access, and simply route traffic via their WAN to the workloads running in Azure.

Many people are also concerned about the security of their data in Azure: where will their data be located, and who has access to it? When it comes to data residency, with Azure IaaS it is you who decides in which Azure regions to deploy your workloads, and to where, if anywhere, that data will be replicated.

One of the currently unique features of Azure is that, with a couple of minor exceptions, Microsoft builds its Azure regions in pairs. Each pair of regions is within the same geopolitical area, so if your business requires true Disaster Recovery you can replicate your data between two Azure regions without needing to go outside your chosen geopolitical area.

In North America there are currently eight general purpose regions along with additional regions for the US Government and US Department of Defense (DoD), Canada has East and Central, Europe West and North, the UK South and West, Asia East and Southeast, and so on. For some organizations this is essential if they are to be able to use the public cloud.
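As an illustration of how region pairing constrains DR placement, the mapping can be modelled as a simple lookup (an illustrative subset of pairs only; always confirm current pairings in the Azure documentation):

```python
# Illustrative subset of Azure's paired regions (not a complete list);
# each pair stays within one geopolitical area, so cross-region DR
# replication never has to leave that area.
REGION_PAIRS = {
    "northeurope": "westeurope",
    "uksouth": "ukwest",
    "canadacentral": "canadaeast",
    "eastasia": "southeastasia",
}

def dr_target(region):
    """Return the paired region to use as a disaster recovery target."""
    # Pairings are symmetric, so check both directions of the mapping.
    inverse = {v: k for k, v in REGION_PAIRS.items()}
    return REGION_PAIRS.get(region) or inverse.get(region)

print(dr_target("uksouth"))     # ukwest
print(dr_target("westeurope"))  # northeurope
```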

Once you have chosen the Azure regions in which you wish your data to reside, the next question is how secure that data is. All data stored in Azure Storage is encrypted at rest by Azure Storage Service Encryption (SSE). Storage accounts are encrypted regardless of their performance tier (standard or premium) or deployment model (Azure Resource Manager or classic). All Azure Storage redundancy options support encryption, and all copies of a storage account are encrypted. All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. All object metadata is also encrypted.

The default for Azure Storage Service Encryption is to use Microsoft-managed keys, and for many customers this may meet your requirements. However, for certain data and in certain industries it may be required that you use customer-managed keys, and this is supported in Azure. These customer-managed keys can be stored in the Azure Key Vault, and you can either create your own keys and simply store them in Key Vault, or use the Azure Key Vault API to generate the keys.

On top of Azure Storage encryption, you can choose to use Azure Disk Encryption to further protect and safeguard your data to meet your organizational security and compliance commitments. It uses the BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and data disks of Azure virtual machines (VMs). It is also integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets, and it ensures that all data on the VM disks is encrypted at rest while in Azure Storage. This combination of Azure Storage Service Encryption by default with the optional Azure Disk Encryption should meet the security and compliance needs of most organizations.

In addition to storage and disk encryption you may wish to consider database-level encryption. All the SAP-supported Database Management Systems (DBMS) support some form of encryption: IBM DB2, Microsoft SQL Server, Oracle Database, SAP ASE, and SAP HANA. The exact details vary by DBMS, but these are the same capabilities that are available in on-premises environments. One point to note is that it is generally not recommended to combine Azure Disk Encryption with DBMS encryption, as this may impact performance.

Then there is the matter of encryption in transit. In order to encrypt traffic between the SAP application server and the database server, depending on the DBMS and version you can use either Secure Sockets Layer (SSL) or Transport Layer Security (TLS). In general TLS is the preferred solution, as it is newer and more secure; however, where you have older DBMS versions in use then they may only support SSL. This may be a reason to consider a DBMS upgrade as part of the migration, which will normally provide other benefits such as enhanced HA and DR capabilities as well.
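To illustrate the preference for TLS, most client libraries now allow you to require a modern protocol version outright; this Python sketch (generic, not an SAP-specific API) builds a client context that refuses anything older than TLS 1.2:

```python
import ssl

# Build a client-side context with certificate verification enabled
# (the secure default) and legacy protocols disabled.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3, TLS 1.0, and TLS 1.1

print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: server certificates are checked
print(ctx.check_hostname)                     # True: hostnames are validated
```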

If you are using SAP GUI then you will also want to encrypt traffic between SAP GUI and the SAP application server. For this you can use SAP Secure Network Communications (SNC). If you are using Fiori, then you should use HTTPS to secure the network communication. Many of you will already be using this today, so this should not be anything new.

Finally it is worth mentioning how Azure Active Directory (AAD) can be used to provide Identity and Access Management (IAM) for SAP applications. Whether a user is using the traditional SAP GUI or more modern web-based user interfaces such as SAP Fiori, AAD can provide Single Sign-On (SSO) for these applications. In fact AAD provides SSO capabilities for most SAP applications including Ariba, Concur, Fieldglass, SAP Analytics Cloud, SAP Cloud for Customer (C4C), SAP Cloud Platform, SAP HANA, and SuccessFactors. Many organizations are already using AAD to control access to solutions such as Office 365, and if it is already in place, extending its use to provide SSO across the SAP portfolio is very easy. This means not only can you use SSO, but you can also leverage other AAD capabilities such as Conditional Access (CA) and Multi-Factor Authentication (MFA).

Azure scalability

For those with large SAP estates, and particularly some very large instances, there may be concern about whether Azure can scale to meet your needs. This is a fair question, as in the early days of Azure the IaaS service was primarily aimed at supporting smaller workloads or new cloud-native applications that are typically horizontally scalable. These applications require many small VMs rather than a few large VMs. Even in the world of SAP, the application tier can scale horizontally, but in general terms the database tier needs to scale vertically; while horizontally scalable database technologies such as Oracle Real Application Clusters (RAC) and SAP HANA Scale-Out are supported with SAP, they do not suit all SAP applications.

SAP has been officially supported on Azure since May 2014, when SAP Note 1928533 was released with the title "SAP Applications on Azure: Supported Products and Azure VM types." To be fair, in the first version of that note support was fairly limited: Microsoft SQL Server 2008 R2 or higher running on Microsoft Windows Server 2008 R2 or higher, with a single supported Azure VM type of A5 (2 CPU, 14 GiB RAM, 1,500 SAPS). However, over the following months and years support was rapidly added for a complete range of DBMSes and operating systems, along with a wide range of Azure VM types. Now all the major DBMSes are supported on Azure for SAP – on Microsoft Windows Server, Oracle Linux (Oracle Database only), Red Hat Enterprise Linux, and SUSE Linux Enterprise Server. As for supported VMs, these now scale all the way from the original A5 to the latest M208ms_v2 (208 CPU, 5.7 TiB RAM, 259,950 SAPS), with the M416ms_v2 (416 CPU, 11.7 TiB RAM) announced and due for release before the end of 2019. Are you still concerned about scalability?
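Using the A5 and M208ms_v2 figures quoted above, the growth in supported VM size is easy to quantify:

```python
# Growth in the largest SAP-supported Azure VM, using the figures above.
a5 = {"vcpu": 2, "ram_gib": 14, "saps": 1_500}            # original A5
m208 = {"vcpu": 208, "ram_gib": 5_700, "saps": 259_950}   # M208ms_v2 (5.7 TiB)

for key in a5:
    print(f"{key}: {m208[key] / a5[key]:.0f}x")
# vcpu: 104x
# ram_gib: 407x
# saps: 173x
```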

The next significant announcement came in September 2016, when Microsoft first announced the availability of Azure SAP HANA Large Instances (HLI). These are physical servers (sometimes referred to as bare-metal servers) dedicated to running the SAP HANA database. As with VMs, HLI have continued to grow over the years, and Microsoft now has SAP HANA Certified IaaS Platforms up to 20 TiB scale-up and 60 TiB scale-out. Under the SAP HANA Tailored Datacenter Integration (TDI) Phase 5 rules, scale-up to 24 TiB and scale-out to 120 TiB is supported. Essentially Azure can support the same scale-up and scale-out as customers can achieve on-premises, as HLI uses the same class of servers that a customer could buy, but hosted and managed by Microsoft within an Azure data center.

Some customers do question how a physical server can really be considered a cloud solution. The reality is that physical servers will generally be one step ahead of any virtual servers, because the physical server comes first: for any hypervisor developer, new servers with more CPU and memory are required before a new, more scalable hypervisor can be tested. In addition, with only a very small number of customers requiring such large and scalable systems, the economics of globally deploying hundreds or thousands of these servers in advance and hoping that the customers come really do not stack up. Probably fewer than 1% of SAP customers globally require such large systems.

Ultimately these HLI systems address the edge case of the largest global SAP customers, meeting their needs for massive scalability, and enabling them to migrate the whole of their SAP estate to Azure. Without HLI they could not do this. For most customers one or two HLI will be used for their largest scale-up workload, normally ECC or S/4HANA, and their largest scale-out workload, normally BW or BW/4HANA.

In conclusion, whether you are running traditional SAP NetWeaver applications such as ECC, CRM, SRM, and so on, AnyDB (SAP's collective term for IBM DB2 UDB, Microsoft SQL Server, Oracle Database, SAP ASE, SAP HANA, SAP liveCache, and SAP MaxDB), or the latest S/4HANA and BW/4HANA, Azure has the scalability to meet your needs.

System availability

The third key concern of most people when considering the migration of SAP to Azure is availability.

Azure has been designed with availability in mind and there are multiple ways to provide HA for VMs in Azure.

At its simplest every VM in Azure is protected by Azure Service Healing, which will initiate auto-recovery of the VM should the host server have an issue. All VMs on the failing host will automatically be relocated to a different healthy host. The SAP application will be unavailable while the VM restarts, but typically this will complete within about 15 minutes. For some people this may be adequate, and many may already be familiar with and using this sort of recovery, as most hypervisors provide a similar capability in the on-premises environment.

The second Azure solution for HA is availability sets, which ensure that where two or more VMs are placed in the same availability set, those VMs will be isolated from each other.

Availability sets leverage two other Azure features as shown in Figure 1-1:

The figure shows how virtual machines are organised into three fault domains and into multiple update domains within each fault domain.

Figure 1-1: Azure fault and update domains

  • Fault domains (FD) define a group of physical hosts that share a common power source and network switch. Potentially all the hosts within a fault domain could go offline if there is a failure of the power supply or network switch.
  • Update domains (UD) define a set of hosts within a fault domain that may be updated at the same time, which could in some cases require the VMs to reboot.

When you create VMs in an availability set they will be placed in separate fault domains and update domains, to ensure that you will not lose all the VMs if there is a fault or an enforced host reboot. Because there are only a finite number of fault and update domains, once they have all been used the next VM created in the availability set will have to be placed in the same fault and update domain as an existing VM.

When using availability sets the recommended approach is to create a separate availability set for each tier of each SAP application: the database tier, the (A)SCS tier, the application tier, and the web dispatcher tier. If the total number of VMs in one availability set exceeds the number of fault domains, then some VMs will have to share the same FD. Separating the tiers ensures that the two database VMs in a cluster, or the (A)SCS and ERS in a cluster, will be placed in separate update domains in separate fault domains. Mix all the tiers together and there is the risk that these resilient pairs of VMs will be co-located in the same fault and/or update domain.
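The placement behaviour described above can be sketched as a simple round-robin model (hypothetical domain counts for illustration; Azure defaults to 5 update domains, and fault domain counts vary by region):

```python
# Hypothetical model of availability-set placement: VMs are assigned to
# fault domains (FD) and update domains (UD) round-robin, so once every
# domain is in use, placements start to repeat.
FAULT_DOMAINS = 3    # assumption for illustration; actual counts vary by region
UPDATE_DOMAINS = 5   # Azure default; up to 20 can be configured

def place(vm_count):
    """Return (fd, ud) assignments for vm_count VMs in one availability set."""
    return [(i % FAULT_DOMAINS, i % UPDATE_DOMAINS) for i in range(vm_count)]

# Two clustered database VMs in their own availability set: always separated.
print(place(2))                # [(0, 0), (1, 1)] - different FD and UD

# Seven mixed-tier VMs sharing one set: placements begin to collide.
layout = place(7)
print(layout[0], layout[6])    # (0, 0) (0, 1) - VM 0 and VM 6 share a fault domain
```

This is why the per-tier recommendation matters: keep each resilient pair in its own small availability set and the round-robin can never wrap around onto the same fault domain.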

The final Azure solution for HA is availability zones, which are currently available in some but not all Azure regions. Where they are available each region has three zones, with each zone comprising one or more separate data centres, each with its own power, cooling, and networking. This is shown in Figure 1-2:

The figure shows how some Azure Regions have three separate Availability Zones.

Figure 1-2: Azure availability zones

Availability zones provide a higher level of isolation than availability sets, as availability sets may be deployed within a single data centre, whereas availability zones span multiple data centres. With availability sets if something impacts the whole data centre then potentially all the VMs will be lost, whereas with availability zones multiple data centres would need to be affected before all the VMs are lost.

Based on these different solutions Microsoft Azure provides a Service Level Agreement (SLA) for Virtual Machines. The following financially backed guarantees are provided:

  • For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%;
  • For all Virtual Machines that have two or more instances deployed in the same availability set, we guarantee you will have Virtual Machine Connectivity to at least one instance at least 99.95% of the time;
  • For all Virtual Machines that have two or more instances deployed across two or more availability zones in the same Azure region, we guarantee you will have Virtual Machine Connectivity to at least one instance at least 99.99% of the time.

It is important to note that this is an infrastructure SLA and is not the same as an application SLA. For example, if a virtual machine is restarted then as soon as the VM has restarted and a user can connect to the VM at operating system level, the downtime is considered to be finished. The application itself may take some more minutes to start, and if it is a VM running a large HANA database it could take many tens of minutes to fully reload memory.
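To put these percentages in perspective, each SLA tier translates into a maximum permitted downtime that is straightforward to calculate (a simple 30-day month view, not how Azure formally measures connectivity minutes):

```python
# Convert SLA percentages into maximum permitted downtime per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

for sla in (99.9, 99.95, 99.99):
    downtime_min = MINUTES_PER_MONTH * (1 - sla / 100)
    print(f"{sla}% -> up to {downtime_min:.1f} minutes/month")

# 99.9%  -> up to 43.2 minutes/month
# 99.95% -> up to 21.6 minutes/month
# 99.99% -> up to 4.3 minutes/month
```

Remember that these are infrastructure figures: as the paragraph above notes, an SAP application, and particularly a large HANA database, may need considerably longer than this to be fully usable again after a restart.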

So how do we use these availability options with SAP? For some customers Azure Service Healing will be sufficient to meet their HA needs, as it will reboot VMs in the event of a host failure. In fact Azure Service Healing offers more than this: it uses machine learning to predict potential hardware failures and will try to live migrate VMs before a failure occurs, avoiding any downtime. If your current on-premises landscape is already virtualized you may be relying on simple VM auto-restart to provide HA today, in which case Azure Service Healing offers better functionality than you currently have. However, it is important to know that Azure will very occasionally need to reboot VMs to apply certain updates or patches. Customers will be given warning of this and the opportunity to perform their own planned reboot, but if this is not completed before the published deadline then your VMs will undergo a forced reboot.

If your SAP system is considered business-critical then it is very likely that you currently use some form of clustering to protect the database layer and the SAP (A)SCS (ABAP SAP Central Services/SAP Central Services for Java), and you may want to continue with this in Azure. There are some differences between clustering in an on-premises environment and clustering in Azure, but importantly clustering is fully supported.

The first difference is that Azure does not currently support the concept of shared disk storage for VMs, neither block storage nor file storage. This means that DBMS clustering solutions that rely on shared storage cannot be supported in Azure. However, SAP with its SAP HANA database has popularized the approach of shared-nothing clustering, where synchronous DBMS replication is used to keep two database instances fully synchronized, allowing failover between a primary and its standby to happen in seconds.

In fact, this type of replication was not new as Microsoft introduced database mirroring with SQL Server 2008, and has continued to enhance the capabilities into what is now called SQL Server Always On Availability Groups. Similar solutions are also available with IBM DB2 LUW, Oracle Database, and SAP ASE; so all the mainstream-supported SAP DBMSes can provide a similar capability.

The second difference is that traditional operating system clusters generally rely on virtual IP addresses that can be migrated with a service when they are failed over from one machine to another. This type of floating virtual IP address is not supported in Azure. However, the functionality can be replaced by using Azure Load Balancers in front of each clustered service so that any process communicating with the clustered service always uses the Azure Load Balancer IP address, and in turn the Azure Load Balancer will route traffic to the currently active service. As with shared storage, there is a solution in Azure that meets the requirements to create a clustered solution.
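The routing behavior described above can be sketched in a few lines of Python. This is purely illustrative: the node names and health dictionary are invented for the example, and in Azure the probing and routing are performed natively by the Azure Load Balancer, not by your own code.

```python
# Illustrative sketch of health-probe-based routing, as an Azure Load
# Balancer does for a clustered service. Node names and probe results
# are invented for this example.

def route_request(backends, probe):
    """Return the first backend whose health probe succeeds."""
    for node in backends:
        if probe(node):
            return node
    raise RuntimeError("no healthy backend available")

# Two cluster nodes; only the currently active one answers its probe.
health = {"node-a": False, "node-b": True}   # node-a has failed
backends = ["node-a", "node-b"]

active = route_request(backends, health.get)
print(active)  # node-b receives all traffic after failover
```

The key point the sketch captures is that clients only ever address the load balancer's IP; which node actually serves the request is decided by the probe results at that moment.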

While Azure provides the infrastructure and components to build a clustered solution, the actual responsibility for detecting a fault and initiating failover of a service, be it the database or SAP central services, falls to the operating system. All the supported operating systems can provide this capability: Microsoft Windows Server with Windows Server Failover Clustering; Red Hat Enterprise Linux and SUSE Linux Enterprise Server with Pacemaker; and, for Oracle Linux, a third-party product called SIOS LifeKeeper. While there are some differences in capability, all these clustering solutions have been enhanced to operate in Azure.

In conclusion, it is possible to build the same type of highly available clustered solution for SAP in Azure as has historically been built on-premises. Some customers will deploy a range of solutions based on the criticality of individual SAP applications, rather than simply applying the same solution to every application.

Business continuity/disaster recovery

While high availability addresses the failure of infrastructure within an Azure region, most customers for whom SAP is business-critical will also want a solution for Disaster Recovery (DR) should an entire Azure region fail. While you may consider the loss of an entire Azure region highly unlikely, it is nonetheless possible, and it is important to plan for what to do in such an event.

To support DR, with one or two exceptions Microsoft builds Azure regions in pairs. These pairs of regions are in the same geopolitical area, allowing you to ensure that your data is kept within a particular jurisdiction. Azure is currently the only hyperscale cloud vendor to do this. Certain Azure services are inherently linked to these paired regions, such as geo-redundant storage (GRS). In addition, Microsoft will only ever perform maintenance on one region within a pair, so that if the maintenance causes problems and customers need to fail over workloads to the other region in the pair, that region will not itself be undergoing maintenance.

At a minimum for DR you should plan to replicate backups to another Azure region. This ensures that you have a copy of all data in an off-site location. It may take days to rebuild the infrastructure and restore data from backup, but at least the data is secure. Most customers pursuing this approach will build a minimal landing zone in the second region, with basic networking and security in place, ready to start the rebuild process. This is not an ideal solution, but where cost is critical and your business can survive without SAP for a number of days, it may be adequate.

Where SAP is critical to your business, it is likely that you will want a full-blown DR solution, with minimal data loss and the ability to recover service within a few hours. It is quite likely that you have agreements with your business that specify a Recovery Point Objective (RPO), which defines the acceptable level of data loss, and a Recovery Time Objective (RTO), which defines the time allowed to recover SAP. As with high availability, in Azure you can use DR techniques similar to those routinely used on-premises. In this case you will build the complete foundation services in both regions, so that everything needed to run at least the complete production environment is available in both.

The Azure regions in a pair are situated sufficiently far apart to ensure that no single event should impact both. For example, you may consider 100 miles (160 kilometres) a safe distance between two data centres for DR purposes, but if both sit on the same river flood plain, earthquake fault line, or hurricane path, then that is not adequate. Microsoft takes all this into account when deciding where to position regions and which ones to pair. To ensure resilience, the regions are typically hundreds of miles apart, which means that only asynchronous replication is supported.

The most important first step is to protect the database, which you can do using DBMS asynchronous replication. The DBMS will replicate the data continuously, but by running asynchronously the latency and throughput of the network will not impact the performance of the primary database. This asynchronous replication can be supported alongside the synchronous replication used for HA, so that HA and DR can coexist.

For the application tier an Azure-native service called Azure Site Recovery (ASR) can be used to replicate the app server VMs. This takes regular snapshots of the VMs in the primary region and replicates them to the second region. ASR can provide either crash-consistent or application-consistent snapshots. By default, crash-consistent snapshots are taken every 5 minutes while application-consistent snapshots are taken every 60 minutes, although these defaults can be changed. For the SAP application server VMs, crash-consistent snapshots are adequate as there is no user data that needs to be protected.
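A quick back-of-envelope calculation shows what those snapshot intervals mean for the app tier's worst-case data loss. The 5-minute interval is the ASR default quoted above; the replication delay figure below is an assumed number for illustration, not an Azure specification.

```python
# Back-of-envelope worst-case RPO for ASR-protected app server VMs.
# snapshot_interval_min is the ASR default crash-consistent interval;
# the replication delay is an assumed figure for illustration only.

snapshot_interval_min = 5
assumed_replication_delay_min = 2   # hypothetical transfer lag

worst_case_rpo_min = snapshot_interval_min + assumed_replication_delay_min
print(f"Worst-case app-tier RPO: ~{worst_case_rpo_min} minutes")
```

Since SAP application servers hold no persistent user data, this figure matters far less than the database RPO, which is governed by the DBMS asynchronous replication described earlier.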

ASR uses Recovery Plans to sequence the order in which VMs are recovered. You can also add your own scripts to ASR, including a pre-action script, for example to fail over the DBMS and promote the DR copy to active, and a post-action script, for example to attach a load balancer to clustered VMs once they have been failed over.

The final element of the SAP application that needs to be protected is the SAP (A)SCS. If the (A)SCS is part of an HA cluster with the Enqueue Replication Server (ERS) in the primary site, then they will be using a file share (SMB on Windows, NFS on Linux) to share files between the two VMs. Because of this, replication with ASR is not currently supported. The recommended solution is to create an (A)SCS VM in the secondary region and replicate any changes to the shared file systems using a scripted copy on Windows or rsync on Linux.
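The scripted-copy approach amounts to a periodic one-way sync of the shared directories. The sketch below is a minimal Python stand-in for that logic, assuming a mounted source and destination path; a real deployment would typically use rsync (or a robocopy script on Windows) against the actual (A)SCS share, such as /sapmnt/&lt;SID&gt;, on a schedule.

```python
# Minimal one-way file sync, a stand-in for the scripted copy / rsync
# approach described above. Paths are hypothetical examples; a real
# (A)SCS share would be an SMB or NFS mount such as /sapmnt/<SID>.
import shutil
from pathlib import Path

def sync_dir(src, dst):
    """Copy files from src to dst that are missing or out of date.

    Returns the list of relative paths that were copied.
    """
    src, dst = Path(src), Path(dst)
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            # Copy if the target is absent or older than the source.
            if not target.exists() or target.stat().st_mtime < f.stat().st_mtime:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)   # copy2 preserves timestamps
                copied.append(str(f.relative_to(src)))
    return copied
```

Because copy2 preserves modification times, re-running the sync copies nothing unless a source file has changed, which keeps the periodic job cheap between failovers.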

Microsoft Cloud Adoption Framework for Azure

If you are already using Azure for other non-SAP workloads then it is likely you have a cloud foundation in place, in which case you can skip this section; but if you are uncertain, or this is your first deployment in Azure, then please read on. The Microsoft Cloud Adoption Framework (CAF) for Azure12 is based on Microsoft's experience of working with a wide range of customers over the last several years as they have deployed workloads into Azure. To quote from CAF: "The framework gives enterprise customers tools, guidance, and narratives that help shape technology, business, and people strategies for driving desired business outcomes during their adoption effort." The CAF builds on earlier Microsoft documentation such as the Azure Enterprise Scaffold13 and the Cloud Operating Model14.

The easiest way to understand the CAF is to think of it as providing the foundation for running your business in Azure. It covers the complete adoption life cycle from defining your strategy, through planning and readiness, to migration and innovation, and ultimately to management and operation. Like the foundations of any building, it may not be the most exciting aspect of the project, but it is probably the most essential. Put the right foundations in place and it becomes relatively easy to move new workloads to Azure and to ensure that your governance and security policies will be met. Fail to do this and the potential for problems to occur later in the project is high.

The only time you probably don't need to consider CAF is where you are working with a partner who will deliver SAP as a Service (SAPaaS) under the Microsoft Cloud Service Provider (CSP) model. In this case you will simply consume SAP applications under some form of pay as you go model, and the CSP will be responsible for the cloud foundation.


If you are to gain the full benefits of Azure, then it is essential that you embrace automation. How much automation will depend on your individual circumstances, and finding the right balance is key. At first glance it is easy to look at the Azure Portal and think to yourself, "this makes configuring resources so easy, why would I bother to invest in automation? To provision a new VM takes only a few minutes, only requires you to answer a few questions, and the hard work is handled by Azure." However, this overlooks the fact that people are generally poor at achieving repeatability.

In general SAP landscapes involve a relatively small number of relatively large VMs, which tends to feed the assumption that automation is not important. However, anyone with experience of SAP will be familiar with the challenges created by configuration drift between your development, QA/Test, Pre-Production, and Production environments. This can make it very difficult to diagnose and debug problems, where you think all the environments are the same when in reality they are not.

From experience working with a number of customers, there are three main approaches to automation:

  1. Empower the user: Make it simple for the person requesting the resource to provision the resources that they require. This needs to ensure that all the security and governance rules are automatically applied, along with an approval process to limit who can provision what and when. In the world of SAP this will generally provision and configure the infrastructure but leave it to the SAP specialist to install and configure the required SAP software.

    For an SAP BASIS administrator, knowing they can get the resources they need in a matter of minutes, or even a few hours, is a major step forward and supports the move towards a more DevOps culture for SAP development.

  2. Empower the infrastructure team: Make it even easier for the person requesting the resource, while empowering the infrastructure team to deliver a fully working system. In this scenario the requestor outsources the whole responsibility to the Azure team, who not only use automation to build the infrastructure but also to install and configure the SAP application as far as possible, with potentially a few manual steps. For an SAP developer this takes delivery to the next level as they simply request the application that they want and wait to be told when it is available.
  3. None of the above: Use minimal or no automation and the infrastructure team simply continues to build every VM manually via the Azure portal, Azure CLI, Azure PowerShell, or Azure Cloud Shell.

Unfortunately, option 3 is all too common and is what leads to mistakes and configuration drift. There are several reasons for this. First, there is often time pressure to get the infrastructure built as quickly as possible; by the time you have made the decision to move to Azure, most projects are already running behind schedule. Second, you may have no history of using automation in your on-premises world, and the basic Azure tools for manually building infrastructure are probably light years ahead of what you have been used to, so it is tempting just to wade in and start building. Finally, building out the required automation takes time and requires you to learn new skills, providing rewards later in the project life cycle but potentially delaying the start. One answer is to build the initial VMs manually while developing the automation skills in parallel, so that by the time you come to provision pre-production and production, the automation is in place.

The level of automation you should implement will depend on many factors. If SAP is to be your only workload in Azure, and you run a very static environment, then creating some basic ARM templates to improve consistency may meet all your needs. If, however, you are embracing Azure as part of a business transformation program, with a cloud-first strategy and a desire to drive business growth through IT innovation, then it makes sense to invest and build a more sophisticated solution.

The ultimate goal of automation is to deliver Infrastructure as Code, to allow infrastructure to be delivered quickly and consistently while complying with all the required governance. A number of enterprise customers have totally embraced automation. All requests for services in Azure are initiated through an IT Service Management request, which handles the approvals process. If approved the required infrastructure will be automatically provisioned and configured and the software installed.

The infrastructure is guaranteed to conform to all policies and will be correctly tagged to ensure full financial control. Finally, a configuration management database (CMDB) is updated to track all the assets and software licences to ensure full auditability. For non-production systems the infrastructure may even have a time limit, so that once expired the infrastructure will automatically be deleted again; the requestor will be notified before this and can request an extension if needed.
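The time-limited non-production infrastructure described above reduces to a simple expiry check over resource tags. The sketch below illustrates the idea; the tag name ("expires-on") and the resource records are invented for the example, and a real implementation would query Azure Resource Manager for the actual resources and tags.

```python
# Sketch of an expiry check for time-limited non-production resources.
# The "expires-on" tag name and the resource records are invented for
# illustration; a real job would query Azure Resource Manager.
from datetime import date

def expired(resources, today):
    """Return names of resources whose 'expires-on' tag is in the past."""
    out = []
    for r in resources:
        tag = r.get("tags", {}).get("expires-on")
        if tag and date.fromisoformat(tag) < today:
            out.append(r["name"])
    return out

sandbox = [
    {"name": "sap-dev-vm1", "tags": {"expires-on": "2019-09-30"}},
    {"name": "sap-prd-db1", "tags": {}},   # production: no expiry tag
]
print(expired(sandbox, date(2019, 10, 15)))  # ['sap-dev-vm1']
```

In practice the job would notify the requestor before deletion and honour any approved extension, as the text describes, rather than deleting immediately.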

One of the more recent innovations is Azure Blueprints15. Blueprints provide a declarative way to define and deploy a repeatable set of resources that adhere to the requirements, standards, and patterns of your organization. Importantly, Blueprints are stored within Azure, are globally distributed, and can contain Azure resource templates as artifacts. Azure Blueprints was still in preview at the time of writing (October 2019) but should be generally available in the coming months.

If all this sounds quite daunting then there are a number of partners who have developed specific skills around the automation of Azure in general, and some with a particular focus on the automation of SAP in Azure. They can either provide you with automation as a service, whereby you simply consume their tooling to automate deployments into Azure, or in some cases will help you to develop the tooling that you require, and then hand it over to you to maintain and enhance as required.

Insights and innovation

For many organizations the motivation for moving SAP to Azure is part of a wider strategy of IT transformation, aimed at gaining greater insights and using these to drive innovation. While not an essential prerequisite, it makes sense to put SAP, the core system of record, into the same cloud that will be used for insights and innovation.

While most SAP customers have been using SAP BW to support both operational reporting and analytics, with a few exceptions BW has primarily been used for data sourced from other SAP applications. With the rapid growth of Advanced Analytics (AA) and Machine Learning (ML) there is a much greater need to take data from multiple sources, both structured and unstructured, to be used to support AA and ML, and ultimately Artificial Intelligence (AI). Because of the different types of data and the sheer volume, most organizations are looking to do this in hyperscale cloud.

Microsoft Azure has a whole range of native services available to support advanced analytics on big data16, as well as being able to utilize a wide range of third-party solutions running in Azure. A common solution is to use Azure Data Factory (ADF) to extract data from SAP and store it in Azure Data Lake Storage (ADLS), where the data can be prepared and blended ready for analysis using tools such as Databricks. One of the benefits you enjoy is the ability to consume these Azure services as required and only pay for what you use. Many of you are probably still at the experimental stage, and the ability to rapidly stand up a solution in Azure, test it, and then either productionize it or tear it down and start again is a major benefit.

When it comes to SAP data there is often tension between the data scientists and the SAP team. The data scientists will generally start from the position of "give me all the data and I will decide what to do with it later," and for some data sources this may be fine. If you are extracting sentiment data from social media, then the data is already public and retrieving and storing it for future analytics may make sense. However, the data inside an SAP system is often more sensitive and is not generally in the public domain, or even widely available within your organization. SAP has a very sophisticated security model that ensures that users can only see the data that they need to see in order to fulfill their role in the organization.

While attitudes will change and evolve over time, the most successful projects currently begin with the end in mind17: they start with a business outcome and work back from that to determine the data required to deliver it. In this way, when it comes to extracting data from SAP there is a clear objective, and you need extract only the data required to deliver that outcome. In many cases the data required may not be considered highly sensitive, or where necessary sensitive data can be masked in such a way that the objective can still be achieved.

Using these data-driven insights you can start to drive innovation within your business, which is at the top of the agenda for many at the CxO level. With new startup disruptors appearing in many industries the ability to adapt and innovate has never been more pressing. Most of these start-ups are born in the cloud and have been leveraging advanced analytics as an integral part of their business model from the start. Traditional organizations must adopt similar strategies, but have the opportunity to mine the vast amounts of historical data that they hold to get even greater insights and to compete successfully.


Microsoft and SAP have a partnership that goes back as far as 1993 and the release of SAP R/3. The early relationship was primarily a technology one, with the two companies working together to ensure that SAP technologies would support and integrate with Microsoft technologies. It started with the SAP GUI for Windows and then moved forward with support for SAP R/3 running on Microsoft Windows NT Server and Microsoft SQL Server.

The partnership was strengthened when more than 20 years ago Microsoft chose to deploy SAP as the core ERP system to run its business. Microsoft was now both a partner and a customer. Over the years Microsoft's SAP system has grown to be one of the largest SAP ERP systems globally. As of May 2019, the size of this ERP/ECC system is as follows18:

  • 16 TB of compressed database (equivalent to 50 TB uncompressed)
  • 110,000 internal users
  • 6,000 named user accounts
  • 300,000 monitored jobs per month
  • Up to 270 million transaction steps per month
  • Up to 10 million dialog steps per day

However, ERP/ECC is only one of the SAP applications used by Microsoft; others include E-Recruiting, GRC, GST, CPM, SCM, OER, MDG, and SMG. In total the SAP estate comprises more than 600 servers, which between 2017 and February 2018 were moved entirely to Azure VMs. Microsoft now runs SAP 100% in Azure and is embarking on the first phase of its move to S/4HANA, also now deployed in Azure.

As part of this ongoing partnership, in November 2017 SAP announced19 that it would move some of its key internal business-critical systems to Azure. As part of a further announcement in June 201820 Thomas Saueressig, CIO of SAP, shared an update on progress with this migration:

"In 2017 we started to leverage Azure as IaaS Platform. By the end of 2018 we will have moved 17 systems including an S/4HANA system for our Concur Business Unit. We are expecting significant operational efficiencies and increased agility which will be a foundational element for our digital transformation."

In September 2018 Microsoft, Adobe, and SAP announced the Open Data Initiative (ODI)21 with the objective of unlocking the data held in applications from all three entities, and potentially other third-party software, by combining that data in a data lake where AI can be used to derive insights and intelligence. In an update in March 201922 it was announced that:

"… the three companies plan to deliver in the coming months a new approach for publishing, enriching and ingesting initial data feeds from Adobe Experience Platform, activated through Adobe Experience Cloud, Microsoft Dynamics 365, and Office 365 and SAP C/4HANA, into a customer's data lake. This will enable a new level of AI and machine learning enrichment to garner new insights and better serve customers."

As part of that November 2017 news release the availability of SAP HANA Enterprise Cloud (HEC) on Microsoft Azure was announced, enabling customers that want the SAP-managed cloud service to also leverage Azure hyperscale cloud. This is all part of SAP's wider strategy of making its "as a Service" offerings available on hyperscale cloud. Today this includes Ariba, Data Custodian, C/4HANA, SuccessFactors, and SAP Cloud Platform, all of which are available in certain Azure Regions.

Further strengthening this partnership in February 2019, SAP unveiled SAP Leonardo IoT23 at Mobile World Congress in Barcelona. At the same time SAP announced that SAP Leonardo IoT will interoperate with Microsoft Azure IoT Hub and that SAP Leonardo IoT Edge essential business function (EBF) modules are planned to run in containers on Microsoft Azure IoT Edge.

Further details of this integration are covered in a Microsoft blog post24. Through this interoperability SAP Leonardo IoT will be able to leverage the market-leading secure connectivity and powerful device management functionality provided by Azure IoT services, and stream data back to SAP's business applications. By running Leonardo EBF modules on Azure IoT Edge, customers can run key business processes at the edge and avoid issues with network connectivity, latency, and bandwidth.

Most recently in May 2019 SAP announced project "Embrace,"25 a collaboration between SAP, the hyperscale cloud providers, and some of SAP's global strategic service partners (GSSPs). The aim is to help customers move to S/4HANA in the cloud following market-approved journeys, which provide you with a blueprint for your journey towards an intelligent enterprise. At the same time Microsoft announced26 that it is the first global cloud provider to join this project.

Common misconceptions

There are some common misconceptions about the cloud in general and running SAP on Azure in particular. As already mentioned, when you run SAP in Azure you are utilizing Azure IaaS, primarily VMs, storage, and networking. As such it is your responsibility to create the infrastructure, install and configure the SAP software, configure SAP for HA and DR, and manage your own backups. None of this happens automatically.

At the heart of the confusion is how IaaS differs from both PaaS and SaaS. With SaaS in particular, you simply consume the application without needing to install any software or concern yourself with matters such as HA, DR, and backup. You will have an SLA for the application, and it is up to the application provider to take the necessary steps to deliver that SLA. Similarly, with most PaaS services the level of availability you require is something you choose when initiating the service, and the software provides the capability seamlessly.

As an example of an Azure PaaS service, with Azure SQL Database you can choose different service tiers – general purpose, business critical, and hyperscale – which offer different levels of availability and redundancy. In addition, backups are taken automatically, written to read-access geo-redundant storage (RA-GRS) so that the backups are available even if the region becomes unavailable, and there is an option to configure long-term retention. All of this is handled automatically by Azure with minimal input required once the service has been created.

Further confusion arises from the fact that SAP has its own cloud offering called SAP HEC27. This combines cloud hosting with a range of HEC Standard Services, HEC Optional Services, and HEC Enhanced Managed Services (EMS)28. Even the HEC Standard Service provides most of the day-to-day services required to keep an SAP system running. As described elsewhere, you can now combine these two worlds by choosing HEC on Azure, where SAP provides the HEC Services offering but utilizes Azure IaaS. As an aside, despite the name, HEC allows SAP customers to run any SAP application on any SAP DBMS (HANA, ASE, MaxDB, and so on); it is not exclusively restricted to applications running on HANA.

When you deploy SAP in Azure it does not become either a PaaS or a SaaS offering; you are simply utilizing Azure IaaS. This is further brought home by SAP's recent naming convention, where for SAP S/4HANA you deploy the On-Premises Edition even when the deployment is going into a hyperscale cloud such as Azure, or into SAP's own HEC. S/4HANA Cloud Edition is only available as a Service from SAP and cannot be downloaded and installed elsewhere.

If what you want is a fully managed SAP service – SAP as a Service – then you can either opt for SAP HEC on Azure or work with one of the many Microsoft partners that offer such a service, including Global Systems Integrators (GSIs), National Systems Integrators (NSIs), and local SAP services partners.


In this section we have covered a range of subjects related to running SAP on Azure. Hopefully this has addressed any fears or concerns that you may have and helped to convince you that there are no technical limitations to running SAP on Azure. In the next section we will look at how to migrate SAP to Azure.


Migration readiness

This section will cover the various ways in which you can prepare to migrate your SAP workloads over to Azure. Let's begin with the first thing to consider: when should you migrate?

When to migrate

There are three main factors that determine when is a good time to migrate your SAP workloads to Azure, and for many customers it may be some combination of all three:

  1. Your existing infrastructure is due for a refresh, and you must decide whether to refresh what you have today or use this as an opportunity to migrate to Azure.
  2. Your existing outsourcing, hosting, or co-location contract is up for renewal and this provides an opportunity to look at different hosting options, such as Azure.
  3. You are planning a migration to SAP Business Suite on HANA, SAP S/4HANA, SAP BW on HANA, or SAP BW/4HANA, and you want to avoid the capex cost of purchasing new infrastructure to run SAP HANA, and also want the agility and flexibility offered by Azure.

Ultimately every customer that plans to continue to run SAP must plan to move to SAP S/4HANA before the deadline of 31st December 2025, which is the current end of support for SAP Business Suite 7 core application releases and SAP Business Suite powered by SAP HANA29. Whether you are ready to move today, or planning to move in the next few years, moving your existing SAP estate to Azure now will provide much greater flexibility for the future.

From an infrastructure perspective, one of the biggest challenges for customers when moving to SAP HANA is that the hardware required to run SAP HANA is very different to that required for AnyDB. SAP HANA is an in-memory database, which by default loads all HANA table data into memory at startup and requires additional temporary memory as working memory to allow the table data to be processed, for example for table joins. While a VM with 64 GiB of memory may be adequate to run a productive AnyDB instance with 2 TiB of data on disk, running that same database on HANA could require up to 4 TiB of memory based on SAP recommendations: 2 TiB for table data and 2 TiB for working memory. In reality there will generally be some reduction in data size when migrating to SAP HANA, and potentially even more when migrating to SAP S/4HANA, so 2 TiB or even 1 TiB of memory may prove sufficient, but that is still far more than 64 GiB.
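The rule of thumb in the paragraph above can be written out as a small calculation. The figures follow the example in the text (2 TiB of AnyDB data on disk); the compression factor is an assumption you would replace with the output of SAP's sizing reports.

```python
# The sizing rule of thumb described above: memory for HANA table data
# plus an equal amount of working memory. The compression factor is an
# assumption; SAP's sizing reports give a real estimate per system.

def hana_memory_estimate_tib(data_on_disk_tib, compression_factor=1.0):
    """Estimate HANA memory as in-memory data plus equal working memory."""
    data_in_memory = data_on_disk_tib / compression_factor
    return 2 * data_in_memory   # table data + working memory

print(hana_memory_estimate_tib(2))       # 4.0 TiB with no reduction
print(hana_memory_estimate_tib(2, 2.0))  # 2.0 TiB if the data halves
```

The point of the exercise is the gap between the two estimates: a modest change in the assumed compression halves the memory requirement, which is exactly the sizing uncertainty that Azure's resizable VMs absorb.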

Moving to Azure when your next hardware or data centre refresh comes around allows you to better prepare for the migration to SAP HANA. If you make another Capex investment, it will generally be depreciated over three to five years. If during this time you decide to move to SAP HANA, you will need to purchase new hardware for SAP HANA while still depreciating the hardware you already have for AnyDB. This can play havoc with budgets, particularly if you buy the new SAP HANA hardware as a further Capex investment. In Azure you can instead stand up the VMs you need for AnyDB today, stand up new, larger VMs for SAP HANA when the time is right, and once the migration is complete, shut down, delete, and stop paying for the AnyDB VMs.

If you are running SAP on-premises and considering a migration to SAP HANA today, then once again deploying SAP HANA in Azure has many benefits. Firstly, if you are still depreciating the existing hardware for your on-premises AnyDB, then deploying SAP HANA in Azure will normally be classed as Opex and so will not impact your Capex budget in the same way, although this may depend on internal accounting practices and local regulations. This may well reduce some of the financial challenges.

Also, customers moving to SAP HANA are often uncertain as to the exact size their databases will be. If it is a net new implementation, for example a greenfield implementation of SAP S/4HANA or SAP BW/4HANA, then it can be hard to size the initial system and predict future growth, although the SAP Quick Sizer can help here. If it is a migration to SAP HANA or a conversion to S/4HANA or BW/4HANA, then SAP does provide reports that you can run to estimate the future size of your SAP HANA database30. But even then, this is just an estimate, and predicting future growth remains challenging.

In most cases, if you are deploying SAP HANA on-premises you will use an SAP HANA Certified Appliance31. With any capex investment in hardware for SAP, it is normal practice to try to predict the size you will require for the depreciation life of the hardware; typically you are predicting three or even five years ahead. With all the uncertainty over initial sizing and future growth, you must either choose a larger appliance and accept the higher cost, to minimize the risk of outgrowing it, or choose a smaller, lower-cost option and risk running out of capacity.

To make matters worse most appliance vendors have to switch server models as the memory grows, so expansion part way through the life of an appliance may in fact mean purchasing a net new appliance.

With Azure these problems mostly go away. With SAP HANA certified Azure VMs you can start with what you need today and simply resize the VM as and when required. This keeps your initial costs lower, by not over-provisioning, but still allows you to scale as the database grows. In addition, you can easily scale out the SAP application tier, adding application server VMs to address peak workloads and shutting them down again when not required.
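The resize-as-you-grow approach amounts to repeatedly picking the smallest VM size that fits the current database. The sketch below illustrates the selection logic; the catalogue is a small, illustrative subset with approximate memory figures, and the authoritative list is SAP's directory of HANA-certified Azure VM sizes.

```python
# Choosing the smallest VM size that fits the current HANA memory
# requirement, then re-running the choice as the database grows. The
# catalogue below is a small illustrative subset with approximate RAM
# figures; consult SAP's certified IaaS directory for real sizes.
SIZES = {"E64s_v3": 432, "M128s": 2048, "M208s_v2": 2850}  # GiB RAM

def smallest_fit(required_gib, sizes=SIZES):
    """Return the name of the smallest size with enough memory."""
    fits = {name: mem for name, mem in sizes.items() if mem >= required_gib}
    if not fits:
        raise ValueError("no single VM is large enough; consider scale-out")
    return min(fits, key=fits.get)

print(smallest_fit(400))    # E64s_v3 covers today's requirement
print(smallest_fit(1500))   # after growth, resize to M128s
```

Because resizing a VM is an operation of minutes rather than a procurement cycle, the penalty for under-predicting growth is small, which is the contrast with the appliance sizing dilemma described earlier.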

In addition, most customers know that they should right-size before migrating to SAP HANA, reducing capacity through data archiving or deletion, but in reality this may not be possible. SAP has always recommended that customers implement Information Lifecycle Management for data held in SAP to improve performance, reduce infrastructure costs, and provide governance, risk, and compliance for enterprise data. However, many organizations fail to do this, because no data retention policies exist and, with infrastructure becoming ever faster and cheaper, it has not been essential.

SAP HANA does change the economics of this, as the cost per TB of storing data in memory rather than on disk is typically an order of magnitude higher. It does not take long to build a compelling business case for archiving when the additional infrastructure costs for SAP HANA are considered. While project timescales may make right-sizing before migration impossible, in Azure it is much easier to stand up an oversized VM for the initial migration, then archive, delete, and right-size the VM once the data is in Azure.

Migration order

You may be asking whether SAP should be the first workload that you migrate to Azure, or whether you should leave it until later in your cloud migration strategy, or even right to the end. The answer is that there is no right or wrong answer, and we see customers where SAP is first and others where it is towards the end: it normally depends on the compelling event.

Some of you will have already started to deploy other workloads in Azure, having your cloud adoption framework in place, when the compelling event to migrate SAP to Azure comes along. While the foundations may be in place, for many of you SAP will still be the largest and most critical workload that you have yet moved to Azure. So, all the issues around security, scalability, and availability still need to be addressed with respect to SAP before the migration can begin. However, hopefully at least some of the work will have already been undertaken. What is important is that having the Azure foundation in place should speed up the migration process.

Others may be yet to deploy any workloads into Azure when a compelling SAP event arrives. This means a steeper learning curve for you, because you need to understand a lot more about how SAP runs in Azure before you can make the decision that you want to migrate your SAP workloads. For those in this position, this book should help. Starting with SAP does add an additional step to the process because you will need to build the bare bones of an Azure foundation before you can start to deploy SAP in Azure. You wouldn't build a house without first putting in the foundations, so you shouldn't deploy workloads in Azure without first building the foundations.

Types of SAP migration to Azure

Migrating SAP to Azure is no different to any other type of SAP migration. Many customers will already have undertaken a SAP migration whether it be migrating from a traditional Unix operating system such as AIX, HP-UX, or Solaris to Linux or Windows, or migrating from one data centre to another; from old to new on-premises, or on-premises to co-location, managed hosting, or similar. The migration approaches are very similar when Azure is the target; essentially Azure is just another data centre connected to your corporate WAN.

There are two main types of SAP migration to Azure:

  • Brownfield: Where you have existing SAP workloads you want to migrate from on-premises to Azure, potentially with an OS or database migration, and maybe an upgrade or conversion
  • Greenfield: Where you plan to start afresh in Azure with a new implementation of SAP, whether or not you have existing SAP workloads on-premises

Brownfield migrations are by far the most common, and these in turn fall into four main categories as shown in Figure 1-3:

The figure shows the four main options for migrating SAP to Azure.

Figure 1-3: Options for SAP migration to Azure

  • Lift and Shift to Cloud: Where the existing OS and DBMS used on-premises are supported in Azure, and you have no plans to migrate to HANA at this time, a lift and shift or rehosting is possible. This is normally referred to as an SAP Homogeneous System Copy and is the simplest type of migration.
  • Lift and Migrate to Cloud: Where the existing OS is not supported in Azure – HPE HP-UX/IA64 or HP-UX/PA-RISC, IBM AIX/Power, Oracle Solaris/SPARC, or Solaris/x64 – or there is a desire to change the DBMS, then a lift and migrate or replatforming is the solution. This is normally referred to as an SAP Heterogeneous System Copy and while it is more complicated than lift and shift, there is a well-proven path using tools such as SAP R3load or the SAP Database Migration Option (DMO) of the Software Update Manager (SUM) where the target DBMS is SAP HANA.
  • Lift and Shift/Migrate to Cloud, migrate part to HANA: Particularly if facing an OS/database migration this may be a good time to consider migrating at least part of the SAP landscape to SAP HANA, if you have not already done this. Migrating BW on AnyDB to BW on HANA will provide an immediate performance boost and will allow you to start to get familiar with SAP HANA before you migrate to S/4HANA. Importantly, if you are currently using BW Accelerator then that is not supported in Azure and a migration to SAP HANA is the recommended route forward from SAP.
  • Transformation to S/4HANA: If facing an OS/database migration, then this could be an excellent time to consider a conversion from ECC to S/4HANA. A basic conversion can be done with minimal disruption to end users and will ensure you are ready for 31st December 2025.

For some of you the move to S/4HANA represents an opportunity to get rid of years of customizations and to standardize business processes possibly using the SAP Model Company. When you originally implemented SAP those customizations may have been essential to meet your business requirements, but as SAP has evolved the functionality has increased and now the standard functionality may better meet your business needs. If this is the case, then you may prefer a greenfield migration.

Microsoft has several customers globally either live with greenfield S/4HANA implementations or in the process of deployment. A small number are net new SAP customers, but the majority are customers that have been using SAP for 10, 15, or even 20 years and want to start afresh with S/4HANA. They are deploying S/4HANA in Azure for all the reasons stated above: they may still be finalizing the sizing, want to start small and grow with their data, and do not want to make large upfront capex investments in the hardware needed to run the SAP HANA database that underpins S/4HANA.

Migration strategies

When it comes to brownfield migrations it is important to plan the order in which the migrations will take place. To some extent this will depend on whether you are planning a homogeneous or a heterogeneous migration, and also on the downtime window available for the migration.

A homogeneous migration, or in SAP terminology a homogeneous system copy, is one where there is no change to either OS or DBMS. For SAP workloads in Azure the supported operating systems are Microsoft Windows Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Oracle Linux; the latter only for the Oracle DBMS. If you are currently running SAP on any other operating system, then you will need to undertake a heterogeneous system copy.

A heterogeneous system copy is one where the OS, the DBMS, or both are changed. This is a well-proven path and SAP has several tried and tested tools for carrying out such migrations. If you need to undertake a heterogeneous migration because you must change the operating system, then this also provides an opportunity to consider changing the DBMS as well.
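The decision rule above is simple enough to express directly: a system copy is homogeneous only when neither the OS nor the DBMS changes, and the target OS must be one Azure supports for SAP. This is a minimal sketch; the OS names and the simplified support list are assumptions for illustration, not an authoritative support matrix.

```python
# Sketch of the homogeneous vs heterogeneous decision described in the text.
# The supported-OS list is a simplified assumption (see SAP/Microsoft
# documentation for the authoritative support matrix).

AZURE_SUPPORTED_OS = {"Windows Server", "RHEL", "SLES", "Oracle Linux"}

def copy_type(source_os, target_os, source_dbms, target_dbms):
    """Classify an SAP system copy as homogeneous or heterogeneous."""
    if target_os not in AZURE_SUPPORTED_OS:
        raise ValueError(f"{target_os} is not supported for SAP on Azure")
    if source_os == target_os and source_dbms == target_dbms:
        return "homogeneous"
    return "heterogeneous"

# AIX is not supported in Azure, so the OS must change: heterogeneous.
print(copy_type("AIX", "SLES", "DB2", "DB2"))            # heterogeneous
# No change to OS or DBMS: homogeneous.
print(copy_type("Windows Server", "Windows Server",
                "SQL Server", "SQL Server"))             # homogeneous
```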

While SAP used to advise against combining multiple changes into one, it is now common for customers to combine a SAP application upgrade with an OS and/or DBMS migration into a single delivery. If there are problems it can make diagnosing the cause more difficult – was it the software upgrade, the OS migration, or the DBMS migration? – but in terms of testing effort and business disruption it is generally preferred.

Homogeneous migrations are the simplest and offer the most migration options. The most straightforward approach is a backup and restore: take a full system backup on-premises, copy the backup files across, and then restore them into Azure. Where downtime needs to be minimized, DBMS replication can be used to set up a new database instance in Azure and add it as a replication target of the on-premises primary. Because there is no change to the OS or DBMS there will be no compatibility issues.

In contrast, heterogeneous migrations, while well proven, are more complicated and will normally require more downtime. The standard SAP tool for this is known as R3load, which exports data from the source database into flat files and then imports the flat files into the target database. While SAP now refers to using the Software Provisioning Manager (SWPM) and the Database Migration Option (DMO) of the SAP Software Update Manager (SUM) for heterogeneous system copies, in reality R3load remains at the heart of the actual migration process.

When it comes to the order in which to migrate an SAP estate there are two main options, as shown in Figure 1-4 [32]:

The figure shows the difference between horizontal and vertical migration, demonstrating with multiple SAP components and four environments per component.

Figure 1-4: Horizontal versus vertical migration

With horizontal migrations each landscape is moved one environment at a time. For example, you may move all the Development environments into Azure, then the QA/Test environments, followed by the Pre-Production environments, and finally the Production environments. In general, this method only makes sense for homogeneous migrations, otherwise you will have incompatibility between the new environments in Azure and the old environments still on-premises; for example, running AIX/DB2 on-premises and SLES/DB2 in Azure.

For heterogeneous migrations it is more normal to use a vertical approach, moving all the environments for a given SAP application at one time. This avoids any issue of compatibility between the Development, QA/Test, Pre-Production, and Production environments. In some cases, you will use a mix of horizontal and vertical approaches, as some SAP applications may undergo a homogeneous migration while others undergo a heterogeneous migration.

Where you have a large and complex SAP estate it may well not be possible to migrate all the SAP applications at once. In this case it is important to carefully plan the order in which the applications are migrated. While you can maintain network connectivity between the applications that have been migrated to Azure and those still running on-premises, in most cases network latency will increase. If you have SAP applications that are closely coupled and highly interactive, then it is desirable to migrate them together as part of the same move group. When planning move groups it is important not just to consider the SAP applications themselves but also any third-party applications that are an integral part of the SAP landscape and may also be closely coupled and highly interactive, as these will need to form part of the same move group.

If the migration is to be phased using move groups, then you will need to plan the order in which to migrate each group. In general, it is better to start with the smaller, simpler, and least critical SAP applications to prove the migration strategy and to gain confidence with operating SAP applications in Azure. The final SAP applications to move should generally be the largest and most critical. The process does not need to be entirely sequential, as for the largest SAP applications you are likely to need to run multiple test migrations to develop the best approach to minimize downtime and complete the process within the allowed downtime window. Resources permitting, this can run in parallel with migrating some of the smaller less critical SAP applications.
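The ordering principle above can be sketched as a simple sort: migrate the least critical, smallest move groups first and the largest, most critical last. The group names, criticality scores, and sizes below are hypothetical examples, not taken from any real landscape.

```python
# Sketch of move-group sequencing as described in the text: least critical
# and smallest first, most critical and largest last. All data is
# hypothetical.

def plan_migration_order(move_groups):
    """Sort move groups ascending by criticality, then by database size."""
    ordered = sorted(move_groups, key=lambda g: (g["criticality"], g["size_tib"]))
    return [g["name"] for g in ordered]

groups = [
    {"name": "ERP + coupled third-party app", "criticality": 3, "size_tib": 12.0},
    {"name": "Solution Manager",              "criticality": 1, "size_tib": 0.5},
    {"name": "BW",                            "criticality": 2, "size_tib": 6.0},
]
print(plan_migration_order(groups))
# Solution Manager proves the approach first; the ERP move group goes last.
```

In practice the test migrations for the largest group would run in parallel with the earlier waves, as the text notes; the sort only fixes the order of the production cutovers.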

As previously mentioned there are now over 800 customers at various stages of deploying SAP in Azure, with some having their full SAP landscape in production. While a few have been new installations of SAP, either because this is the first time they have implemented SAP or because they have chosen the greenfield migration option, in the majority of cases the customer has migrated from on-premises to Azure using one of the methods described here. There is now a wealth of experience within the partner community of managing such migrations.

This section has so far focused on the traditional SAP NetWeaver-based applications, but not every SAP application uses the NetWeaver stack.


Non-NetWeaver applications

The main focus of this book is on traditional SAP applications based on the NetWeaver stack and written in ABAP or Java, along with the newer applications such as SAP S/4HANA, which since release 1809 uses the ABAP Platform for S/4HANA rather than the NetWeaver stack. However, there are also SAP applications that are not based on this technology, the most notable being SAP hybris.

The company hybris was founded in 1997 in Switzerland and developed an omnichannel commerce platform. SAP acquired hybris in 2013 and has gradually integrated it into its product portfolio. However, hybris continues to use its own quite different technology stack: it is written in Java, runs on either VMs or containers, and supports a number of database management systems including Microsoft SQL Server and SAP HANA.

Today SAP hybris is available as both a SaaS offering under the name SAP C/4HANA as well as an on-premises edition that can leverage Azure IaaS and PaaS services, such as Azure Kubernetes Service (AKS). Further details on running SAP hybris on Azure are given in the later chapters of this book.

Having discussed how you can migrate both SAP NetWeaver-based applications as well as non-NetWeaver applications, let us now discuss who should carry out the migration.


Successful work team

When moving workloads to any cloud it is important to think about how this will affect your organization, and how you leverage partners to support you in this journey. As described earlier in this chapter, while many of the responsibilities remain unchanged, in a software-defined data centre individual roles may need to change if you are to take full advantage of the flexibility and agility provided by Azure. These roles and responsibilities will also be different when considering consuming IaaS, PaaS, and SaaS offerings.

In the case of running SAP on Azure we are mostly considering the use of Azure IaaS services, but most customers will also need to consider how these integrate with native Azure PaaS services, such as AKS for hybris, SAP PaaS services such as SAP Cloud Platform (SCP), as well as SAP SaaS services such as Ariba, Concur, Fieldglass, and SuccessFactors.

For most of you, it is unlikely that you have all the skills in-house to manage the migration of SAP to Azure. Most projects are delivered by a combination of customer, partner, and Microsoft resources. While some tasks can only really be performed by internal resources, such as user acceptance testing, others may be better outsourced to partners who have experience in such migrations. Most customers will only ever perform one SAP migration to Azure, while some partners will have completed tens or hundreds of such migrations. We will now look at some of the roles and responsibilities for each group in more detail.

Internal resources

It is important to realize that when running SAP on Azure you are primarily using Azure IaaS offerings. This removes the need for you to be concerned with the physical assets such as data centres, servers, storage arrays, network switching, and cabling, and replaces all those with virtual assets that are fully software defined, and configured through a portal, command-line, scripts, or automation. You remain fully responsible for configuring the required resources, ensuring data and applications are secure, and configuring and managing backup/restore, high availability, and business continuity/disaster recovery. In that sense there is very little change to your responsibilities.

The first and most important question is, who in your organization owns responsibility for Azure? While the technical responsibilities may not have changed, the financial governance is potentially totally different. If your users are to take full advantage of the agility provided by Azure, they will need to be empowered to consume what they need when they need it. However, giving them this freedom will impact your costs and without good financial governance the costs of Azure can quickly exceed expectations. Historically those responsible for provisioning infrastructure were not typically responsible for the financial governance; they would simply provision what someone else in the organization had procured. In Azure they have access to essentially limitless capacity, and you need to decide who can provision what and when.

Azure has built-in capabilities to provide the required financial governance. Azure Cost Management provides a complete solution to monitor resource usage and manage costs across Azure and other clouds, implement financial governance policies with budgets, support cost allocation and chargeback, and drive continuous cost optimization.

However, it still requires you to plan how you want to use financial governance and then implement a strategy to deliver it. The main question is, who within your organization owns responsibility for this?

After financial governance, the other big area of change is the use of automation in Azure. You may already be using tools such as Ansible, Chef, or Puppet to automate software deployment and configuration, in which case you are well placed to automate Azure and to deliver end-to-end solutions using Infrastructure as Code (IaC). You will have people with the required skill set to embrace technologies such as Azure Resource Manager (ARM) templates to automate Azure deployments, or to use other tools such as Terraform.

However, if you are not using any automation tools today then this is a big area of change and you will need to identify people with the right skills to handle automation. In general, they require good programming skills as they will essentially be modifying or developing code to automate deployments. It may be tempting to take the view that with only a few hundred VMs to build for SAP on Azure, and with a wide variance of configurations, it is hardly worth investing in automation. For all the reasons given previously this tends to lead to poor-quality deployments with lots of variance between VMs that should be similar, and many configuration errors. As an example, Microsoft recommends enabling Azure Accelerated Networking (AN) for VMs running SAP, but much of the value is lost if AN is enabled on the database server VMs but not on the application server VMs, or worse still on some application server VMs but not others. You might not believe it, but these mistakes are made.
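The Accelerated Networking example above is exactly the kind of configuration drift that simple automated checks catch. The sketch below flags VMs whose AN setting differs from the rest of their SAP system; the inventory format is a hypothetical assumption, and in practice you would build it from Azure itself (for example via the CLI or SDK) rather than hard-code it.

```python
# Sketch of a consistency check for Accelerated Networking (AN) across the
# VMs of each SAP system, as discussed in the text. The inventory structure
# is an assumption for illustration.

def find_an_inconsistencies(vms):
    """Return names of VMs whose AN setting differs from the majority of
    VMs in the same SAP system; empty list means settings are consistent."""
    by_system = {}
    for vm in vms:
        by_system.setdefault(vm["sap_system"], []).append(vm)
    issues = []
    for members in by_system.values():
        enabled = [m for m in members if m["accelerated_networking"]]
        disabled = [m for m in members if not m["accelerated_networking"]]
        if enabled and disabled:  # mixed settings: flag the minority
            minority = enabled if len(enabled) < len(disabled) else disabled
            issues.extend(m["name"] for m in minority)
    return issues

inventory = [
    {"name": "prd-db01",  "sap_system": "PRD", "accelerated_networking": True},
    {"name": "prd-app01", "sap_system": "PRD", "accelerated_networking": True},
    {"name": "prd-app02", "sap_system": "PRD", "accelerated_networking": False},
]
print(find_an_inconsistencies(inventory))  # ['prd-app02']
```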

When it comes to security it is likely that you already have a team responsible for security within your existing data centres. They will need to extend their existing security model to encompass Azure and define the policies that are to be implemented. These policies can be instantiated through the Azure Policy Service and monitored using Azure Security Center. Arguably there are far better tools available natively in Azure than you probably have available on-premises today, but if you don't use them then you won't gain the benefit. The security team will also need to consider when and where to use technologies such as firewalls, and whether to use native solutions such as Azure Firewall, or to continue with the same products as used today on-premises but implemented in Azure as Network Virtual Appliances (NVA). Their responsibilities do not change significantly, but some of the tools they use will.

The biggest area of change is within core infrastructure management. The role of server, storage, or network administrator will change significantly; there are no physical assets to manage, but there are their virtual equivalents. Their role becomes one of defining policies as to what resources should be configured when and how, and is closely linked to the topic of automation.

As an example, when a user requires a new SAP application server VM the automation should ensure that only SAP-certified VMs can be chosen, that the disk storage is configured appropriately for SAP, that the VM is joined to the correct VNet, and that the VM uses the correct OS image. The code to create the VM will configure all these aspects, which means the server, storage, and network teams must work together to define these and implement the policies. This team may have the skills and ambition to take on responsibility for all aspects of automation.
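Those guard rails can be sketched as a pre-provisioning validation step: before a VM is created, the automation checks the requested size, image, and network against lists the server, storage, and network teams have agreed. The certified-size list, image names, and VNet names below are purely hypothetical assumptions for illustration.

```python
# Sketch of provisioning guard rails for SAP VMs, as described in the text.
# All allowed-value lists are hypothetical; real policies would come from
# the teams owning them (and the SAP-certified VM directory).

SAP_CERTIFIED_SIZES = {"E16ds_v4", "E32ds_v4", "M64s", "M128s"}
APPROVED_IMAGES = {"SLES-15-SP1-SAP", "RHEL-8.1-SAP-HA"}
SAP_VNETS = {"vnet-sap-prod", "vnet-sap-nonprod"}

def validate_request(size, image, vnet):
    """Return a list of policy violations; an empty list means the
    request may proceed to provisioning."""
    errors = []
    if size not in SAP_CERTIFIED_SIZES:
        errors.append(f"VM size {size} is not SAP-certified")
    if image not in APPROVED_IMAGES:
        errors.append(f"image {image} is not an approved SAP OS image")
    if vnet not in SAP_VNETS:
        errors.append(f"VNet {vnet} is not an approved SAP network")
    return errors

print(validate_request("M64s", "SLES-15-SP1-SAP", "vnet-sap-prod"))  # []
print(validate_request("D2s_v3", "SLES-15-SP1-SAP", "vnet-sap-prod"))
```

In a real pipeline this check would run inside the IaC tooling (or be enforced declaratively through Azure Policy) rather than as a standalone script.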

Unless you have been lucky enough to recruit a team with previous experience of Azure, training needs to be factored into your plans. However willing and able your staff may be, best practice in Azure is not the same as best practice in the on-premises world. There are differences in how things are done in Azure, and if your project teams are to be effective they need to understand these differences. As you might imagine, Microsoft, in conjunction with its training partners, offers a variety of Azure training and certifications [33]. You are strongly advised to ensure that staff who will be involved with the migration to Azure receive the necessary training and certification.


Partner resources

Of course, you may decide that rather than try to take on all this work in-house you would rather entrust it to partners. In this case you will need to decide whether to work with your existing incumbent partners, if you have any, or look to new partners. The key question needs to be: do your existing partners have the required skills? As Azure becomes more pervasive, skills in both Azure IaaS and SAP on Azure are becoming more common; however, you still need to ensure that the resources allocated to your project have these skills. In most cases, almost by definition, the partner team currently managing your on-premises environment is unlikely to have the required Azure skills.

Microsoft is by its nature a partner-centric company and relies on its partners to deliver customer projects. For this reason, Microsoft has been encouraging its traditional partners to develop the core Azure skills required to deliver the Azure cloud foundations, and at the same time working with the GSI, NSI, and local SAP Services Partners to build the skills around SAP on Azure. Where SAP is the lead project migrating to Azure then some customers will use one partner to build the Azure cloud foundations because of their deep expertise and experience in core Azure, and use a separate partner to handle the SAP migration to Azure, based on their expertise and experience of SAP migrations. There is no right or wrong solution; it is a question of leveraging the right skills at the right time in the project.


Microsoft resources

Whether or not you choose to use a partner to deliver your project, you will have access to certain Microsoft resources. The first level of assistance is FastTrack for Azure [34], which provides you with access to Azure engineering resources with real-world customer experience to help you and your partners. These engineers cover a wide range of Azure areas, from the basic Azure cloud foundations through to SAP on Azure. The FastTrack service is delivered remotely using Teams.

For larger projects it is likely you will have access to Microsoft Cloud Solution Architects (CSAs). They provide similar capabilities to the FastTrack engineers but generally support a smaller number of concurrent projects, and will provide support both remotely and on-site. As with FastTrack, there are CSAs who specialize in the core aspects of Azure and others with specific SAP on Azure skills. It is important to understand that both FastTrack engineers and Cloud Solution Architects act in a purely advisory capacity: they are not a consulting organization and are not indemnified to carry out work within the customer's own Azure subscriptions.

Finally, if what you really want is a one-stop shop then Microsoft Consulting Services (MCS) can provide this capability. They are the consulting services division of Microsoft and can deliver whole cloud migration projects. Unlike some software companies Microsoft does not see consulting services as a major revenue-generating arm; MCS exists primarily to enable customers to adopt and deploy Microsoft technologies. MCS will provide overall program management and have the required skills in-house to deliver the Azure cloud foundations, but will leverage accredited partners to deliver the SAP migration to Azure.



Summary

The first objective of this chapter was to explain why Azure is the best cloud for SAP and to reassure you that any concerns you may have can be addressed. Whether it be security, availability, or scalability, hopefully you will now see that it is possible to run even the largest and most business-critical SAP systems in Azure. Microsoft has invested heavily to enhance the capabilities of Azure to meet the specific demands of enterprise-scale SAP systems, with technologies such as Mv2-series virtual machines offering up to 11.4 TiB of memory, Azure SAP HANA Large Instances, and Proximity Placement Groups.

The second objective was to highlight some of the areas that need to be considered when moving SAP workloads to Azure. If you are already running other workloads in Azure then it is likely that much of the Azure foundations are already in place. If not, this will need to be done before embarking on moving the SAP workloads. We also looked at how important it is to have the right team in place to deliver the project. In most cases bringing in partners with previous experience of moving SAP workloads to Azure will accelerate the move process and ensure a better overall project outcome.

In the following chapters we will look into some of these areas in more detail, to help you better understand how to deploy SAP in Azure. It is important to remember that Azure is constantly evolving and thus as you plan a project you should follow the links in the book to access the latest documentation. This book can only be a snapshot in time.

"If you are interested in speaking to Microsoft about migrating your SAP workloads to Azure, you can reach out via this link: https://aka.ms/contact_Azure_for_SAP"

1 The Total Economic Impact of Microsoft Azure for SAP – a commissioned study conducted by Forrester Consulting, 12 April 2019: https://azure.microsoft.com/en-gb/resources/sap-on-azure-forrester-tei/

2 Use Azure to host and run SAP workload scenarios: https://bit.ly/2PdICHQ

3 Customer Stories, Carlsberg Group: https://bit.ly/387ZQyU

4 Customer Stories, Daimler entrusts SAP HANA–based global procurement system to Microsoft Azure: https://customers.microsoft.com/en-gb/story/daimler-manufacturing-azure

6 Tutorial: Azure Active Directory single sign-on (SSO) integration with SAP Fiori: https://bit.ly/2s0dhA9

7 Azure uses server hardware based on x86-64 architecture processors from Intel and AMD. As such Azure cannot support IBM AIX/Power, Hewlett Packard Enterprise HP-UX/Intel Itanium, or Oracle Solaris/SPARC workloads, and these must be migrated to Windows Server or Linux.

8 Configure multiple virtual machines in an availability set for redundancy: https://bit.ly/362PAG9

9 What are availability zones in Azure: https://bit.ly/2RjH819

10 SLA for Virtual Machines, Last updated: March 2018: https://bit.ly/2rfPHj4

11 Live migration: https://bit.ly/368gPPV

12 Microsoft Cloud Adoption Framework for Azure: https://docs.microsoft.com/en-us/azure/architecture/cloud-adoption/

13 Azure Enterprise Scaffold: https://bit.ly/2ORCCWj

14 Cloud Operating Model: https://bit.ly/34OJ7i8

16 Advanced analytics on big data: https://bit.ly/2YkmcsE

17 A slight misappropriation from The 7 Habits of Highly Effective People, Covey, Stephen R., Simon and Schuster

18 Building an agile and trusted SAP environment on Microsoft Azure: https://bit.ly/34SSk8W

19 Microsoft and SAP join forces to give customers a trusted path to digital transformation in the cloud, 27 November 2017: https://bit.ly/2DLglTo

20 Offering the largest scale and broadest choice for SAP HANA in the cloud, 5 June 2018: https://bit.ly/2LoNDwa

21 Adobe, Microsoft and SAP announce the Open Data Initiative to empower a new generation of customer experiences, 24 September 2018: https://bit.ly/363LHRl

22 Adobe, Microsoft and SAP announce new Open Data Initiative details, 27 March 2019: https://bit.ly/2rjr6tJ

23 SAP Leonardo IoT Helps Shape the Intelligent Enterprise, 25 February 2019: https://bit.ly/2Rm8La7

24 MWC 2019: Azure IoT customers, partners accelerate innovation from cloud to edge, 25 February 2019: https://bit.ly/2OQ4kCz

25 SAP Partners with Ecosystem to Guide Customers to the Cloud, 9 May 2019: https://bit.ly/2PdJNXM

26 Microsoft partners with SAP as the first global cloud provider to launch Project Embrace, 9 May 2019: https://bit.ly/2sMBf2t

28 SAP HANA Enterprise Cloud (HEC) Services Documentation: https://www.sap.com/about/agreements/policies/hec-services.html?tag=language:english

29 SAP Support Strategy, Maintenance 2025: https://support.sap.com/en/offerings-programs/strategy.html

30 SAP Notes 1872710 – ABAP on HANA sizing report (S/4HANA, Suite on HANA…) and 2610534 – HANA BW Sizing Report (/SDF/HANA_BW_SIZING)

31 Certified and Supported SAP HANA Hardware Directory, Certified Appliances: https://bit.ly/387B1mQ

32 Strategies for migrating SAP systems to Microsoft Azure: https://bit.ly/2rRNTg5

About the Authors

  • Nick Morgan

Nick Morgan is a highly experienced IT infrastructure architect who, for the last 18 years, has focused on architecting solutions for SAP. Since 2017, Nick has worked for Microsoft as part of its Global SAP practice, helping customers move their SAP workloads to Azure. Nick also regularly speaks on behalf of Microsoft at SAP events in the UK.

  • Bartosz Jarkowski

    Bartosz Jarkowski is an SAP Technical Expert with over 12 years of experience working and leading complex technical projects. He has a deep working knowledge of SAP NetWeaver, SAP HANA, and Microsoft Azure. Bartosz works as a trusted advisor at Microsoft, offering thought leadership to global companies to improve the management and resilience of their SAP landscapes.
