Why is Azure the best cloud platform for all SAP workloads?
For most customers considering moving SAP to Azure, there will be a number of key considerations:
- Security: Will my data be secure in Azure?
- Scalability: Will Azure have the performance and scalability to run my critical SAP workloads?
- Availability: Will Azure deliver the service levels that my business requires?
- Disaster Recovery: Will Azure deliver the business continuity my organization requires?
- Cloud Adoption: What do I need to do to move to Azure?
- Automation: How can I utilize cloud capabilities such as automation to be more agile?
- Insights and innovation: How can I leverage cloud-native services to enhance SAP?
- Microsoft and SAP Partnership: How closely are Microsoft and SAP aligned?
- Responsibilities: How do my responsibilities change when moving to Azure?
In this section we will look at each of these topics in turn. If you are already running other workloads in Azure then it is likely that you will already have addressed some of these topics, in which case please skip over those and jump to the topics that remain a concern.
Azure compliance and security
Compliance and security are normally the two main concerns for organizations when considering moving to the cloud; will I be able to comply with legal and industry-specific requirements, and will my data be safe? Because of this, these were also two of the main concerns for Microsoft when developing Azure.
When it comes to compliance Microsoft has the greatest number of compliance certifications of any public cloud, providing customers with the assurance they require to run their IT systems in Azure. Microsoft Azure has more than 50 compliance certifications specific to global regions and countries, including the US, the European Union, Germany, Japan, the United Kingdom, India, and China. In addition, Azure has more than 35 compliance offerings specific to the needs of key industries, including health, government, finance, education, manufacturing, and media. Full details are available on the Microsoft Trust Center5.
When it comes to security there are still a lot of common misconceptions. The cloud does not mean that all your applications and data need to be exposed to the public internet; on the contrary, most customers running applications in Azure will access them through private networks, either via Virtual Private Networks (VPNs) or, for business-critical applications such as SAP, by connecting their Wide Area Network (WAN), such as Multiprotocol Label Switching (MPLS), to Azure via ExpressRoute. Essentially you can connect to systems and data in Azure in the same way that you connect to them today, whether they run on-premises, in a colocation (colo) data center, in a managed hosting environment, or in a private cloud.
Of course, you can provide public internet access to your systems in Azure when required, in a similar way to how you would when not in the cloud. You will typically create a Demilitarized Zone (DMZ) in Azure, protected by firewalls, and only expose those external-facing services that need to be accessed from untrusted networks such as the internet. Many customers, when starting to migrate workloads to Azure, will in fact use their existing DMZ to provide external access, and simply route traffic via their WAN to the workloads running in Azure.
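To make the private connectivity model concrete, the following Azure CLI commands sketch how an ExpressRoute circuit and gateway might be provisioned. This is a minimal sketch, not a complete design: the resource names, provider, peering location, and bandwidth are hypothetical, and the connectivity provider must complete the circuit on their side.

```
# Create an ExpressRoute circuit (provider, location, and bandwidth are illustrative)
az network express-route create \
  --name sap-er-circuit \
  --resource-group sap-network-rg \
  --location westeurope \
  --bandwidth 1000 \
  --provider "Equinix" \
  --peering-location "Amsterdam" \
  --sku-family MeteredData \
  --sku-tier Premium

# Create an ExpressRoute gateway in the hub virtual network
az network vnet-gateway create \
  --name sap-er-gateway \
  --resource-group sap-network-rg \
  --vnet sap-hub-vnet \
  --gateway-type ExpressRoute \
  --sku Standard \
  --public-ip-address sap-er-gateway-pip
```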
Many people are also concerned about the security of their data in Azure: where will their data be located, and who has access to it? When it comes to data residency, with Azure IaaS it is you who decides in which Azure Regions you want to deploy your workloads, and to where, if anywhere, that data will be replicated.
One of the currently unique features of Azure is that, with a couple of minor exceptions, Microsoft builds its Azure Regions in pairs. Each pair of regions is within the same geopolitical area, so that if your business requires true Disaster Recovery you can replicate your data between two Azure Regions without needing to go outside your chosen geopolitical area.
In North America there are currently eight general-purpose regions, along with additional regions for the US Government and US Department of Defense (DoD); Canada has East and Central, Europe West and North, the UK South and West, Asia East and Southeast, and so on. For some organizations this is essential if they are to be able to use the public cloud.
Once you have chosen the Azure regions in which you wish your data to reside, the next question is: how secure is that data? All data stored in Azure is encrypted at rest by Azure Storage Service Encryption (SSE). Storage accounts are encrypted regardless of their performance tier (standard or premium) or deployment model (Azure Resource Manager or classic). All Azure Storage redundancy options support encryption, all copies of a storage account are encrypted, and all Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables, along with all object metadata.
The default for Azure Storage Service Encryption is to use Microsoft-managed keys, and for many customers this may meet your requirements. However, for certain data and in certain industries it may be required that you use customer-managed keys, and this is supported in Azure. These customer-managed keys can be stored in the Azure Key Vault, and you can either create your own keys and simply store them in Key Vault, or use the Azure Key Vault API to generate the keys.
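A minimal Azure CLI sketch of the customer-managed key approach follows; the vault, key, and storage account names are hypothetical, and in practice the storage account's managed identity must also be granted access to the vault.

```
# Create a Key Vault and a customer-managed key (names are illustrative)
az keyvault create \
  --name sap-cmk-vault \
  --resource-group sap-security-rg \
  --location westeurope

az keyvault key create \
  --vault-name sap-cmk-vault \
  --name storage-cmk \
  --kty RSA

# Give the storage account an identity, then point it at the key
az storage account update \
  --name sapdatastore \
  --resource-group sap-security-rg \
  --assign-identity

az storage account update \
  --name sapdatastore \
  --resource-group sap-security-rg \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://sap-cmk-vault.vault.azure.net \
  --encryption-key-name storage-cmk
```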
On top of Azure Storage Service Encryption, you can choose to use Azure Disk Encryption to further protect and safeguard your data to meet your organizational security and compliance commitments. It uses the BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and data disks of Azure virtual machines (VMs). It is also integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets, and it ensures that all data on the VM disks is encrypted at rest while in Azure Storage. This combination of Azure Storage Service Encryption by default with the optional Azure Disk Encryption should meet the security and compliance needs of all organizations.
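As a hedged illustration, enabling Azure Disk Encryption on an existing VM can be as simple as the following; the VM and vault names are hypothetical, and the Key Vault must first be enabled for disk encryption.

```
# Allow the vault to be used for disk encryption
az keyvault update \
  --name sap-cmk-vault \
  --resource-group sap-security-rg \
  --enabled-for-disk-encryption true

# Encrypt both the OS and data disks of a VM
az vm encryption enable \
  --resource-group sap-prod-rg \
  --name sap-app-vm01 \
  --disk-encryption-keyvault sap-cmk-vault \
  --volume-type ALL
```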
In addition to Storage and Disk Encryption you may wish to consider database-level encryption. All the SAP-supported Database Management Systems (DBMS) support some form of encryption: IBM DB2, Microsoft SQL Server, Oracle Database, SAP ASE, and SAP HANA. The exact details vary by DBMS, but these are the same capabilities that are available in on-premises environments. One point to note is that it is generally not recommended to combine Azure Disk Encryption with DBMS encryption, as this may impact performance.
Then there is the matter of encryption in transit. Traffic between the SAP application server and the database server can be encrypted using either Secure Sockets Layer (SSL) or Transport Layer Security (TLS), depending on the DBMS and version. In general TLS is the preferred solution, as it is newer and more secure; however, older DBMS versions may only support SSL. This may be a reason to consider a DBMS upgrade as part of the migration, which will normally bring other benefits such as enhanced HA and DR capabilities as well.
If you are using SAP GUI then you will also want to encrypt traffic between SAP GUI and the SAP application server. For this you can use SAP Secure Network Communications (SNC). If you are using Fiori, then you should use HTTPS to secure the network communication. Many of you will already be using this today, so this should not be anything new.
Finally it is worth mentioning how Azure Active Directory (AAD) can be used to provide Identity and Access Management (IAM) for SAP applications. Whether a user is using the traditional SAP GUI or more modern web-based user interfaces such as SAP Fiori, AAD can provide Single Sign-On (SSO) for these applications6. In fact AAD provides SSO capabilities for most SAP applications including Ariba, Concur, Fieldglass, SAP Analytics Cloud, SAP Cloud for Customer (C4C), SAP Cloud Platform, SAP HANA, and SuccessFactors. Many organizations are already using AAD to control access to solutions such as Office 365, and if it is already in place, extending its use to provide SSO across the SAP portfolio is very easy. This means not only can you use SSO, but you can also leverage other AAD capabilities such as Conditional Access (CA) and Multi-Factor Authentication (MFA).
Azure scalability
For those with large SAP estates, and particularly some very large instances, there may be concern about whether Azure can scale to meet your needs. This is a fair question, as in the early days of Azure the IaaS service was primarily aimed at supporting smaller workloads or new cloud-native applications that are typically horizontally scalable. These applications require many small VMs rather than a few large VMs. Even in the world of SAP, the application tier can scale horizontally, but in general terms the database tier needs to scale vertically; while horizontally scalable database technologies such as Oracle Real Application Clusters (RAC) and SAP HANA Scale-Out are supported with SAP, they don't suit all SAP applications.
SAP has been officially supported on Azure since May 2014, when SAP Note 1928533 was released with the title "SAP Applications on Azure: Supported Products and Azure VM types." To be fair, in the first version of that note support was fairly limited: Microsoft SQL Server 2008 R2 or higher running on Microsoft Windows Server 2008 R2 or higher, with a single supported Azure VM type of A5 (2 CPU, 14 GiB RAM, 1,500 SAPS). However, over the following months and years support was rapidly added for a complete range of DBMSes and OSes, along with a wide range of Azure VM types. Now all the major DBMSes are supported on Azure for SAP, on the Microsoft Windows Server, Oracle Linux (Oracle Database only), Red Hat Enterprise Linux, and SUSE Linux Enterprise Server operating systems7. As for supported VMs, these now scale all the way from the original A5 to the latest M208ms_v2 (208 CPU, 5.7 TiB RAM, 259,950 SAPS), with the M416ms_v2 (416 CPU, 11.7 TiB RAM) announced and due for release before the end of 2019. Are you still concerned about scalability?
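If you want to check what is available in a given region today, the Azure CLI can list the supported VM sizes; the region below is just an example.

```
# List all VM sizes available in a region
az vm list-sizes --location westeurope --output table

# Filter the SKU list to the M-series used for the largest SAP HANA VMs
az vm list-skus --location westeurope --size Standard_M --output table
```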
The next significant announcement came in September 2016, when Microsoft first announced the availability of Azure SAP HANA Large Instances (HLI). These are physical servers (sometimes referred to as bare metal servers) dedicated to running the SAP HANA database. As with VMs, HLI have continued to grow over the years, and Microsoft now has SAP HANA Certified IaaS Platforms up to 20 TiB scale-up and 60 TiB scale-out. Under the SAP HANA Tailored Datacenter Integration (TDI) Phase 5 rules, scale-up to 24 TiB and scale-out to 120 TiB are supported. In essence, Azure can support the same scale-up and scale-out as customers can achieve on-premises, because HLI uses the same class of servers that a customer could buy, but hosted and managed by Microsoft within an Azure data center.
Some customers do question how a physical server can really be considered a cloud solution. The reality is that physical servers will generally be one step ahead of any virtual servers, because the physical server comes first: any hypervisor developer needs new servers with more CPU and memory before they can test a new, more scalable hypervisor. In addition, with only a very small number of customers requiring such large and scalable systems, the economics of globally deploying hundreds or thousands of these servers in advance, in the hope that customers come, simply do not stack up. Probably less than 1% of SAP customers globally require such large systems.
Ultimately these HLI systems address the edge case of the largest global SAP customers, meeting their needs for massive scalability, and enabling them to migrate the whole of their SAP estate to Azure. Without HLI they could not do this. For most customers one or two HLI will be used for their largest scale-up workload, normally ECC or S/4HANA, and their largest scale-out workload, normally BW or BW/4HANA.
In conclusion, whether you are running traditional SAP NetWeaver applications such as ECC, CRM, SRM, and so on, AnyDB (SAP's collective term for IBM DB2 UDB, Microsoft SQL Server, Oracle Database, SAP ASE, SAP HANA, SAP liveCache, and SAP MaxDB), or the latest S/4HANA and BW/4HANA, Azure has the scalability to meet your needs.
System availability
The third key concern of most people when considering the migration of SAP to Azure is availability.
Azure has been designed with availability in mind and there are multiple ways to provide HA for VMs in Azure.
At its simplest every VM in Azure is protected by Azure Service Healing, which will initiate auto-recovery of the VM should the host server have an issue. All VMs on the failing host will automatically be relocated to a different healthy host. The SAP application will be unavailable while the VM restarts, but typically this will complete within about 15 minutes. For some people this may be adequate, and many may already be familiar with and using this sort of recovery, as most hypervisors provide a similar capability in the on-premises environment.
The second Azure solution for HA is availability sets8, which ensure that where two or more VMs are placed in the same availability set, these VMs will be isolated from each other.
Availability sets leverage two other Azure features as shown in Figure 1-1:
Figure 1-1: Azure fault and update domains
- Fault domains (FD) define a group of physical hosts that share a common power source and network switch. Potentially all the hosts within a fault domain could go offline if there is a failure of the power supply or network switch.
- Update domains (UD) define a set of hosts within a fault domain that may be updated at the same time, which could in some cases require the VMs to reboot.
When you create VMs in an availability set they will be placed in separate fault domains and update domains, to ensure that you will not lose all the VMs to a single fault or enforced host reboot. Because there are only a finite number of fault and update domains, once they have all been used the next VM created in the availability set will have to be placed in the same fault and update domain as an existing VM.
When using availability sets the recommended solution is to create a separate availability set for each tier of each SAP application: the database tier, the (A)SCS tier, the application tier, and the web dispatcher tier. This ensures that two database VMs in a cluster, or the (A)SCS and ERS in a cluster, will be placed in separate update domains in separate fault domains. If you mix all the tiers together, then as soon as the total number of VMs exceeds the number of fault domains some VMs must share a fault domain, and there is a risk that these resilient pairs of VMs will be co-located in the same fault and/or update domain. The Azure CLI sketch below shows the per-tier approach.
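This is a minimal, hedged example: all names, domain counts, and the VM image and size are hypothetical, and the image URN should be verified with az vm image list.

```
# One availability set for the database tier (names and counts illustrative)
az vm availability-set create \
  --name sap-prd-db-avset \
  --resource-group sap-prod-rg \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5

# Place both database cluster nodes in the database tier's availability set
for vm in sap-prd-db01 sap-prd-db02; do
  az vm create \
    --resource-group sap-prod-rg \
    --name "$vm" \
    --image SUSE:SLES-SAP:12-sp4:latest \
    --size Standard_E64s_v3 \
    --availability-set sap-prd-db-avset \
    --admin-username azureuser \
    --generate-ssh-keys
done
```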
The final Azure solution for HA is availability zones, which are currently available in some but not all Azure regions. Where they are available, each region has three zones, with each zone having one or more separate data centers, and each data center having its own power, cooling, and network. This is shown in Figure 1-2:
Figure 1-2: Azure availability zones
Availability zones provide a higher level of isolation than availability sets, as an availability set may be deployed within a single data center, whereas availability zones span multiple data centers. With availability sets, if something impacts the whole data center then potentially all the VMs will be lost; with availability zones, multiple data centers would need to be affected before all the VMs were lost.
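In a minimal hedged sketch, deploying across zones is a matter of specifying a zone per VM; the names are hypothetical and the chosen region must support availability zones.

```
# Deploy the two database cluster nodes into different availability zones
az vm create --resource-group sap-prod-rg --name sap-prd-db01 \
  --image SUSE:SLES-SAP:12-sp4:latest --size Standard_M64s \
  --zone 1 --admin-username azureuser --generate-ssh-keys

az vm create --resource-group sap-prod-rg --name sap-prd-db02 \
  --image SUSE:SLES-SAP:12-sp4:latest --size Standard_M64s \
  --zone 2 --admin-username azureuser --generate-ssh-keys
```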
Based on these different solutions Microsoft Azure provides a Service Level Agreement (SLA) for Virtual Machines. The following financially backed guarantees are provided10:
- For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%;
- For all Virtual Machines that have two or more instances deployed in the same availability set, we guarantee you will have Virtual Machine Connectivity to at least one instance at least 99.95% of the time;
- For all Virtual Machines that have two or more instances deployed across two or more availability zones in the same Azure region, we guarantee you will have Virtual Machine Connectivity to at least one instance at least 99.99% of the time.
It is important to note that this is an infrastructure SLA and is not the same as an application SLA. For example, if a virtual machine is restarted, then as soon as the VM has restarted and a user can connect to it at operating system level, the downtime is considered finished. The application itself may take several more minutes to start, and a VM running a large HANA database could take many tens of minutes to fully reload memory.
So how do we use these availability options with SAP? For some customers Azure Service Healing will be sufficient to meet their HA needs as it will reboot VMs in the event of a host failure. In fact Azure Service Healing offers more than this, as it uses Machine Learning to predict potential hardware failures and will try to live migrate11 VMs prior to a failure occurring to avoid any downtime. If your current on-premises landscape is already virtualized you may be relying on simple VM auto-restart to provide HA today, in which case Azure Service Healing offers better functionality than you currently have. However, it is important to know that Azure will very occasionally need to reboot VMs to make certain updates or patches, and while customers will be given warning of this and the opportunity to perform their own planned reboot, if this is not completed before the published deadline then your VMs will undergo a forced reboot.
If your SAP system is considered business-critical then it is very likely that you currently use some form of clustering to protect the database layer and the SAP (A)SCS (ABAP SAP Central Services/SAP Central Services for Java), and you may want to continue with this in Azure. There are some differences between clustering in an on-premises environment and clustering in Azure, but importantly clustering is fully supported.
The first difference is that Azure does not currently support shared disk storage for VMs, whether block storage or file storage. This means that DBMS clustering solutions that rely on shared storage cannot be supported in Azure. However, SAP with its SAP HANA database has popularized the approach of shared-nothing clustering, where synchronous DBMS replication is used to keep two database instances fully synchronized, allowing failover between a primary and its standby to happen in seconds.
In fact, this type of replication is not new: Microsoft introduced database mirroring with SQL Server 2005 and has continued to enhance the capability into what is now called SQL Server Always On Availability Groups. Similar solutions are also available for IBM DB2 LUW, Oracle Database, and SAP ASE, so all the mainstream-supported SAP DBMSes can provide a similar capability.
The second difference is that traditional operating system clusters generally rely on virtual IP addresses that can be migrated with a service when they are failed over from one machine to another. This type of floating virtual IP address is not supported in Azure. However, the functionality can be replaced by using Azure Load Balancers in front of each clustered service so that any process communicating with the clustered service always uses the Azure Load Balancer IP address, and in turn the Azure Load Balancer will route traffic to the currently active service. As with shared storage, there is a solution in Azure that meets the requirements to create a clustered solution.
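The following hedged Azure CLI sketch shows a standard internal load balancer fronting a two-node database cluster; the names, IP address, and probe port are hypothetical, and the probe port must match the port configured in the cluster software.

```
# Internal standard load balancer with a static frontend IP for the clustered service
az network lb create \
  --resource-group sap-prod-rg \
  --name sap-prd-db-ilb \
  --sku Standard \
  --vnet-name sap-prod-vnet \
  --subnet sap-db-subnet \
  --frontend-ip-name db-frontend \
  --private-ip-address 10.1.2.10 \
  --backend-pool-name db-backend

# Health probe that only the active cluster node answers
az network lb probe create \
  --resource-group sap-prod-rg \
  --lb-name sap-prd-db-ilb \
  --name db-probe \
  --protocol Tcp \
  --port 62500

# HA-ports rule: forward all ports to whichever node answers the probe
az network lb rule create \
  --resource-group sap-prod-rg \
  --lb-name sap-prd-db-ilb \
  --name db-ha-ports \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name db-frontend \
  --backend-pool-name db-backend \
  --probe-name db-probe \
  --floating-ip true
```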
While Azure provides the infrastructure and components to build a clustered solution, the actual responsibility for detecting a fault and initiating failover of a service, be it the database or SAP central services, falls to the operating system. All the supported operating systems can provide this capability: Microsoft Windows Server with Windows Server Failover Clustering, Red Hat Enterprise Linux and SUSE Linux Enterprise Server with Pacemaker, and Oracle Linux with the third-party product SIOS LifeKeeper. While there are some differences in capability, all these clustering solutions have been enhanced to operate in Azure.
In conclusion, it is possible to build the same type of highly available clustered solution for SAP in Azure as has historically been built on-premises. Some customers will deploy a range of solutions based on the criticality of individual SAP applications, rather than simply applying the same solution to every application.
Business continuity/disaster recovery
While High Availability addresses the failure of infrastructure within an Azure region, most customers for whom SAP is business-critical will also want a solution for Disaster Recovery (DR) should an entire Azure region fail. While you may consider the loss of an entire Azure Region highly unlikely, it is nonetheless possible, and it is important to plan for what to do in such an event.
To support DR, with one or two exceptions Microsoft builds Azure Regions in pairs. These pairs of regions are in the same geopolitical area, allowing you to ensure that your data is kept within a particular jurisdiction. Azure is currently the only hyperscale cloud vendor to do this. Certain Azure services are inherently linked to these paired regions, such as geo-redundant storage (GRS). In addition, Microsoft will only ever perform maintenance on one region within a pair to ensure that should the maintenance cause any problems and customers need to failover workloads to the other region in the pair then that region will not be undergoing any maintenance.
At a minimum for DR you should plan to replicate backups to another Azure region. This will ensure that you have a copy of all data in an off-site location. It may take days to rebuild the infrastructure and restore data from backup, but at least the data is secure. Most customers pursuing this approach will build a minimal landing zone in the second region, with basic networking and security in place, ready to start the rebuild process. This is not an ideal solution, but where cost is critical, and where your business can survive without SAP for a number of days, it may be adequate.
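In its simplest form this can mean little more than writing backups to geo-redundant storage; a hedged sketch with hypothetical names follows.

```
# RA-GRS storage account: backups are replicated to the paired region
# and remain readable there even if the primary region is unavailable
az storage account create \
  --name sapbackupstore \
  --resource-group sap-dr-rg \
  --location westeurope \
  --sku Standard_RAGRS \
  --kind StorageV2

az storage container create \
  --account-name sapbackupstore \
  --name db-backups
```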
Where SAP is critical to your business it is likely that you will want a full-blown DR solution, with minimal data loss and the ability to recover service within a few hours. It is quite likely that you have agreements with your business to provide specific Recovery Point Objectives (RPO), which define the acceptable level of data loss, and Recovery Time Objectives (RTO), which define the time allowed to recover SAP. As with high availability, the techniques used for DR in Azure are similar to those routinely used in the on-premises world. In this case you will build the complete foundation services in both regions, so that everything needed to run at least the complete production environment is available in both regions.
The Azure regions are situated sufficiently far apart with the aim of ensuring that no single event will impact both regions in a pair. For example, you may consider 100 miles (160 kilometers) to be a safe distance between two data centers for DR purposes, but if they are both on the same river flood plain, earthquake fault line, or hurricane path, then that is not adequate. Microsoft takes all this into account when deciding where to position regions and which ones to pair. To ensure resilience the regions are typically hundreds of miles apart, which means that only asynchronous replication is supported.
The most important first step is to protect the database, which you can do using DBMS asynchronous replication. The DBMS will replicate the data continuously, but by running asynchronously the latency and throughput of the network will not impact the performance of the primary database. This asynchronous replication can be supported alongside the synchronous replication used for HA, so that HA and DR can coexist.
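With SAP HANA, for example, this is configured through HANA System Replication. The following is a hedged sketch of registering a DR secondary; the host name, instance number, and site name are hypothetical, and the exact hdbnsutil options vary by HANA version.

```
# On the DR-site HANA host, register it as an asynchronous secondary
# (run as the <sid>adm user; all values are illustrative)
hdbnsutil -sr_register \
  --remoteHost=sap-prd-db01 \
  --remoteInstance=00 \
  --replicationMode=async \
  --operationMode=logreplay \
  --name=DR-SITE
```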
For the application tier, an Azure-native service called Azure Site Recovery (ASR) can be used to replicate the app server VMs. This takes regular snapshots of the VMs in the primary region and replicates them to the second region. ASR can provide either crash-consistent or application-consistent snapshots. By default, crash-consistent snapshots are taken every 5 minutes while application-consistent snapshots are taken every 60 minutes, although these defaults can be changed. For the SAP application server VMs, crash-consistent snapshots are adequate as there is no user data that needs to be protected.
ASR uses Recovery Plans to allow you to sequence the way in which VMs are recovered. You can also add your own scripts to a Recovery Plan: a pre-action script, for example, to fail over the DBMS and promote the DR copy to active, and a post-action script, for example, to attach a load balancer to any clustered VMs once they have been failed over.
The final element of the SAP application that needs to be protected is the SAP (A)SCS. If the (A)SCS is part of an HA cluster with the Enqueue Replication Server (ERS) in the primary site, then the two VMs will be using a file share, either SMB for Windows or NFS for Linux, to share files. Because of this, replication with ASR is not currently supported. The recommended solution is to create an (A)SCS VM in the secondary region and replicate any changes on the shared file systems using a scripted copy on Windows or rsync on Linux.
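A hedged sketch of the Linux variant follows; the host names, SID, and instance number are hypothetical, and in practice the copy would run on a schedule, for example from cron.

```
# Replicate the (A)SCS shared directories to the standby VM in the DR region
# (run from the primary (A)SCS VM; all values are illustrative)
rsync -az --delete /sapmnt/PRD/ dr-ascs-vm:/sapmnt/PRD/
rsync -az --delete /usr/sap/PRD/ASCS00/ dr-ascs-vm:/usr/sap/PRD/ASCS00/
```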
Microsoft Cloud Adoption Framework for Azure
If you are already using Azure for other non-SAP workloads then it is likely you have a cloud foundation in place, in which case you can skip this section; but if you are uncertain, or this is your first deployment in Azure, then please read on. The Microsoft Cloud Adoption Framework (CAF) for Azure12 is based on Microsoft's experience of working with a wide range of customers over the last several years as they have deployed workloads into Azure. To quote from CAF: "The framework gives enterprise customers tools, guidance, and narratives that help shape technology, business, and people strategies for driving desired business outcomes during their adoption effort." The CAF builds on earlier Microsoft documentation such as the Azure Enterprise Scaffold13 and the Cloud Operating Model14.
The easiest way to understand the CAF is to think of it as providing the foundation for running your business in Azure. It covers the complete adoption life cycle, from defining your strategy, through planning and readiness, to migration and innovation, and ultimately to management and operation. Like the foundations of any building, it may not be the most exciting aspect of the project, but it is probably the most essential. Put the right foundations in place and it becomes relatively easy to move new workloads to Azure and to ensure that your governance and security policies will be met. Fail to do this and the potential for problems to occur later in the project is high.
The only time you probably don't need to consider CAF is where you are working with a partner who will deliver SAP as a Service (SAPaaS) under the Microsoft Cloud Solution Provider (CSP) model. In this case you will simply consume SAP applications under some form of pay-as-you-go model, and the CSP partner will be responsible for the cloud foundation.
Automation
If you are to gain the full benefits of Azure, then it is essential that you embrace automation. How much automation will depend on your individual circumstances, and finding the right balance is key. At first glance it is easy to look at the Azure Portal and think to yourself, "this makes configuring resources so easy, why would I bother to invest in automation? Provisioning a new VM takes only a few minutes, requires me to answer only a few questions, and the hard work is handled by Azure." However, this overlooks the fact that people are generally poor at achieving repeatability.
In general SAP landscapes involve a relatively small number of relatively large VMs, which tends to feed the assumption that automation is not important. However, anyone with experience of SAP will be familiar with the challenges created by configuration drift between the development, QA/test, pre-production, and production environments. This can make it very difficult to diagnose and debug problems, when you think all the environments are the same but in reality they are not.
From experience working with a number of customers, there are three main approaches to automation:
- Empower the user: Make it simple for the person requesting the resource to provision the resources that they require. This needs to ensure that all the security and governance rules are automatically applied, along with an approval process to limit who can provision what and when. In the world of SAP this will generally provision and configure the infrastructure but leave it to the SAP specialist to install and configure the required SAP software.
For a SAP BASIS administrator to know they can get the resources they need in a matter of a few minutes or even a few hours is a major step forward and supports the move towards a more DevOps culture for SAP development.
- Empower the infrastructure team: Make it even easier for the person requesting the resource, while empowering the infrastructure team to deliver a fully working system. In this scenario the requestor outsources the whole responsibility to the Azure team, who not only use automation to build the infrastructure but also to install and configure the SAP application as far as possible, with potentially a few manual steps. For an SAP developer this takes delivery to the next level as they simply request the application that they want and wait to be told when it is available.
- None of the above: Use minimal or no automation and the infrastructure team simply continues to build every VM manually via the Azure portal, Azure CLI, Azure PowerShell, or Azure Cloud Shell.
Unfortunately, the third option is all too common, and it is what leads to mistakes and configuration drift. There are several reasons for this. First, there are often time pressures to get the infrastructure built as quickly as possible: by the time you have made the decision to move to Azure, most projects are already running behind schedule. Second, you may have no history of using automation in your on-premises world, and the basic Azure tools for manually building infrastructure are probably light years ahead of what you have been used to, so it is tempting just to wade in and start building. Finally, building out the required automation requires you to learn new skills and takes time, providing rewards later in the project life cycle but potentially delaying the start. The answer may be to build the initial VMs manually while developing the automation skills in parallel, so that by the time you get to provisioning pre-production and production, the automation is in place.
The level of automation you should implement will depend on many factors. If SAP is to be your only workload in Azure, and you run a very static environment, then creating some basic ARM templates to improve consistency may meet all your needs. If, however, you are embracing Azure as part of a business transformation program, with a cloud-first strategy and a desire to drive business growth through IT innovation, then it makes sense to invest and build a more sophisticated solution.
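Even basic template-based deployment improves consistency, because the same definition is deployed to every environment. A hedged sketch follows, where the template and parameter file names are hypothetical.

```
# Validate and then deploy the same ARM template to an environment
# (template and parameter file names are illustrative)
az group deployment validate \
  --resource-group sap-dev-rg \
  --template-file sap-app-server.json \
  --parameters @sap-dev.parameters.json

az group deployment create \
  --resource-group sap-dev-rg \
  --template-file sap-app-server.json \
  --parameters @sap-dev.parameters.json
```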
The ultimate goal of automation is to deliver Infrastructure as Code, allowing infrastructure to be delivered quickly and consistently while complying with all the required governance. A number of enterprise customers have totally embraced automation: all requests for services in Azure are initiated through an IT Service Management request, which handles the approvals process, and if approved the required infrastructure is automatically provisioned and configured and the software installed.
The infrastructure is guaranteed to conform to all policies and will be correctly tagged to ensure full financial control. Finally, a configuration management database (CMDB) is updated to track all the assets and software licences to ensure full auditability. For non-production systems the infrastructure may even have a time limit, so that once expired the infrastructure will automatically be deleted again; the requestor will be notified before this and can request an extension if needed.
One of the more recent innovations is Azure Blueprints15. Blueprints provide a declarative way for you to define and deploy a repeatable set of resources that adhere to the requirements, standards, and patterns of your organization. Importantly, Blueprints are stored within Azure, are globally distributed, and can contain Azure resource templates as artifacts. Azure Blueprints is still in preview at the time of writing (October 2019) but should be generally available in the coming months.
If all this sounds quite daunting then there are a number of partners who have developed specific skills around the automation of Azure in general, and some with a particular focus on the automation of SAP in Azure. They can either provide you with automation as a service, whereby you simply consume their tooling to automate deployments into Azure, or in some cases will help you to develop the tooling that you require, and then hand it over to you to maintain and enhance as required.
Insights and innovation
For many organizations the motivation for moving SAP to Azure is part of a wider strategy of IT transformation, aimed at gaining greater insights and using these to drive innovation. While not an essential prerequisite, it makes sense to put SAP, the core system of record, into the same cloud that will be used for insights and innovation.
While most SAP customers have been using SAP BW to support both operational reporting and analytics, with a few exceptions BW has primarily been used for data sourced from other SAP applications. With the rapid growth of Advanced Analytics (AA) and Machine Learning (ML) there is a much greater need to take data from multiple sources, both structured and unstructured, to be used to support AA and ML, and ultimately Artificial Intelligence (AI). Because of the different types of data and the sheer volume, most organizations are looking to do this in hyperscale cloud.
Microsoft Azure has a whole range of native services available to support advanced analytics on big data16, as well as being able to utilize a wide range of third-party solutions running in Azure. A common solution is to use Azure Data Factory (ADF) to extract data from SAP and store it in Azure Data Lake Storage (ADLS), where the data can be prepared and blended ready for analysis using tools such as Databricks. One of the benefits you enjoy is the ability to consume these Azure services as required and only pay for what you use. Many of you are probably still at the experimental stage, and the ability to rapidly stand up a solution in Azure, test it, and then either productionize it or tear it down and start again is a major benefit.
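A hedged sketch of the storage side of such a pipeline follows; the account and container names are hypothetical, ADLS Gen2 is enabled through the hierarchical namespace flag, and the ADF pipelines that load the lake are typically authored in the ADF UI or deployed as ARM templates.

```
# ADLS Gen2 account for the data lake (hierarchical namespace enabled)
az storage account create \
  --name sapdatalake \
  --resource-group analytics-rg \
  --location westeurope \
  --sku Standard_LRS \
  --kind StorageV2 \
  --hns true

# Filesystem (container) for raw SAP extracts
az storage container create \
  --account-name sapdatalake \
  --name raw-sap
```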
When it comes to SAP data there is often tension between the data scientists and the SAP team. The data scientists will generally start from the position of "give me all the data and I will decide what to do with it later," and for some data sources this may be fine. If you are extracting sentiment data from social media, then the data is already public and retrieving and storing it for future analytics may make sense. However, the data inside an SAP system is often more sensitive and is not generally in the public domain, or even widely available within your organization. SAP has a very sophisticated security model that ensures that users can only see the data that they need to see in order to fulfill their role in the organization.
While attitudes will change and evolve over time, the most successful projects currently begin with the end in mind17; that is, they start with a business outcome and work back from that to determine the data required to deliver it. In this way, when it comes to extracting data from SAP there is a clear objective, and you only need to extract the data required to deliver that outcome. In many cases the data required may not be considered highly sensitive, or if necessary sensitive data can be masked in such a way that the objective can still be achieved.
Using these data-driven insights you can start to drive innovation within your business, which is at the top of the agenda for many at the CxO level. With new startup disruptors appearing in many industries, the ability to adapt and innovate has never been more pressing. Most of these startups were born in the cloud and have been leveraging advanced analytics as an integral part of their business model from the start. Traditional organizations must adopt similar strategies, but they have the opportunity to mine the vast amounts of historical data that they hold to gain even greater insights and compete successfully.
Partnership
Microsoft and SAP have a partnership that goes back as far as 1993 and the release of SAP R/3. This early relationship was primarily a technology one, with the two companies working together to ensure that SAP technologies would support and integrate with Microsoft technologies. The relationship really started with SAP R/3 and the SAP GUI for Windows, and then moved forward with support for SAP R/3 running on Microsoft Windows NT Server and Microsoft SQL Server.
The partnership was strengthened when more than 20 years ago Microsoft chose to deploy SAP as the core ERP system to run its business. Microsoft was now both a partner and a customer. Over the years Microsoft's SAP system has grown to be one of the largest SAP ERP systems globally. As of May 2019, the size of this ERP/ECC system is as follows18:
- 16 TB of compressed database (equivalent of 50 TB uncompressed)
- 110,000 internal users
- 6,000 named user accounts
- 300,000 monitored jobs per month
- Up to 270 million transaction steps per month
- Up to 10 million dialog steps per day
However, ERP/ECC is only one of the SAP applications used by Microsoft; others include E-Recruiting, GRC, GST, CPM, SCM, OER, MDG, and SMG. In total the SAP estate comprises more than 600 servers, which between 2017 and February 2018 were moved entirely to Azure VMs. Microsoft now runs SAP 100% in Azure and is embarking on the first phase of its move to S/4HANA, also now deployed in Azure.
As part of this ongoing partnership, in November 2017 SAP announced19 that it would move some of its key internal business-critical systems to Azure. As part of a further announcement in June 201820 Thomas Saueressig, CIO of SAP, shared an update on progress with this migration:
"In 2017 we started to leverage Azure as IaaS Platform. By the end of 2018 we will have moved 17 systems including an S/4HANA system for our Concur Business Unit. We are expecting significant operational efficiencies and increased agility which will be a foundational element for our digital transformation."
In September 2018 Microsoft, Adobe, and SAP announced the Open Data Initiative (ODI)21 with the objective of unlocking the data held in applications from all three entities, and potentially other third-party software, by combining that data in a data lake where AI can be used to derive insights and intelligence. In an update in March 201922 it was announced that:
"… the three companies plan to deliver in the coming months a new approach for publishing, enriching and ingesting initial data feeds from Adobe Experience Platform, activated through Adobe Experience Cloud, Microsoft Dynamics 365, and Office 365 and SAP C/4HANA, into a customer's data lake. This will enable a new level of AI and machine learning enrichment to garner new insights and better serve customers."
As part of that November 2017 news release the availability of SAP HANA Enterprise Cloud (HEC) on Microsoft Azure was announced, enabling customers that want the SAP-managed cloud service to also leverage Azure hyperscale cloud. This is all part of SAP's wider strategy of making its "as a Service" offerings available on hyperscale cloud. Today this includes Ariba, Data Custodian, C/4HANA, SuccessFactors, and SAP Cloud Platform, all of which are available in certain Azure Regions.
Further strengthening this partnership in February 2019, SAP unveiled SAP Leonardo IoT23 at Mobile World Congress in Barcelona. At the same time SAP announced that SAP Leonardo IoT will interoperate with Microsoft Azure IoT Hub and that SAP Leonardo IoT Edge essential business function (EBF) modules are planned to run in containers on Microsoft Azure IoT Edge.
Further details of this integration are covered in a Microsoft blog post24. Through this interoperability, SAP Leonardo IoT will be able to leverage the market-leading secure connectivity and powerful device management functionality provided by Azure IoT services, and stream data back to SAP's business applications. Running Leonardo EBF modules on Azure IoT Edge keeps key business processes at the edge and avoids issues with network connectivity, latency, and bandwidth.
Most recently in May 2019 SAP announced project "Embrace,"25 a collaboration between SAP, the hyperscale cloud providers, and some of SAP's global strategic service partners (GSSPs). The aim is to help customers move to S/4HANA in the cloud following market-approved journeys, which provide you with a blueprint for your journey towards an intelligent enterprise. At the same time Microsoft announced26 that it is the first global cloud provider to join this project.
Common misconceptions
There are some common misconceptions about the cloud in general and running SAP on Azure in particular. As already mentioned, when you run SAP in Azure you are utilizing Azure IaaS, primarily VMs, storage, and networking. As such it is your responsibility to create the infrastructure, install and configure the SAP software, configure SAP for HA and DR, and manage your own backups. None of this happens automatically.
At the heart of the confusion is how IaaS differs from both PaaS and SaaS. With SaaS in particular you simply consume the application, without any need to install software or concern yourself with matters such as HA, DR, and backup. You will have an SLA for the application, and it is up to the application provider to take the necessary steps to deliver that SLA. Similarly, when consuming PaaS services, the level of availability you require is generally something you choose when initiating the service, and the software provides the capability seamlessly.
As an example of an Azure PaaS service, with Azure SQL Database you can choose different service tiers – general purpose, business critical, and hyperscale – which offer different levels of availability and redundancy. In addition, backups are taken automatically, written to read-access geo-redundant storage (RA-GRS) so that the backups are available even if the region becomes unavailable, and there is an option to configure long-term retention. All of this is handled automatically by Azure with minimal input required once the service has been created.
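As a hedged illustration of how little the consumer has to do with PaaS, the following creates an Azure SQL Database in the business-critical tier and sets a long-term backup retention policy; the server and database names are hypothetical.

```
# Create an Azure SQL Database in the business-critical tier;
# HA, redundancy, and automatic backups are handled by the platform
az sql db create \
  --resource-group analytics-rg \
  --server my-sql-server \
  --name demo-db \
  --edition BusinessCritical \
  --family Gen5 \
  --capacity 4

# Optional long-term backup retention (keep weekly backups for 12 weeks)
az sql db ltr-policy set \
  --resource-group analytics-rg \
  --server my-sql-server \
  --database demo-db \
  --weekly-retention P12W
```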
Further confusion arises from the fact that SAP has its own cloud offering called SAP HEC27. This combines cloud hosting with a range of HEC Standard Services, HEC Optional Services, and HEC Enhanced Managed Services (EMS)28. Even the HEC Standard Service provides most of the day-to-day services required to keep an SAP system running. As described elsewhere, you can now combine these two worlds by choosing HEC on Azure, where SAP provides the HEC Services offering but utilizes Azure IaaS. As an aside, despite the name, HEC allows SAP customers to run any SAP application on any SAP-supported DBMS (HANA, ASE, MaxDB, and so on); it is not exclusively restricted to applications running on HANA.
When you deploy SAP in Azure it does not become either a PaaS or a SaaS offering; you are simply utilizing Azure IaaS. This is further brought home by SAP's recent naming convention, where for SAP S/4HANA you deploy the On-Premises Edition even when the deployment is going into a hyperscale cloud such as Azure, or into SAP's own HEC. S/4HANA Cloud Edition is only available as a Service from SAP and cannot be downloaded and installed elsewhere.
If what you want is a fully managed SAP service – SAP as a Service – then you can either opt for SAP HEC on Azure or work with one of the many partners that Microsoft has that can offer such a service, which includes Global Systems Integrators (GSI), National Systems Integrators (NSI), and local SAP Services Partners.
Conclusion
In this section we have covered a range of subjects related to running SAP on Azure. Hopefully this has addressed any fears or concerns that you may have and helped to convince you that there are no technical limitations to running SAP on Azure. In the next section we will look at how to migrate SAP to Azure.