SAP NetWeaver-based systems
In this section, we will look in more detail at specifics relating to architecting SAP NetWeaver-based systems that run on a non-HANA database, generally now referred to by SAP as AnyDB. These SAP applications use one of the SAP-supported DBMS: IBM Db2, Microsoft SQL Server, Oracle Database, SAP ASE, or SAP MaxDB. With the exception of the details of how each DBMS handles high availability and disaster recovery, all other aspects of the architecture are similar.
Supported platforms
The first thing you need to check when planning a migration of SAP to Microsoft Azure is whether the system is fully supported in Microsoft Azure, or if an upgrade is required as a prerequisite.
SAP Note 1928533 - SAP Applications on Azure: Supported Products and Azure VM types lists the SAP software versions supported to run in Azure. You should always check the latest version of this SAP note for the most up-to-date information, as it changes regularly. As an example, version 100 of the note, from August 2019, lists the following products as supported in Azure:
Supported operating systems and products:
- Microsoft Windows Server 2008 R2, 2012 (R2), 2016, and 2019
- SUSE Linux Enterprise Server (SLES) 12 and higher
- SLES 12 for SAP Applications and higher
- Red Hat Enterprise Linux 7 (RHEL7) and higher
- Red Hat Enterprise Linux 7 for SAP and higher
- Red Hat Enterprise Linux 7 for SAP HANA and higher
- Oracle Linux 7 (OL7)
Supported SAP NetWeaver releases:
- Applications running on the Application Server ABAP as part of SAP NetWeaver 7.0X:
- SAP Kernel 7.21 EXT (min. PL #622)
- SAP Kernel 7.22 EXT (min. PL #112)
- Higher SAP Kernel versions
- Applications running on the Application Server ABAP and/or Java as part of SAP NetWeaver 7.1 or higher:
- SAP Kernel 7.21 EXT (min. PL #622)
- SAP Kernel 7.22 EXT (min. PL #112)
- Applications running on the Application Servers ABAP and/or Java as part of SAP NetWeaver 7.4 or higher:
- SAP Kernel 7.45 (min. PL #111)
- Higher SAP Kernel versions
Supported databases running on Windows:
- Microsoft SQL Server 2008 R2 or higher
- SAP ASE 16.0 SP02 or higher
- IBM Db2 for Linux, UNIX, and Windows 10.5 or higher (see SAP Note 2233094)
- Oracle database; for versions and restrictions, see SAP Note 2039619
- SAP MaxDB version 7.9
- SAP liveCache as part of SAP SCM 7.0 EhP2 (or higher): Minimal version for SAP liveCache: SAP LC/LCAPPS 10.0 SP 27 including liveCache 7.9.08.32 and LCA-Build 27, released for EhP 2 for SAP SCM 7.0 and higher
Note:
SAP liveCache based on SAP MaxDB technology has to run on an Azure VM solely dedicated to SAP liveCache (that is, without any other application software running on this VM).
Supported databases running on Linux:
- SAP HANA 1.0 SP12 and higher, SAP HANA 2.0:
- On Microsoft Azure Large Instances
- On Microsoft Azure Infrastructure as a Service, IaaS (Azure Virtual Machines)
- SAP ASE 16.0 SP02 or higher
- IBM Db2 for Linux, UNIX, and Windows 10.5 or higher
- SAP MaxDB version 7.9.09.05 or higher
- Oracle Database – only on Oracle Linux
- SAP liveCache as part of SAP SCM 7.0 EhP4 (or higher): Minimal version for SAP liveCache: SAP LC/LCAPPS 10.0 SP 34 including liveCache 7.9.09.05 and LCA-Build 34, released for EhP 4 for SAP SCM 7.0 and higher.
Note:
SAP liveCache based on SAP MaxDB technology has to run on an Azure VM solely dedicated to SAP liveCache (that is, without any other application software running on this VM).
Due to Oracle licensing, deployment of the Oracle database or its components is supported only on Windows Server or Oracle Linux. As the SAP application servers use the Oracle Client to connect to the database, they too are supported only when deployed on Windows Server or Oracle Linux.
Sizing SAP systems
General guidance on sizing has already been provided earlier in this chapter. In this section we will focus on specific considerations for SAP NetWeaver-based systems.
CPU and memory
For new SAP deployments, there is no difference in sizing between on-premises and Azure. As described before, you will use a combination of the SAP Quick Sizer and the SAP sizing guidelines to estimate the required CPU and memory based on the number of users and/or transaction volumes. These provide the CPU requirements in terms of SAPS, and you can then compare the sizing with the SAPS values published in SAP Note 1928533 - SAP Applications on Azure: Supported Products and Azure VM types. You may still want to follow the earlier guidance and initially size the VMs tightly to reduce cost, then scale up the VMs if actually required.
When migrating an existing workload from on-premises to Azure, it is recommended to use reference sizing: using the capacity and utilization of the existing installed hardware to calculate the required resources in Azure. Just because the current server has 128 GB of RAM does not mean that is the appropriate amount; it is quite possible that this amount was overallocated and the system has never used all this memory. In such a case, if you simply use the currently allocated memory, you may choose a larger VM than required, which will cost more with no additional benefit in terms of application performance. Similarly for CPU: if the average utilization is low, you can most probably assign a smaller VM type. Usually, peak CPU utilization should not exceed 65%.
Assuming you have access to the current on-premises environment, you can collect performance metrics such as CPU and memory usage, correlate the actual usage data with available Azure VM types, and create appropriately sized configurations, rather than simply provisioning on a like-for-like basis based on currently provisioned capacity.
Again, Azure gives you the opportunity to quickly scale VMs up and down, so if you discover that a VM requires more or fewer resources, you can quickly change the VM size.
Ideally for replatforming you will want the following performance metrics:
- CPU:
- Make and model of processor
- Number of cores
- Processor speed
- Average utilization
- Maximum utilization
- Memory:
- Total available memory
- Average memory consumed
- Maximum memory consumed
As well as the average utilization of the compute resources, you should also look at the peak utilizations. There may be background jobs that are scheduled to run at night or during weekends that require more CPU resources, and during the hours they are running, the CPU utilization may be much higher.
In a few cases, you may find that the existing server is in fact undersized, and that the CPU utilization is regularly much higher than 65%, or that nearly all the memory is being used. In this case you should look to size a larger VM that will bring the CPU utilization back to the target of 65%.
To calculate the required SAPS the following formula can be used:

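As a sketch of the reference-sizing calculation, assuming the utilization-based approach described above (scaling the installed SAPS rating by the measured utilization against the 65% target), the required SAPS can be estimated as follows:

```python
def required_saps(installed_saps: float, measured_peak_utilization: float,
                  target_utilization: float = 0.65) -> float:
    """Estimate the SAPS required in Azure using reference sizing.

    installed_saps: SAPS rating of the current on-premises server
    measured_peak_utilization: peak CPU utilization as a fraction (e.g. 0.40)
    target_utilization: desired peak utilization in Azure (65% by default)
    """
    return installed_saps * measured_peak_utilization / target_utilization

# A server rated at 20,000 SAPS peaking at 40% CPU needs roughly
# 20000 * 0.40 / 0.65 = 12,308 SAPS in Azure.
print(round(required_saps(20000, 0.40)))  # prints 12308
```

The result can then be compared against the SAPS values published for the Azure VM types in SAP Note 1928533 to select a candidate VM size.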
When sizing the application tier, it is also possible to combine multiple application servers into a smaller number that fit more closely to the available VM sizes in Azure. It is likely that your environment has grown over time, and that the number of servers has grown with it. You may have started with two application servers but, after a year, there was a requirement to deploy an extra one. Moving to Azure is a good opportunity to redesign this outdated landscape.
Instead of provisioning a large number of application servers that all have to be updated and managed, you can decrease the number of servers, making each one larger and assigning more work processes to it. You benefit from lower administration costs while performance stays the same.
The following table shows how you can model the Azure resources based on the existing platform data. You should always take into account the current hardware utilization, as the infrastructure is very often oversized and doesn't reflect the flexibility that comes with the cloud:
| Qty (on-premises) | vCPU | CPU utilization | RAM | RAM usage | Qty (Azure) | VM type | vCPU | RAM |
|---|---|---|---|---|---|---|---|---|
| 1 | 4 | 65% | 16 GB | 80% | 1 | D4s_v3 | 4 | 16 GB |
| 1 | 16 | 20% | 64 GB | 70% | 1 | E8s_v3 | 8 | 64 GB |
| 1 | 8 | 40% | 64 GB | 80% | 1 | E16s_v3 | 16 | 128 GB |
| 1 | 8 | 50% | 64 GB | 60% | | | | |
| 3 | 12 | 50% | 96 GB | 60% | 2 | E16s_v3 | 16 | 128 GB |
Table 2-3: Mapping existing servers to Azure VMs
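The mapping above can be automated as a simple lookup. The following is a minimal sketch assuming a hypothetical, abbreviated VM catalog; the real catalog and SAPS ratings should always come from SAP Note 1928533:

```python
# Illustrative extract of a VM catalog: (name, vCPU, RAM in GB).
# This list is an assumption for the example, not the full certified list.
CATALOG = [
    ("D4s_v3", 4, 16),
    ("E8s_v3", 8, 64),
    ("E16s_v3", 16, 128),
    ("E32s_v3", 32, 256),
]

def map_to_vm(vcpu: int, cpu_util: float, ram_gb: int, ram_usage: float):
    """Return the smallest catalog VM covering the actual usage.

    CPU is scaled to a 65% peak-utilization target; memory is sized
    for what is actually consumed rather than what is allocated.
    """
    needed_vcpu = vcpu * cpu_util / 0.65
    needed_ram = ram_gb * ram_usage
    for name, cores, ram in CATALOG:
        if cores >= needed_vcpu and ram >= needed_ram:
            return name
    return None  # no catalog entry is large enough

# 16 vCPU at only 20% busy and 64 GB at 70% usage fits a smaller VM.
print(map_to_vm(16, 0.20, 64, 0.70))  # prints E8s_v3
```

This reproduces the second row of Table 2-3, where an underutilized 16-vCPU server maps to an 8-vCPU E8s_v3.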
A full list of SAP-certified Azure VMs is available in SAP Note 1928533 - SAP Applications on Azure: Supported Products and Azure VM types. An extract of that list is included earlier in this chapter.
Storage sizing
Together with the basic compute resources such as vCPU and memory, you should look at the storage layout. In on-premises environments, storage is considered as the amount of space available to an application and as one of the performance factors - faster disks process more information in the same time. In the cloud, the disk type also influences the availability of the VM. Next, you'll find information on how to correctly plan the storage for application servers and databases.
Application servers
SAP application servers do not require high-performance disk storage, as they do not generate a large number of IOPS or high throughput, so assigning a disk that supports these is unnecessary. However, application servers with premium SSD managed disks benefit from higher availability: the 99.9% SLA for a single VM is only valid when the VM uses premium managed disks.
For VMs equipped with standard storage, either standard HDD or standard SSD, Microsoft does not offer any availability guarantee. To save cost, standard SSD disks can be used for non-production workloads, while for production the recommendation is to always use premium SSD managed disks.
It's recommended that each application server VM has one disk for the operating system and a second disk for the SAP executables, rather than installing the executables on the OS disk. For a Windows VM, the storage layout may look like this:
| Drive letter | Type | Disk size |
|---|---|---|
| C: | OS disk | P10 |
| D: | Temporary disk - page file | n/a |
| E: | SAP binaries | P6 |
The temporary disk is the recommended place to host the Windows page file or Linux swap file. If you use an Azure gallery image to deploy Windows, the page file is automatically placed on the temporary disk, but for Linux images the swap file is not automatically placed on the temporary disk; this needs to be configured manually.
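On Linux gallery images, the swap file on the temporary disk is typically configured through the Azure Linux agent. A minimal sketch of the relevant `/etc/waagent.conf` settings (the swap size shown is illustrative; align it with your sizing):

```
# /etc/waagent.conf - let the Azure Linux agent manage the resource (temporary) disk
ResourceDisk.Format=y          # format and mount the temporary disk
ResourceDisk.EnableSwap=y      # create a swap file on the temporary disk
ResourceDisk.SwapSizeMB=2048   # illustrative size, adjust to your requirements
```

After changing these settings, the agent service needs to be restarted for the swap file to be created.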
The disk caching for the SAP application server disks should be set to read-only.
Database servers
Database performance is usually highly dependent on the underlying storage. For new workloads on Azure, the number of IOPS and the throughput should come from the sizing estimates. In migrations, the storage performance requirements can be obtained either from the underlying storage platform, where performance monitoring tools are available, or from the database itself.
To achieve higher performance, multiple disks can be combined into logical volumes. The volume performance is then the sum of the IOPS and throughput of the individual disks. By combining three P30 disks, as in the example below, you create a single volume that offers 15,000 IOPS and 600 MB/s throughput:
| Azure disk | IOPS | Throughput | Logical volume | IOPS | Throughput |
|---|---|---|---|---|---|
| P30 | 5,000 | 200 MB/s | LV | 15,000 | 600 MB/s |
| P30 | 5,000 | 200 MB/s | | | |
| P30 | 5,000 | 200 MB/s | | | |
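Under the assumption that striping distributes I/O evenly so that the per-disk limits add up linearly, a planned layout can be checked with a quick sketch (the helper below is illustrative, not an Azure API):

```python
def striped_volume_performance(disks):
    """Sum per-disk IOPS and throughput (MB/s) for a striped logical volume.

    Assumes striping spreads I/O evenly across disks, so the limits
    add up linearly. Each disk is a (name, iops, mbps) tuple.
    """
    total_iops = sum(iops for _, iops, _ in disks)
    total_mbps = sum(mbps for _, _, mbps in disks)
    return total_iops, total_mbps

# Three P30 disks at 5,000 IOPS / 200 MB/s each:
print(striped_volume_performance([("P30", 5000, 200)] * 3))  # prints (15000, 600)
```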
Caching should be set to read-only for the data disks, while the operating system disk uses read/write caching. No caching should be used for the log disks.
For DBMSs other than HANA, Microsoft only provides high-level guidance on storage layouts and sizing the storage layer. The final solution will depend on the choice of DBMS and the size and throughput of the database, and this will vary for each SAP application and database size. To achieve the best performance, the data volume should be built from at least three P20 disks and the log area from two P20 disks.
The table below shows a sample storage layout for database workloads. The database executables are stored on a single P6 disk to avoid installing software on the operating system drive. The data volume is built from three P30 disks that together offer 3 TB of space and high throughput. In addition, performance benefits from read caching on the data disks, which is not allowed for the log area; the log therefore requires a separate volume built from disks with caching disabled:
| Disk | Disk type | Caching | Volume |
|---|---|---|---|
| OS disk | P10 | Read/write | C: |
| Temp disk | n/a | n/a | D: |
| Exe disk | P6 | Read-only | E: |
| Data disk | P30 | Read-only | F: |
| Data disk | P30 | Read-only | |
| Data disk | P30 | Read-only | |
| Log disk | P20 | No caching | G: |
| Log disk | P20 | No caching | |
For sizing storage for SAP HANA databases, please refer to the section SAP HANA sizing.
Network sizing
The network can be described using two main metrics:
- Throughput: Represents the amount of data transferred in a given time. For example, a network throughput of 1 Mbps means you can transfer one megabit of data every second.
- Latency: Represents how much time it takes to transfer the data to its destination. In SAP, we always look at the round trip, meaning the latency is the time required to transfer the data and receive acknowledgment that it was received.
The network throughput inside a virtual network is always limited by the virtual machine size. Larger VMs support a higher network throughput.
Latency is more complex to estimate. During the deployment of virtual machines, you cannot control which physical host they will be provisioned on. It can happen that two VMs in a virtual network are placed at opposite ends of a data hall. As each network packet is routed through many network switches, the latency can be higher. You should always use the NIPING tool to measure the latency between servers, as ping won't return correct results when accelerated networking is enabled.
The placement of VMs can be partially controlled using Proximity Placement Groups (PPGs). When VMs are deployed into a PPG, Azure attempts to locate them as close to each other as possible and to limit the number of network hops during communication. PPGs should generally be used per SAP system, rather than deploying multiple SAP systems within a single PPG, although if two SAP systems are closely coupled, with a high level of real-time interaction, it may be desirable to place them within the same PPG.
When using a PPG, always deploy the scarcest resource first, and then add the other VMs. For example, in most Azure regions, M-series VMs, and now Mv2-series, will be the scarcest resource and may only exist in one data hall, or within one Availability Zone in a region that supports Availability Zones. Creating the M/Mv2-series VM first pins the PPG to a specific data hall, and the more common VM types can then be provisioned in the same data hall. If you don't need an M/Mv2-series VM today but expect to need to scale up to one later, then deploy a temporary M/Mv2-series VM first, deploy the other VMs, and finally delete the M/Mv2-series VM. Your Proximity Placement Group will now be pinned to the data hall where the M/Mv2-series VMs exist.
Latency is particularly important in internal communication between the database and application server, where SAP Note 1100926 – FAQ: Network performance recommends the following:
- Good value: roundtrip time <= 0.3 ms
- Moderate value: 0.3 ms < roundtrip time <= 0.7 ms
- Below average value: roundtrip time > 0.7 ms
The goal should be to achieve a roundtrip time below 0.7 ms.
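These thresholds can be expressed as a small helper for evaluating NIPING measurements (a sketch; the category names mirror the values in the SAP note):

```python
def classify_roundtrip(ms: float) -> str:
    """Classify application-server-to-database roundtrip time (in ms)
    against the thresholds from SAP Note 1100926."""
    if ms <= 0.3:
        return "good"
    if ms <= 0.7:
        return "moderate"
    return "below average"

print(classify_roundtrip(0.5))  # prints moderate
```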
Further latency optimization can be achieved by using Azure Accelerated Networking, which is a mandatory setting for all SAP workloads in Azure. With Accelerated Networking, the VM communicates directly with the physical network and bypasses the virtual switch in the hypervisor, resulting in much lower latency between hosts:

Figure 2-24: Accelerated networking (source: Microsoft.com)
It's also important to correctly size the network between the on-premises data center and the Azure data center. In most cases you will not move all your workloads from on-premises to Azure in a single "big bang" migration, but will migrate workloads one or a few at a time. Even when just migrating SAP to Azure, it is common to migrate a few SAP applications at a time, creating move groups of logically connected applications. Therefore, in sizing the network between on-premises and Azure you will need to consider:
- User traffic between your users and workloads that have been migrated to Azure. The number of concurrent users and the protocol used are both factors that should be considered when planning the network.
- Application traffic between applications still running on-premises and applications already migrated to Azure.
Each SAP application has certain requirements for network bandwidth and latency. You should always refer to the relevant SAP Notes and plan the network segments to fulfil these requirements. The system access method is also important: SAP Fiori communication, which is based on the HTTP protocol, requires much more throughput per user than access using SAP GUI.
System deployment
When deploying an SAP NetWeaver-based system in Azure, you have the same options as on-premises:
- A standalone installation, where the database, central services instance, and the dialog instance are kept on the same host
- A distributed installation, where each component is installed on separate VMs
- A highly available installation, which prevents unplanned downtime due to redundancy of components
The deployment method influences the performance, availability, and cost of running SAP in Azure.
Standalone installation
Keeping all SAP components on a single VM is the easiest way to install an SAP system. The database, central services instance, and the dialog instance are deployed to a single VM. Such a deployment is often called two-tier. While the installation process is simplified, such systems are difficult to scale. All components share the virtual machine resources, and the server has to be capable of processing both database and application server workloads. This can cause problems with tuning the database and application servers, as memory is shared between the two components.
The maximum performance of a two-tier system is limited by the size of a single virtual machine. It's suitable for smaller installations, but not recommended for large systems.
The standalone installation can be expanded by deploying an additional application server to another VM:

Figure 2-25: Standalone installation
The standalone system shares the virtual machine resources across all components. The system availability is limited by the SLA of a single VM.
Distributed installation
It is possible to separate each SAP NetWeaver component and distribute the workload across multiple VMs. The installation process is more complex, as each component has to be individually provisioned. Internal communication within the SAP NetWeaver system becomes an important factor: the ports between the application server, the database, and the central services instance must be open, and network performance must be sized accordingly.
It's possible to stack components together. The central services instance doesn't consume many resources, so it's common practice to deploy it together with either the database or the primary application server.
As specified in SAP Note 2731110 - Support of Network Virtual Appliances (NVA) for SAP on Azure, you must not deploy a network virtual appliance (NVA) in the communication path between the SAP application server and the database server. This restriction does not apply to Application Security Group (ASG) and Network Security Group (NSG) rules, as long as they allow direct communication.
Figure 2-26 shows the database, central services, and primary application server distributed across three virtual machines.

Figure 2-26: Distributed installation
If you want to make your SAP systems highly available, then you need to use a distributed installation, and we will now look at this in more detail.
Highly available installation
When you cannot afford any unplanned downtime, it is possible to make each component redundant in order to prevent system unavailability. The database server replicates its data to another VM, the central services instance runs in a failover cluster, and the application workload is distributed across at least two servers.
These highly available systems are more difficult to provision and maintain. Administrators should have prior experience of working with such environments, as a wrong configuration can undermine efforts to increase system availability, potentially even reducing it:

Figure 2-27: Highly available installation
On Linux, high availability can be enabled by configuring the Pacemaker HA extension, and on Windows Server it is possible to install the Failover Clustering feature. However, not all Linux distributions include the required packages; in particular, Oracle Linux does not come with Pacemaker, in which case third-party software such as SIOS Protection Suite for Linux can be used instead. With Red Hat and SUSE, the HA extensions are available as an add-on, but are generally bundled as part of the RHEL for SAP and SLES for SAP subscriptions.
Stacking multiple clustered components on a single VM is generally not supported in Azure, particularly on Linux. While customers have tried this, it has generally proved unreliable, although there are a few scenarios where it does work, and these are detailed later. Therefore, a minimal highly available installation consists of at least six VMs, and once again NVAs should not be placed in the communication path between the nodes of a Pacemaker cluster or the SBD devices, as they can easily double the latency.
Multiple SAP databases running on one server
In large environments, it may be desirable to stack multiple databases on a single server in order to minimize the number of required VMs. Such a configuration is supported by SAP; however, additional attention is required, as it is not always easy to implement. There are two variants of stacking databases: Multiple Components on One System and Multiple Components on One Database. We'll look at these individually.
Multiple components on one system (MCOS)
Instead of hosting a separate server for each database instance, with MCOS multiple separate databases can be deployed together on a single server. In theory this could simplify administration, but in practice additional effort is required to manage performance. The database components may interfere with each other and could require special configuration of the operating system, especially if the database releases are not exactly the same.
It is not recommended for production workloads; however, it may be an optimization technique for non-production environments such as development and test. It can also be used to combine a disaster recovery system with a non-production workload.
Multiple components on one database (MCOD)
An alternative method of hosting multiple databases on one server is to use MCOD and create separate schemas for each SAP NetWeaver system in a single database instance. In such deployments, all databases are on the same release and share the libraries.
Such a deployment is even more difficult to maintain, as the level of integration is higher. A restart of a single database requires downtime for all of them. It's also not possible to upgrade the DBMS software for only one database - they must all be upgraded at the same time.
Like MCOS, this scenario should only be considered for non-production workloads but is probably best avoided. While MCOD has been officially supported by SAP for many years, it has very rarely been used by customers:

Figure 2-28: MCOS and MCOD for SAP databases
Figure 2-28 shows the difference between MCOS and MCOD. With MCOS whilst the OS is shared, each database runs in its own database instance with its own DBMS binaries. Each database can be managed independently, starting, stopping, and even patching the binaries. With MCOD, the OS and the DBMS binaries are shared, and all the databases run within a single database instance. Shutting down the database instance will shut down the entire database, and any patch to the DBMS binaries will affect all the databases.
Central services instance stacking
The central services instances do not require significant resources, and for this reason you may want to stack multiple instances on a single VM; this is sometimes referred to as a multi-SID configuration. This is not a problem when using standalone VMs, but you do need to be careful when using clustered VMs.
A highly available multi-SID configuration is supported on Azure when using Microsoft Windows Server and Windows Server Failover Clustering (WSFC). However, it is not supported to create a multi-SID central services cluster on Linux when using Pacemaker for the clustering. Pacemaker is not designed to cluster multiple services in this way, and the failover of a single service is likely to cause the failover of multiple instances.
Oracle Linux does not include Pacemaker, and the recommended cluster solution is to use SIOS Protection Suite for Linux instead, which does support multi-SID configurations. Interestingly, SAP supports the use of SIOS Protection Suite for Linux on Azure for SUSE and Red Hat as well as Oracle Linux, so this may provide an option for multi-SID clusters; however, at the time of writing this has not been tested:

Figure 2-29: Stacked (multi-SID) ASCS/ERS clusters
As shown in Figure 2-29, a pair of VMs is used to host multiple ASCS instances, reducing the total number of VMs and the associated OS costs. The load balancer deployed in front of the clustered services acts as the entry point to the system; it constantly monitors which node hosts the active service and redirects the traffic accordingly.
Additional considerations
SAP NetWeaver is a technology foundation for a set of business solutions. Depending on the workload, additional considerations may need to be taken into account.
SAP Business Suite
SAP Business Suite is the most common SAP workload. The SAP Business Suite applications are very often tightly integrated with one another. For example, very often, data residing in the ERP system is accessed by the SRM system or extracted from the SAP Business Warehouse. The network bandwidth and latency can impact system performance in cross-system communication. Very often, the communication is synchronous and the source system making the request will pause until it receives a response from the target system. A background job that requires data from an external system can trigger a lot of requests, and in such a case, even a small increase in network latency can cause a significant delay in job execution.
Network latency will be especially high when the connection is established between an on-premises environment and Azure, or in multi-cloud scenarios between Azure and other clouds. If two systems are highly dependent on one another, with a lot of communication, then it is important that they are located in close proximity. This means that when it comes to migration planning, they should be migrated together.
SAP S/4HANA and SAP Fiori
SAP S/4HANA is the new business solution and a direct successor of SAP ERP, also known as SAP ERP Central Component (ECC). The business processes have been redesigned and the business model has been simplified, but there is an additional important change. The recommended user interface is changed from SAP GUI to SAP Fiori, which means the system should be accessed through a web browser instead of a dedicated client. The SAP GUI can still be used with S/4HANA, and if you are planning a conversion (technical upgrade) of an existing ECC system to S/4 and you want to minimize user impact, then this is a valid solution.
While such a change may not appear very significant, it has a major impact on the overall system architecture. SAP Fiori may be combined with SAP S/4HANA, which is called an embedded deployment; this is the scenario recommended by SAP when Fiori will be used exclusively with S/4HANA. However, SAP Fiori can also be deployed as a completely separate system, with its own application servers and database. This is referred to as a hub deployment and is mostly used when a single SAP Fiori system will provide access to multiple back-end SAP systems.
SAP Fiori is, in fact, an application based on SAP NetWeaver AS ABAP and a database, and if it is deployed as a separate system, it has to follow all the recommendations that apply to any SAP NetWeaver-based system, including sizing and resilience. If high availability is planned for the back-end system, it must also be considered for the front end; otherwise, in the event of unexpected downtime of Fiori, users will not be able to access the back-end application even if it is still up and running.
The separate SAP Fiori system usually does not store a lot of data in the database and acts as a proxy between the user and back-end system. Therefore, to optimize the solution, its database can be stacked together with the back-end system database. Such optimization is especially useful when SAP HANA is the database. The certified hardware for SAP HANA is expensive, so stacking the databases together using SAP HANA Multitenant Database Containers (MDCs) will decrease the cost of hosting Fiori.
While it is technically possible to use another DBMS for SAP Fiori, since SAP intends to end support for all DBMSs other than SAP HANA as of 2025, this is only a short-term solution, and within a few years the Fiori system will have to be migrated to HANA.
SAP Fiori is accessed through the HTTP protocol, and it is good practice to add an extra security layer. SAP Web Dispatcher is a reverse proxy that can be placed between the user and the system. It can accept or reject a user connection and load balance traffic between multiple application servers. SAP recommends using the Web Dispatcher in front of the Fiori system.
The SAP Web Dispatcher should be deployed in the DMZ area of the network. The VM hosting the reverse proxy should have two network adapters – one for user connections and the second for communications with the SAP Fiori server. Each area of the network is associated with Network Security Groups that filter the unwanted or dangerous traffic and provide an additional security layer:

Figure 2-30: SAP Web Dispatcher deployed in a DMZ
The availability requirements should also be considered for the SAP Web Dispatcher. A highly available SAP landscape should include a highly available SAP Web Dispatcher running on two VMs with a load balancer in front of them.
The Azure Load Balancer works at layer 4 of the TCP/IP stack and provides only basic capabilities for routing requests to a VM. A good alternative is Azure Application Gateway, which works at the application layer and therefore provides additional network capabilities.
Both SAP Web Dispatcher and Application Gateway work as reverse proxies, so it might be tempting to use only the Application Gateway. The Web Dispatcher, however, contains unique features for understanding the SAP landscape, and therefore its load balancing works better than that of Application Gateway. On the other hand, the Application Gateway offers advanced threat detection and a web application firewall, which significantly increase the security of the entire landscape, especially if the system is exposed to the internet. Combining both solutions gives the best results.
SAProuter and SAP Cloud Connector
SAProuter and the SAP Cloud Connector are proxy applications that allow connections between on-premises landscapes and the SAP backbone.
SAProuter is currently most often used to download SAP Notes or establish a support connection, but it can also be used as a proxy for user connections using SAP GUI. It should follow the SAP Web Dispatcher recommendations and be placed in the DMZ area of the network. When it's used as a proxy for SAP GUI, it should also follow the availability requirements of the SAP Web Dispatcher.
The SAP Cloud Connector is software that allows connection with the SAP Cloud Platform without the need to expose the SAP NetWeaver system to the internet. It is used by developers to deploy enhancements or completely new SAP Fiori applications, and it's a mandatory component for SAP S/4HANA deployments. Usually, it is less critical than other components, which means it's usually deployed as a single node, but the highly available mode is also possible. The deployment should follow the SAP Web Dispatcher recommendations.
Having considered the requirements for running SAP applications using AnyDB (IBM Db2, Microsoft SQL Server, Oracle Database, SAP ASE, and SAP MaxDB), let us now look at running SAP applications on SAP HANA. While there are some differences, in many ways SAP HANA is just another database, and much of what we have already discussed is unchanged by using SAP HANA as the database.