SAP on Azure General Update March 2019
Published Mar 20 2019 12:50 AM

SAP and Microsoft are continuously adding new features and functionalities to the Azure cloud platform. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.


1. New Checklist for SAP on Azure Released

A new SAP on Azure Checklist has been released to allow customers and partners to verify their deployment prior to go live.

This checklist is designed for customers moving their SAP NetWeaver, S/4HANA and Hybris applications to Azure Infrastructure as a Service. It should be reviewed by the customer and/or an SAP partner throughout the duration of the project. Many of the checks are important to conduct at the beginning of the project, during the planning phase, because once the deployment is done, elementary changes to the deployed Azure infrastructure or SAP software releases can become complex. Review this checklist at key milestones throughout a project: small problems can then be detected before they become large problems, while sufficient time still exists to re-engineer and test any necessary changes.


The Checklist will be updated and maintained as new features and technologies are introduced into the Azure platform.


2. SAP on Azure High Availability Patterns – Recent Observations from Customer Projects

Most customers deploying SAP on Azure implement Operating System level clustering. Due to improvements in the stability of Azure IaaS components and improvements in Operating System stability some customers are now choosing to rely on Azure technologies rather than implement Operating System level clustering.

The deployment patterns seen can be generalized as follows:

1. Maximum Protection

OS level clustering protecting both DBMS and SAP ASCS layer

Cluster nodes span Azure Availability Zones in the same region (e.g. Singapore or USA West) with synchronous data replication between Availability Zones

Disaster Recovery site is implemented with Native DBMS tools (such as asynchronous HSR or AlwaysOn)

SAP Application servers are replicated to DR site with Azure Site Recovery

2. Full HA Protection

OS level clustering protecting both DBMS and SAP ASCS layer

Cluster nodes span Azure Availability Zones in the same region (e.g. Singapore or USA West) with synchronous data replication for the DBMS layer

No Disaster Recovery solution is implemented, though these customers keep backups in a geographically distant Azure datacenter (sometimes using Read Only Geo-redundant Azure storage)

3. DBMS High Availability Only

OS level clustering protecting only DBMS layer

The SAP ASCS layer is not highly available

DBMS nodes span between Azure Availability Zones with synchronous data replication for the DBMS layer

Disaster Recovery site is implemented with Native DBMS tools (such as asynchronous HSR or AlwaysOn)

SAP Application servers are replicated to DR site with Azure Site Recovery

4. Disaster Recovery Only

No OS clustering for either DBMS or SAP ASCS layer

Disaster Recovery site is implemented with Native DBMS tools (such as asynchronous HSR or AlwaysOn)

SAP Application servers are replicated to DR site with Azure Site Recovery

5. No Protection (For small non-line of business applications)

No OS clustering for either DBMS or SAP ASCS layer

No Disaster Recovery


Large multi-national customers predominantly implement scenarios #1 and #2. Availability Zones have become a default pattern for large customers.

With improvements in platform stability, many small, medium and medium-large customers have found that #3 and #4 meet their technical and business requirements, and do so at considerably lower complexity and cost.


3. Additional “High Availability” Option for S4HANA on Azure – New ERS2 Feature in SAP Kernel 7.53 & 7.73

Customers running NetWeaver 7.52 (example: S/4HANA 1809) or higher with SAP Kernel 7.53 or 7.73 have an interesting new option for a simple “High Availability” mechanism. The new kernels support an enhancement to the Enqueue Replication Server called ERS2.

A brief description of the ERS2 mechanism is detailed below:

  1. ERS2 does not require OS level clustering – ERS2 can be deployed both with or without OS HA cluster software on Linux and Windows
  2. ERS2 can work in conjunction with a Distributed type ASCS installation – the Enqueue Server and the Enqueue Replication client can be on different VMs
  3. If the ASCS server were to fail due to a platform issue or OS restart, the ERS2 server will hold a copy of the Enqueue table while the ASCS server is restarting
  4. When the ASCS restarts it will try to read the Enqueue table from the ERS2 server
  5. Use Virtual Hostnames to simplify administration and swapping the Enqueue Replication from one VM to another VM
  6. The ASCS VM and the ERS2 VM should be placed in an Availability Set or different Availability Zones. This means it is exceedingly unlikely that both VMs would be unavailable at the same time

The Azure platform has built-in self-healing mechanisms that will redeploy and restart a VM if there is a failure in hardware components (CPU, power supply, memory, etc.). In addition, Azure will detect if the Operating System has hung or reset. For small VMs with one or very few attached disks, the Operating System can be restarted in less than 10 seconds in some cases. In such a scenario an online user would be unlikely to experience any significant interruption of service.


It is key to ensure the ASCS instance restarts automatically. The SAP profile parameter Autostart = 1 will cause an SAP application server or ASCS instance to start shortly after Operating System startup. Note: this parameter should be removed before running SUM or any kind of upgrade.
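A minimal sketch of the corresponding instance profile entry is shown below; the SID, instance number and virtual hostname in the path are illustrative assumptions:

```
# ASCS instance profile, e.g. /usr/sap/<SID>/SYS/profile/<SID>_ASCS00_<virtual-hostname>
# (SID, instance number and hostname are placeholders)
Autostart = 1
```

Remember to remove (or set to 0) this entry before running SUM or any kind of upgrade, as noted above.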

2639281 - Migration to Standalone Enqueue Server 2 in Windows Failover Cluster environments 

2630416 - Support for Standalone Enqueue Server 2 

2711036 - Usage of the Standalone Enqueue Server 2 in an HA Environment 

2623468 - [WEBINAR] Understanding and Troubleshooting Standalone Enqueue Server


4. IO Scheduler Kernel Parameter & Mount Options

Customers and Partners have asked what the most suitable IO scheduler kernel parameter and mount options are for Azure Premium and UltraSSD Storage.

There are three kernel IO scheduler options available; the currently active one can be displayed with cat /sys/block/sda/queue/scheduler




The default I/O scheduler for SLES is CFQ.

CFQ: CFQ is a fairness-oriented scheduler and is used by default on SUSE Linux Enterprise. The algorithm assigns each thread a time slice in which it is allowed to submit I/O to disk. This way each thread gets a fair share of I/O throughput. It also allows assigning tasks I/O priorities which are taken into account during scheduling decisions.

NOOP: The NOOP scheduler performs only minimal merging functions on your data. There is no sorting, and therefore this scheduler has minimal overhead. This scheduler was developed for non-disk-based block devices such as SSDs (Premium or UltraSSD Azure Storage). SAP recommends NOOP for Hana systems.

DEADLINE: Deadline scheduler guarantees a start service time for an IO request.
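The active scheduler can be checked and changed at runtime; below is a sketch assuming a SLES 12-era kernel and an example data disk sdc (device names are illustrative). The bracketed entry in the scheduler file is the active one:

```shell
# Show available schedulers; the active one appears in brackets, e.g.:
#   cat /sys/block/sdc/queue/scheduler
#   noop [cfq] deadline

# Extract just the active (bracketed) scheduler from such a file
active_scheduler() {
  sed -n 's/.*\[\([^]]*\)\].*/\1/p' "$1"
}

# Switch to noop at runtime as root (not persistent across reboots;
# persist it via the elevator=noop kernel parameter in /etc/default/grub):
#   echo noop > /sys/block/sdc/queue/scheduler
```

Repeat the runtime change for each data/log disk, or set it once on the kernel command line so all devices pick it up at boot.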


In addition to the Linux scheduler, it is also recommended to change the /etc/fstab mount options for file systems formatted with XFS, such as /hana/data and /hana/log, to include nobarrier.

To achieve the highest IOPS on Premium Storage disks whose cache settings are set to either ReadOnly or None, you must disable barriers while mounting the file system in Linux. Barriers are not needed because writes to Premium Storage backed disks are durable for these cache settings.
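A sketch of the corresponding /etc/fstab entries is shown below. The device mapper paths are illustrative assumptions, and on newer kernels the barrier behavior and option names changed, so verify against the current SAP and SUSE notes:

```
# /etc/fstab -- illustrative entries for XFS-formatted HANA file systems
/dev/mapper/vg_hana-lv_data  /hana/data  xfs  defaults,nobarrier  0 0
/dev/mapper/vg_hana-lv_log   /hana/log   xfs  defaults,nobarrier  0 0
```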

1984787 - SUSE LINUX Enterprise Server 12: Installation notes

1275776 - Linux: Preparing SLES for SAP environments

5. Mount Disks with UUID

The device path for Azure Blob Storage disks can change after restarting a VM. It is therefore mandatory to deploy SAP on Azure VMs using UUIDs rather than device paths in /etc/fstab, otherwise there is a possibility the VM will not boot.
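The UUID of each file system can be listed with blkid; a sketch of a UUID-based /etc/fstab entry follows (the UUID value and mount point are made-up placeholders):

```
# Discover the UUID of the file system on the data disk:
#   sudo blkid /dev/sdc1
#   /dev/sdc1: UUID="0e7c3f27-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="xfs"
#
# Reference the UUID, not the device path, in /etc/fstab:
UUID=0e7c3f27-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /datadrive  xfs  defaults,nofail  0 2
```

The nofail option lets the VM boot even if the disk is missing at startup, instead of dropping to emergency mode.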

More information on how to recover a VM that will not boot, and how to configure /etc/fstab to use UUIDs rather than device names, can be found in these links.

6. SAP Note 1969700 - SQL Statement Collection for SAP HANA

SAP support engineers have created some very useful scripts for detailed analysis of performance and configuration issues.

Hana customers running on Azure are advised to load and install the following scripts in DBA Cockpit. These scripts can be loaded and saved so they can be used at any time.

Prior to go live it is highly recommended to run these scripts. Review the Hana Minicheck results and evaluate any difference between the reference value and the actual value.

It is common for some IO results to be erroneous – in particular the values for ASYNC DATA IO can be in the range of 100,000ms per IO or even negative numbers. This does not mean that there is an IO problem.






In some of the scripts, adjust the “Modification section” to a specific date range or IO type, for example:

         ( SELECT                                 /* Modification section */
             '2019/03/18 07:58:00' BEGIN_TIME,    /* YYYY/MM/DD HH24:MI:SS timestamp, C, C-S, C-M, C-H, C-D, C-W, E-S, E-M, E-H, E-D, E-W, MIN */
             '2019/03/18 08:05:00' END_TIME,      /* YYYY/MM/DD HH24:MI:SS timestamp, C, C-S, C-M, C-H, C-D, C-W, B+S, B+M, B+H, B+D, B+W, MAX */
             '%' HOST,
             '%' PORT,
             'DATA' IO_TYPE,                      /* LOG, DATA, ... */
             '%' PATH,
             '%' FILESYSTEM_TYPE,

1999993 - How-To: Interpreting SAP HANA Mini Check Results

7. Technologies to Reduce Hana Memory Requirements

SAP customers running larger Hana databases frequently ask Microsoft what technologies are supported on Azure to reduce the Hana memory footprint. SAP Hana, by design, creates a hard dependency on physical memory, while SAP applications, by design, create ever-increasing database sizes. These two competing design principles create challenges for large customers.

There are many technologies available that reduce memory load to some extent. Not all of them are applicable to all SAP applications or scenarios. Some technologies are more effective than others. One of the more useful diagrams can be found in the presentation Data Tiering Optimization with SAP BW/4HANA



Below is a list of technologies and links. There is no single clear guidance for customers; large customers in Retail or Utilities may need to discuss directly with SAP or their SI partner. Large Hana customers are highly recommended to develop an operationalized, executable and functional Data Volume Management strategy and Archiving Process before go live, and not treat Data Volume Management as an afterthought.

SAP HANA Cold Data Tiering


BW Extension Nodes

BW NLS DTO – Sybase IQ

SAP S4HANA Data Aging

2416490 - FAQ: SAP HANA Data Aging in SAP S/4HANA

Scale Out for S4HANA 2408419 - SAP S/4HANA - Multi-Node Support

Scale Out for BW 1908075 - BW on SAP HANA: Table placement and landscape redistribution

Hana Paged Attributes


8. 2731110 - Support of Network Virtual Appliances (NVA) for SAP on Azure – Do Not Place NVA Between SAP APP Servers & DBMS Server!

A new SAP Note has been released directing customers not to intercept or re-route traffic between the SAP application server tier and the DBMS tier. Customers are strongly encouraged to secure their Azure resources with Network Virtual Appliances, Network Security Groups (NSGs) and other mechanisms, but traffic between SAP application servers and the DBMS server must be unencumbered and unimpeded or serious performance problems will occur.

Azure provides NVA for Fortigate, Palo Alto, Checkpoint and most other leading Firewall and IDS/IPS vendors.

General patterns and configurations many customers have successfully implemented:

  1. Deploy a Hub & Spoke network topology
  2. Deploy Production systems into a single vNet, or a maximum of 1-3 vNets peered to a hub network
  3. SAP Application servers and DBMS servers can be deployed in different subnets. NSGs are set at the subnet level to close all ports that are not required
  4. Non-production and/or Pre-production systems are in different vNets
  5. E-Recruiting, external facing Portals, Web Dispatcher, SAProuter etc. are placed in a separate vNet. Typically all inbound and outbound traffic would be inspected and traffic patterns analyzed
  6. Latency between SAP application server and DBMS server can be tested using TCPPing (Ping is not an accurate tool on Azure) or the SAP ABAP report /SSA/CAT -> ABAPMeter – columns DB Access and E. DB Access
  7. An NVA is set up in the Hub vNet. Traffic between vNets is inspected/monitored; traffic within a vNet is not touched
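Where a dedicated tcpping tool is not installed, a rough connect-latency check can be scripted with bash's /dev/tcp device. This is a sketch, not a calibrated benchmark: it times only the TCP handshake, which is the round trip that matters between application server and DBMS:

```shell
# Rough TCP connect latency in milliseconds (requires bash for /dev/tcp)
tcp_connect_ms() {   # usage: tcp_connect_ms HOST PORT
  local start end
  start=$(date +%s%N)
  if ! (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "unreachable"
    return 1
  fi
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}
```

For example, run tcp_connect_ms against the DBMS host and SQL port several times and take the median; compare the result with the DB Access columns in ABAPMeter.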


These are only general guidelines and customers should discuss with their SAP and Azure partner(s).


2731110 - Support of Network Virtual Appliances (NVA) for SAP on Azure


List of ports required by SAP Applications

9. Severe Performance Problems on M-series or other Large Azure VMs Due to Small Boot Disks

During a recent customer migration of a 10TB AIX/DB2 ECC 6.0 system to Hana 2.0 running on an m128ms (128 CPU/3.8TB RAM), a strange performance problem was observed. After restarting Hana there would be high IO MB/sec READ for a short period of time while data was being loaded into memory. IO MB/sec READ then decreases to nearly zero; this is expected behavior because Hana is an “in-memory” database and should not be reading data off disk after tables are loaded.

Hana SavePoints write changed memory contents, caused by database INSERT/UPDATE/DELETE operations, into the Hana data file. SavePoints reduce DB recovery times.

When this problem occurred, SavePoint write performance would drop from over 300MB/sec to around 1MB/sec. If the Hana DB server was left in this condition the SavePoint would eventually finish, but it would take many hours. Due to the blocking/locking required during SavePoints, the Hana DB was unusable during this time.

When Azure Support engineers analyzed the host-level disk counters, the disk queue length for the /hana/data and /hana/log disks was either 0 or 1. This means that the application and operating system were not asking these disks to perform work.
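Host-level queue depth can be read from iostat output; below is a small helper that pulls the queue-size column for one device, an illustrative sketch assuming the classic `iostat -x` layout (the column is named avgqu-sz or aqu-sz depending on the sysstat version):

```shell
# Print the average queue size for a device from `iostat -x` output on stdin.
# Usage: iostat -x 5 1 | disk_queue sdc
disk_queue() {
  awk -v dev="$1" '
    # Locate the queue-size column in the header row
    $1 == "Device:" || $1 == "Device" {
      for (i = 1; i <= NF; i++) if ($i ~ /qu-sz/) col = i
    }
    # Print that column for the requested device
    $1 == dev && col { print $col }
  '
}
```

A sustained queue size near zero on the data/log disks during slow SavePoints, as in this case, points away from those disks and toward another bottleneck such as the boot disk.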


The problem was traced to a grossly undersized boot disk. The m128ms is a very powerful server and requires a boot disk that can provide adequate IOPS and throughput. This is especially true if third-party monitoring, management or inventory agents are installed. As a general recommendation, a P10 should be the minimum size for the boot disk on a large, powerful Virtual Machine.

Using very small boot disks on large, powerful VMs may cause performance issues that are very difficult to diagnose.


10. What Operations Will Invalidate the SAP License Key?

The SAP License Key on Azure is generated slightly differently than for on-premises systems:

Windows SAP Systems: unique Virtual Machine ID (VMID) and a Hardware ID generated from the Windows SID

Linux SAP Systems: unique Virtual Machine ID (VMID) and a Hardware ID generated from the MAC address


Azure Site Recovery does not preserve the VMID of the primary VM when failing over to the DR site. The VM in the DR site will have a new unique VMID even though the Operating System and data disks will be identical.

During failback from DR to primary, the original VMID of the primary VM will be preserved.

Deallocating a VM, redeploying a VM to another physical host, or other planned or unplanned Azure maintenance operations will not change the VMID and will not invalidate the SAP license key.

Deleting a Virtual Machine and creating a new VM with the same OS and data disks will cause the VMID to change and will consequently invalidate the SAP license.
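The current vmId can be read from inside the VM via the Azure Instance Metadata Service, which is useful for confirming whether a failover or rebuild changed it. A sketch; the endpoint only answers from within an Azure VM:

```shell
# Azure Instance Metadata Service URL for the VM's unique ID
imds_vmid_url() {
  echo 'http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-08-01&format=text'
}

# Run inside the Azure VM (IMDS is not reachable from outside):
#   curl -s -H Metadata:true "$(imds_vmid_url)"
```

Comparing the value before and after an ASR test failover shows directly whether the SAP license hardware key will change.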


2243692 - Linux on Microsoft Azure (IaaS) VM: SAP license issues

2327159 - SAP NetWeaver License Behavior in Virtual and Cloud Environments


11. New VMs for SAP – 400+ CPU and 11.5TB RAM

Microsoft has announced new VMs that will be available in various locations starting from the middle of 2019. These new VMs are called M-series version 2, or Mv2. They will come in 3TB, 6TB and 11.5TB memory sizes. Azure Premium Storage will be supported initially, and Ultra SSD support will be added later.

The large VMs with more than 400 CPUs will need the latest versions of SUSE and Red Hat in order to handle more than 256 CPUs.

Customers and Partners with interest in deploying very large SAP Hana systems onto Mv2 Azure VMs should contact their Microsoft Salesperson for timing and location updates.

Azure Regions


12. SAP Stops Supporting Most Proprietary UNIX Platforms

For more than 10 years SAP has followed a clear Intel strategy focused on Linux and Windows platforms. SAP did support proprietary UNIX hardware such as HP-UX on Itanium and Sun Solaris for older NetWeaver releases. In ‘SAP Note 2620910 - SAP S/4HANA 1511, 1610, 1709 and SAP BW/4HANA 1.0: Recommended Application Server Platforms’ SAP officially terminates support for most proprietary platforms. In the 1990s and early 2000s proprietary UNIX platforms, especially HP-UX, were a default choice for SAP deployments. Today these platforms are in rapid decline, with dozens of OS/DB migrations from IBM pSeries and HP-UX to Azure happening every year. The number of customers running on AS/400 and IBM mainframes also decreases steadily as the cost of these platforms increases and the availability of skillsets decreases.


The large increase in the “other” category of server shipments mainly represents servers built by hardware OEMs for hyperscale cloud providers.


Useful Links

Commvault Hana Backup in Azure

Master Note for SAP on Azure 1928533 - SAP Applications on Azure: Supported Products and Azure VM types

Master Documentation Link   **Always Start Here**

Deployment Checklist

Azure Datacenter locations

Azure SAP Blog


SAP Notes

2753477 - Saprouter does not start on Windows 2016

2743751 - Troubleshooting SAP ASCS/SCS Instance High Availability in Azure with Azure Internal Load Balancer

2734697 - Windows Support for SAP Kernel 7.73

2731110 - Support of Network Virtual Appliances (NVA) for SAP on Azure

2718300 - Physical and Virtual hostname length limitations

2698948 - How to install SAP applications in Failover Cluster without shared disks?

2672632 - Windows naming resolution is slow

2585591 - How to protect against speculative execution vulnerabilities on Windows?

2648012 - OpenSSL CVE-2016-2107

2476242 - Disable windows SMBv1

2639281 - Migration to Standalone Enqueue Server 2 in Windows Failover Cluster environments

2624843 - How to check a Windows Failover Cluster configuration?

2554480 - Does SAP support Workgroup clusters?

2553235 - High Paging on Windows Server 2012 or higher affecting overall performance

2399979 - automatic HANA Data Collection with HANASitter

When analyzing problems such as bad performance, high CPU consumption or lock contention, it is necessary to collect information at the time the problem exists. Implemented as a Python script, HANASitter triggers reactions (e.g. creation of traces and dumps or collection of performance histories) when specific conditions (like high load) are met.

2399996 - automatic HANA Cleanup with HANACleaner

Certain SAP HANA cleanup tasks, like purging the backup catalog or deleting old trace files, otherwise need to be implemented individually. Implemented as a Python script, HANACleaner performs these tasks automatically.


Content from third party websites, SAP and other sources is reproduced in accordance with Fair Use (criticism, comment, news reporting, teaching, scholarship, and research).

Version history
Last update:
‎Aug 15 2019 05:07 AM