When deploying the SAP application layer of SAP NetWeaver or S/4HANA, one of the requirements is to have directories like /sapmnt/SID and the transport directory /usr/sap/trans shared in the environment. Additionally, when deploying a highly available SAP system, file systems like /sapmnt/SID and /usr/sap/SID/ASCS also need to be protected and highly available.
On Azure, you have several different options for deploying the SAP shared directories. You may be asking yourself: what are the differences, and when should I use which option? In this blog we explore the options and describe how to deploy a highly available SAP system using Azure Files NFS v4.1 shares, which recently entered public preview.
So far, when deploying a highly available SAP ASCS/ERS instance on Linux VMs in Azure, you could deploy the SAP shared directories on:
- a highly available NFS cluster, or
- NFS volumes on Azure NetApp Files (ANF).
When using a highly available NFS cluster, you are responsible for deploying and configuring these NFS clusters, including the HA configuration. Once built, you are also responsible for all aspects of maintaining and operating the NFS cluster. That means the TCO goes well beyond the cost of the Azure infrastructure components for such an NFS cluster. On the other hand, because no specialized Azure components are required, such a solution can be deployed in any Azure region. If the SAP system is a zonal deployment, the NFS cluster itself can also be deployed across Availability Zones. Cross-region replication for disaster recovery (DR) also has to be configured and operated by you as a customer.
When using NFS volumes on ANF, there is no need to deploy, configure, and maintain NFS clusters. ANF has built-in redundancy (within a single zone) and comes with a 99.99% availability SLA. Cross-region replication (in public preview) is a new ANF feature, based on SnapMirror, which allows cross-region replication of ANF volumes for DR purposes. ANF can deliver sub-millisecond latency, and while this is not the topic of this blog, it is worth mentioning that ANF is an excellent choice for deploying the HANA DB, as its latency and throughput characteristics meet SAP's requirements for HANA deployments. If you are considering a zonal deployment, keep in mind that ANF is not zone-aware and data is not automatically replicated across Availability Zones. The minimum capacity pool size for ANF is 4 TiB, which is a relatively high entry point for customers with smaller requirements: for instance, customers with smaller SAP landscapes who are only interested in using ANF for the SAP shared directories of the SAP application layer.
What about the new option, Azure Files NFS v4.1 shares? This is a new Azure offering that is in public preview. As the name implies, it is based on Azure Files Premium. Like ANF, Azure Files NFS eliminates the need to deploy, configure, and maintain NFS clusters. The minimum size for a single NFS share on Azure Files is 100 GB, and you only pay for the capacity of the deployed shares. The delivered storage latency is not expected to meet the level necessary for SAP databases, but it is sufficient for hosting the SAP application layer shares in a very cost-effective manner.
Two choices for data redundancy are offered: LRS (local, in-zone synchronous data replication) and ZRS (synchronous replication of your data across all three availability zones). With ZRS, in the case of a zonal failure, the NFS sessions are automatically redirected to a healthy zone, provided you use a private endpoint to access the shares in the storage account. It is not necessary to re-mount the NFS shares on your Azure VMs after a zonal failure. While the storage layer fails over, the SAP application may experience a "disturbance", depending on how long it takes for the NFS sessions to be redirected to a healthy zone: the SAP workload may experience higher than usual latency, or SAP work processes may restart. As usual, single-point-of-failure components like SAP ASCS/ERS should be protected with clustering technology (a Pacemaker cluster), so that the cluster can automatically restart or fail over the instances. In the case of a VM failure, the Pacemaker cluster fails the instances over to the healthy cluster node, as usual.
Automatic cross-region replication for DR scenarios is currently not offered with NFS on Azure Files. Cross-region replication can be achieved using tools like rsync.
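As a minimal sketch of such an approach, assuming a VM in the DR region has mounted the production share under /mnt/sapmnt-prod and the DR share under /mnt/sapmnt-dr (both paths are hypothetical), the copy could for example be scheduled via cron:
# Synchronize the DR copy with the production share, preserving permissions, ownership, and timestamps
sudo rsync -av --delete /mnt/sapmnt-prod/ /mnt/sapmnt-dr/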
If you are considering a zonal deployment for your SAP application workload, and are looking for a simplified, cost-effective option for the SAP shared directories that also offers cross-zone resilience, read on: NFS on Azure Files could be a good fit, and you can try it now.
To summarize the features of the different options:
| | NFS - Azure Files | NFS - ANF | NFS Cluster |
| --- | --- | --- | --- |
| SAP use case | SAP shared directories | SAP shared directories, SAP HANA | SAP shared directories |
| High Availability / Redundancy | Built-in (LRS or ZRS) | Built-in (within a single zone) | Custom |
| Cross-Zonal replication | Yes (with ZRS) | No | Yes, if the cluster is deployed across zones |
| Cross-Region Replication | Custom (rsync) | Yes (SnapMirror) | Custom (rsync) |
| Minimum requirements | 100 GB / individual share | 4 TiB / capacity pool, 100 GiB / volume | N/A |
| TCO considerations | Cost of the NFS shares | Cost of the ANF capacity pool | Cost of the VMs and disks for the NFS cluster, plus effort/cost to configure and maintain the cluster |
Now, let’s go back and take a look at the example in this blog:
We are assuming that the virtual networks, subnets, and resource groups are already deployed.
For more detailed instructions, reference Deploy Azure Load Balancer via Azure portal.
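If you prefer to script the load balancer deployment instead of using the portal, a rough Azure CLI sketch is shown below. The resource, VNET, and subnet names are placeholders; the frontend IP addresses 10.90.90.10/10.90.90.9 and the probe ports 62000/62101 correspond to the ASCS (instance number 00) and ERS (instance number 01) examples used later in this blog.
# Internal Standard load balancer with a frontend IP for the ASCS instance and a backend pool
az network lb create -g MyResourceGroup -n nw1-lb --sku Standard \
  --vnet-name MyVnet --subnet MySubnet \
  --frontend-ip-name nw1-ascs-frontend --private-ip-address 10.90.90.10 \
  --backend-pool-name nw1-backend
# Additional frontend IP for the ERS instance
az network lb frontend-ip create -g MyResourceGroup --lb-name nw1-lb -n nw1-ers-frontend \
  --vnet-name MyVnet --subnet MySubnet --private-ip-address 10.90.90.9
# Health probes matching the azure-lb ports that will be configured in the Pacemaker cluster
az network lb probe create -g MyResourceGroup --lb-name nw1-lb -n nw1-ascs-probe --protocol Tcp --port 62000
az network lb probe create -g MyResourceGroup --lb-name nw1-lb -n nw1-ers-probe --protocol Tcp --port 62101
# HA-ports load balancing rules with floating IP enabled, one per frontend
az network lb rule create -g MyResourceGroup --lb-name nw1-lb -n nw1-ascs-rule \
  --frontend-ip-name nw1-ascs-frontend --backend-pool-name nw1-backend --probe-name nw1-ascs-probe \
  --protocol All --frontend-port 0 --backend-port 0 --idle-timeout 30 --enable-floating-ip true
az network lb rule create -g MyResourceGroup --lb-name nw1-lb -n nw1-ers-rule \
  --frontend-ip-name nw1-ers-frontend --backend-pool-name nw1-backend --probe-name nw1-ers-probe \
  --protocol All --frontend-port 0 --backend-port 0 --idle-timeout 30 --enable-floating-ip true
# The network interfaces of both cluster VMs still need to be added to the backend pool nw1-backend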
The Azure Fence Agent requires outbound connectivity to the Azure ARM public endpoint, which is not available by default for VMs in the backend pool of a Standard Azure Load Balancer. Several scenarios for how to achieve outbound connectivity for these VMs are covered here.
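One illustrative option (not necessarily one of the scenarios in the linked documentation) is to associate a NAT gateway with the subnet of the cluster VMs; all resource names below are placeholders:
# Public IP and NAT gateway for outbound internet connectivity of the subnet
az network public-ip create -g MyResourceGroup -n sap-natgw-pip --sku Standard
az network nat gateway create -g MyResourceGroup -n sap-natgw --public-ip-addresses sap-natgw-pip
# Associate the NAT gateway with the subnet that contains the cluster VMs
az network vnet subnet update -g MyResourceGroup --vnet-name MyVnet -n MySubnet --nat-gateway sap-natgw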
Now, let’s look at the steps to deploy Azure Files NFS shares (we will use ZRS replication). If you are curious to learn more about Azure Files NFS shares, the documentation can be found here.
First, you will need to deploy a Premium storage account of kind FileStorage. You will be accessing the storage account via a private endpoint, deployed in the same VNET as the VMs of your SAP application layer. That means the traffic from your SAP system never leaves the virtual network security boundaries. SAP systems often contain sensitive and business-critical data, and staying within the boundaries of the VNET is an important security consideration for many customers.
If you need to access the storage account from a different VNET (Network topology often varies across customers), then you can use Azure VNET peering.
For details on how to create storage account see Create FileStorage Storage Account.
The only supported protocol is NFSv4.1. Navigate to the storage account (sapafsnfs) and click on File shares.
For details on how to deploy NFS shares, see Create NFS share.
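If you prefer to script these steps, a rough Azure CLI sketch could look like the following. The resource group, VNET, subnet, region, and the 128 GiB share quota are placeholders for this example; the storage account name sapnfsafs and the share names sapnw1 and saptrans match the mount paths used later in this blog. Private DNS configuration for the private endpoint is omitted for brevity.
# Premium FileStorage account with zone-redundant storage (ZRS); NFS shares require secure transfer (HTTPS) to be disabled
az storage account create -g MyResourceGroup -n sapnfsafs -l westeurope \
  --sku Premium_ZRS --kind FileStorage --https-only false
# NFS 4.1 shares for the SAP shared directories
az storage share-rm create -g MyResourceGroup --storage-account sapnfsafs \
  -n sapnw1 --enabled-protocols NFS --quota 128
az storage share-rm create -g MyResourceGroup --storage-account sapnfsafs \
  -n saptrans --enabled-protocols NFS --quota 128
# Private endpoint for the storage account in the subnet of the SAP application layer VMs
az network private-endpoint create -g MyResourceGroup -n sapnfsafs-pe \
  --vnet-name MyVnet --subnet MySubnet --connection-name sapnfsafs-pe-conn \
  --private-connection-resource-id $(az storage account show -g MyResourceGroup -n sapnfsafs --query id -o tsv) \
  --group-id file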
Now that you have deployed the Azure infrastructure components for your SAP system, you can proceed with the configuration of the system.
The SAP ASCS instance is a single point of failure in an SAP system. To achieve highly available deployment, configure a Pacemaker cluster for the ASCS/ERS VMs.
Follow the instructions here to build the Pacemaker cluster. You can choose to use either Azure Fence Agent or SBD for fencing.
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1, or [2] - only applicable to node 2.
# [A] Install the SAP SUSE cluster connector on both cluster nodes
sudo zypper in sap-suse-cluster-connector
# [A] Add host name resolution for the cluster nodes and the ASCS/ERS virtual host names to /etc/hosts (adjust to your environment)
10.90.90.7 afs-cl1
10.90.90.8 afs-cl2
10.90.90.10 sapascs
10.90.90.9 sapers
# [1] Temporarily mount the sapnw1 share and create the directories for the SAP file systems
sudo mkdir -p /saptmp
sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o vers=4,minorversion=1,sec=sys
sudo mkdir -p /saptmp/sapmntNW1
sudo mkdir -p /saptmp/usrsapNW1ascs
sudo mkdir -p /saptmp/usrsapNW1ers
sudo mkdir -p /saptmp/usrsapNW1sys
sudo mkdir -p /saptmp/usrsapNW1pas
sudo mkdir -p /saptmp/usrsapNW1aas
sudo umount /saptmp
# [A] Create the mount points for the shared file systems
sudo mkdir -p /sapmnt/NW1
sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS01
# [A] Set the immutable attribute, so that files cannot be written to the mount points while the shares are not mounted
sudo chattr +i /sapmnt/NW1
sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS01
# [A] Add the following entries to /etc/fstab (the ASCS and ERS file systems are not added here, as they will be managed by the cluster)
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/ /usr/sap/NW1/SYS nfs vers=4,minorversion=1,sec=sys 0 0
# Mount the file systems
mount -a
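To verify that the shares were mounted with the NFS 4.1 protocol, you can, for example, list the NFS mounts (a quick check, not part of the original procedure):
# List all NFS 4.x mounts with their sources and mount options
findmnt -t nfs4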
sudo vi /etc/waagent.conf
# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y
# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000
#restart the agent to activate the changes
sudo service waagent restart
# [1] Put node afs-cl2 in standby, then create the file system, virtual IP, and health probe resources for ASCS and group them
sudo crm node standby afs-cl2
sudo crm configure primitive fs_NW1_ASCS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ascs' directory='/usr/sap/NW1/ASCS00' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
sudo crm configure primitive vip_NW1_ASCS IPaddr2 \
params ip=10.90.90.10 cidr_netmask=24 \
op monitor interval=10 timeout=20
sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000
sudo crm configure group g-NW1_ASCS fs_NW1_ASCS nc_NW1_ASCS vip_NW1_ASCS \
meta resource-stickiness=3000
Before proceeding with the SAP ASCS installation, make sure all cluster resources are started:
Full list of resources:
rsc_st_azure (stonith:fence_azure_arm): Started afs-cl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started afs-cl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started afs-cl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started afs-cl1
# [1] Install SAP ASCS as root, using the virtual host name of the ASCS instance
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=ASCS_virtual_hostname
If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting the owner and group of the ASCS00 folder to sidadm and retry.
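For this example with SID NW1, and assuming the standard sapsys group, that would look something like:
# Make nw1adm the owner of the ASCS instance directory, then retry the failed installation step
sudo chown nw1adm:sapsys /usr/sap/NW1/ASCS00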
# [1] Bring node afs-cl2 back online and put node afs-cl1 in standby, then create the ERS resources
sudo crm node online afs-cl2
sudo crm node standby afs-cl1
sudo crm configure primitive fs_NW1_ERS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ers' directory='/usr/sap/NW1/ERS01' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
sudo crm configure primitive vip_NW1_ERS IPaddr2 \
params ip=10.90.90.9 cidr_netmask=24 \
op monitor interval=10 timeout=20
sudo crm configure primitive nc_NW1_ERS azure-lb port=62101
sudo crm configure group g-NW1_ERS fs_NW1_ERS nc_NW1_ERS vip_NW1_ERS
Verify that the cluster resources have started, before proceeding with the SAP ERS installation:
Full list of resources:
rsc_st_azure (stonith:fence_azure_arm): Started afs-cl2
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started afs-cl2
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started afs-cl2
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started afs-cl2
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started afs-cl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started afs-cl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started afs-cl2
# [2] Install SAP ERS as root, using the virtual host name of the ERS instance
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=ERS_virtual_hostname
If the installation fails to create a subfolder in /usr/sap/NW1/ERS01, try setting the owner and group of the ERS01 folder to sidadm and retry.
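As with the ASCS00 folder, for SID NW1 and assuming the standard sapsys group:
# Make nw1adm the owner of the ERS instance directory, then retry the failed installation step
sudo chown nw1adm:sapsys /usr/sap/NW1/ERS01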
# Add the following lines to the SAP ASCS instance profile NW1_ASCS00_nw1ascs
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
# Add the following lines to the SAP ERS instance profile NW1_ERS01_nw1ers
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
#Make sure Autostart is not activated in the profile
# Autostart = 1
# Add sidadm to the haclient group
sudo usermod -aG haclient nw1adm
# [1] Replicate the sapservices entries: copy the ASCS entry to node 2 and the ERS entry from node 2 to node 1
cat /usr/sap/sapservices | grep ASCS00 | sudo ssh afs-cl2 "cat >>/usr/sap/sapservices"
sudo ssh afs-cl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices
# [1] Put the cluster in maintenance mode and create the SAP instance resources and constraints
sudo crm configure property maintenance-mode="true"
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
operations \$id=rsc_sap_NW1_ASCS00-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW1_ASCS00_nw1ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000
sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
operations \$id=rsc_sap_NW1_ERS01-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW1_ERS01_nw1ers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_nw1ers" AUTOMATIC_RECOVER=false IS_ERS=true
sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
sudo crm node online afs-cl1
sudo crm configure property maintenance-mode="false"
Once you have configured the SAP resources in the cluster, make sure all resources have started and that the cluster status is OK:
Online: [ afs-cl1 afs-cl2 ]
Full list of resources:
rsc_st_azure (stonith:fence_azure_arm): Started afs-cl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started afs-cl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started afs-cl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started afs-cl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started afs-cl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started afs-cl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started afs-cl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started afs-cl2
rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started afs-cl2
The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] - only applicable to PAS, or [S] - only applicable to AAS.
# [A] Add host name resolution for the cluster nodes, the virtual host names, and the application servers to /etc/hosts
10.90.90.7 afs-cl1
10.90.90.8 afs-cl2
10.90.90.10 sapascs
10.90.90.9 sapers
10.90.90.12 sapa01
10.90.90.13 sapa02
# [A] Create the mount points for sapmnt and the transport directory and set the immutable attribute
sudo mkdir -p /sapmnt/NW1
sudo mkdir -p /usr/sap/trans
sudo chattr +i /sapmnt/NW1
sudo chattr +i /usr/sap/trans
# [P] Create the mount point for the PAS instance directory
sudo mkdir -p /usr/sap/NW1/D02
sudo chattr +i /usr/sap/NW1/D02
# [P] Add the following entries to /etc/fstab
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1pas/ /usr/sap/NW1/D02 nfs vers=4,minorversion=1,sec=sys 0 0
# Mount the file systems
mount -a
# [S] Create the mount point for the AAS instance directory
sudo mkdir -p /usr/sap/NW1/D03
sudo chattr +i /usr/sap/NW1/D03
# [S] Add the following entries to /etc/fstab
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1aas/ /usr/sap/NW1/D03 nfs vers=4,minorversion=1,sec=sys 0 0
# Mount the file systems
mount -a
sudo vi /etc/waagent.conf
# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y
# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000
#restart the agent to activate the changes
sudo service waagent restart
# [P] Install the SAP primary application server
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
# [S] Install the SAP additional application server
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
Next, make sure that you thoroughly test your Pacemaker cluster. You want to make sure that when you need it, your HA solution will deliver. You can find the typical set of failover tests to execute here.
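As a simple first test, you can, for instance, trigger a manual ASCS failover with the cluster commands already used in this blog (assuming the g-NW1_ASCS group currently runs on afs-cl1):
# Check which node currently runs the ASCS resource group
sudo crm status
# Put that node in standby to force g-NW1_ASCS to fail over to the other node
sudo crm node standby afs-cl1
# Verify that the resources started on afs-cl2, then bring the node back online
sudo crm status
sudo crm node online afs-cl1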