
Running SAP Applications on the Microsoft Platform

SAP Web Dispatcher on Linux with High Availability Setup on Azure

AnjanBanerjee
Microsoft
May 21, 2025

1. Introduction

The SAP Web Dispatcher component load-balances SAP HTTP(S) traffic among the SAP application servers. It works as a “reverse proxy” and is the entry point for HTTP(S) requests into the SAP environment, which consists of one or more SAP NetWeaver systems.

This blog provides detailed guidance on setting up high availability for a standalone SAP Web Dispatcher on the Linux operating system on Azure. There are two main options for setting up high availability for SAP Web Dispatcher:

  • Active/Passive High Availability Setup using a Linux pacemaker cluster (SUSE or Red Hat) with a virtual IP/hostname defined in Azure Load Balancer. 
  • Active/Active High Availability Setup by deploying multiple parallel instances of SAP Web Dispatcher across different Azure Virtual Machines (running either SUSE or Red Hat) and distributing traffic using Azure Load Balancer. 

We will walk through the configuration steps for both high availability scenarios in this blog. 

2. Active/Passive HA Setup of SAP Web Dispatcher

2.1. System Design

The following is the high-level architecture diagram of an HA SAP production environment on Azure. The standalone SAP Web Dispatcher (WD) HA setup is highlighted in the SAP architecture design.

 

In this active/passive node design, the primary node of the SAP Web Dispatcher receives user requests and forwards (and load-balances) them to the backend SAP application servers. If the primary node becomes unavailable, the Linux pacemaker cluster fails the SAP Web Dispatcher over to the secondary node. Users connect to the SAP Web Dispatcher using the virtual hostname (FQDN) and virtual IP address defined in the Azure Load Balancer. The Azure Load Balancer health probe port is activated by the pacemaker cluster on the primary node, so all user connections to the virtual IP/hostname are redirected by the Azure Load Balancer to the active SAP Web Dispatcher.

Also, SAP Help documentation describes this HA architecture as “High Availability of SAP Web Dispatcher with External HA Software”.

The following are the advantages of active-passive SAP WD setup.

  • The Linux pacemaker cluster continuously monitors the active SAP WD node and the services running on it. In an error scenario, the active node is fenced by the pacemaker cluster and the secondary node is made active. This ensures the best user experience round the clock.
  • Complete automation of error detection and start/stop functionality of SAP WD. It is less challenging to define an application-level SLA when pacemaker manages the SAP WD. Azure provides a VM-level SLA of 99.99% if VMs are deployed across Availability Zones.

We need the following components to set up an HA SAP Web Dispatcher on Linux.

  • A pair of SAP-certified VMs on Azure with a supported Linux operating system. Cross-Availability-Zone deployment is recommended for a higher VM-level SLA.
  • Azure Files share (Premium) for the ‘sapmnt’ NFS share, which will be mounted on both SAP Web Dispatcher VMs.
  • Azure Load Balancer for configuring the virtual IP and hostname (in DNS) of the SAP Web Dispatcher.
  • A Linux pacemaker cluster.
  • Installation of SAP Web Dispatcher on both VMs with the same SID and system number. It is recommended to use the latest version of SAP Web Dispatcher.
  • The pacemaker resource agent configured for the SAP Web Dispatcher application.

2.2.   Deployment Steps

This section provides detailed steps for an HA active/passive SAP Web Dispatcher deployment on both supported Linux operating systems (SUSE and Red Hat). Please refer to SAP Note 1928533 for SAP on Azure certified VMs, SAPS values, and supported operating system versions for SAP environments.

In the steps below, ‘For SLES’ applies to the SLES operating system and ‘For RHEL’ applies to the RHEL operating system. If no operating system is mentioned for a step, it applies to both.

Also, the following items are prefixed with:

[A]: Applicable to all nodes.

[1]: Applicable to only node 1.

[2]: Applicable to only node 2.

  1. Deploy the VMs (of the desired SKU) in the availability zones and choose operating system image as SLES/RHEL for SAP. In this blog, below VM names are used:

    • Node1: webdisp01
    • Node2: webdisp02
    • Virtual Hostname: eitwebdispha

  2. Follow the standard SAP on Azure documentation for the base pacemaker setup on the SAP Web Dispatcher VMs. We can use either an SBD device or the Azure fence agent for fencing in the pacemaker cluster.

  3. The remaining setup steps are derived from the SAP ASCS/ERS HA setup documentation and the SUSE/RHEL blogs on SAP WD setup. It is highly recommended to read those documents.

  4. Deploy the Azure standard load balancer for defining the virtual IP of the SAP Web Dispatcher. In this example, the following setup is used in deployment.

    • Frontend IP: 10.50.60.45 (virtual IP of SAP Web Dispatcher)
    • Backend Pool: Node 1 & Node 2 VMs
    • Health Probe Port: 62320 (set probeThreshold=2)
    • Load Balancing Rule: HA Port: Enable; Floating IP: Enable; Idle Timeout: 30 mins


    Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the “net.ipv4.tcp_timestamps” OS parameter to '0'. For details, see Load Balancer health probes.

    Run the following command to set this parameter. To set the value permanently, add or update the parameter in /etc/sysctl.conf.

    sudo sysctl net.ipv4.tcp_timestamps=0
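To make the change persistent across reboots, the parameter can be added to /etc/sysctl.conf. A small idempotent sketch (run as root; adjust the path if your distribution uses /etc/sysctl.d):

```shell
# Persist net.ipv4.tcp_timestamps = 0 in /etc/sysctl.conf (update or append).
CONF="${CONF:-/etc/sysctl.conf}"
if grep -q '^net\.ipv4\.tcp_timestamps' "$CONF" 2>/dev/null; then
    # parameter already present: rewrite it to 0
    sed -i 's/^net\.ipv4\.tcp_timestamps.*/net.ipv4.tcp_timestamps = 0/' "$CONF"
else
    # parameter not present: append it
    echo 'net.ipv4.tcp_timestamps = 0' >> "$CONF"
fi
```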
    When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless you perform additional configuration to allow routing to public endpoints. For details on how to achieve outbound connectivity, see Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios.

  5. Configure NFS for the ‘sapmnt’ and SAP WD instance file systems on Azure Files. Deploy the Azure Files storage account (ZRS) and create file shares for ‘sapmnt’ and the SAP WD instance (/usr/sap/SID/Wxx). Connect it to the vnet of the SAP VMs using a private endpoint.

  6. Mount NFS volumes.

    • [A] For SLES: NFS client and other resources come pre-installed.
    • [A] For RHEL: Install the NFS Client and other resources.
      sudo yum -y install nfs-utils resource-agents resource-agents-sap
    • [A] Mount the NFS file system on both VMs. Create shared directories.
      sudo mkdir -p /sapmnt/WD1 
      sudo mkdir -p /usr/sap/WD1/W00
      
      sudo chattr +i /sapmnt/WD1 
      sudo chattr +i /usr/sap/WD1/W00
    • [A] Mount the file system that will not be controlled by the pacemaker cluster.
      echo "sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-sapmnt /sapmnt/WD1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 2" >> /etc/fstab
      
      mount -a

       

  7. Prepare for SAP Web Dispatcher HA Installation.

    • [A] For SUSE: Install the latest version of the SUSE connector.
      sudo zypper install sap-suse-cluster-connector
    • [A] Set up host name resolution (including virtual hostname).

      • We can either use a DNS server or modify /etc/hosts on all nodes.
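For example, the mapping can be appended to /etc/hosts on all nodes. The node IPs below are illustrative examples, while the hostnames and the virtual IP are the ones used in this blog:

```shell
# Add name resolution for both nodes and the virtual hostname (run as root).
# Node IPs (10.50.60.41/.42) are illustrative; 10.50.60.45 is the frontend IP
# of the Azure Load Balancer.
HOSTS="${HOSTS:-/etc/hosts}"
grep -q 'eitwebdispha' "$HOSTS" 2>/dev/null || cat >> "$HOSTS" <<'EOF'
10.50.60.41   webdisp01
10.50.60.42   webdisp02
10.50.60.45   eitwebdispha
EOF
```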

    • [A] Configure the SWAP file. Edit ‘/etc/waagent.conf’ file and change the following parameters.
      ResourceDisk.Format=y 
      ResourceDisk.EnableSwap=y 
      ResourceDisk.SwapSizeMB=2000
    • [A] Restart the agent to activate the change
      sudo service waagent restart
    • [A] For RHEL: Based on the RHEL OS version, follow the relevant SAP Note.
      • SAP Note 2002167 for RHEL 7.x
      • SAP Note 2772999 for RHEL 8.x
      • SAP Note 3108316 for RHEL 9.x

  8. Create the SAP WD instance Filesystem, virtual IP, and probe port resources for SAP Web Dispatcher.

    • [1] For SUSE: 
      # Keep node 2 in standby 
      sudo crm node standby webdisp02 
      
      # Configure file system, virtual IP, and probe resource 
      sudo crm configure primitive fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-su-usrsap' directory='/usr/sap/WD1/W00' fstype='nfs' options='noresvport,vers=4,minorversion=1,sec=sys' \ 
      op start timeout=60s interval=0 \ 
      op stop timeout=60s interval=0 \ 
      op monitor interval=20s timeout=40s 
      
      sudo crm configure primitive vip_WD1_W00 IPaddr2 \ 
      params ip=10.50.60.45 \ 
      op monitor interval=10 timeout=20 
      
      sudo crm configure primitive nc_WD1_W00 azure-lb port=62320 \ 
      op monitor timeout=20s interval=10 
      
      sudo crm configure group g-WD1_W00 fs_WD1_W00 nc_WD1_W00 vip_WD1_W00

      Make sure that all the resources in the cluster are in started status and running on Node 1. Check the status using the command ‘crm status’.

    • [1] For RHEL:

      # Keep node 2 in standby 
      sudo pcs node standby webdisp02 
      
      # Create file system, virtual IP, probe resource 
      sudo pcs resource create fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-rh-usrsap' \ 
      directory='/usr/sap/WD1/W00' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \ 
      op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \ 
      --group g-WD1_W00 
      
      sudo pcs resource create vip_WD1_W00 IPaddr2 \ 
      ip=10.50.60.45 \ 
      --group g-WD1_W00 
      
      sudo pcs resource create nc_WD1_W00 azure-lb port=62320 \ 
      --group g-WD1_W00

      Make sure that all the resources in the cluster are in started status and running on Node 1. Check the status using the command ‘pcs status’.

  9. [1] Install SAP Web Dispatcher on the first Node.
    • For RHEL: Allow access to SWPM. This rule is not permanent. If you reboot the machine, you should run the command again.
      sudo firewall-cmd --zone=public --add-port=4237/tcp
    • Run the SWPM.
      ./sapinst SAPINST_USE_HOSTNAME=<virtual hostname>
      • Enter the virtual hostname and Instance number.
      • Provide the S/4 HANA message server details for backend connections.
      • Continue with SAP Web Dispatcher installation.

    • Check the status of SAP WD.

       

  10. [1] Stop the SAP WD and disable the systemd service. This step is only required if the SAP startup framework is managed by systemd, as per SAP Note 3115048.
    # login as sidadm user 
    sapcontrol -nr 00 -function Stop 
    
    # login as root user 
    systemctl disable SAPWD1_00.service

     

  11. [1] Move the Filesystem, virtual IP, and probe port resources for SAP Web Dispatcher to second Node. 

    • For SLES:
      sudo crm node online webdisp02 
      sudo crm node standby webdisp01
    •  For RHEL:

      sudo pcs node unstandby webdisp02 
      sudo pcs node standby webdisp01
    • NOTE: Before proceeding to the next steps, check that resources successfully moved to Node 2.

  12. [2] Set up SAP Web Dispatcher on the second Node.

    • To set up the SAP WD on Node 2, we can copy the following files and directories from Node 1 to Node 2, and perform the other tasks on Node 2 as mentioned below.
    • Note: Please ensure that permissions, owners, and group names on Node 2 are the same as on Node 1 for all copied items. Before copying, save a copy of the existing files on Node 2.
    • Files to copy
      # For SLES and RHEL 
      /usr/sap/sapservices 
      /etc/systemd/system/SAPWD1_00.service 
      /etc/polkit-1/rules.d/10-SAPWD1-00.rules 
      /etc/passwd 
      /etc/shadow 
      /etc/group 
      
      # For RHEL 
      /etc/gshadow
    • Folders to copy
      # After copying, rename the hostname part of the environment file names (e.g. .sapenv_<hostname>.sh) to Node 2's hostname. 
      /home/wd1adm 
      /home/sapadm 
      
      /usr/sap/ccms 
      /usr/sap/tmp
    • Create the 'SYS' directory in the /usr/sap/WD1 folder
      • Create all subdirectories and soft links as available in Node 1.
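A short script can recreate the SYS tree; this is a sketch assuming the standard SAP directory layout (profile, global, and kernel links pointing into the shared /sapmnt/WD1). Compare the resulting links against Node 1, since the exact set can differ per installation:

```shell
# Recreate /usr/sap/WD1/SYS on Node 2 with links into the shared sapmnt (run as root).
# Assumed standard layout; verify against the links that exist on Node 1.
SYS="${SYS:-/usr/sap/WD1/SYS}"
SAPMNT="${SAPMNT:-/sapmnt/WD1}"

mkdir -p "$SYS/exe/uc"
ln -sfn "$SAPMNT/profile" "$SYS/profile"     # instance profiles
ln -sfn "$SAPMNT/global"  "$SYS/global"      # global directory
ln -sfn "$SAPMNT/exe/uc/linuxx86_64" "$SYS/exe/uc/linuxx86_64"   # kernel binaries
ln -sfn "$SYS/exe/uc/linuxx86_64"    "$SYS/exe/run"              # 'run' link
```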

         

  13. [2] Install the saphostagent

    • Extract the SAPHOSTAGENT.SAR file
    • Run the command to install it
      ./saphostexec -install
    • Check if SAP hostagent is running successfully
      /usr/sap/hostctrl/exe/saphostexec -status
  14. [2] Start SAP WD on node 2 and check the status

    sapcontrol -nr 00 -function StartService WD1 
    sapcontrol -nr 00 -function Start 
    sapcontrol -nr 00 -function GetProcessStatus
  15. [1] For SLES: Update the instance profile

    vi /sapmnt/WD1/profile/WD1_W00_wd1webdispha 
    
    # Add the following lines. 
    service/halib = $(DIR_EXECUTABLE)/saphascriptco.so 
    service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

     

  16. [A] Configure SAP users after the installation

    sudo usermod -aG haclient wd1adm
  17. [A] Configure the keepalive parameter, and add it to /etc/sysctl.conf to set the value permanently

    sudo sysctl net.ipv4.tcp_keepalive_time=300

     

  18. Create SAP Web Dispatcher resource in cluster

    • For SLES:
      sudo crm configure property maintenance-mode="true" 
      
      sudo crm configure primitive rsc_sap_WD1_W00 SAPInstance \ 
      op monitor interval=11 timeout=60 on-fail=restart \ 
      params InstanceName=WD1_W00_wd1webdispha \ 
      START_PROFILE="/usr/sap/WD1/SYS/profile/WD1_W00_wd1webdispha" \ 
      AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" 
      
      sudo crm configure modgroup g-WD1_W00 add rsc_sap_WD1_W00 
      
      sudo crm node online webdisp01 
      
      sudo crm configure property maintenance-mode="false"
    • For RHEL
      sudo pcs property set maintenance-mode=true 
      
      sudo pcs resource create rsc_sap_WD1_W00 SAPInstance \ 
      InstanceName=WD1_W00_wd1webdispha START_PROFILE="/sapmnt/WD1/profile/WD1_W00_wd1webdispha" \ 
      AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" \ 
      op monitor interval=20 on-fail=restart timeout=60 \ 
      --group g-WD1_W00
      
      sudo pcs node unstandby webdisp01 
      
      sudo pcs property set maintenance-mode=false

       

  19. [A] For RHEL: Add firewall rules for SAP Web Dispatcher and Azure load balancer health probe ports on both nodes.

    sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp --permanent 
    sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp

     

  20. Verify that the SAP Web Dispatcher cluster is running successfully

  21. Check the "Insights" blade of the Azure load balancer in the portal. It will show that connections are redirected to one of the nodes.

     

  22. Check the backend S/4 HANA connection is working using the SAP Web Dispatcher Administration link.

  23. Run the sapwebdisp config check

    sapwebdisp pf=/sapmnt/WD1/profile/WD1_W00_wd1webdispha -checkconfig

     

     

  24. Test the cluster setup

    • For SLES
      • Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document Azure VMs high availability for SAP NetWeaver on SLES (for ASCS/ERS Cluster)
      • We can run the following test cases (from the above link), which can be applicable for SAP WD component.
        • Test HAGetFailoverConfig and HACheckFailoverConfig
        • Manually migrate the SAP Web Dispatcher resource
        • Test HAFailoverToNode
        • Simulate node crash
        • Blocking network communication
        • Test manual restart of SAP WD instance
    • For RHEL
      • Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document Azure VMs high availability for SAP NetWeaver on RHEL (for ASCS/ERS Cluster)
      • We can run the following test cases (from the above link), which can be applicable for SAP WD component.
        • Manually migrate the SAP Web Dispatcher resource
        • Simulate a node crash
        • Blocking network communication
        • Kill the SAP WD process

3. Active/Active HA Setup of SAP Web Dispatcher

3.1. System Design

In this active/active setup of SAP Web Dispatcher (WD), we deploy and run parallel standalone WDs on individual VMs in a share-nothing design, each with a different SID. To connect to the SAP Web Dispatcher, users use the single virtual hostname (FQDN)/IP defined as the front-end IP of the Azure Load Balancer. The virtual IP to hostname/FQDN mapping needs to be defined in AD/DNS. Incoming traffic is distributed to either WD by the Azure internal load balancer. No operating system cluster setup is required in this scenario. This architecture can be deployed on either Linux or Windows operating systems.

 

In the ILB configuration, session persistence settings ensure that a user's successive requests are always routed from the Azure Load Balancer to the same WD, as long as it is active and ready to receive connections.
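Session persistence is controlled by the load-balancing rule's distribution mode. As a sketch with the Azure CLI (the resource group, load balancer, and rule names are placeholders for this example):

```shell
az network lb rule update \
  --resource-group rg-sapwd --lb-name lb-sapwd --name rule-wd-https \
  --load-distribution SourceIP
```

SourceIP keeps successive requests from the same client IP on the same backend instance; SourceIPProtocol additionally keys on the protocol.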

Also, SAP Help documentation describes this HA architecture as “High availability with several parallel Web Dispatchers”.

The following are the advantages of the active-active SAP WD setup.

  • Simpler design: no need to set up an operating system cluster.
  • Two WD instances handle the requests and distribute the workload.
  • If one of the nodes fails, the load balancer forwards requests to the other node and stops sending requests to the failed node, so the SAP WD setup remains highly available.

We need the following components to set up an active/active SAP Web Dispatcher on Linux.

  • A pair of SAP Certified VMs on Azure with supported Linux Operating System. Cross Availability Zone deployment is recommended for higher VM level SLA.
  • Azure managed disks of the required size on each VM to create file systems for ‘sapmnt’ and ‘/usr/sap’.
  • Azure Load Balancer for configuring virtual IP and hostname (in DNS) of the SAP Web Dispatcher.
  • Installation of SAP Web Dispatcher on both the VMs with different SID. It is recommended to use the latest version of SAP Web Dispatcher.

3.2. Deployment Steps

This section provides detailed steps for an HA active/active SAP Web Dispatcher deployment on both supported Linux operating systems (SUSE Linux and Red Hat Linux). Please refer to SAP Note 1928533 for SAP on Azure certified VMs, SAPS values, and supported operating system versions for SAP environments.

3.2.1. For SUSE and RHEL Linux

  1. Deploy the VMs (of the desired SKU) in the availability zones and choose the operating system image as SUSE/RHEL Linux for SAP. Add a managed data disk to each VM and create the ‘/usr/sap’ and ‘/sapmnt/<SID>’ file systems on it.
  2. Install the SAP Web Dispatcher using SAP SWPM on both VMs. Both SAP WDs are completely independent of each other and should have separate SIDs.
  3. Perform the basic configuration check for both SAP Web Dispatchers using “sapwebdisp pf=<profile> -checkconfig”. We should also check that the SAP WD Admin URL works for both WDs.
  4. Deploy the Azure standard load balancer for defining the virtual IP of the SAP Web Dispatcher. As a reference, the following setup is used in deployment.

    • Front-end IP: 10.50.60.99 (virtual IP of SAP Web Dispatcher)
    • Backend Pool: Node1 & Node2 VMs
    • Health Probe: Protocol: HTTPS; Port: 44300 (WD HTTPS port); Path: /sap/public/icman/ping; Interval: 5 seconds (set probeThreshold=2 using Azure CLI)
    • Load Balancing Rule: Port & Backend Port: 44300; Floating IP: Disable; TCP Reset: Disable; Idle Timeout: Max (30 minutes)


    The icman/ping check is a way to ensure that the SAP Web Dispatcher is successfully connected to the backend SAP S/4HANA or SAP ERP application servers. This check is also part of the basic configuration check of SAP Web Dispatcher using “sapwebdisp pf=<profile> -checkconfig”.
    If we use an HTTP(S)-based health probe, the ILB connection is redirected to an SAP WD only when the connection between that SAP WD and the S/4HANA or ERP application is working.
    If we have a Java-based SAP system as the backend environment, ‘icman/ping’ is not available, and an HTTP(S) path can't be used in the health probe. In that case, we can use a TCP-based health probe (protocol value ‘tcp’) with an SAP WD TCP port (such as port 8000) in the health probe configuration.
    In this setup, we used HTTPS port 44300 as the port and backend port value, as that is the only port used by the incoming/source URL. If multiple ports are to be allowed in the incoming URL, we can enable ‘HA Port’ in the load balancing rule instead of specifying individual ports.
    Note: As per SAP Note 2941769, we need to set the SAP Web Dispatcher parameter wdisp/filter_internal_uris=FALSE. Also, verify that the icman ping URL works for both SAP Web Dispatchers with their actual hostnames.
    Define the front-end IP (virtual IP) and hostname mapping in the DNS or /etc/hosts file.
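The probeThreshold value referenced above can be set with the Azure CLI. A sketch with placeholder resource names; note that the --probe-threshold flag may require a recent CLI version:

```shell
az network lb probe update \
  --resource-group rg-sapwd --lb-name lb-sapwd --name probe-wd-icman \
  --protocol Https --port 44300 --path /sap/public/icman/ping \
  --interval 5 --probe-threshold 2
```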

  5. Check that the Azure Load Balancer is routing traffic to both WDs. In the ‘Insights’ section of the Azure Load Balancer, connection health to the VMs should be green.

     

  6. Validate the SAP Web Dispatcher URL is accessible using virtual hostname.
  7. Perform high availability tests for SAP WD.
  8. Stop the first SAP WD and verify that WD connections are working.
  9. Start the first WD, stop the second WD, and verify that WD connections are working.
  10. Simulate a node crash on each of the WD VMs and verify that WD connections are working.

3.3. SAP Web Dispatcher (active/active) for Multiple Systems

We can use the SAP WD (active/active) pair to connect to multiple backend SAP systems rather than setting up a separate SAP WD pair for each SAP backend environment.

Based on the unique URL of the incoming request (a different virtual hostname/FQDN and/or SAP WD port), the user request is directed to either SAP WD, which then determines the backend system to which it redirects and load-balances the request.

SAP documents describe the design and SAP specific configurations steps for this scenario.

In Azure environment, SAP Web Dispatcher architecture will be as below.

We can deploy this setup by defining an Azure standard load balancer with multiple front-end IPs attached to one backend pool of SAP WD VMs and configuring health probe and load balancing rules to associate them.

When configuring Azure Load Balancer with multiple frontend IPs pointing to the same backend pool/port, floating IP must be enabled for each load balancing rule. If floating IP is not enabled on the first rule, Azure won't allow additional rules with different frontend IPs to be configured on the same backend port. Refer to the article Multiple frontends - Azure Load Balancer.

With floating IPs enabled on multiple load balancing rules, the frontend IP must be added to the network interface (e.g., eth0) on both SAP Web Dispatcher VMs.

3.3.1. Deployment Steps

  1. Deploy the VMs (of the desired SKU) in the availability zones and choose the operating system image as SUSE/RHEL Linux for SAP. Add a managed data disk to each VM and create the ‘/usr/sap’ and ‘/sapmnt/<SID>’ file systems on it.
  2. Install the SAP Web Dispatcher using SAP SWPM on both VMs. Both SAP WDs are completely independent of each other and should have separate SIDs.
  3. Deploy Azure Standard Load Balancer with configuration as below

    • Front-end IP 1: 10.50.60.99 (virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID E10)
    • Front-end IP 2: 10.50.60.101 (virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID E60)
    • Backend Pool: Node1 & Node2 VMs (shared by both load balancing rules)
    • Health Probe: Protocol: TCP; Port: 8000 (WD TCP port); Interval: 5 seconds (set probeThreshold=2 using Azure CLI)
    • Load Balancing Rule 1 (front-end 10.50.60.99): Protocol: TCP; Port & Backend Port: 44300; Floating IP: Enable; TCP Reset: Disable; Idle Timeout: Max (30 minutes)
    • Load Balancing Rule 2 (front-end 10.50.60.101): Protocol: TCP; Port & Backend Port: 44300; Floating IP: Enable; TCP Reset: Disable; Idle Timeout: Max (30 minutes)

    As described above, we define 2 front-end IPs, 2 load balancing rules, 1 backend pool, and 1 health probe. 
    In this setup, we used HTTPS port 44300 as the port and backend port value, as that is the only port used by the incoming/source URL. If multiple ports are to be allowed in the incoming URL, we can enable ‘HA Port’ in the load balancing rule instead of specifying individual ports.
    Define the front-end IP (virtual IP) and hostname mapping in the DNS or /etc/hosts file.

  4. Add both virtual IPs to the network interfaces of the SAP WD VMs. Make sure the additional IPs are added permanently and do not disappear after a VM reboot.

    • For SLES, refer to “alternative workaround” section in Automatic Addition of Secondary IP Addresses in Azure

    • For RHEL, refer to the solution provided using “nmcli” command in the How to add multiple IP range in RHEL9

    • Displaying the "ip addr show" for SAP WD VM1:
      >>ip addr show
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether 60:45:bd:73:bd:14 brd ff:ff:ff:ff:ff:ff
          inet 10.50.60.87/26 brd 10.50.60.127 scope global eth0
             valid_lft forever preferred_lft forever
          inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
             valid_lft forever preferred_lft forever
          inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::6245:bdff:fe73:bd14/64 scope link
             valid_lft forever preferred_lft forever
    • Displaying the "ip addr show" for SAP WD VM2:
      >> ip addr show
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether 60:45:bd:73:b1:92 brd ff:ff:ff:ff:ff:ff
          inet 10.50.60.93/26 brd 10.50.60.127 scope global eth0
             valid_lft forever preferred_lft forever
          inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
             valid_lft forever preferred_lft forever
          inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::6245:bdff:fe73:b192/64 scope link
             valid_lft forever preferred_lft forever
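On RHEL, the nmcli approach referenced above can be sketched as follows, assuming the NetworkManager connection profile is named eth0 (check with ‘nmcli connection show’); the IPs are this example's virtual IPs:

```shell
# Persistently add both virtual IPs as secondary addresses on eth0
sudo nmcli connection modify eth0 +ipv4.addresses "10.50.60.99/26,10.50.60.101/26"
# Re-activate the connection to apply the change
sudo nmcli connection up eth0
```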

       

  5. Update the Instance profile of SAP WDs.
    #-----------------------------------------------------------------------
    # Back-end system configuration
    #-----------------------------------------------------------------------
    wdisp/system_0 = SID=E10, MSHOST=e10ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.99:*
    wdisp/system_1 = SID=E60, MSHOST=e60ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.101:*
    • Stop and start the SAP WD on VM1 and VM2.
    • Note: With the above SRCSRV parameter value, only incoming requests addressed to “.99 (or its hostname)” for E10 or “.101 (or its hostname)” for E60 are forwarded to the SAP backend environment. If we also want requests using an SAP WD's actual IP or hostname to reach the SAP backend systems, those IPs or hostnames must be added (separated by semicolons) to the SRCSRV parameter value.
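As an illustration of that note, the SRCSRV value for E10 could be extended as follows; <wd-vm1-hostname> and <wd-vm2-hostname> are placeholders for the actual WD VM hostnames:

```
# Illustrative: also accept requests addressed to the WD hosts' own hostnames
wdisp/system_0 = SID=E10, MSHOST=e10ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.99:*;<wd-vm1-hostname>:*;<wd-vm2-hostname>:*
```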
  6. Perform the basic configuration check for both SAP Web Dispatchers using “sapwebdisp pf=<profile> -checkconfig”. We should also check that the SAP WD Admin URL works for both WDs.
  7. In the Azure portal, in the ‘Insights’ section of the Azure load balancer, we can see that the connection status to the SAP WD VMs is healthy.

Updated Jun 05, 2025
Version 2.0