SAP on Azure
SAP Business Data Cloud Now Available on Microsoft Azure
We’re thrilled to announce that SAP Business Data Cloud (SAP BDC), including SAP Databricks, is now available on Microsoft Azure, marking a major milestone in our strategic partnership with SAP and Databricks and in our commitment to empowering customers with cutting-edge Data & AI capabilities. SAP BDC is a fully managed SaaS solution designed to unify, govern, and activate SAP and third-party data for advanced analytics and AI-driven decision-making. Customers can now deploy SAP BDC on Azure in US East, US West, and Europe West, with additional regions coming soon, and unlock transformative insights from their enterprise data with the scale, security, and performance of Microsoft’s trusted cloud platform.

Why SAP BDC on Azure Is a Game-Changer for Data & AI

Deploying SAP BDC on Azure enables organizations to accelerate their Data & AI initiatives by modernizing their SAP Business Warehouse systems and adopting a modern data architecture that includes SAP HANA Cloud, data lake files, and connectivity to Microsoft technology. Whether it’s building AI-powered intelligent applications, enabling semantically rich data products, or driving predictive analytics, SAP BDC on Azure provides the foundation for scalable, secure, and context-rich decision-making. Running SAP BDC workloads on Microsoft Azure unlocks the full potential of enterprise data by integrating SAP systems with non-SAP data using Microsoft’s powerful Data & AI services, enabling customers to build intelligent applications grounded in critical business context.

Why Azure Is an Ideal Platform for Running SAP BDC

Microsoft Azure stands out as a leading cloud platform for hosting SAP solutions, including SAP BDC. Azure’s global infrastructure, high-performance networking, and powerful Data & AI capabilities make it an ideal foundation for large-scale SAP workloads.
When organizations face complex data environments and need seamless interoperability across tools, Azure’s resilient backbone and enterprise-grade services provide the scalability and reliability essential for building a robust SAP data architecture.

Under the Hood: SAP Databricks in SAP BDC Is Powered by Azure Databricks

A key differentiator of SAP BDC on Azure is that SAP Databricks, a core component of BDC, runs on Azure Databricks, Microsoft’s fully managed first-party service, making Azure the optimal cloud for running Databricks workloads. It uniquely offers:

- Native integration with Microsoft Entra ID for seamless access control.
- Optimized performance with Power BI, delivering unmatched analytics speed.
- Enterprise-grade security and compliance, inherent to Azure’s first-party services.
- Joint engineering and unified support from Microsoft and Databricks.
- Zero-copy data sharing between SAP BDC and Azure Databricks, enabling frictionless collaboration across platforms.

This deep integration ensures that customers benefit from the full power of Azure’s AI, analytics, and governance capabilities while running SAP workloads.

Expanding Global Reach: What’s Next

While SAP BDC is now live in three Azure regions (US East, US West, and Europe West), we’re just getting started. Over the next few months, availability will expand to additional Azure regions, such as Brazil and Canada. For the remaining regions, a continuously updated roadmap can be found on the SAP Roadmap Explorer website.

Final Thoughts

This launch reinforces Microsoft Azure’s longstanding relationship with SAP, backed by over 30 years of trusted partnership and co-innovation. With SAP BDC now available on Azure, customers can confidently modernize their data estate, unlock AI-driven insights, and drive business transformation at scale.
Stay tuned as we continue to expand availability and bring even more Data & AI innovations to our joint customers over the next few months.

Announcing Public Preview for Business Process Solutions
In today’s AI-powered enterprises, success hinges on access to reliable, unified business information. Whether you are deploying AI-augmented workflows or fully autonomous agentic solutions, one thing is clear: trusted, consistent data is the fuel that drives intelligent outcomes. Yet in many organizations, data remains fragmented across best-of-breed applications, creating blind spots in cross-functional processes and throwing roadblocks in the path of automation. Microsoft is dedicated to tackling these challenges, delivering a unified data foundation that accelerates AI adoption, simplifies automation, and reduces risk, empowering businesses to unlock the full potential of unified data analytics and agentic intelligence. Our new solution offers cross-functional insights across previously siloed environments and includes:

- Prebuilt data models for enterprise business applications in Microsoft Fabric
- Source system data mappings and transformations
- Prebuilt dashboards and reports in Power BI
- Prebuilt AI agents in Copilot Studio (coming soon)
- Integrated security and compliance

By unifying Microsoft’s Fabric and AI solutions, we can rapidly accelerate transformation and de-risk AI rollout through repeatable, reliable, prebuilt solutions.

Functional Scope

Our new solution currently supports a set of business applications and functional areas, enabling organizations to break down silos and drive actionable insights across their core processes. The platform covers key domains such as:

Finance: Delivers a comprehensive view of financial performance, integrating data from general ledger, accounts receivable, and accounts payable systems. This enables finance teams to analyze trends, monitor compliance, and optimize cash flow management, all from within Power BI. The associated Copilot agent not only provides access to this data via natural language but will also enable financial postings.
Sales: Provides a complete perspective on customers’ opportunity-to-cash journeys, from initial opportunity through invoicing and payment, via Power BI reports and dashboards. The associated Copilot agent can help improve revenue forecasting by connecting structured ERP and CRM data with unstructured data from Microsoft 365, as well as tracking sales pipeline health and identifying bottlenecks.

Procurement: Supports strategic procurement and supplier management, consolidating purchase orders, goods receipts, and vendor invoicing data into a complete spend dashboard. This empowers procurement teams to optimize sourcing strategies, manage supplier risk, and control spend.

Manufacturing (coming soon): Will extend coverage to manufacturing and production processes, enabling organizations to optimize resource allocation and monitor production efficiency.

Each item within Business Process Solutions is delivered as a complete, business-ready offering. These models are thoughtfully designed to ensure that organizations can move seamlessly from raw data to actionable execution. Key features include:

Facts and Dimensions: Each model is structured to capture both transactional details (facts) and contextual information (dimensions), supporting granular analysis and robust reporting across business processes.

Transformations: Built-in transformations automatically prepare data for reporting and analytics, making it compatible with Microsoft Fabric. For example, when a business user needs to compare sales results from Europe, Asia, and North America, the solution’s transformations handle currency conversion behind the scenes. This ensures that results are consistent across regions, making analysis straightforward and reliable, without the need for manual intervention or complex configuration.

Insight to Action: Customers will be able to leverage prebuilt Copilot agents within Business Process Solutions to turn insight into action.
These agents are deeply integrated not only with Microsoft Fabric and Microsoft Teams but also with connected source applications, enabling users to take direct, contextual actions across systems based on real-time insights. By connecting unstructured data sources such as emails, chats, and documents from Microsoft 365 apps, the agents can provide a holistic, contextualized view to support smarter decisions. With embedded triggers and intelligent agents, automated responses can be initiated based on new insights, streamlining decision-making and enabling proactive, data-driven operations. Ultimately, this will empower teams not just to understand what is happening at a holistic level, but also to take faster, smarter actions with greater confidence.

Authorizations: Data models are tailored to respect organizational security and access policies, ensuring that sensitive information is protected and only accessible to authorized users. The same user-credential principles apply to the Copilot agents when interacting with or updating the source system in the user context.

Behind the scenes, the solution automatically provisions the required objects and infrastructure to build the data warehouse, removing the usual complexity of bringing data together. It guarantees consistency and reliability, so organizations can focus on extracting value from their data rather than managing technical details. This reliable data foundation serves as one of the key inputs to agentic business processes.

Accelerated Insights with Prebuilt Analytics

Building on these robust data models, Business Process Solutions offers a suite of prebuilt Power BI reports tailored to common business processes. These reports provide immediate access to key metrics and trends, such as financial performance, sales effectiveness, and procurement efficiency. Designed for rapid deployment, they allow organizations to:

- Start analyzing data from day one, without lengthy setup or customization.
- Adapt existing reports to your organization’s exact business needs.
- Demonstrate best practices for leveraging data models in analytics and decision-making.

This approach accelerates time-to-value and empowers users to explore new analytical scenarios and drive continuous improvement.

Extensibility and Customization

Every organization is unique, and our new solution is designed to support this, allowing you to adapt analytics and data models to fit your specific processes and requirements. You can customize scope items, bring in your own tables and views, integrate new data sources as your business evolves, and combine data across Microsoft Fabric for deeper insights. Similarly, the associated agents will be customizable from Copilot Studio to adapt to your specific enterprise app configuration. This flexibility ensures that, no matter how your organization operates, Business Process Solutions helps you unlock the full value of your data.

Data Integration

Business Process Solutions uses the same connectivity options as Microsoft Fabric and Copilot Studio but goes further by embedding best practices that make integration simpler and more effective. We recognize that no single pattern can address the diverse needs of all business applications. We also understand that many businesses have already invested in data extraction tools, which is why our solution supports a wide range of options, from native connectivity to third-party tools that bring specialized capabilities to the table. With Business Process Solutions, we ensure data can be accessed in a reliable, high-performance way, whether you are working with massive volumes or complex data structures.

Getting Started

If your organization is ready to unlock the value of unified analytics, getting started is simple. Just send us a request using the form at: https://aka.ms/JoinBusAnalyticsPreview.
Our team will guide you through the next steps and help you begin your journey.

Backup SAP Oracle Databases Using Azure VM Backup Snapshots
This blog article provides a comprehensive step-by-step guide for backing up SAP Oracle databases using Azure VM backup snapshots, ensuring data safety and integrity.

1. Installation of CIFS utilities: The process begins with the installation of cifs-utils on Oracle Linux, which is the recommended OS for running Oracle databases in the cloud.
2. Setting up environment variables: Users are instructed to define the necessary environment variables for the resource group and storage account names.
3. Creating SMB credentials: The guide explains how to create a folder for the SMB credentials and retrieve the storage account key, emphasizing the need for appropriate permissions.
4. Mounting the SMB file share: Instructions are provided for checking the accessibility of the storage account and mounting the SMB file share, which serves as the backup location for archived logs.
5. Preparing the Oracle database for backup: Users must place the Oracle database in hot backup mode to ensure a consistent backup while allowing ongoing transactions.
6. Initiating the snapshot backup: Once the VM backup is configured, users can initiate a snapshot backup to capture the state of the virtual machine, including the Oracle database.
7. Restoration process: The document outlines the steps for restoring the Oracle database from the backup, including updating IP addresses and starting the database listener.
8. Final steps and verification: Users are encouraged to verify the configuration and ensure that all necessary backups, including the SMB file share, are completed successfully.

Azure Files NFS Encryption In Transit for SAP on Azure Systems
Azure Files NFS volumes now support encryption in transit via TLS. With this enhancement, Azure Files NFS v4.1 offers the robust security that modern enterprises require without compromising performance, by ensuring all traffic between clients and servers is fully encrypted. Azure Files NFS data can now be encrypted end-to-end: at rest, in transit, and across the network.

Using stunnel, an open-source TLS wrapper, Azure Files encrypts the TCP stream between the NFS client and Azure Files with strong AES-GCM encryption, without needing Kerberos. This ensures data confidentiality while eliminating the need for complex setups or external authentication systems like Active Directory. The AZNFS utility package simplifies encrypted mounts by installing and setting up stunnel on the client (Azure VMs). The AZNFS mount helper mounts the NFS shares with TLS support: it initializes a dedicated stunnel client process for each storage account’s IP address. The stunnel client process listens on a local port for inbound traffic and redirects the encrypted NFS client traffic to port 2049, where the NFS server is listening. The AZNFS package also runs a background job called aznfswatchdog. It ensures that a stunnel process is running for each storage account and cleans up after all shares from a storage account are unmounted. If a stunnel process terminates unexpectedly, the watchdog process restarts it. For more details, refer to the following document: How to encrypt data in transit for NFS shares.

Availability in Azure Regions

All regions that support Azure Premium Files now support encryption in transit.

Supported Linux Releases

For SAP on Azure environments, Azure Files NFS encryption in transit (EiT) is available for the following operating system releases:

- SLES for SAP 15 SP4 onwards
- RHEL for SAP 8.6 onwards (EiT is currently not supported for file systems managed by Pacemaker clusters on RHEL.)
Refer to SAP Note 1928533 for operating system supportability for SAP on Azure systems.

How to Deploy Encryption in Transit (EiT) for Azure Files NFS Shares

Refer to the SAP on Azure deployment planning guide about using Azure Premium Files NFS and SMB for SAP workloads. As described in the planning guide, the following are the supported uses of Azure Files NFS shares for SAP workloads, and EiT can be used in all of these scenarios:

- sapmnt volume for a distributed SAP system
- transport directory for the SAP landscape
- /hana/shared for HANA scale-out (review carefully the considerations for sizing /hana/shared, as an appropriately sized /hana/shared volume contributes to the system’s stability)
- file interface between your SAP landscape and other applications

Deploy the Azure Files NFS storage account. Refer to the standard documentation for creating the Azure Files storage account, file share, and private endpoint: Create an NFS Azure file share. Note: You can enforce EiT for all the file shares in the Azure storage account by enabling the ‘secure transfer required’ option.

Deploy the mount helper (AZNFS) package on the Linux VM. Follow the instructions for your Linux distribution to install the package.

Create the directories to mount the file shares:

mkdir -p <full path of the directory>

Mount the NFS file share. Refer to the section for mounting the Azure Files NFS EiT file share in Linux VMs. To mount the file share permanently, add the mount entry to ‘/etc/fstab’:

vi /etc/fstab
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 aznfs noresvport,vers=4,minorversion=1,sec=sys,_netdev 0 0

# Mount the file systems
mount -a

- The file systems mentioned above are an example to explain the mount command syntax.
- When adding a plain NFS mount entry to /etc/fstab, the fstype is "nfs".
However, to use the AZNFS mount helper and EiT, the fstype must be "aznfs", which is not known to the operating system; at boot time the server may try to mount these entries before the watchdog is active, and they can fail. Always add the "_netdev" option to these /etc/fstab entries to make sure the shares are mounted on reboot only after the required services (such as the network) are active.

- You can add the "notls" option to the mount command if you do not want to use EiT but still want to use the AZNFS mount helper to mount the file system. Note that you cannot mix EiT and non-EiT methods for different Azure Files NFS file systems in the same Azure VM; mount commands may fail if EiT and non-EiT methods are used in the same VM.
- The mount helper supports private-endpoint-based connections for Azure Files NFS EiT.
- If the SAP VM is custom-domain joined, you can use the custom DNS FQDN or short names for the file share in ‘/etc/fstab’, as defined in DNS. To verify hostname resolution, use the ‘nslookup <hostname>’ and ‘getent hosts <hostname>’ commands.

Mount the NFS file share as a Pacemaker cluster resource for SAP Central Services. In a high-availability setup of SAP Central Services, the file system may be used as a resource in a Pacemaker cluster, in which case it needs to be mounted using Pacemaker cluster commands. In the Pacemaker commands that set up the file system as a cluster resource, change the mount type from ‘nfs’ to ‘aznfs’. It is also recommended to use ‘_netdev’ in the options parameter. The following are the SAP Central Services setup scenarios in which Azure Files NFS is used as a Pacemaker resource and Azure Files NFS EiT can be used:

- Azure VMs high availability for SAP NW on SLES with NFS on Azure Files
- Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files

For SUSE Linux: SUSE 15 SP4 (for SAP) and higher releases recognise ‘aznfs’ as a file system type in the Pacemaker resource agent.
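The fstype and option rules above can be condensed into a small helper that builds the /etc/fstab line. The function itself is illustrative (it is not part of the AZNFS package), and the share path mirrors the earlier example:

```shell
# Build an /etc/fstab entry for an Azure Files NFS share mounted through the
# AZNFS mount helper. Pass eit="no" to append the "notls" option (mount helper
# without TLS); remember not to mix TLS and non-TLS mounts on the same VM.
make_fstab_entry() {
  local remote="$1" mountpoint="$2" eit="$3"
  local opts="noresvport,vers=4,minorversion=1,sec=sys,_netdev"
  if [ "$eit" = "no" ]; then
    opts="${opts},notls"
  fi
  # fstype "aznfs" routes the mount through the helper's stunnel plumbing
  echo "${remote} ${mountpoint} aznfs ${opts} 0 0"
}

make_fstab_entry "sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1" /sapmnt/NW1 yes
# → sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 aznfs noresvport,vers=4,minorversion=1,sec=sys,_netdev 0 0
```

Appending the emitted line to /etc/fstab (as root) and running `mount -a` then behaves as described above.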
SUSE recommends using the simple mount approach for high-availability setups of SAP Central Services, in which all file systems are mounted using ‘/etc/fstab’ only.

For RHEL Linux: RHEL 8.6 (for SAP) and higher releases will recognise ‘aznfs’ as a file system type in the Pacemaker resource agent. At the time of writing this blog, ‘aznfs’ is not yet recognised as a file system type by the Filesystem resource agent on RHEL, so this setup cannot be used at the moment.

For SAP HANA Scale-Out with HSR

Azure Files NFS EiT can be used for SAP HANA scale-out with HSR as described in the following documents:

- SAP HANA scale-out with HSR and Pacemaker on SLES
- SAP HANA scale-out with HSR and Pacemaker on RHEL

Mount the ‘/hana/shared’ file system with EiT by defining the file system type as ‘aznfs’ in ‘/etc/fstab’. It is also recommended to use ‘_netdev’ in the options parameter.

For SUSE Linux: In the "Create file system resources" section for the SAP HANA high-availability "SAPHanaSR-ScaleOut" package, a dummy file system cluster resource is created to monitor and report failures of the ‘/hana/shared’ file system; you can continue to follow the steps in the above document as they are, with ‘fstype=nfs4’. The ‘/hana/shared’ file system will still use EiT as defined in ‘/etc/fstab’. For the SAP HANA high-availability "SAPHanaSR-angi" package, no further actions are needed to use Azure Files NFS EiT.

For RHEL Linux: In the "Create file system resources" section, replace the file system type ‘nfs’ with ‘aznfs’ in the Pacemaker resource configuration for the ‘/hana/shared’ file systems.

Validation of In-Transit Data Encryption for Azure Files NFS

Refer to the "Verify that the in-transit data encryption succeeded" section to check and confirm that EiT is working successfully.

Summary

Go ahead with EiT!
Simplified deployment of encryption in transit for Azure Files Premium NFS (locally redundant storage / zone-redundant storage) will strengthen the security footprint of production and non-production SAP on Azure environments.

SAP Web Dispatcher on Linux with High Availability Setup on Azure
1. Introduction

The SAP Web Dispatcher component is used for load balancing SAP HTTP(s) web traffic among the SAP application servers. It works as a reverse proxy and is the entry point for HTTP(s) requests into an SAP environment consisting of one or more SAP NetWeaver systems. This blog provides detailed guidance on setting up high availability for a standalone SAP Web Dispatcher on a Linux operating system on Azure. There are different options to set up high availability for SAP Web Dispatcher:

- Active/passive high-availability setup using a Linux Pacemaker cluster (SUSE or Red Hat) with a virtual IP/hostname defined in Azure Load Balancer.
- Active/active high-availability setup by deploying multiple parallel instances of SAP Web Dispatcher across different Azure virtual machines (running either SUSE or Red Hat) and distributing traffic using Azure Load Balancer.

We will walk through the configuration steps for both high-availability scenarios in this blog.

2. Active/Passive HA Setup of SAP Web Dispatcher

2.1. System Design

The following is the high-level architecture diagram of an HA SAP production environment on Azure, with the standalone SAP Web Dispatcher (WD) HA setup highlighted in the SAP architecture design. In this active/passive node design, the primary node of the SAP Web Dispatcher receives the users' requests and transfers (and load balances) them to the backend SAP application servers. If the primary node becomes unavailable, the Linux Pacemaker cluster performs a failover of the SAP Web Dispatcher to the secondary node. Users connect to the SAP Web Dispatcher using the virtual hostname (FQDN) and virtual IP address as defined in the Azure Load Balancer. The Azure Load Balancer health probe port is activated by the Pacemaker cluster on the primary node, so all user connections to the virtual IP/hostname are redirected by the Azure Load Balancer to the active SAP Web Dispatcher.
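For context, the forwarding to the backend application servers described above is driven by the Web Dispatcher instance profile. A typical fragment looks like the following; the SID, message server host/port, and port numbers here are illustrative placeholders, not values from this blog:

```
# SAP Web Dispatcher profile fragment (illustrative values)
# wdisp/system_<n> registers a backend system via its message server;
# icm/server_port_<n> opens the port users (or the load balancer) connect to.
wdisp/system_0 = SID=S4H, MSHOST=s4hascs, MSPORT=8100, SRCSRV=*:44300
icm/server_port_0 = PROT=HTTPS, PORT=44300
```

In the HA setup below, this profile lives on the shared ‘sapmnt’ file system, so both cluster nodes read the same configuration.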
Also, the SAP Help documentation describes this HA architecture as "High Availability of SAP Web Dispatcher with External HA Software". The following are the advantages of the active/passive SAP WD setup:

- The Linux Pacemaker cluster continuously monitors the active SAP WD node and the services running on it. In any error scenario, the active node is fenced by the Pacemaker cluster and the secondary node is made active. This ensures the best user experience around the clock.
- Complete automation of error detection and start/stop functionality of the SAP WD.
- It is less challenging to define an application-level SLA when Pacemaker manages the SAP WD. Azure provides a VM-level SLA of 99.99% if VMs are deployed across Availability Zones.

The following components are needed to set up an HA SAP Web Dispatcher on Linux:

- A pair of SAP-certified VMs on Azure with a supported Linux operating system. Cross-Availability-Zone deployment is recommended for a higher VM-level SLA.
- An Azure Files share (Premium) for the ‘sapmnt’ NFS share, which will be available/mounted on both VMs for the SAP Web Dispatcher.
- Azure Load Balancer for configuring the virtual IP and hostname (in DNS) of the SAP Web Dispatcher.
- A Linux Pacemaker cluster.
- Installation of SAP Web Dispatcher on both VMs with the same SID and system number. It is recommended to use the latest version of SAP Web Dispatcher.
- The Pacemaker resource agent configured for the SAP Web Dispatcher application.

2.2. Deployment Steps

This section provides detailed steps for the HA active/passive SAP Web Dispatcher deployment for both supported Linux operating systems (SUSE and Red Hat). Please refer to SAP Note 1928533 for SAP on Azure certified VMs, SAPS values, and supported operating system versions for SAP environments. In the steps below, ‘For SLES’ applies to the SLES operating system and ‘For RHEL’ applies to the RHEL operating system. If no operating system is mentioned for a step, it applies to both.
The following items are prefixed with:

[A]: Applicable to all nodes.
[1]: Applicable to only node 1.
[2]: Applicable to only node 2.

Deploy the VMs (of the desired SKU) in the Availability Zones and choose an SLES/RHEL for SAP operating system image. In this blog, the following VM names are used:

Node 1: webdisp01
Node 2: webdisp02
Virtual hostname: eitwebdispha

Follow the standard SAP on Azure documentation for the base Pacemaker setup of the SAP Web Dispatcher VMs. Either an SBD device or the Azure fence agent can be used for setting up fencing in the Pacemaker cluster.

For SLES: Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure
For RHEL: Set up Pacemaker on Red Hat Enterprise Linux in Azure

The remaining setup steps are derived from the SAP ASCS/ERS HA setup documents and the SUSE/RHEL blogs on SAP WD setup. It is highly recommended to read the following documents.

For SLES:
- High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files
- SUSE Blog: SAP Web Dispatcher High Availability on Cloud with SUSE Linux

For RHEL:
- High availability for SAP NetWeaver on VMs on RHEL with NFS on Azure Files
- RHEL Blog: How to manage standalone SAP Web Dispatcher instances using the RHEL HA Add-On - Red Hat Customer Portal

Deploy an Azure Standard Load Balancer to define the virtual IP of the SAP Web Dispatcher. In this example, the following setup is used:

- Frontend IP: 10.50.60.45 (virtual IP of SAP Web Dispatcher)
- Backend pool: Node 1 and Node 2 VMs
- Health probe port: 62320 (set probeThreshold=2)
- Load balancing rule: HA Port enabled, Floating IP enabled, Idle Timeout 30 minutes

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer; enabling TCP timestamps causes the health probes to fail. Set the "net.ipv4.tcp_timestamps" OS parameter to '0'. For details, see Load Balancer health probes.
Run the following command to set this parameter; to persist the value, add or update the parameter in /etc/sysctl.conf.

sudo sysctl net.ipv4.tcp_timestamps=0

When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address) Standard Azure Load Balancer, there is no outbound internet connectivity unless additional configuration is performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios.

Configure NFS for the ‘sapmnt’ and SAP WD instance file systems on Azure Files. Deploy the Azure Files storage account (ZRS) and create file shares for ‘sapmnt’ and the SAP WD instance directory (/usr/sap/SID/Wxx). Connect it to the VNet of the SAP VMs using a private endpoint.

For SLES: Refer to the "Deploy an Azure Files storage account and NFS shares" section for detailed steps.
For RHEL: Refer to the "Deploy an Azure Files storage account and NFS shares" section for detailed steps.

Mount the NFS volumes.

[A] For SLES: The NFS client and other required resources come pre-installed.
[A] For RHEL: Install the NFS client and other required resources.

sudo yum -y install nfs-utils resource-agents resource-agents-sap

[A] Mount the NFS file system on both VMs. Create the shared directories:

sudo mkdir -p /sapmnt/WD1
sudo mkdir -p /usr/sap/WD1/W00
sudo chattr +i /sapmnt/WD1
sudo chattr +i /usr/sap/WD1/W00

[A] Mount the file system that will not be controlled by the Pacemaker cluster:

echo "sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-sapmnt /sapmnt/WD1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 2" >> /etc/fstab
mount -a

Prepare for the SAP Web Dispatcher HA installation.

[A] For SUSE: Install the latest version of the SUSE connector.

sudo zypper install sap-suse-cluster-connector

[A] Set up host name resolution (including the virtual hostname). Either use a DNS server or modify /etc/hosts on all nodes.
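Kernel settings such as net.ipv4.tcp_timestamps (above) and net.ipv4.tcp_keepalive_time (used later in this setup) must also be written to /etc/sysctl.conf to survive reboots. A small idempotent helper is one way to sketch this; the helper is an illustration (not part of the official guides) and is demonstrated here against a temporary file rather than the real /etc/sysctl.conf:

```shell
# Idempotently persist a kernel parameter in a sysctl-style config file:
# update the line if the key exists, append it otherwise.
set_sysctl_persistent() {
  local conf="$1" key="$2" value="$3"
  touch "$conf"
  if grep -q "^${key}\b" "$conf"; then
    sed -i "s|^${key}\b.*|${key} = ${value}|" "$conf"
  else
    echo "${key} = ${value}" >> "$conf"
  fi
}

# Demo against a temporary file; on a real VM, target /etc/sysctl.conf as root
# and apply with 'sysctl -p' afterwards.
demo=$(mktemp)
set_sysctl_persistent "$demo" net.ipv4.tcp_timestamps 0
set_sysctl_persistent "$demo" net.ipv4.tcp_timestamps 0   # repeat call: no duplicate line
cat "$demo"
# → net.ipv4.tcp_timestamps = 0
```

Running the helper twice leaves a single, correct line, so it is safe to include in provisioning scripts that may be re-executed.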
[A] Configure the swap file. Edit the ‘/etc/waagent.conf’ file and change the following parameters:

ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2000

[A] Restart the agent to activate the change:

sudo service waagent restart

[A] For RHEL: Follow the SAP Note matching your RHEL OS version:

- SAP Note 2002167 for RHEL 7.x
- SAP Note 2772999 for RHEL 8.x
- SAP Note 3108316 for RHEL 9.x

Create the SAP WD instance file system, virtual IP, and probe port resources for the SAP Web Dispatcher.

[1] For SUSE:

# Keep node 2 in standby
sudo crm node standby webdisp02

# Configure file system, virtual IP, and probe resources
sudo crm configure primitive fs_WD1_W00 Filesystem \
  device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-su-usrsap' \
  directory='/usr/sap/WD1/W00' fstype='nfs' \
  options='noresvport,vers=4,minorversion=1,sec=sys' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_WD1_W00 IPaddr2 \
  params ip=10.50.60.45 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_WD1_W00 azure-lb port=62320 \
  op monitor timeout=20s interval=10

sudo crm configure group g-WD1_W00 fs_WD1_W00 nc_WD1_W00 vip_WD1_W00

Make sure that all the resources in the cluster are started and running on node 1. Check the status using the command ‘crm status’.
[1] For RHEL:

# Keep node 2 in standby
sudo pcs node standby webdisp02

# Create file system, virtual IP, and probe resources
sudo pcs resource create fs_WD1_W00 Filesystem \
  device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-rh-usrsap' \
  directory='/usr/sap/WD1/W00' fstype='nfs' force_unmount=safe \
  options='sec=sys,nfsvers=4.1' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-WD1_W00

sudo pcs resource create vip_WD1_W00 IPaddr2 \
  ip=10.50.60.45 \
  --group g-WD1_W00

sudo pcs resource create nc_WD1_W00 azure-lb port=62320 \
  --group g-WD1_W00

Make sure that all the resources in the cluster are started and running on node 1. Check the status using the command ‘pcs status’.

[1] Install SAP Web Dispatcher on the first node.

For RHEL: Allow access to SWPM. This firewall rule is not permanent; if you reboot the machine, you must run the command again.

sudo firewall-cmd --zone=public --add-port=4237/tcp

Run SWPM:

./sapinst SAPINST_USE_HOSTNAME=<virtual hostname>

Enter the virtual hostname and instance number, provide the S/4HANA message server details for the backend connections, and continue with the SAP Web Dispatcher installation. Then check the status of the SAP WD.

[1] Stop the SAP WD and disable the systemd service. This step is only required if the SAP startup framework is managed by systemd, as per SAP Note 3115048.

# Log in as the <sid>adm user
sapcontrol -nr 00 -function Stop

# Log in as the root user
systemctl disable SAPWD1_00.service

[1] Move the file system, virtual IP, and probe port resources for the SAP Web Dispatcher to the second node.

For SLES:
sudo crm node online webdisp02
sudo crm node standby webdisp01

For RHEL:
sudo pcs node unstandby webdisp02
sudo pcs node standby webdisp01

NOTE: Before proceeding to the next steps, check that the resources have successfully moved to node 2.

[2] Set up SAP Web Dispatcher on the second node. To set up the SAP WD on node 2, copy the following files and directories from node 1 to node 2.
Also perform the other tasks on node 2 as mentioned below.

Note: Ensure that permissions, owners, and group names of all the copied items are the same on node 2 as on node 1. Before copying, save a copy of the existing files on node 2.

Files to copy:

    # For SLES and RHEL
    /usr/sap/sapservices
    /etc/systemd/system/SAPWD1_00.service
    /etc/polkit-1/rules.d/10-SAPWD1-00.rules
    /etc/passwd
    /etc/shadow
    /etc/group

    # For RHEL
    /etc/gshadow

Folders to copy:

    # After copying, rename the hostname in the environment file names.
    /home/wd1adm
    /home/sapadm
    /usr/sap/ccms
    /usr/sap/tmp

Create the 'SYS' directory in the /usr/sap/WD1 folder, and create all subdirectories and soft links as they exist on node 1.

[2] Install the saphostagent:

    # Extract the SAPHOSTAGENT.SAR file, then run the command to install it
    ./saphostexec -install

    # Check whether the SAP host agent is running successfully
    /usr/sap/hostctrl/exe/saphostexec -status

[2] Start the SAP WD on node 2 and check the status:

    sapcontrol -nr 00 -function StartService WD1
    sapcontrol -nr 00 -function Start
    sapcontrol -nr 00 -function GetProcessList

[1] For SLES: update the instance profile:

    vi /sapmnt/WD1/profile/WD1_W00_wd1webdispha

    # Add the following lines.
    service/halib = $(DIR_EXECUTABLE)/saphascriptco.so
    service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

[A] Configure SAP users after the installation:

    sudo usermod -aG haclient wd1adm

[A] Configure the keepalive parameter, and add the parameter to /etc/sysctl.conf to set the value permanently:

    sudo sysctl net.ipv4.tcp_keepalive_time=300

Create the SAP Web Dispatcher resource in the cluster

For SLES:

    sudo crm configure property maintenance-mode="true"

    sudo crm configure primitive rsc_sap_WD1_W00 SAPInstance \
      op monitor interval=11 timeout=60 on-fail=restart \
      params InstanceName=WD1_W00_wd1webdispha \
      START_PROFILE="/usr/sap/WD1/SYS/profile/WD1_W00_wd1webdispha" \
      AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp"

    sudo crm configure modgroup g-WD1_W00 add rsc_sap_WD1_W00
    sudo crm node online webdisp01
    sudo crm configure property maintenance-mode="false"

For RHEL:

    sudo pcs property set maintenance-mode=true

    sudo pcs resource create rsc_sap_WD1_W00 SAPInstance \
      InstanceName=WD1_W00_wd1webdispha \
      START_PROFILE="/sapmnt/WD1/profile/WD1_W00_wd1webdispha" \
      AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" \
      op monitor interval=20 on-fail=restart timeout=60 \
      --group g-WD1_W00

    sudo pcs node unstandby webdisp01
    sudo pcs property set maintenance-mode=false

[A] For RHEL: add firewall rules for the SAP Web Dispatcher and Azure Load Balancer health probe ports on both nodes:

    sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp --permanent
    sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp

Verify the SAP Web Dispatcher cluster is running successfully

Check the "Insights" blade of the Azure Load Balancer in the portal; it should show that connections are redirected to one of the nodes. Check that the backend S/4HANA connection is working using the SAP Web Dispatcher Administration link.
Run the sapwebdisp config check:

    sapwebdisp pf=/sapmnt/WD1/profile/WD1_W00_wd1webdispha -checkconfig

Test the cluster setup

For SLES, Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document "Azure VMs high availability for SAP NetWeaver on SLES" (for the ASCS/ERS cluster). The following test cases from that document are applicable to the SAP WD component:

    Test HAGetFailoverConfig and HACheckFailoverConfig
    Manually migrate the SAP Web Dispatcher resource
    Test HAFailoverToNode
    Simulate node crash
    Blocking network communication
    Test manual restart of the SAP WD instance

For RHEL, Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document "Azure VMs high availability for SAP NetWeaver on RHEL" (for the ASCS/ERS cluster). The following test cases from that document are applicable to the SAP WD component:

    Manually migrate the SAP Web Dispatcher resource
    Simulate a node crash
    Blocking network communication
    Kill the SAP WD process

3. Active/Active HA Setup of SAP Web Dispatcher

3.1. System Design

In this active/active setup of SAP Web Dispatcher (WD), we deploy and run parallel standalone WDs on individual VMs with a share-nothing design and different SIDs. To connect to the SAP Web Dispatcher, users use the one virtual hostname (FQDN)/IP defined as the front-end IP of the Azure Load Balancer. The virtual IP to hostname/FQDN mapping needs to be performed in AD/DNS. Incoming traffic is distributed to either WD by the Azure internal load balancer. No operating system cluster setup is required in this scenario. This architecture can be deployed on either Linux or Windows operating systems. In the ILB configuration, session persistence settings ensure that a user's successive requests are always routed from the Azure Load Balancer to the same WD as long as it is active and ready to receive connections.
Also, the SAP Help documentation describes this HA architecture as "High availability with several parallel Web Dispatchers".

The advantages of the active/active SAP WD setup are:

    A simpler design: there is no need to set up an operating system cluster.
    Two WD instances handle the requests and distribute the workload. If one of the nodes fails, the load balancer forwards requests to the other node and stops sending requests to the failed node, so the SAP WD setup remains highly available.

We need the following components to set up active/active SAP Web Dispatcher on Linux:

    A pair of SAP-certified VMs on Azure with a supported Linux operating system. Cross-availability-zone deployment is recommended for a higher VM-level SLA.
    An Azure managed disk of the required size on each VM to create the filesystems for '/sapmnt' and '/usr/sap'.
    An Azure Load Balancer for configuring the virtual IP and hostname (in DNS) of the SAP Web Dispatcher.
    Installation of SAP Web Dispatcher on both VMs with different SIDs. It is recommended to use the latest version of SAP Web Dispatcher.

3.2. Deployment Steps

This section provides detailed steps for an HA active/active SAP Web Dispatcher deployment for both supported Linux operating systems (SUSE Linux and Red Hat Linux). Refer to SAP Note 1928533 for SAP on Azure certified VMs, SAPS values, and supported operating system versions for the SAP environment.

3.2.1. For SUSE and RHEL Linux

Deploy the VMs (of the desired SKU) in the availability zones and choose the SUSE/RHEL Linux for SAP operating system image. Add a managed data disk on each VM and create the '/usr/sap' and '/sapmnt/<SID>' filesystems on it. Install the SAP Web Dispatcher using SAP SWPM on both VMs. Both SAP WDs are completely independent of each other and should have separate SIDs. Perform the basic configuration check for both SAP Web Dispatchers using "sapwebdisp pf=<profile> -checkconfig". Also check that the SAP WD Admin URL is working for both WDs.
Deploy the Azure Standard Load Balancer for defining the virtual IP of the SAP Web Dispatcher. As a reference, the following setup is used in this deployment:

    Front-end IP: 10.50.60.99 (virtual IP of SAP Web Dispatcher)
    Backend pool: Node 1 and Node 2 VMs
    Health probe: Protocol: HTTPS; Port: 44300 (WD HTTPS port); Path: /sap/public/icman/ping; Interval: 5 seconds (set probeThreshold=2 using the Azure CLI)
    Load balancing rule: Port and backend port: 44300; Floating IP: Disabled; TCP reset: Disabled; Idle timeout: maximum (30 minutes)

icman/ping is a way to ensure that the SAP Web Dispatcher is successfully connected to the backend SAP S/4HANA or SAP ERP based application servers. This check is also part of the basic configuration check of SAP Web Dispatcher using "sapwebdisp pf=<profile> -checkconfig". If we use an HTTP(S)-based health probe, the ILB connection is redirected to an SAP WD only when the connection between that SAP WD and the S/4HANA or ERP application is working.

If the backend environment is a Java-based SAP system, 'icman/ping' is not available and an HTTP(S) path can't be used in the health probe. In that case, we can use a TCP-based health probe (protocol value 'tcp') with an SAP WD TCP port (such as port 8000) in the health probe configuration.

In this setup, we used HTTPS port 44300 as the port and backend port value because that is the only port number used by the incoming/source URL. If multiple ports are to be used/allowed in the incoming URL, we can enable 'HA Ports' in the load balancing rule instead of specifying each port.

Note: As per SAP Note 2941769, we need to set the SAP Web Dispatcher parameter wdisp/filter_internal_uris=FALSE. Also verify that the icman/ping URL is working for both SAP Web Dispatchers with their actual hostnames.

Define the front-end IP (virtual IP) and hostname mapping in DNS or the /etc/hosts file. Check that the Azure Load Balancer is routing traffic to both WDs. In the 'Insights' section for the Azure Load Balancer, the connection health to the VMs should be green.
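The same check the health probe performs can be issued manually with curl. A sketch that assembles the probe URL from this example's values (the virtual hostname wd1webdispha is assumed; adjust to your setup); the curl line is shown commented out because it needs the running Web Dispatcher:

```shell
# Assemble the URL the Azure health probe calls against the WD.
WD_HOST=wd1webdispha              # assumed virtual hostname
HTTPS_PORT=44300                  # WD HTTPS port from this example
PROBE_PATH=/sap/public/icman/ping
PROBE_URL="https://${WD_HOST}:${HTTPS_PORT}${PROBE_PATH}"
echo "Probe URL: ${PROBE_URL}"

# From a VM in the same VNet, with the WD running:
# curl -k -s -o /dev/null -w '%{http_code}\n' "${PROBE_URL}"
# An HTTP 200 here means the WD is up AND connected to the backend,
# which is exactly what makes the ILB mark the node healthy.
```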
Validate that the SAP Web Dispatcher URL is accessible using the virtual hostname. Perform high-availability tests for the SAP WD:

    Stop the first SAP WD and verify that WD connections are working. Then start the first WD, stop the second WD, and verify that the WD connections are working.
    Simulate a node crash on each of the WD VMs and verify that the WD connections are working.

3.3. SAP Web Dispatcher (active/active) for Multiple Systems

We can use the SAP WD (active/active) pair to connect to multiple backend SAP systems rather than setting up a separate SAP WD pair for each SAP backend environment. Based on the unique URL of the incoming request, with a different virtual hostname/FQDN and/or port of the SAP WD, the user request is directed to one of the SAP WDs, which then determines the backend system to redirect to and load balances the requests. The following SAP documents describe the design and the SAP-specific configuration steps for this scenario:

    SAP Web Dispatcher for Multiple Systems
    One SAP Web Dispatcher, Two Systems: Configuration Example

In an Azure environment, the SAP Web Dispatcher architecture is as below. We can deploy this setup by defining an Azure Standard Load Balancer with multiple front-end IPs attached to one backend pool of SAP WD VMs, and configuring the health probe and load balancing rules to associate them. When configuring the Azure Load Balancer with multiple frontend IPs pointing to the same backend pool/port, floating IP must be enabled for each load balancing rule. If floating IP is not enabled on the first rule, Azure won't allow the configuration of additional rules with different frontend IPs on the same backend port. Refer to the article "Multiple frontends - Azure Load Balancer". With floating IPs enabled on multiple load balancing rules, the frontend IPs must be added to the network interface (e.g., eth0) on both SAP Web Dispatcher VMs.

3.3.1.
Deployment Steps

Deploy the VMs (of the desired SKU) in the availability zones and choose the SUSE/RHEL Linux for SAP operating system image. Add a managed data disk on each VM and create the '/usr/sap' and '/sapmnt/<SID>' filesystems on it. Install the SAP Web Dispatcher using SAP SWPM on both VMs. Both SAP WDs are completely independent of each other and should have separate SIDs.

Deploy the Azure Standard Load Balancer with the configuration below:

    Backend pool (shared): Node 1 and Node 2 VMs
    Health probe (shared): Protocol: TCP; Port: 8000 (WD TCP port); Interval: 5 seconds (set probeThreshold=2 using the Azure CLI)

    Front-end IP 1: 10.50.60.99 (virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID E10)
    Load balancing rule 1: Protocol: TCP; Port and backend port: 44300; Floating IP: Enabled; TCP reset: Disabled; Idle timeout: maximum (30 minutes)

    Front-end IP 2: 10.50.60.101 (virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID E60)
    Load balancing rule 2: Protocol: TCP; Port and backend port: 44300; Floating IP: Enabled; TCP reset: Disabled; Idle timeout: maximum (30 minutes)

As described above, we are defining 2 front-end IPs, 2 load balancing rules, 1 backend pool, and 1 health probe. In this setup, we used HTTPS port 44300 as the port and backend port value because that is the only port number used by the incoming/source URL. If multiple ports are to be used/allowed in the incoming URL, we can enable 'HA Ports' in the load balancing rule instead of specifying each port.

Define the front-end IP (virtual IP) and hostname mapping in DNS or the /etc/hosts file. Add both virtual IPs to the network interface of each SAP WD VM. Make sure the additional IPs are added permanently and do not disappear after a VM reboot.
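On RHEL, for example, the addition can be made persistent through NetworkManager. A sketch assuming the connection name eth0 and the /26 prefix from this example setup; the commands are only echoed here, so remove the echo to apply them on the VM:

```shell
# Persist both virtual IPs as secondary addresses on the eth0
# connection profile, then re-activate it.
for vip in 10.50.60.99/26 10.50.60.101/26; do
  echo nmcli connection modify eth0 +ipv4.addresses "$vip"
done
echo nmcli connection up eth0
```

The distribution-specific documents referenced in the next step cover the equivalent persistent configuration for SLES as well.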
For SLES, refer to the "alternative workaround" section in "Automatic Addition of Secondary IP Addresses in Azure". For RHEL, refer to the solution provided using the 'nmcli' command in "How to add multiple IP range in RHEL9".

Displaying the "ip addr show" output for SAP WD VM1:

    >> ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 60:45:bd:73:bd:14 brd ff:ff:ff:ff:ff:ff
        inet 10.50.60.87/26 brd 10.50.60.127 scope global eth0
           valid_lft forever preferred_lft forever
        inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
           valid_lft forever preferred_lft forever
        inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::6245:bdff:fe73:bd14/64 scope link
           valid_lft forever preferred_lft forever

Displaying the "ip addr show" output for SAP WD VM2:

    >> ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 60:45:bd:73:b1:92 brd ff:ff:ff:ff:ff:ff
        inet 10.50.60.93/26 brd 10.50.60.127 scope global eth0
           valid_lft forever preferred_lft forever
        inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
           valid_lft forever preferred_lft forever
        inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::6245:bdff:fe73:b192/64 scope link
           valid_lft forever preferred_lft forever

Update the instance
profile of the SAP WDs:

    #-----------------------------------------------------------------------
    # Back-end system configuration
    #-----------------------------------------------------------------------
    wdisp/system_0 = SID=E10, MSHOST=e10ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.99:*
    wdisp/system_1 = SID=E60, MSHOST=e60ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.101:*

Stop and start the SAP WD on VM1 and VM2.

Note: With the above SRCSRV parameter values, only incoming requests to ".99 (or its hostname)" for E10 or ".101 (or its hostname)" for E60 are sent to the SAP backend environment. If we also want requests using the SAP WD's actual IP or hostname to reach the SAP backend systems, we need to add those IPs or hostnames (separated by semicolons) to the value of the SRCSRV parameter.

Perform the basic configuration check for both SAP Web Dispatchers using "sapwebdisp pf=<profile> -checkconfig". Also check that the SAP WD Admin URL is working for both WDs. In the Azure portal, in the 'Insights' section of the Azure Load Balancer, we can see that the connection status to the SAP WD VMs is healthy.

SAP on Azure Product Announcements Summary – SAP Sapphire 2025
Introduction

Today at Sapphire, we made an array of exciting announcements that strengthen the Microsoft-SAP partnership. I'd like to share additional details that complement these announcements, as well as updates on further product innovation. With over three decades of close collaboration and co-innovation with SAP, we continue to deliver RISE with SAP on Azure and integrations with SAP S/4HANA Public Cloud, allowing customers to innovate with services from both SAP BTP and Microsoft. Our new integrations enhance security through multi-layer cloud protection for SAP and non-SAP workloads, while our AI and Copilot platform provides unified analytics to improve decision-making for customers.

Samsung C&T's Engineering & Construction Group is a leader in both the domestic and international construction industries. It recently embarked on an ERP cloud transformation by transitioning to RISE with SAP on Azure, enhancing its existing ERP system, which was optimized for the local environment, to support global business expansion.

"Samsung C&T's successful transition to RISE with SAP on Azure serves as a best practice for other Samsung Group affiliates considering cloud-based ERP adoption. It also demonstrates that even highly localized operations can be integrated into a cloud-based environment that supports global standards." - Aidan Nam, Former Vice President, Corporate System Team, Samsung C&T

SAP on Azure also offers AI, data, and security solutions that enhance customers' investments and help unlock valuable information stored within ERP systems. When Danfoss, a global leader in energy-efficient solutions, began searching for new security tools for its business-critical SAP infrastructure, it quickly leveraged the Microsoft Sentinel solution for SAP applications to find potential malicious activity and deploy multilayered protection around its expanding core infrastructure, thereby achieving scalable security visibility.
"With Microsoft Sentinel and the Microsoft Sentinel solution for SAP applications, we've centralized our security logs and gained a single pane of glass with which we can monitor our SAP systems." - Kevin Cai, IT Specialist in the Security Operations Center at Danfoss

We are pleased to announce additional SAP on Azure product updates and details to further help customers innovate on the most trusted cloud for SAP:

    Simplified onboarding of the SAP BTP estate with the new agentless data connector for the Microsoft Sentinel solution for SAP.
    Microsoft Defender for Endpoint for SAP applications is now fully SAP HANA aware, offering unparalleled protection for SAP S/4HANA environments.
    Public preview of SAP OData as a knowledge source, making it easy to add content from SAP systems to Copilot Studio.
    The new storage- and memory-optimized Medium Memory Mbv3 VM series (Mbsv3 and Mbdsv3) is now SAP certified, delivering compute capabilities with IOPS performance of up to 650K.
    The Mv3 Very High Memory series now features an expanded range of SAP-certified VM sizes, spanning from 24TB to 32TB of memory and scaling up to 1,792 vCPUs.
    General availability of SAP ASE (Sybase) database backup support on Azure Backup.
    SAP Deployment Automation Framework now supports validation of SAP deployments on Azure with the public preview of the SAP Testing Automation Framework (STAF), automating the high-availability testing process to ensure SAP system reliability and availability.
    The Inventory Checks for SAP workbook in Azure Center for SAP Tools now comes with new dashboards for enhanced visibility.

Let's dive into the summary of product updates and services.

Extend and Innovate

Microsoft Sentinel Solution for SAP

Business applications pose a unique security challenge: they hold highly sensitive information that can make them prime targets for attacks. Attackers can compromise newly discovered unprotected SAP systems within three hours.
Microsoft offers best-in-class security support for SAP business applications with Microsoft Sentinel. The new agentless data connector is our first-party solution that reuses customers' SAP BTP estate for drastically simplified onboarding. In addition, new strategic third-party solutions have been added to the Microsoft Sentinel content hub by SAP SE and other ISVs, making Sentinel the most effective SIEM for SAP workloads:

    SAP LogServ: RISE, add-on for SAP ECS internal logs (infrastructure, database, etc.) - generally available
    SAP Enterprise Threat Detection - preview
    SecurityBridge - generally available

Microsoft Defender for Endpoint for SAP applications

We are thrilled to announce a major milestone made possible through the deep collaboration between SAP and Microsoft: Microsoft Defender for Endpoint (MDE) is now the first next-gen antivirus solution that is SAP HANA aware. This joint innovation allows organizations like COFCO International to protect their SAP landscapes seamlessly and securely, without disruption. This groundbreaking capability sets MDE apart in the cybersecurity landscape, offering unparalleled protection for SAP S/4HANA environments, all without interfering with mission-critical operations. Thanks to close engineering collaboration, MDE has been carefully trained to recognize SAP HANA binaries and data files. Specialized detection training ensures MDE accurately identifies these critical components and treats them as known, trusted entities, combining world-class cybersecurity with SAP-native awareness.

API Management

SAP Principal Propagation (for simplicity often also referred to as SSO) is the gold standard for app integration, especially when it comes to third-party apps such as Microsoft Power Platform. We proudly announce that SSO is now password-less with Azure. Microsoft Entra ID Managed Identity works seamlessly with SAP workloads such as RISE, SuccessFactors, and more.
Cut your maintenance effort for SAP SSO in half and become more secure in doing so. Find more details in this blog.

Teams Integration

In addition to the availability of the SAP Joule agent in Teams and Copilot, the "classic" integration of Teams with products like SAP S/4HANA Public Cloud is available as well. What started as "share links to the business context (apps) in chats" has now evolved to Adaptive Card-based Loop components, chat, voice, and video call integrations in contact cards, and To Dos in Teams. Users of SAP S/4HANA Public Cloud can stay in their flow of work and access their business-critical data from within SAP S/4HANA Public Cloud or connected Teams applications.

Copilot Studio - SAP OData Support in Knowledge Sources

Knowledge sources in Copilot Studio enhance generative answers by using data from Power Platform, Dynamics 365, websites, and external systems. This enables agents to offer relevant information and insights to customers. Today, we announce the public preview of SAP OData as a new knowledge source. Customers and partners can now add content from SAP systems to Copilot Studio. Users can query the latest status of sales orders in SAP S/4HANA, view pending invoices from ECC, or query information about employees from SAP SuccessFactors. All you need to do is connect to the relevant SAP OData services as a knowledge source in Copilot Studio. Copilot Studio does not duplicate the data; it analyzes the data structure and creates the relevant queries on demand whenever a user asks a related question. The user context is always kept, ensuring roles and permissions in the SAP system are taken into account. Head over to the product documentation to read more and get started.

New SAP Certified Compute and Storage

Thousands of organizations today trust the Azure M-series virtual machines to run some of their largest mission-critical SAP workloads, including SAP HANA.
Very High Memory Mv3 VM Series

We are excited to unveil updates to our Mv3 Very High Memory (VHM) series with the addition of a 24TB VM, a testament to our ongoing commitment to innovation. Building on our past successes, this series integrates customer insights and industry advancements to deliver unmatched performance and efficiency. It features advanced capabilities for diverse workloads, powered by 4th-generation Intel® Xeon® Platinum 8490H processors, which offer faster processing speeds and better price-performance. You can learn more about the new Mv3 VHM VMs here. Below is a summary of the recently released Mv3 VHM SKUs:

    (New) Standard_M896ixds_24_v3: Designed for S/4HANA workloads, with 896 cores and SMT disabled for optimal SAP performance. It is SAP certified for OLTP (S/4HANA) scale-up/4-node scale-out and OLAP (BW4H) scale-up operations.
    Standard_M896ixds_32_v3: Designed for S/4HANA workloads, with 896 cores and SMT disabled for optimal SAP performance. It is SAP certified for OLTP (S/4HANA) scale-up/4-node scale-out and OLAP (BW4H) scale-up operations.
    Standard_M1792ixds_32_v3: Designed for S/4HANA workloads, with 1,792 cores. It is SAP certified for OLTP (S/4HANA) scale-up/2-node scale-out and OLAP (BW4H) scale-up operations.

The new VM sizes provide robust memory and CPU power, ensuring exceptional handling of large-scale in-memory databases. With 200 Gbps bandwidth and adaptable storage options such as Premium Disk and Azure NetApp Files (ANF), these VMs deliver speed and flexibility for SAP HANA configurations.

Medium Memory Mbv3 VM Series

The Mbv3 series (Mbsv3 and Mbdsv3), released in September 2024 and featuring both storage-optimized and memory-optimized variants, is SAP certified as of March 2025. The Mbv3 VMs are based on 4th-generation Intel® Xeon® Scalable processors, scale for workloads up to 4TB, and come with an NVMe interface for higher remote disk storage performance.
It offers up to 650,000 IOPS, providing a 5x improvement in network throughput over the previous M-series families, and up to 10 GBps of remote disk storage bandwidth, a 2.5x improvement in remote storage bandwidth over the previous M-series families. Details of the SAP-certified Mbv3 VMs are available here.

SAP on Azure Software Products and Services

Azure Backup for SAP

We are pleased to announce the general availability of backup support for SAP ASE databases running on Azure virtual machines using Azure Backup. SAP ASE databases are mission-critical workloads that require a low recovery point objective (RPO) and a fast recovery time objective (RTO). This backup service offers zero-infrastructure backup and restore of SAP ASE databases with Azure Backup enterprise management capabilities.

Key benefits of SAP ASE database backup:

    15-minute RPO with point-in-time recovery capability.
    Striping to increase the backup throughput between the ASE virtual machine (VM) and the Recovery Services vault.
    Support for cost-effective backup policies and ASE native compression to lower backup storage costs.
    Multiple database restore options, including Alternate Location Restore (system refresh), Original Location Restore, and Restore as Files.
    A Recovery Services vault that provides security capabilities like immutability, soft delete, and multi-user authentication.

SAP Testing Automation Framework (STAF)

While deployment automation frameworks like the SAP Deployment Automation Framework (SDAF) have streamlined system implementation, the critical testing phase has largely remained a manual bottleneck, until now. We are introducing the SAP Testing Automation Framework (STAF), a new framework (currently in public preview) that automates high-availability (HA) testing for SAP deployments on Azure. STAF currently focuses on testing HA configurations for SAP HANA and SAP Central Services.
Importantly, STAF is a cross-distribution solution supporting both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL), reflecting our commitment to serving the diverse SAP on Azure customer base. STAF uses a modular architecture with Ansible for orchestration and custom modules for validation. It ensures business continuity by validating configurations and recovery mechanisms before systems go live, reducing risks, boosting efficiency, and ensuring compliance with standards. You can start leveraging its capabilities today by visiting the project on GitHub at https://github.com/azure/sap-automation-qa. To learn more about the framework, please visit our blog: Introducing SAP Testing Automation Framework (STAF).

Azure Center for SAP Solutions Tools and Frameworks

We are pleased to introduce three new dashboards for Azure Inventory Checks for SAP, enhancing visibility into Azure infrastructure and security. These dashboards offer a more structured, visual approach to monitoring health and compliance. Here are the new dashboards at a glance:

    Summary Dashboard: Offers a snapshot of your Azure landscape with results from 21 key infrastructure checks critical for SAP workloads. It highlights your environment's readiness and identifies areas needing attention.
    Extended Report Dashboard: Presents the Inventory Checks for SAP in a user-friendly dashboard layout, with enhanced navigation and filtering.
    AzSecurity Dashboard: Presents 10 key Azure security checks to provide insights into configurations and identify vulnerabilities, ensuring compliance and safety.

These dashboards transform raw data into actionable insights, allowing customers to quickly assess SAP infrastructure on Azure, identify misconfigurations, track improvements, and prepare confidently for audits and reviews.

SAP + Microsoft Co-Innovations

Microsoft and SAP are continually innovating to facilitate business transformation for our customers.
This year, we are strengthening our partnership in several areas, including Business Suite, AI, data, cloud ERP, security, and SAP BTP, among others. Please check out our blog to learn more about the significant announcements we are making this year at SAP Sapphire.

Join Microsoft at SAP Sapphire 2025
I'm thrilled to be back at SAP Sapphire this year alongside my colleagues from Microsoft! Sapphire is an event I always look forward to, as it provides a great opportunity to celebrate the successes of our customers and partners as well as share big announcements and product updates. Whether you're joining us for pre-day events, engaging in sessions during Sapphire, or enjoying the networking opportunities, there's something for everyone. Read on to learn more about what's in store.

Previewing exciting innovation for SAP on the Microsoft Cloud

Data & AI: At Sapphire, we will share key updates on SAP Business Data Cloud (BDC) on Azure and how you can use Azure Databricks with SAP BDC. We will also share the progress on the joint integration between Microsoft Copilot and SAP Joule, helping accelerate business outcomes and increase end-user productivity.

SAP BTP on Azure: Together with SAP, we are ensuring our customers can use the latest SAP Business Technology Platform (BTP) services on Microsoft Azure in their preferred regions. We are excited to share the launch of two new datacenter regions for SAP BTP on Azure: Canada (Toronto) and China (Hebei). With this announcement, SAP BTP is now available in 10 Azure datacenter regions, including Brazil, launched late last year. Thanks to incredible demand from our joint customers, SAP has also added several additional BTP services on Azure. New services include SAP Build Apps, SAP Build Code, SAP AI Core, and Joule. You can view all the existing BTP services and regions on Azure on the SAP Discovery Center. To stay up to date on future plans for service and region rollout, visit the SAP roadmap explorer.

RISE with SAP Customer Spotlight - Nestlé: This multinational organization with over 2,000 brands in 188 countries has operations that are as large as they are complex.
Nestlé executed one of the largest RISE with SAP migrations in the world on Azure, building a future-ready enterprise that leverages AI-driven solutions. Their need for a platform that could deliver innovation and reliability at scale, along with a robust infrastructure, made Azure the clear choice. To hear more about their transformation story, make sure to attend the session at Sapphire: Nestlé's journey from SAP on-premises to RISE with SAP on Microsoft Azure.

Join us at SAP Sapphire 2025

Sessions you don't want to miss

We're bringing a dynamic lineup of 10 in-person sessions across both Orlando and Madrid, featuring insights from Microsoft and SAP experts. Don't miss the chance to dive into the latest on RISE with SAP, SAP Business Suite, SAP BTP, and Data and AI on the Microsoft Cloud, plus hear real-world stories from customers who are already driving results through the Microsoft and SAP partnership. Register now using the links below.

    Nestlé's journey from SAP on-premises to RISE with SAP on Microsoft Azure - PAR1165 - Wednesday, May 21, 2:30pm-2:50pm EDT
    Microsoft Federal's cloud landscape transformation with SAP NS2 - SER2695 - Tuesday, May 20, 4:30pm-4:50pm EDT
    Unlock innovation for SAP ERP with AI, SAP BTP, and more on Microsoft Azure - PAR1166 - Wednesday, May 21, 2:00pm-2:20pm EDT
    Accelerating procurement transformation with SAP Ariba solutions - SPM2624 - Wednesday, May 21, 2:00pm-2:20pm EDT
    Joule and Microsoft 365 Copilot: AI-enabled productivity in action - BAI2594 - Tuesday, May 20, 11:30am-11:50am EDT
    (Madrid) Modernizing the SAP Software Landscape at ANDRITZ - PAR1307 - Wednesday, May 28, 11:30am-11:50am CEST

ASUG Pre-Day Sessions

    Harnessing SAP's AI Innovations: Joule, Generative AI, and Business AI - ASUG104 - Location: S320GH - Monday, May 19, 1:00pm-5:00pm EDT
    ASUG Power Peer Group: Unlock the value of SAP BTP: Lessons learned from ASUG members - BTP3093 - Location: ASUG Booth Theater - Tuesday, May 20, 2:00pm-2:40pm EDT

Celebration night!
We are excited to be the exclusive sponsor of the celebration night concert, which is always a highlight at Sapphire. The evening will feature two special performances by the Zac Brown Band at the American Garden Theatre at Epcot®, scheduled for 8:15 PM and 9:45 PM. Come celebrate with us!

Come find us at our booth!

Microsoft and SAP are at the forefront of AI transformation and are excited to showcase the interoperability of our AI agents at Sapphire. The video below shows a sneak peek of what’s possible, but if you’d like to learn more, come talk to our subject matter experts from Microsoft on-site to help address any questions and foster connections. Find us at Booth #409 in Orlando and Booth #9.333 in Madrid.

Networking Events

Beyond the sessions and booth experiences, our partners are hosting special social and networking events you can join:

- Home - PwC at SAP Sapphire 2025
- RSVP: Lemongrass Invites You for Cocktails and Apps!
- Syntax Annual Sapphire Party
- IBM Sapphire Client Appreciation reception

We are looking forward to a great Sapphire and I hope to see you there!

Configure SAP Standalone Gateway integrated with High Availability ASCS instance
For businesses running on SAP systems, a resilient environment is essential for uninterrupted operations. This includes ensuring high availability of the standalone SAP Gateway. Instead of a separate standalone SAP Gateway deployment, integrating it with the high availability ASCS instance offers a simpler, more efficient, and less maintenance-intensive solution. This approach also ensures the gateway's high availability alongside the ASCS. This blog post details how to implement this streamlined integration on Linux OS, and its advantages.

Pre-requisites

Before proceeding with the configuration, ensure that you have reviewed SAP Note 1010990 - Configuring a Standalone Gateway in an HA ASCS instance - SAP for Me, which describes the steps to configure a standalone gateway in a Windows environment. This document uses the information provided in SAP Note 1010990 to extend the configuration process to a Linux environment, while capturing and addressing the necessary changes specific to Linux OS.

Configuration steps

1. Edit “/sapmnt/<SID>/exe/uc/linuxx86_64/scs.lst” and add the following lines:

   gwmon
   gwrd

   NOTE: It is possible that these entries are already present in the file. If they are, then no further action is required.

2. Add the following lines to the ASCS profile /sapmnt/<SID>/profile/<SID>_ASCS<no>_<ascshost>, replacing <xx> with the next free sequence numbers in the profile:

   #-----------------------------------------------------------------------
   # Start gateway
   #-----------------------------------------------------------------------
   _GW = gw.sap$(SAPSYSTEMNAME)_$(INSTANCE_NAME)
   Execute_<xx> = local rm -f $(_GW)
   Execute_<xx> = local ln -s -f $(DIR_EXECUTABLE)/gwrd$(FT_EXE) $(_GW)
   Restart_Program_<xx> = local $(_GW) pf=$(_PF) -no_abap

3. Check that the gateway port definitions are maintained in the /etc/services file:

   sapgw<no> 33<no>/tcp # SAP System Gateway Port
   sapgws<no> 48<no>/tcp # SAP System Gateway Security Port

   NOTE: <no> is the value of the ASCS profile parameter SAPSYSTEM.

4. Restart the SAP ASCS instance.
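The /etc/services entries above follow SAP's default port convention: the gateway listens on port 33<no> and the secure gateway on 48<no>, where <no> is the two-digit instance number. The following sketch prints the entries you would expect for a given instance number; the helper function name is illustrative, not part of any SAP tooling:

```shell
#!/bin/bash
# Sketch: print the /etc/services lines expected for a given SAP instance
# number, using the default 33<no>/48<no> gateway port convention.
# gw_services is an illustrative helper, not an SAP-delivered tool.
gw_services() {
    local no=$1   # instance number as a plain integer, e.g. 0 for ASCS00
    printf 'sapgw%02d %d/tcp # SAP System Gateway Port\n'           "$no" $((3300 + no))
    printf 'sapgws%02d %d/tcp # SAP System Gateway Security Port\n' "$no" $((4800 + no))
}

gw_services 0
```

Comparing this output against the actual /etc/services file (for example with grep sapgw /etc/services) is a quick way to confirm the entries before restarting the ASCS instance.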
If you have a clustered High Availability (HA) setup, you can either restart the service by putting the cluster in maintenance mode or stop and start the ASCS resource within the cluster.

To check whether the SAP Gateway process has started and is running, execute the following command:

# sapcontrol -nr <no> -function GetProcessList

Testing and validation

To test the setup, register the program on the gateway. Follow the instructions in SAP Note 353597 - Registration of RFC server programs to register the server program on the gateway. In this example, I’m using the existing RFC destination (IGS_RFC_DEST), where we will register the IGS.NW1 program ID. To ensure the RFC destination uses the standalone gateway, follow these steps:

1. Register the “IGS.NW1” program on the standalone SAP Gateway host integrated with the SAP ASCS instance:

   sidadm> rfcexec -t -a IGS.NW1 -g slesascs -x sapgw00 -s NW1 &

2. Edit “IGS_RFC_DEST” to enter the gateway host and gateway service details:

   Gateway Host: <hostname of ASCS instance where gateway is running>
   Gateway Service: sapgw<no>

3. Perform a "Connection Test".

Reference & troubleshooting

- 2441956 - System doesn't start with ERROR => ThSetGwParam : NiHostToAddr
- 910919 - Setting up gateway logging
- Security Parameters of the Gateway
- 3224889 - GW: Change of the default settings for gw/prxy_info with parameter gw/acl_mode_proxy
- 910918 - GW: Parameter gw/prxy_info
- RFC Gateway security, part 4 - prxyinfo ACL - SAP Community
- 3380779 - GW: standalone RFC gateway shows errors during startup