SAP on Azure
Agentic Integration with SAP, ServiceNow, and Salesforce
Copilot/Copilot Studio Integration with SAP (No Code)

By integrating SAP Cloud Identity Services with Microsoft Entra ID, organizations can establish secure, federated identity management across platforms. This configuration enables Microsoft Copilot and Teams to connect seamlessly with SAP’s Joule digital assistant, supporting natural language interactions and automating business processes efficiently.

Key resources, as given in the SAP docs (image courtesy SAP):

- Configuring SAP Cloud Identity Services and Microsoft Entra ID for Joule
- Enable Microsoft Copilot and Teams to Pass Requests to Joule

Copilot Studio Integration with ServiceNow and Salesforce (No Code)

Integration with ServiceNow and Salesforce has two main approaches:

1. Copilot agents using Copilot Studio: Custom agents can be built in Copilot Studio to interact directly with Salesforce CRM data or ServiceNow knowledge bases and helpdesk tickets. This enables organizations to automate sales and support processes using conversational AI.
- Create a custom sales agent using your Salesforce CRM data (YouTube)
- ServiceNow Connect Knowledge Base + Helpdesk Tickets (YouTube)

2. Third-party agents using Copilot for Service: Microsoft Copilot can be embedded within the Salesforce and ServiceNow interfaces, providing users with contextual assistance and workflow automation directly inside these platforms.
- Set up the embedded experience in Salesforce
- Set up the embedded experience in ServiceNow

MCP or Agent-to-Agent (A2A) Interoperability (Pro Code) (Image courtesy SAP)

If you choose a pro-code approach, you can either implement the Model Context Protocol (MCP) in a client/server setup for SAP, ServiceNow, and Salesforce, or leverage existing agents for these third-party services using Agent-to-Agent (A2A) integration. Depending on your requirements, you may use either method individually or combine them. The recently released Microsoft Agent Framework offers practical examples for both MCP and A2A implementations. Below is the detailed SAP reference architecture, illustrating how A2A solutions can be layered on top of SAP systems to enable modular, scalable automation and data exchange.

Agent2Agent Interoperability | SAP Architecture Center

Logic Apps as Integration Actions

Logic Apps is a key component of the Azure integration platform, and among its many connectors it offers connectors for all three of these platforms (SAP, ServiceNow, and Salesforce). A Logic App can be invoked from a custom agent (as a built-in action in Foundry) or from a Copilot agent; the same applies to Power Platform/Power Automate. A sketch of such an invocation appears after the conclusion below.

Conclusion

This article provides a comprehensive overview of how Microsoft Copilot, Copilot Studio, Foundry (via A2A/MCP), and Azure Logic Apps can be combined to deliver robust, agentic integrations with SAP, ServiceNow, and Salesforce. It highlights the importance of secure identity federation, modular agent orchestration, and low-code/pro-code automation in building next-generation enterprise solutions.
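As a small illustration of the Logic Apps option described above: when a workflow is exposed through an HTTP request trigger, an agent action can call it directly. A minimal sketch, assuming a hypothetical callback URL and payload (the real URL, including its SAS signature, comes from the workflow's "When a HTTP request is received" trigger):

# Placeholder callback URL copied from the Logic App's HTTP request trigger
LOGIC_APP_URL='https://prod-00.westeurope.logic.azure.com:443/workflows/<workflow-id>/triggers/manual/paths/invoke?api-version=2016-10-01&sig=<sas-signature>'

# Post a JSON payload that the workflow then routes to the SAP connector
curl -s -X POST "$LOGIC_APP_URL" \
  -H 'Content-Type: application/json' \
  -d '{"system":"SAP","action":"getSalesOrder","orderId":"4500001234"}'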
SAP Business Data Cloud Now Available on Microsoft Azure

We’re thrilled to announce that SAP Business Data Cloud (SAP BDC), including SAP Databricks, is now available on Microsoft Azure, marking a major milestone in our strategic partnership with SAP and Databricks and our commitment to empowering customers with cutting-edge Data & AI capabilities. SAP BDC is a fully managed SaaS solution designed to unify, govern, and activate SAP and third-party data for advanced analytics and AI-driven decision-making. Customers can now deploy SAP BDC on Azure in US East, US West, and Europe West, with additional regions coming soon, and unlock transformative insights from their enterprise data with the scale, security, and performance of Microsoft’s trusted cloud platform.

Why SAP BDC on Azure Is a Game-Changer for Data & AI

Deploying SAP BDC on Azure enables organizations to accelerate their Data & AI initiatives by modernizing their SAP Business Warehouse systems and adopting a modern data architecture that includes SAP HANA Cloud, data lake files, and connectivity to Microsoft technology. Whether it’s building AI-powered intelligent applications, enabling semantically rich data products, or driving predictive analytics, SAP BDC on Azure provides the foundation for scalable, secure, and context-rich decision-making. Running SAP BDC workloads on Microsoft Azure unlocks the full potential of enterprise data by integrating SAP systems with non-SAP data using Microsoft’s powerful Data & AI services, enabling customers to build intelligent applications grounded in critical business context.

Why Azure Is an Ideal Platform for Running SAP BDC

Microsoft Azure stands out as a leading cloud platform for hosting SAP solutions, including SAP BDC. Azure’s global infrastructure, high-performance networking, and powerful Data & AI capabilities make it an ideal foundation for large-scale SAP workloads. When organizations face complex data environments and need seamless interoperability across tools, Azure’s resilient backbone and enterprise-grade services provide the scalability and reliability essential for building a robust SAP data architecture.

Under the Hood: SAP Databricks in SAP BDC Is Powered by Azure Databricks

A key differentiator of SAP BDC on Azure is that SAP Databricks, a core component of BDC, runs on Azure Databricks, Microsoft’s fully managed first-party service, making Azure the optimal cloud for running Databricks workloads. It uniquely offers:

- Native integration with Microsoft Entra ID for seamless access control.
- Optimized performance with Power BI, delivering unmatched analytics speed.
- Enterprise-grade security and compliance, inherent to Azure’s first-party services.
- Joint engineering and unified support from Microsoft and Databricks.
- Zero-copy data sharing between SAP BDC and Azure Databricks, enabling frictionless collaboration across platforms.

This deep integration ensures that customers benefit from the full power of Azure’s AI, analytics, and governance capabilities while running SAP workloads.

Expanding Global Reach: What’s Next

While SAP BDC is now live in three Azure regions (US East, US West, and Europe West), we’re just getting started. Over the next few months, availability will expand to additional Azure regions such as Brazil and Canada.
For the remaining regions, a continuously updated roadmap can be found on the SAP Roadmap Explorer website.

Final Thoughts

This launch reinforces Microsoft Azure’s relationship with SAP, backed by over 30 years of trusted partnership and co-innovation. With SAP BDC now available on Azure, customers can confidently modernize their data estate, unlock AI-driven insights, and drive business transformation at scale. Stay tuned as we continue to expand availability and bring even more Data & AI innovations to our joint customers over the next few months.
MSL correction from clone to multistate HANA DB Cluster SUSE activation

Introduction:

SAP HANA system replication involves configuring one primary node and at least one secondary node. Any changes made to the data on the primary node are replicated to the secondary node synchronously. This ensures a consistent and up-to-date copy of the data, which is crucial for maintaining its integrity and availability.

Problem Description:

An Azure VM was in a degraded state, causing a major outage because the SAP cluster was unable to start. The node health score (-1000000) did not reset automatically after redeployment and remained until manual intervention.

Consider the following configuration if your cluster nodes are running SLES 12 or later. Please note that the promotable clone type is not supported with the classic SAPHanaSR resource agents (see the Important Points section below). Replace <placeholders> with your instance number and HANA system ID.

sudo crm configure primitive rsc_SAPHana_<HANA SID>_HDB<instance number> ocf:suse:SAPHana \
  operations $id="rsc_sap_<HANA SID>_HDB<instance number>-operations" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Master" timeout="700" \
  op monitor interval="61" role="Slave" timeout="700" \
  params SID="<HANA SID>" InstanceNumber="<instance number>" PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

sudo crm configure ms msl_SAPHana_<HANA SID>_HDB<instance number> rsc_SAPHana_<HANA SID>_HDB<instance number> \
  meta notify="true" clone-max="2" clone-node-max="1" target-role="Started" interleave="true"

sudo crm resource meta msl_SAPHana_<HANA SID>_HDB<instance number> set priority 100

Cutover Steps:

These steps encompass pre-steps, execution steps, post-validation steps, and the rollback plan. First come the pre-steps: the preparations and checks that must be completed before the main execution, ensuring everything is in order and ready for the next phase. Next are the execution steps, the core actions that must be carried out accurately and efficiently; it is crucial to follow them meticulously to avoid any issues. Post-validation steps come after the execution and involve verifying the results and ensuring that everything works as expected.
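Before walking through the cutover, here is the resource definition above with hypothetical values substituted (SID HN1, instance number 03); adjust these to your own system before use:

# Hypothetical example: SID HN1, instance number 03
sudo crm configure primitive rsc_SAPHana_HN1_HDB03 ocf:suse:SAPHana \
  operations $id="rsc_sap_HN1_HDB03-operations" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Master" timeout="700" \
  op monitor interval="61" role="Slave" timeout="700" \
  params SID="HN1" InstanceNumber="03" PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

sudo crm configure ms msl_SAPHana_HN1_HDB03 rsc_SAPHana_HN1_HDB03 \
  meta notify="true" clone-max="2" clone-node-max="1" target-role="Started" interleave="true"

sudo crm resource meta msl_SAPHana_HN1_HDB03 set priority 100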
Pre-Steps:

# Check cluster status
crm status
crm configure show
SAPHanaSR-showAttr

# Ensure no pending operations or failed resources
crm_mon -1

# Confirm replication is healthy
hdbnsutil -sr_state
SAPHanaSR-showAttr

# Backup current configuration
crm configure show > /root/cluster_config_backup.txt

Execution Steps:

# Enable maintenance mode
sudo crm configure property maintenance-mode=true

# Delete the incorrect clone resource
crm configure delete msl_SAPHana_<SID>_HDB<instance>

# Recreate using the ms primitive
sudo crm configure ms msl_SAPHana_<SID>_HDB<instance> rsc_SAPHana_<SID>_HDB<instance> \
  meta notify="true" clone-max="2" clone-node-max="1" target-role="Started" interleave="true" maintenance="true"
sudo crm resource meta msl_SAPHana_<SID>_HDB<instance> set priority 100

# Disable maintenance mode
crm configure property maintenance-mode=false

# Refresh the resource, wait 10 seconds, then check that the HSR status matches
# in SAPHanaSR-showAttr, crm_mon -A -1, and hdbnsutil -sr_state before taking
# the resource out of maintenance
sudo crm resource refresh msl_SAPHana_<SID>
sudo crm resource maintenance msl_SAPHana_<SID> off

Post-Validation Steps:

crm status
crm configure show
SAPHanaSR-showAttr

Rollback Plan:

# Enable maintenance mode
crm configure property maintenance-mode=true
sudo crm resource maintenance msl_SAPHana_<SID> on

# Restore configuration from backup
crm configure load update /root/cluster_config_backup.txt

# Recreate the previous clone configuration if needed
crm configure clone msl_SAPHana_<SID>_HDB<instance> rsc_SAPHana_<SID>_HDB<instance> \
  meta notify=true clone-max=2 clone-node-max=1 target-role=Started interleave=true promotable=true

# Disable maintenance and refresh resources
crm configure property maintenance-mode=false
sudo crm resource refresh msl_SAPHana_<SID>
sudo crm resource maintenance msl_SAPHana_<SID> off

Perform the following steps during the actual execution (the responsible team is noted in parentheses):

Pre Step (Basis): Submit a CAB request for approval.

Perform pre-checks (Basis):

# Check cluster status: SBD, pacemaker, corosync services, SBD messages, iscsi, constraints
crm status
crm configure show
SAPHanaSR-showAttr

# Ensure no pending operations or failed resources
crm_mon -R1 -Af -1

# Confirm replication is healthy
hdbnsutil -sr_state

# Backup current configuration (pre-change)
crm configure show > /hana/shared/SID/dbcluster_backup_prechange.txt
crm configure show | sed -n '/primitive rsc_SAPHana_SID_HD/,/^$/p'
crm configure show | sed -n '/clone msl_SAPHana_SID_HD/,/^$/p'

Execution (Basis): Get the go-ahead from the leadership team.

Step 0 (Basis) - Put the cluster into maintenance mode:

crm resource maintenance g_ip_SID_HD on

# Backup current configuration while the cluster, msl, and g_ip are in maintenance
crm configure show > /hana/shared/SID/dbcluster_backup_prehealth.txt

Step 1 (SOPS/Basis) - (If not already done) clear the Node 1 health attributes and ensure the topology/azure-events resources are running on both nodes (this avoids scheduler surprises when we re-manage):

# Execute on m1vms* (ideally it can be executed on any node)
crm_attribute -N vm** -n '#health-azure' -v 0
crm_attribute --node vm** --delete --name "azure-events-az_curNodeState"
crm_attribute --node vm** --delete --name "azure-events-az_pendingEventIDs"
crm resource cleanup health-azure-events-cln
crm resource cleanup cln_SAPHanaTopology_SID_HD

# Backup current configuration once the health correction is complete and only the msl correction remains
crm configure show > /hana/shared/SID/dbcluster_backup_premsl.txt

Step 2 (Basis) - Convert the wrapper inside a single atomic transaction. We delete the promotable clone wrapper only (not the primitive), then create the ms wrapper with the same name msl_SAPHana_SID_HD, so that existing colocation/order constraints that reference the name keep working:

# Remove the promotable clone wrapper (keeps the rsc_SAPHana_SID_HD primitive intact)
crm configure delete msl_SAPHana_SID_HD

# Recreate as multi-state (ms) for the classic agents
sudo crm configure ms msl_SAPHana_SID_HD rsc_SAPHana_SID_HD \
  meta notify="true" clone-max="2" clone-node-max="1" target-role="Started" interleave="true" maintenance="true"

sudo crm resource meta msl_SAPHana_SID_HD set priority 100

Step 3 (Basis) - Re-enable cluster management of IP and HANA. Pre-checks are performed by the MSFT and SUSE teams (MSFT/SUSE) and by the Basis team (Basis):

crm configure property maintenance-mode=false
crm resource refresh msl_SAPHana_SID_HD
# wait 10 seconds
crm resource maintenance msl_SAPHana_SID_HD off
crm resource maintenance g_ip_SID_HD off

Validation (Basis):

crm_mon -R1 -Af -1
crm status
crm configure show
SAPHanaSR-showAttr

Rollback Plan (Basis):

# Enable maintenance mode
crm configure property maintenance-mode=true
crm resource maintenance msl_SAPHana_SID_HD on
crm resource maintenance g_ip_SID_HD on

# Restore configuration from backup. Decide which state you need to revert to
# and use the respective backup file:
crm configure load update /hana/shared/SID/dbcluster_backup_<prechange|prehealth|premsl>.txt

# Recreate the previous clone configuration if needed
crm configure clone msl_SAPHana_SID_HD rsc_SAPHana_SID_HD \
  meta notify=true clone-max=2 clone-node-max=1 target-role=Started interleave=true promotable=true maintenance="true"

# Disable maintenance and refresh resources
crm configure property maintenance-mode=false
crm resource refresh msl_SAPHana_SID_HD
# wait 10 seconds
crm resource maintenance msl_SAPHana_SID_HD off
crm resource maintenance g_ip_SID_HD off

Important Points:

1. Are there known version-specific considerations when migrating from clone to ms? If you are using SAPHanaSR, ensure you are using 'ms'. If you are working with SAPHanaSR-angi, you should use 'clone'. There are three different sets of HANA resource agents and SRHook scripts: two older ones and one newer one.

2. Does this change apply across the board on SUSE OS and/or Pacemaker versions? The packages for the older sets are SAPHanaSR (for scale-up HANA clusters) and SAPHanaSR-ScaleOut (for scale-out HANA clusters). The package for the new set is SAPHanaSR-angi, which covers both scale-up and scale-out clusters (angi stands for "advanced next generation interface"). When using the older SAPHanaSR or SAPHanaSR-ScaleOut resource agents and SRHook scripts, SUSE only supports the multi-state (ms) clone type for the SAPHana (scale-up) or SAPHanaController (scale-out) resource. The older resource agents and scripts are supported on all Service Packs of SLES for SAP 12 and 15. When using the newer SAPHanaSR-angi resource agents and scripts, SUSE only supports the regular clone type for the SAPHanaController resource (scale-up and scale-out) with the "promotable=true" meta-attribute set on the clone. The newer "angi" resource agents and scripts are supported on SLES for SAP 15 SP5 and higher, and on SLES for SAP 16 when it is released later this year.
So, with SLES for SAP 15 SP5 and higher, you can use either the older or the newer resource agents and scripts. For all Service Packs of SLES for SAP 12, and for Service Packs of SLES for SAP 15 prior to SP5, you must use the older resource agents and scripts. Starting with SLES for SAP 16, you must use the new angi resource agents and scripts. Installing the new SAPHanaSR-angi package automatically uninstalls the older SAPHanaSR or SAPHanaSR-ScaleOut packages if they are already installed. SUSE has published a blog on how to migrate from the older resource agents and scripts to the newer ones; see the SUSE reference link below.

Conclusion:

Set up and ensure that system replication is active; this is crucial to avoid business disruptions during critical operational hours. By taking these steps, we can seamlessly enhance the cluster architecture and resilience of our systems. Implementing these replication strategies bolsters business continuity and significantly improves overall resilience, so operations run more smoothly and efficiently and can handle future demands with ease.

Reference links:

High availability for SAP HANA on Azure VMs on SLES | Microsoft Learn
https://www.suse.com/c/how-to-upgrade-to-saphanasr-angi/
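Related to the Important Points above: before planning a move between agent generations, it helps to confirm which resource agent package is actually installed on the nodes. A quick check, assuming SLES:

# List installed SAPHanaSR-related packages
rpm -qa | grep -i saphanasr
# Classic scale-up clusters show SAPHanaSR, scale-out clusters show
# SAPHanaSR-ScaleOut, and the next-generation stack shows SAPHanaSR-angi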
Gen1 to Gen2 Azure VM Upgrade in Rolling Fashion

Introduction:

Azure offers Trusted Launch, a seamless solution designed to significantly enhance the security of Generation 2 virtual machines (VMs) and provide robust protection against advanced and persistent attack techniques. Trusted Launch is composed of several coordinated infrastructure technologies, each of which can be enabled independently. These technologies work together to create multiple layers of defense, ensuring virtual machines remain secure against sophisticated threats. With Trusted Launch, we can confidently improve our security posture and safeguard our VMs from potential vulnerabilities. Upgrading Azure VMs from Generation 1 (Gen1) to Generation 2 (Gen2) involves several steps to ensure a smooth transition without data loss or disruption.

Rolling fashion upgrade process:

First and foremost, it is crucial to have a complete backup of the virtual machines before starting the upgrade. This protects valuable data in case of any unforeseen issues during the process and ensures data is safe and secure. It is equally important to perform any new process or implementation in pre-production systems first, so that potential issues can be identified and resolved before moving to the production environment, maintaining the integrity and stability of the systems.

Run the pre-validation steps before you bring down the VM:

# SSH into the Gen1 Linux VM, then identify the boot device
bootDevice=$(echo "/dev/$(lsblk -no pkname $(df /boot | awk 'NR==2 {print $1}'))")

# Check the partition type (must return 'gpt')
sudo blkid $bootDevice -o value -s PTTYPE

# Validate the EFI system partition (e.g., /dev/sda2 or /dev/sda3)
sudo fdisk -l $bootDevice | grep EFI | awk '{print $1}'

# Check the EFI mountpoint (/boot/efi must be in /etc/fstab)
sudo grep -qs '/boot/efi' /etc/fstab && echo '/boot/efi present in /etc/fstab' || echo '/boot/efi missing in /etc/fstab'

Once the complete backup is in place and the pre-validation steps are completed, the SAP Basis team proceeds with stopping the application. As part of the planned procedure, once the application has been taken down, the Unix team shuts down the operating system on the ERS servers. The Azure team then follows the steps below and performs the Gen upgrade on the selected approved servers.

Start the VM:

Start-AzVM -ResourceGroupName myResourceGroup -Name myVm

(Or start it from the Azure Portal.) Log in to the Azure Portal to check that the VM generation has successfully changed to V2. Then:

- Unix team to validate the OS on the approved servers.
- SAP Basis team to generate a new license key based on the new hardware, apply it, and start the application.
- Unix team to perform failover of the ASCS cluster.
- SAP Basis team to stop the application server.
- Unix team to shut down the OS on ERS for the selected VMs and validate the OS.
- SAP Basis team to apply the new hardware key and start the application.
- Unix team to perform failover of the ASCS cluster.
- Azure team to work on capacity analysis to find the path forward for hosting Mv2 VMs in the same PPG group.

Once successfully completed, test rollback on at least one app server for rollback planning.
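A minimal sketch that bundles the pre-validation checks above into one script with a pass/fail summary (same assumptions as above: a Gen1 Linux VM with /boot on the OS disk):

#!/bin/bash
# Gen2/Trusted Launch pre-validation: GPT layout, EFI partition, EFI mountpoint
bootDevice="/dev/$(lsblk -no pkname "$(df /boot | awk 'NR==2 {print $1}')")"
echo "Boot device: $bootDevice"

ptType=$(sudo blkid "$bootDevice" -o value -s PTTYPE)
[ "$ptType" = "gpt" ] && echo "PASS: partition table is GPT" || echo "FAIL: partition table is '$ptType' (GPT required)"

efiPart=$(sudo fdisk -l "$bootDevice" | grep EFI | awk '{print $1}')
[ -n "$efiPart" ] && echo "PASS: EFI system partition found ($efiPart)" || echo "FAIL: no EFI system partition"

sudo grep -qs '/boot/efi' /etc/fstab && echo "PASS: /boot/efi present in /etc/fstab" || echo "FAIL: /boot/efi missing from /etc/fstab"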
Here are the other methods to achieve this:

Method 1: Using Trusted Launch Direct Upgrade

Prerequisites check: Ensure your subscription is onboarded to the preview feature Gen1ToTLMigrationPreview under the Microsoft.Compute namespace. The VM should be configured with a Trusted Launch supported size family and OS version. Also have a successful backup in place.

Update the guest OS volume: Update the guest OS volume with a GPT disk layout and an EFI system partition. Use the PowerShell-based orchestration script for MBR2GPT validation and conversion.

Enable Trusted Launch: Deallocate the VM using Stop-AzVM, then enable Trusted Launch by setting -SecurityType to TrustedLaunch with the Update-AzVM command:

Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm
Update-AzVM -ResourceGroupName myResourceGroup -VMName myVm -SecurityType TrustedLaunch -EnableSecureBoot $true -EnableVtpm $true

Validate and start the VM: Validate the security profile in the updated VM configuration. Start the VM and verify that you can sign in using RDP or SSH.

Method 2: Using Azure Backup

Verify backup data: Ensure you have valid and up-to-date backups of your Gen1 VMs, including both OS disks and data disks. Verify that the backups completed successfully and can be restored.

Create Gen2 VMs: Create new Gen2 VMs with the desired specifications and configuration. There's no need to start them initially; just have them created and ready for when you need them.

Restore VM backups: In the Azure Portal, go to the Azure Backup service. Select "Recovery Services vaults" from the left-hand menu, and then select the existing backup vault that contains the backups of the Gen1 VMs. Inside the Recovery Services vault, go to the "Backup Items" section and select the VM you want to restore. Initiate a restore operation for the VM. During the restore process, choose the target resource group and the target VM (which should be the newly created Gen2 VM).

Restore the OS disk: Choose to restore the OS disk of the Gen1 VM to the newly created Gen2 VM. Azure Backup restores the OS disk to the new VM, effectively migrating it to Generation 2.

Restore the data disks: Once the OS disk is restored and the Gen2 VM is operational, proceed to restore the data disks. Repeat the restore process for each data disk, attaching them to the Gen2 VM as needed.

Verify and test: Verify that the Gen2 VM is functioning correctly and all data is intact. Test thoroughly to ensure all applications and services are running as expected.

Decommission Gen1 VMs (optional): Once the migration is successful and you have verified that the Gen2 VMs are working correctly, decommission the original Gen1 VMs.

Important notes:

- Before proceeding with any production migration, thoroughly test this process in a non-production environment to ensure its success and identify any potential issues.
- Make sure you have a backup of critical data and configurations before attempting any migration.
- While this approach focuses on using Azure Backup for restoring the VMs, other migration strategies are available that may better suit your specific scenario. Evaluate them based on your requirements and constraints.
- Migrating VMs between generations involves changes in the underlying virtual hardware, so thorough testing and planning are essential to ensure a smooth transition without data loss or disruption.
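For script-based automation, the same deallocate-and-update flow from Method 1 can also be expressed with the Azure CLI. A sketch assuming placeholder resource group and VM names (verify the flags against the current az documentation before use):

# Deallocate the VM, enable Trusted Launch with Secure Boot and vTPM, then start it
az vm deallocate --resource-group myResourceGroup --name myVm
az vm update --resource-group myResourceGroup --name myVm \
  --security-type TrustedLaunch --enable-secure-boot true --enable-vtpm true
az vm start --resource-group myResourceGroup --name myVm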
Why is a Generation 2 upgrade without Trusted Launch not supported?

Trusted Launch provides foundational compute security for our VMs at no additional cost, which means we can enhance our security posture without incurring extra expenses. Moreover, Trusted Launch VMs are largely on par with Generation 2 VMs in terms of features and performance, so upgrading to Generation 2 VMs without enabling Trusted Launch provides no added benefit.

Unsupported Gen1 VM configurations:

A Gen1 to Trusted Launch VM upgrade is NOT supported if the Gen1 VM is configured with any of the following:

- Operating system: Windows Server 2016, Azure Linux, Debian, or any other operating system not listed under the Trusted Launch supported operating system (OS) versions. Ref link: Trusted Launch for Azure VMs - Azure Virtual Machines | Microsoft Learn
- VM size: A Gen1 VM configured with a VM size not listed under the Trusted Launch supported size families. Ref link: Trusted Launch for Azure VMs - Azure Virtual Machines | Microsoft Learn
- Azure Backup: A Gen1 VM configured with Azure Backup using the Standard policy. As a workaround, migrate the Gen1 VM backups from the Standard to the Enhanced policy. Ref link: Move VM backup - standard to enhanced policy in Azure Backup - Azure Backup | Microsoft Learn

Conclusion:

We will enhance Azure virtual machines by transitioning from Gen1 to Gen2. By implementing these approaches, we can seamlessly unlock improved security and performance. This transition not only bolsters security measures but also significantly enhances overall performance, ensuring operations run more smoothly and efficiently. Make this upgrade to ensure virtual machines are more robust and capable of handling future demands.

Ref links:

Upgrade Gen1 VMs to Trusted launch - Azure Virtual Machines | Microsoft Learn
GitHub - Azure/Gen1-Trustedlaunch: aka.ms/Gen1ToTLUpgrade
Enable Trusted launch on existing Gen2 VMs - Azure Virtual Machines | Microsoft Learn
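As a final sanity check after the upgrade, a quick in-guest test can confirm that the VM is booting via UEFI (Generation 2): Linux exposes EFI firmware variables only on UEFI boots.

# A Gen2 (UEFI) VM exposes /sys/firmware/efi; a Gen1 (BIOS) VM does not
[ -d /sys/firmware/efi ] && echo "UEFI boot detected: Generation 2" || echo "BIOS boot detected: Generation 1"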
Deep dive into Pacemaker cluster for Azure SAP systems optimization

Introduction:

Pacemaker is the Linux high availability cluster resource manager used to protect SAP systems on Azure. It provides a centralized way to monitor and maintain cluster nodes and resources, with automated alerts and comprehensive management tooling that help ensure the safety and availability of your systems. A well-managed Pacemaker setup gives organizations enhanced efficiency and peace of mind, knowing that their clusters are being managed optimally; it becomes easier to keep track of the cluster state and to respond promptly to any issues that arise.

Current customer challenges:

- Configuration: Common misconfigurations occur when customers don't follow up-to-date HA setup guidance from learn.microsoft.com, leading to failover issues.
- Testing: Manual testing leads to untested failover scenarios and configuration drift, and limited expertise in HA tools complicates troubleshooting.

Key use cases for SAP HA testing automation:

First, we need to automate validation on new OS versions. This helps ensure that the Pacemaker cluster configuration remains up to date and functions smoothly with the latest OS releases, so that compatibility issues can be addressed promptly. Next, we should implement loop tests that run on a regular cadence. These tests catch regressions early and ensure that customer systems remain robust and reliable over time; continuous monitoring is essential to maintain optimal performance. Furthermore, we must validate high availability (HA) configurations against the documented SAP on Azure best practices. This ensures effective failover and quick recovery, minimizes downtime, and maximizes system uptime.

SAP Testing Automation Framework (Public Preview):

The recommended approach for validating Pacemaker configurations in SAP HANA clusters is the SAP Deployment Automation Framework (SDAF) and its High Availability Testing Framework. This framework includes a comprehensive set of automated test cases designed to validate cluster behavior under various scenarios such as primary node crashes, manual resource migrations, and service failures. Additionally, it rigorously checks OS versions, Azure roles for fencing, SAP parameters, and Pacemaker/Corosync configurations to ensure everything is set up correctly. Low-level administrative commands are employed to validate the captured values against best practices, focusing in particular on constraints and meta-attributes. This thorough validation process ensures that clusters are reliable, resilient, and adhere to industry standards.
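As a trivial illustration of the loop tests mentioned above, here is a sketch of a recurring cluster health probe (assuming a SLES or RHEL Pacemaker node; the log path, grep heuristic, and interval are placeholders to adapt):

#!/bin/bash
# Recurring cluster health probe: logs a warning whenever the one-shot
# cluster status output mentions failed or unclean elements
LOG=/var/log/ha_loop_test.log
while true; do
  ts=$(date '+%F %T')
  status=$(crm_mon -1 2>&1)
  if echo "$status" | grep -qiE 'failed|unclean'; then
    { echo "$ts WARNING: possible cluster issue"; echo "$status"; } >> "$LOG"
  else
    echo "$ts OK" >> "$LOG"
  fi
  sleep 300   # 5-minute cadence; adjust as needed
done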
SAP System High Availability on Azure (reference architectures for SAP HANA scale-up and SAP Central Services)

Support Matrix:

Linux distribution and supported releases:
- SUSE Linux Enterprise Server (SLES): 15 SP4, 15 SP5, 15 SP6
- Red Hat Enterprise Linux (RHEL): 8.8, 8.10, 9.2, 9.4

High availability configuration patterns (component type / cluster fencing type / storage):
- SAP Central Services (ENSA1 or ENSA2) / Azure Fencing Agent / Azure Files or ANF
- SAP Central Services (ENSA1 or ENSA2) / iSCSI (SBD device) / Azure Files or ANF
- SAP HANA (scale-up) / Azure Fencing Agent / Azure Managed Disk or ANF
- SAP HANA (scale-up) / iSCSI (SBD device) / Azure Managed Disk or ANF

High availability test scenarios:

- Configuration checks. Database tier (HANA): HA resource parameter validation, Azure Load Balancer configuration. Central Services: HA resource parameter validation, SAPControl, Azure Load Balancer configuration.
- Failover tests. Database tier (HANA): HANA resource migration, primary node crash. Central Services: ASCS resource migration, ASCS node crash.
- Process & services. Database tier (HANA): index server crash, node kill, kill SBD service. Central Services: message server, enqueue server, enqueue replication server, sapstartsrv process.
- Network tests. Database tier (HANA): block network. Central Services: block network.
- Infrastructure. Database tier (HANA): virtual machine crash, freeze file system (storage). Central Services: manual restart, HA failover to node.

Reference links:

SLES:
Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure | Microsoft Learn
Troubleshoot startup issues in a SUSE Pacemaker cluster - Virtual Machines | Microsoft Learn

RHEL:
Set up Pacemaker on RHEL in Azure | Microsoft Learn
Troubleshoot Azure fence agent issues in an RHEL Pacemaker cluster - Virtual Machines | Microsoft Learn

STAF:
GitHub - Azure/sap-automation-qa: This is the repository supporting the quality assurance for SAP systems running on Azure.

Conclusion:

This innovative tool is designed to significantly streamline and enhance the high availability deployment of SAP systems on Azure by reducing potential misconfigurations and minimizing manual effort. Please note that because this framework performs multiple failovers sequentially to validate cluster behavior, it is not recommended to run it directly on production systems. It is intended for new high availability deployments that are not yet live, and for non-business-critical systems.
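To explore the framework hands-on, the repository listed in the references above can be cloned and run on a suitable non-production system; a minimal sketch (invocation details follow the repository's README and may change while the framework is in public preview):

# Get the SAP Testing Automation Framework (public preview)
git clone https://github.com/Azure/sap-automation-qa.git
cd sap-automation-qa
# Configure the target SAP system details and launch the HA test
# scenarios listed above by following the repository's README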
Announcing Public Preview for Business Process Solutions

In today’s AI-powered enterprises, success hinges on access to reliable, unified business information. Whether you are deploying AI-augmented workflows or fully autonomous agentic solutions, one thing is clear: trusted, consistent data is the fuel that drives intelligent outcomes. Yet in many organizations, data remains fragmented across best-of-breed applications, creating blind spots in cross-functional processes and throwing roadblocks in the path of automation. Microsoft is dedicated to tackling these challenges by delivering a unified data foundation that accelerates AI adoption, simplifies automation, and reduces risk, empowering businesses to unlock the full potential of unified data analytics and agentic intelligence.

Our new solution offers cross-functional insights across previously siloed environments and includes:

- Prebuilt data models for enterprise business applications in Microsoft Fabric
- Source system data mappings and transformations
- Prebuilt dashboards and reports in Power BI
- Prebuilt AI agents in Copilot Studio (coming soon)
- Integrated security and compliance

By unifying Microsoft’s Fabric and AI solutions, we can rapidly accelerate transformation and de-risk AI rollout through repeatable, reliable, prebuilt solutions.

Functional Scope

Our new solution currently supports a set of business applications and functional areas, enabling organizations to break down silos and drive actionable insights across their core processes. The platform covers key domains:

- Finance: Delivers a comprehensive view of financial performance, integrating data from general ledger, accounts receivable, and accounts payable systems. This enables finance teams to analyze trends, monitor compliance, and optimize cash flow management, all from within Power BI. The associated Copilot agent not only provides access to this data via natural language but will also enable financial postings.
- Sales: Provides a complete perspective on customers’ opportunity-to-cash journeys, from initial opportunity through invoicing and payment, via Power BI reports and dashboards. The associated Copilot agent can help improve revenue forecasting by connecting structured ERP and CRM data with unstructured data from Microsoft 365, while also tracking sales pipeline health and identifying bottlenecks.
- Procurement: Supports strategic procurement and supplier management, consolidating purchase orders, goods receipts, and vendor invoicing data into a complete spend dashboard. This empowers procurement teams to optimize sourcing strategies, manage supplier risk, and control spend.
- Manufacturing (coming soon): Will extend coverage to manufacturing and production processes, enabling organizations to optimize resource allocation and monitor production efficiency.

Each item within Business Process Solutions is delivered as a complete, business-ready offering. These models are thoughtfully designed to ensure that organizations can move seamlessly from raw data to actionable execution. Key features include:

- Facts and dimensions: Each model is structured to capture both transactional details (facts) and contextual information (dimensions), supporting granular analysis and robust reporting across business processes.
- Transformations: Built-in transformations automatically prepare data for reporting and analytics, making it compatible with Microsoft Fabric. For example, when a business user needs to compare sales results from Europe, Asia, and North America, the solution's transformations handle currency conversion behind the scenes.
This ensures that results are consistent across regions, making analysis straightforward and reliable, without the need for manual intervention or complex configuration.

- Insight to action: Customers will be able to leverage prebuilt Copilot agents within Business Process Solutions to turn insight into action. These agents are deeply integrated not only with Microsoft Fabric and Microsoft Teams but also with connected source applications, enabling users to take direct, contextual actions across systems based on real-time insights. By connecting unstructured data sources such as emails, chats, and documents from Microsoft 365 apps, the agents can provide a holistic and contextualized view to support smarter decisions. With embedded triggers and intelligent agents, automated responses can be initiated based on new insights, streamlining decision-making and enabling proactive, data-driven operations. Ultimately, this empowers teams not just to understand what is happening at a holistic level, but also to take faster, smarter actions with greater confidence.
- Authorizations: Data models are tailored to respect organizational security and access policies, ensuring that sensitive information is protected and only accessible to authorized users. The same user credential principles apply to the Copilot agents when interacting with or updating the source system in the user context.

Behind the scenes, the solution automatically provisions the required objects and infrastructure to build the data warehouse, removing the usual complexity of bringing data together. It guarantees consistency and reliability, so organizations can focus on extracting value from their data rather than managing technical details. This reliable data foundation serves as one of the key informants of agentic business processes.

Accelerated Insights with Prebuilt Analytics

Building on these robust data models, Business Process Solutions offers a suite of prebuilt Power BI reports tailored to common business processes. These reports provide immediate access to key metrics and trends, such as financial performance, sales effectiveness, and procurement efficiency. Designed for rapid deployment, they allow organizations to:

- Start analyzing data from day one, without lengthy setup or customization.
- Adapt existing reports to the organization’s exact business needs.
- Demonstrate best practices for leveraging data models in analytics and decision-making.

This approach accelerates time-to-value and empowers users to explore new analytical scenarios and drive continuous improvement.

Extensibility and Customization

Every organization is unique, and our new solution is designed to support this, allowing you to adapt analytics and data models to fit your specific processes and requirements. You can customize scope items, bring in your own tables and views, integrate new data sources as your business evolves, and combine data across Microsoft Fabric for deeper insights. Similarly, the associated agents will be customizable from Copilot Studio to adapt to your specific enterprise application configuration. This flexibility ensures that, no matter how your organization operates, Business Process Solutions helps you unlock the full value of your data.

Data Integration

Business Process Solutions uses the same connectivity options as Microsoft Fabric and Copilot Studio but goes further by embedding best practices that make integration simpler and more effective.
We recognize that no single pattern can address the diverse needs of all business applications. We also understand that many businesses have already invested in data extraction tools, which is why our solution supports a wide range of options, from native connectivity to third-party tools that bring specialized capabilities to the table. With Business Process Solutions, data can be interacted with in a reliable and high-performance way, whether you are working with massive volumes or complex data structures.

Getting Started

If your organization is ready to unlock the value of unified analytics, getting started is simple. Just send us a request using the form at https://aka.ms/JoinBusAnalyticsPreview. Our team will guide you through the next steps and help you begin your journey.
Backup SAP Oracle Databases Using Azure VM Backup Snapshots

This blog article provides a comprehensive step-by-step guide for backing up SAP Oracle databases using Azure VM backup snapshots, ensuring data safety and integrity.

1. Installation of CIFS utilities: The process begins with the installation of cifs-utils on Oracle Linux, which is the recommended OS for running Oracle databases in the cloud.
2. Setting up environment variables: Users are instructed to define the necessary environment variables for the resource group and storage account names.
3. Creating SMB credentials: The guide explains how to create a folder for SMB credentials and retrieve the storage account key, emphasizing the need for appropriate permissions.
4. Mounting the SMB file share: Instructions are provided for checking the accessibility of the storage account and mounting the SMB file share, which serves as a backup location for archived logs.
5. Preparing the Oracle database for backup: Users must place the Oracle database in hot backup mode to ensure a consistent backup while allowing ongoing transactions (see the sketch after this list).
6. Initiating the snapshot backup: Once the VM backup is configured, users can initiate a snapshot backup to capture the state of the virtual machine, including the Oracle database.
7. Restoration process: The document outlines the steps for restoring the Oracle database from the backup, including updating IP addresses and starting the database listener.
8. Final steps and verification: Users are encouraged to verify the configuration and ensure that all necessary backups completed successfully, including the SMB file share.
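For the hot backup step, the database is switched into backup mode before the VM snapshot and taken out of it afterwards. A minimal sketch, assuming the database runs in ARCHIVELOG mode and the commands run as the oracle OS user:

# Put the database into hot (online) backup mode before the VM snapshot
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE BEGIN BACKUP;
EXIT;
EOF

# ... trigger the Azure VM snapshot backup here ...

# Take the database out of backup mode and force a log switch so the
# archived logs on the SMB share cover the backup window
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;
EXIT;
EOF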
Azure Files NFS Encryption In Transit for SAP on Azure Systems

Azure Files NFS volumes now support encryption in transit via TLS. With this enhancement, Azure Files NFS v4.1 offers the robust security that modern enterprises require, without compromising performance, by ensuring all traffic between clients and servers is fully encrypted. Azure Files NFS data can now be encrypted end to end: at rest, in transit, and across the network.

Using stunnel, an open-source TLS wrapper, Azure Files encrypts the TCP stream between the NFS client and Azure Files with strong AES-GCM encryption, without needing Kerberos. This ensures data confidentiality while eliminating the need for complex setups or external authentication systems like Active Directory. The AZNFS utility package simplifies encrypted mounts by installing and setting up stunnel on the client (Azure VMs). The AZNFS mount helper mounts the NFS shares with TLS support: it initializes a dedicated stunnel client process for each storage account's IP address. The stunnel client process listens on a local port for inbound traffic and redirects the encrypted NFS client traffic to port 2049, where the NFS server is listening. The AZNFS package also runs a background job called aznfswatchdog, which ensures that stunnel processes are running for each storage account and cleans up after all shares from a storage account are unmounted. If a stunnel process terminates unexpectedly, the watchdog process restarts it. For more details, refer to the following document: How to encrypt data in transit for NFS shares.

Availability in Azure Regions

All regions that support Azure Premium Files now support encryption in transit.

Supported Linux Releases

For the SAP on Azure environment, Azure Files NFS Encryption in Transit (EiT) is available for the following operating system releases:

- SLES for SAP 15 SP4 onwards
- RHEL for SAP 8.6 onwards (EiT is currently not supported for file systems managed by Pacemaker clusters on RHEL.)

Refer to SAP Note 1928533 for operating system supportability for SAP on Azure systems.

How to Deploy Encryption in Transit (EiT) for Azure Files NFS Shares

Refer to the SAP on Azure deployment planning guide about using Azure Premium Files NFS and SMB for SAP workloads. As described in the planning guide, the following uses of Azure Files NFS shares are supported for SAP workloads, and EiT can be used in all of these scenarios:

- sapmnt volume for a distributed SAP system
- transport directory for the SAP landscape
- /hana/shared for HANA scale-out. Review carefully the considerations for sizing /hana/shared, as an appropriately sized /hana/shared volume contributes to the system's stability.
- file interface between your SAP landscape and other applications

Deploy the Azure Files NFS storage account. Refer to the standard documentation for creating the Azure Files storage account, file share, and private endpoint: Create an NFS Azure file share. Note: we can enforce EiT for all the file shares in the Azure storage account by enabling the 'secure transfer required' option.

Deploy the mount helper (AZNFS) package on the Linux VM. Follow the instructions for your Linux distribution to install the package.

Create the directories to mount the file shares:

mkdir -p <full path of the directory>

Mount the NFS file share. Refer to the section on mounting the Azure Files NFS EiT file share in Linux VMs. To mount the file share permanently, add the mount entry to '/etc/fstab':
vi /etc/fstab
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 aznfs noresvport,vers=4,minorversion=1,sec=sys,_netdev 0 0

# Mount the file systems
mount -a

- The file systems mentioned above are an example to explain the mount command syntax.
- When adding an NFS mount entry to /etc/fstab, the fstype is normally "nfs". However, to use the AZNFS mount helper and EiT, we need to use the fstype "aznfs", which is not known to the operating system; at boot time the server tries to mount these entries before the watchdog is active, and they may fail. Always add the "_netdev" option to your /etc/fstab entries to make sure shares are mounted on reboot only after the required services (like the network) are active.
- We can add the "notls" option to the mount command if we don't want to use EiT but just want to use the AZNFS mount helper to mount the file system. Also, we cannot mix EiT and no-EiT methods for different file systems using Azure Files NFS in the same Azure VM; mount commands may fail if EiT and no-EiT methods are used in the same VM.
- The mount helper supports private-endpoint-based connections for Azure Files NFS EiT.
- If the SAP VM is custom-domain joined, we can use the custom DNS FQDN or short names for the file share in '/etc/fstab' as defined in DNS. To verify the hostname resolution, check using the 'nslookup <hostname>' and 'getent hosts <hostname>' commands.

Mount the NFS file share as a Pacemaker cluster resource for SAP Central Services. In a high availability setup of SAP Central Services, we may use a file system as a resource in the Pacemaker cluster, in which case it needs to be mounted using Pacemaker cluster commands. In the Pacemaker commands that set up the file system as a cluster resource, change the mount type from 'nfs' to 'aznfs'. It is also recommended to use '_netdev' in the options parameter. The following are the SAP Central Services setup scenarios in which Azure Files NFS is used as a Pacemaker resource and in which Azure Files NFS EiT can be used:

- Azure VMs high availability for SAP NW on SLES with NFS on Azure Files
- Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files

For SUSE Linux: SLES 15 SP4 (for SAP) and higher releases recognize 'aznfs' as a file system type in the Pacemaker resource agent. SUSE recommends using the simple mount approach for the high availability setup of SAP Central Services, in which all file systems are mounted using '/etc/fstab' only.

For RHEL Linux: RHEL 8.6 (for SAP) and higher releases will recognize 'aznfs' as a file system type in the Pacemaker resource agent. At the time of writing this blog, 'aznfs' is not yet recognized as a file system type by the Filesystem resource agent (RA) on RHEL, hence this setup cannot be used at the moment.

For SAP HANA scale-out with HSR setup: We can use Azure Files NFS EiT for SAP HANA scale-out with HSR as described in the docs below.

- SAP HANA scale-out with HSR and Pacemaker on SLES
- SAP HANA scale-out with HSR and Pacemaker on RHEL

Mount the '/hana/shared' file system with EiT by defining the file system type as 'aznfs' in '/etc/fstab'; it is also recommended to use '_netdev' in the options parameter.

For SUSE Linux: In the "Create file system resources" section of the SAP HANA high availability setup with the "SAPHanaSR-ScaleOut" package, in which a dummy file system cluster resource is created to monitor and report failures of the '/hana/shared' file system, we can continue to follow the steps as-is with 'fstype=nfs4'.
The '/hana/shared' file system will still use EiT as defined in '/etc/fstab'. For the SAP HANA high availability "SAPHanaSR-angi" package, no further actions are needed to use Azure Files NFS EiT.

For RHEL Linux: In the "Create file system resources" section, we can replace the file system type 'nfs' with 'aznfs' in the Pacemaker resource configuration for the '/hana/shared' file systems.

Validation of In-Transit Data Encryption for Azure Files NFS

Refer to the "Verify that in-transit data encryption succeeded" section to check and confirm that EiT is working successfully.

Summary

Go ahead with EiT! The simplified deployment of encryption in transit for Azure Files Premium NFS (locally redundant storage / zone-redundant storage) will strengthen the security footprint of production and non-production SAP on Azure environments.
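As a quick recap of the mount mechanics described above, an ad-hoc encrypted mount with the AZNFS helper looks like the following (hypothetical storage account and share names; the helper sets up the stunnel TLS tunnel automatically):

sudo mkdir -p /mnt/sapshare
# The 'aznfs' fstype routes the mount through the AZNFS helper, which
# transparently wraps the NFS traffic in a stunnel TLS tunnel
sudo mount -t aznfs mystorageacct.file.core.windows.net:/mystorageacct/myshare /mnt/sapshare \
  -o noresvport,vers=4,minorversion=1,sec=sys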
SAP Web Dispatcher on Linux with High Availability Setup on Azure

1. Introduction

The SAP Web Dispatcher component is used for load balancing SAP HTTP(S) web traffic among the SAP application servers. It works as a "reverse proxy" and is the entry point for HTTP(S) requests into the SAP environment, which consists of one or more SAP NetWeaver systems. This blog provides detailed guidance on setting up high availability for a standalone SAP Web Dispatcher on the Linux operating system on Azure. There are two options to set up high availability for SAP Web Dispatcher:

- Active/passive high availability setup using a Linux Pacemaker cluster (SUSE or Red Hat) with a virtual IP/hostname defined in Azure Load Balancer.
- Active/active high availability setup by deploying multiple parallel instances of SAP Web Dispatcher across different Azure virtual machines (running either SUSE or Red Hat) and distributing traffic using Azure Load Balancer.

We will walk through the configuration steps for both high availability scenarios in this blog.

2. Active/Passive HA Setup of SAP Web Dispatcher

2.1. System Design

The following is the high-level architecture diagram of an HA SAP production environment on Azure, with the standalone SAP Web Dispatcher (WD) HA setup highlighted. In this active/passive node design, the primary node of the SAP Web Dispatcher receives the users' requests and transfers (and load balances) them to the backend SAP application servers. If the primary node becomes unavailable, the Linux Pacemaker cluster performs a failover of the SAP Web Dispatcher to the secondary node. Users connect to the SAP Web Dispatcher using the virtual hostname (FQDN) and virtual IP address as defined in the Azure Load Balancer. The Azure Load Balancer health probe port is activated by the Pacemaker cluster on the primary node, so all user connections to the virtual IP/hostname are redirected by the Azure Load Balancer to the active SAP Web Dispatcher. The SAP Help documentation describes this HA architecture as "High Availability of SAP Web Dispatcher with External HA Software".

The following are the advantages of the active/passive SAP WD setup:

- The Linux Pacemaker cluster continuously monitors the active SAP WD node and the services running on it. In any error scenario, the active node is fenced by the Pacemaker cluster and the secondary node is made active. This ensures the best user experience around the clock.
- Complete automation of error detection and start/stop functionality of SAP WD.
- It is less challenging to define an application-level SLA when Pacemaker manages the SAP WD. Azure provides a VM-level SLA of 99.99% if VMs are deployed in Availability Zones.

We need the following components to set up HA SAP Web Dispatcher on Linux:

- A pair of SAP-certified VMs on Azure with a supported Linux operating system. Cross-Availability-Zone deployment is recommended for a higher VM-level SLA.
- Azure Files (premium) for the 'sapmnt' NFS share, which will be available/mounted on both VMs for SAP Web Dispatcher.
- Azure Load Balancer for configuring the virtual IP and hostname (in DNS) of the SAP Web Dispatcher.
- A Linux Pacemaker cluster.
- Installation of SAP Web Dispatcher on both VMs with the same SID and system number. It is recommended to use the latest version of SAP Web Dispatcher.
- Configuration of the Pacemaker resource agent for the SAP Web Dispatcher application.

2.2. Deployment Steps

This section provides detailed steps for the HA active/passive SAP Web Dispatcher deployment for both supported Linux operating systems (SUSE and Red Hat).
Please refer to SAP Note 1928533 for SAP on Azure certified VMs, SAPS values, and supported operating system versions for the SAP environment. In the steps below, 'For SLES' applies to the SLES operating system and 'For RHEL' applies to the RHEL operating system. If no operating system is mentioned for a step, it applies to both. The steps are also prefixed with:

[A]: Applicable to all nodes.
[1]: Applicable to node 1 only.
[2]: Applicable to node 2 only.

Deploy the VMs (of the desired SKU) in the Availability Zones and choose the operating system image as SLES/RHEL for SAP. In this blog, the following names are used:

Node 1: webdisp01
Node 2: webdisp02
Virtual hostname: eitwebdispha

Follow the standard SAP on Azure documentation for the base Pacemaker setup on the SAP Web Dispatcher VMs. We can use either an SBD device or the Azure fence agent for setting up fencing in the Pacemaker cluster.

For SLES: Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure
For RHEL: Set up Pacemaker on Red Hat Enterprise Linux in Azure

The rest of the setup steps below are derived from the following SAP ASCS/ERS HA setup documents and the SUSE/RHEL blogs on SAP WD setup. It is highly recommended to read these documents.

For SLES:
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files
SUSE Blog: SAP Web Dispatcher High Availability on Cloud with SUSE Linux

For RHEL:
High availability for SAP NetWeaver on VMs on RHEL with NFS on Azure Files
RHEL Blog: How to manage standalone SAP Web Dispatcher instances using the RHEL HA Add-On - Red Hat Customer Portal

Deploy the Azure Standard Load Balancer for defining the virtual IP of the SAP Web Dispatcher. In this example, the following setup is used:

Frontend IP: 10.50.60.45 (virtual IP of SAP Web Dispatcher)
Backend pool: Node 1 & Node 2 VMs
Health probe port: 62320 (set probeThreshold=2)
Load balancing rule: HA Port: Enable; Floating IP: Enable; Idle Timeout: 30 mins

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer; enabling TCP timestamps will cause the health probes to fail. Set the "net.ipv4.tcp_timestamps" OS parameter to '0'. For details, see Load Balancer health probes. Run the following command to set this parameter; to set the value permanently, add or update the parameter in /etc/sysctl.conf:

sudo sysctl net.ipv4.tcp_timestamps=0

When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address) Standard Azure Load Balancer, there is no outbound internet connectivity unless additional configuration is performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios.

Configure NFS for the 'sapmnt' and SAP WD instance file systems on Azure Files. Deploy the Azure Files storage account (ZRS) and create file shares for 'sapmnt' and the SAP WD instance (/usr/sap/SID/Wxx). Connect it to the VNet of the SAP VMs using a private endpoint.

For SLES: Refer to the "Deploy an Azure Files storage account and NFS shares" section for detailed steps.
For RHEL: Refer to the "Deploy an Azure Files storage account and NFS shares" section for detailed steps.

Mount the NFS volumes.

[A] For SLES: The NFS client and other required resources come preinstalled.
[A] For RHEL: Install the NFS client and other resources.
sudo yum -y install nfs-utils resource-agents resource-agents-sap

[A] Mount the NFS file system on both VMs. Create the shared directories:

sudo mkdir -p /sapmnt/WD1
sudo mkdir -p /usr/sap/WD1/W00
sudo chattr +i /sapmnt/WD1
sudo chattr +i /usr/sap/WD1/W00

[A] Mount the file system that will not be controlled by the Pacemaker cluster:

echo "sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-sapmnt /sapmnt/WD1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 2" >> /etc/fstab
mount -a

Prepare for the SAP Web Dispatcher HA installation.

[A] For SUSE: Install the latest version of the SUSE connector:

sudo zypper install sap-suse-cluster-connector

[A] Set up host name resolution (including the virtual hostname). We can either use a DNS server or modify /etc/hosts on all nodes.

[A] Configure the SWAP file. Edit the '/etc/waagent.conf' file and change the following parameters:

ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2000

[A] Restart the agent to activate the change:

sudo service waagent restart

[A] For RHEL: Apply the SAP Notes based on the RHEL OS version:

SAP Note 2002167 for RHEL 7.x
SAP Note 2772999 for RHEL 8.x
SAP Note 3108316 for RHEL 9.x

Create the SAP WD instance file system, virtual IP, and probe port resources for SAP Web Dispatcher.

[1] For SUSE:

# Keep node 2 in standby
sudo crm node standby webdisp02

# Configure file system, virtual IP, and probe resource
sudo crm configure primitive fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-su-usrsap' \
  directory='/usr/sap/WD1/W00' fstype='nfs' options='noresvport,vers=4,minorversion=1,sec=sys' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_WD1_W00 IPaddr2 \
  params ip=10.50.60.45 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_WD1_W00 azure-lb port=62320 \
  op monitor timeout=20s interval=10

sudo crm configure group g-WD1_W00 fs_WD1_W00 nc_WD1_W00 vip_WD1_W00

Make sure that all the resources in the cluster are started and running on node 1. Check the status using the command 'crm status'.

[1] For RHEL:

# Keep node 2 in standby
sudo pcs node standby webdisp02

# Create file system, virtual IP, probe resource
sudo pcs resource create fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-rh-usrsap' \
  directory='/usr/sap/WD1/W00' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
  --group g-WD1_W00

sudo pcs resource create vip_WD1_W00 IPaddr2 \
  ip=10.50.60.45 \
  --group g-WD1_W00

sudo pcs resource create nc_WD1_W00 azure-lb port=62320 \
  --group g-WD1_W00

Make sure that all the resources in the cluster are started and running on node 1. Check the status using the command 'pcs status'.

[1] Install SAP Web Dispatcher on the first node.

For RHEL: Allow access to SWPM. This rule is not permanent; if you reboot the machine, you must run the command again:

sudo firewall-cmd --zone=public --add-port=4237/tcp

Run SWPM:

./sapinst SAPINST_USE_HOSTNAME=<virtual hostname>

Enter the virtual hostname and instance number, provide the S/4HANA message server details for the backend connections, and continue with the SAP Web Dispatcher installation. Check the status of SAP WD.

[1] Stop SAP WD and disable the systemd service. This step applies only if the SAP startup framework is managed by systemd as per SAP Note 3115048:
[1] Stop the SAP WD and disable the systemd service. This step is needed only if the SAP startup framework is managed by systemd, as per SAP Note 3115048.

# log in as the sidadm user
sapcontrol -nr 00 -function Stop
# log in as the root user
systemctl disable SAPWD1_00.service

[1] Move the filesystem, virtual IP, and probe port resources for the SAP Web Dispatcher to the second node.

For SLES:

sudo crm node online webdisp02
sudo crm node standby webdisp01

For RHEL:

sudo pcs node unstandby webdisp02
sudo pcs node standby webdisp01

NOTE: Before proceeding to the next steps, check that the resources have successfully moved to node 2.

[2] Set up SAP Web Dispatcher on the second node. To set up the SAP WD on node 2, copy the following files and directories from node 1 to node 2, and perform the other tasks on node 2 as mentioned below.

Note: Please ensure that the permissions, owner, and group names of all copied items are the same on node 2 as on node 1. Before copying, save a copy of the existing files on node 2.

Files to copy:

# For SLES and RHEL
/usr/sap/sapservices
/etc/systemd/system/SAPWD1_00.service
/etc/polkit-1/rules.d/10-SAPWD1-00.rules
/etc/passwd
/etc/shadow
/etc/group
# For RHEL
/etc/gshadow

Folders to copy:

# After copying, rename the 'hostname' part in the environment file names.
/home/wd1adm
/home/sapadm
/usr/sap/ccms
/usr/sap/tmp

Create the 'SYS' directory in the /usr/sap/WD1 folder, and create all subdirectories and soft links as they exist on node 1.

[2] Install the SAP host agent: extract the SAPHOSTAGENT.SAR file and run the install command.

./saphostexec -install

Check whether the SAP host agent is running successfully.

/usr/sap/hostctrl/exe/saphostexec -status

[2] Start the SAP WD on node 2 and check the status.

sapcontrol -nr 00 -function StartService WD1
sapcontrol -nr 00 -function Start
sapcontrol -nr 00 -function GetProcessStatus

[1] For SLES: Update the instance profile.

vi /sapmnt/WD1/profile/WD1_W00_wd1webdispha

# Add the following lines.
service/halib = $(DIR_EXECUTABLE)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

[A] Configure the SAP users after the installation.

sudo usermod -aG haclient wd1adm

[A] Configure the keepalive parameter, and add the parameter to /etc/sysctl.conf to set the value permanently.

sudo sysctl net.ipv4.tcp_keepalive_time=300

Create the SAP Web Dispatcher resource in the cluster.

For SLES:

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_WD1_W00 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=WD1_W00_wd1webdispha \
  START_PROFILE="/usr/sap/WD1/SYS/profile/WD1_W00_wd1webdispha" \
  AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp"

sudo crm configure modgroup g-WD1_W00 add rsc_sap_WD1_W00
sudo crm node online webdisp01
sudo crm configure property maintenance-mode="false"

For RHEL:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_WD1_W00 SAPInstance \
  InstanceName=WD1_W00_wd1webdispha START_PROFILE="/sapmnt/WD1/profile/WD1_W00_wd1webdispha" \
  AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" \
  op monitor interval=20 on-fail=restart timeout=60 \
  --group g-WD1_W00

sudo pcs node unstandby webdisp01
sudo pcs property set maintenance-mode=false

[A] For RHEL: Add firewall rules for the SAP Web Dispatcher and Azure Load Balancer health probe ports on both nodes.

sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp

Verify that the SAP Web Dispatcher cluster is running successfully. Check the "Insights" blade of the Azure Load Balancer in the portal; it should show that connections are redirected to one of the nodes.
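Beyond the portal view, a quick hand check from any VM in the VNet can confirm that the virtual IP is being served. This is a minimal sketch, assuming the virtual hostname wd1webdispha resolves to 10.50.60.45 and the WD listens on HTTP port 8000 (one of the ports opened in the RHEL firewall rules above; adjust to your configured ports):

# The probe port is answered only on the node currently running the cluster group
nc -zv wd1webdispha 62320
# The Web Dispatcher HTTP endpoint reached through the load balancer
curl -s -o /dev/null -w "%{http_code}\n" http://wd1webdispha:8000/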
Check that the backend S/4HANA connection is working, using the SAP Web Dispatcher Administration link. Run the sapwebdisp configuration check:

sapwebdisp pf=/sapmnt/WD1/profile/WD1_W00_wd1webdispha -checkconfig

Test the cluster setup.

For SLES: Pacemaker cluster testing for the SAP Web Dispatcher can be derived from the document Azure VMs high availability for SAP NetWeaver on SLES (for the ASCS/ERS cluster). We can run the following test cases (from the above link) that are applicable to the SAP WD component:

- Test HAGetFailoverConfig and HACheckFailoverConfig
- Manually migrate the SAP Web Dispatcher resource
- Test HAFailoverToNode
- Simulate a node crash
- Block network communication
- Test a manual restart of the SAP WD instance

For RHEL: Pacemaker cluster testing for the SAP Web Dispatcher can be derived from the document Azure VMs high availability for SAP NetWeaver on RHEL (for the ASCS/ERS cluster). We can run the following test cases (from the above link) that are applicable to the SAP WD component:

- Manually migrate the SAP Web Dispatcher resource
- Simulate a node crash
- Block network communication
- Kill the SAP WD process

3. Active/Active HA Setup of SAP Web Dispatcher

3.1. System Design

In this active/active setup of SAP Web Dispatcher (WD), we deploy and run parallel standalone WDs on individual VMs in a share-nothing design, each with a different SID. To connect to the SAP Web Dispatcher, users use the single virtual hostname (FQDN)/IP defined as the front-end IP of the Azure Load Balancer. The virtual IP to hostname/FQDN mapping needs to be performed in AD/DNS. Incoming traffic is distributed to either of the WDs by the Azure internal load balancer. No operating system cluster setup is required in this scenario, and this architecture can be deployed on either Linux or Windows operating systems.

In the ILB configuration, session persistence settings ensure that a user's successive requests are always routed from the Azure Load Balancer to the same WD, as long as it is active and ready to receive connections. SAP Help documentation describes this HA architecture as "High availability with several parallel Web Dispatchers".

The following are the advantages of the active/active SAP WD setup:

- Simpler design: there is no need to set up an operating system cluster.
- Two WD instances handle the requests and distribute the workload. If one of the nodes fails, the load balancer forwards requests to the other node and stops sending requests to the failed node, so the SAP WD setup remains highly available.

We need the following components to set up an active/active SAP Web Dispatcher on Linux:

- A pair of SAP-certified VMs on Azure with a supported Linux operating system. Cross-availability-zone deployment is recommended for a higher VM-level SLA.
- An Azure managed disk of the required size on each VM to create the filesystems for 'sapmnt' and '/usr/sap'.
- An Azure Load Balancer for configuring the virtual IP and hostname (in DNS) of the SAP Web Dispatcher.
- Installation of the SAP Web Dispatcher on both VMs with different SIDs. It is recommended to use the latest version of SAP Web Dispatcher.

3.2. Deployment Steps

This section provides detailed steps for an HA active/active SAP Web Dispatcher deployment for both supported Linux operating systems (SUSE Linux and Red Hat Linux). Please refer to SAP Note 1928533 for SAP on Azure certified VMs, SAPS values, and supported operating system versions for the SAP environment.

3.2.1. For SUSE and RHEL Linux

Deploy the VMs (of the desired SKU) in the availability zones and choose the operating system image as SUSE/RHEL Linux for SAP.
Add a managed data disk to each of the VMs and create the '/usr/sap' and '/sapmnt/<SID>' filesystems on it.

Install the SAP Web Dispatcher using SAP SWPM on both VMs. Both SAP WDs are completely independent of each other and should have separate SIDs.

Perform the basic configuration check for both SAP Web Dispatchers using "sapwebdisp pf=<profile> -checkconfig". We should also check that the SAP WD Admin URL is working for both WDs.

Deploy the Azure standard load balancer to define the virtual IP of the SAP Web Dispatcher. As a reference, the following setup is used in the deployment.

Front-end IP: 10.50.60.99 (virtual IP of SAP Web Dispatcher)
Backend Pool: Node1 & Node2 VMs
Health Probe: Protocol: HTTPS; Port: 44300 (WD HTTPS port); Path: /sap/public/icman/ping; Interval: 5 seconds (set probeThreshold=2 using the Azure CLI)
Load Balancing Rule: Port & Backend Port: 44300; Floating IP: Disable; TCP Reset: Disable; Idle Timeout: Max (30 minutes)

icman/ping is a way to ensure that the SAP Web Dispatcher is successfully connected to the backend SAP S/4HANA or SAP ERP based application servers. This check is also part of the basic configuration check of the SAP Web Dispatcher using "sapwebdisp pf=<profile> -checkconfig". If we use an HTTP(S)-based health probe, the ILB connection is redirected to a SAP WD only when the connection between that SAP WD and the S/4HANA or ERP application is working.

If we have a Java-based SAP system as the backend environment, then 'icman/ping' is not available and an HTTP(S) path can't be used in the health probe. In that case, we can use a TCP-based health probe (protocol value 'tcp') and use a SAP WD TCP port (like port 8000) in the health probe configuration.

In this setup, we used HTTPS port 44300 as the port & backend port value, as that is the only port number used by the incoming/source URL. If multiple ports are to be used/allowed in the incoming URL, then we can enable 'HA Ports' in the load balancing rule instead of specifying each used port.

Note: As per SAP Note 2941769, we need to set the SAP Web Dispatcher parameter wdisp/filter_internal_uris=FALSE. We also need to verify that the icman ping URL is working for both SAP Web Dispatchers with their actual hostnames, as shown in the sketch after this step list.

Define the front-end IP (virtual IP) and hostname mapping in DNS or the /etc/hosts file.

Check whether the Azure Load Balancer is routing traffic to both WDs. In the 'Insights' section of the Azure Load Balancer, the connection health to the VMs should be green.

Validate that the SAP Web Dispatcher URL is accessible using the virtual hostname.

Perform high availability tests for the SAP WD:

- Stop the first SAP WD and verify that WD connections are working. Then start the first WD, stop the second WD, and verify that the WD connections are working.
- Simulate a node crash on each of the WD VMs and verify that the WD connections are working.
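The icman ping check mentioned in the note above can be done by hand with curl. This is a minimal sketch, reusing the node hostnames from the earlier active/passive example (hypothetical here) and the HTTPS port 44300 from the table above; -k skips certificate validation for a quick test:

# Check the probe path against each WD's actual hostname; HTTP 200 indicates the
# dispatcher is up and connected to the backend application servers
curl -k -s -o /dev/null -w "%{http_code}\n" https://webdisp01:44300/sap/public/icman/ping
curl -k -s -o /dev/null -w "%{http_code}\n" https://webdisp02:44300/sap/public/icman/ping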
3.3. SAP Web Dispatcher (active/active) for Multiple Systems

We can use the SAP WD (active/active) pair to connect to multiple backend SAP systems, rather than setting up a separate SAP WD for each SAP backend environment. Based on the unique URL of the incoming request, with a different virtual hostname/FQDN and/or port of the SAP WD, the user request is directed to one of the SAP WDs, and the SAP WD then determines the backend system to which it redirects and load-balances the requests. SAP documents describe the design and the SAP-specific configuration steps for this scenario:

- SAP Web Dispatcher for Multiple Systems
- One SAP Web Dispatcher, Two Systems: Configuration Example

In the Azure environment, the SAP Web Dispatcher architecture will be as below.

We can deploy this setup by defining an Azure standard load balancer with multiple front-end IPs attached to one backend pool of SAP WD VMs, and configuring a health probe and load balancing rules to associate them. When configuring Azure Load Balancer with multiple frontend IPs pointing to the same backend pool/port, floating IP must be enabled for each load balancing rule. If floating IP is not enabled on the first rule, Azure won't allow the configuration of additional rules with different frontend IPs on the same backend port. Refer to the article Multiple frontends - Azure Load Balancer.

With floating IPs enabled on multiple load balancing rules, the frontend IPs must be added to the network interface (e.g., eth0) on both SAP Web Dispatcher VMs.

3.3.1. Deployment Steps

Deploy the VMs (of the desired SKU) in the availability zones and choose the operating system image as SUSE/RHEL Linux for SAP.

Add a managed data disk to each of the VMs and create the '/usr/sap' and '/sapmnt/<SID>' filesystems on it.

Install the SAP Web Dispatcher using SAP SWPM on both VMs. Both SAP WDs are completely independent of each other and should have separate SIDs.

Deploy the Azure Standard Load Balancer with the configuration below.

Front-end IP 1: 10.50.60.99 (virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID E10)
Front-end IP 2: 10.50.60.101 (virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID E60)
Backend Pool: Node1 & Node2 VMs (shared by both rules)
Health Probe: Protocol: TCP; Port: 8000 (WD TCP port); Interval: 5 seconds (set probeThreshold=2 using the Azure CLI)
Load Balancing Rule (one per front-end IP): Protocol: TCP; Port & Backend Port: 44300; Floating IP: Enable; TCP Reset: Disable; Idle Timeout: Max (30 minutes)

As described above, we are defining 2 front-end IPs, 2 load balancing rules, 1 backend pool, and 1 health probe. In this setup, we used HTTPS port 44300 as the port & backend port value, as that is the only port number used by the incoming/source URL. If multiple ports are to be used/allowed in the incoming URL, then we can enable 'HA Ports' in the load balancing rule instead of specifying each used port.

Define the front-end IP (virtual IP) and hostname mappings in DNS or the /etc/hosts file.

Add both virtual IPs to the network interface of the SAP WD VMs, for example as sketched below.
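A minimal sketch of adding the two frontend IPs as secondary addresses on eth0; the /26 prefix matches the subnet used in this example. Note that plain ip addr add does not survive a reboot, which is what the distribution-specific references below address:

# On both nodes: add the frontend IPs as secondary addresses (non-persistent)
sudo ip addr add 10.50.60.99/26 dev eth0
sudo ip addr add 10.50.60.101/26 dev eth0
# Verify
ip addr show eth0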
Make sure the additional IPs are added permanently and do not disappear after a VM reboot.

- For SLES, refer to the "alternative workaround" section in Automatic Addition of Secondary IP Addresses in Azure.
- For RHEL, refer to the solution provided using the "nmcli" command in How to add multiple IP range in RHEL9.

Displaying "ip addr show" for SAP WD VM1:

>> ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 60:45:bd:73:bd:14 brd ff:ff:ff:ff:ff:ff
    inet 10.50.60.87/26 brd 10.50.60.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6245:bdff:fe73:bd14/64 scope link
       valid_lft forever preferred_lft forever

Displaying "ip addr show" for SAP WD VM2:

>> ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 60:45:bd:73:b1:92 brd ff:ff:ff:ff:ff:ff
    inet 10.50.60.93/26 brd 10.50.60.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6245:bdff:fe73:b192/64 scope link
       valid_lft forever preferred_lft forever

Update the instance profile of the SAP WDs.

#-----------------------------------------------------------------------
# Back-end system configuration
#-----------------------------------------------------------------------
wdisp/system_0 = SID=E10, MSHOST=e10ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.99:*
wdisp/system_1 = SID=E60, MSHOST=e60ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.101:*

Stop and start the SAP WD on VM1 and VM2.

Note: With the above SRCSRV parameter values, only incoming requests to ".99 (or its hostname)" for E10, or to ".101 (or its hostname)" for E60, are sent to the SAP backend environment. If we also want requests addressed to the SAP WD's actual IP or hostname to be connected to the SAP backend systems, then we need to add those IPs or hostnames to the value of the SRCSRV parameter (separated by semicolons).

Perform the basic configuration check for both SAP Web Dispatchers using "sapwebdisp pf=<profile> -checkconfig". We should also check that the SAP WD Admin URL is working for both WDs.

In the Azure portal, in the 'Insights' section of the Azure Load Balancer, we can see that the connection status to the SAP WD VMs is healthy.
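To verify the SRCSRV-based routing end to end, each virtual hostname can be requested in turn. This is a minimal sketch with hypothetical DNS names for the two frontend IPs, assuming the standard /sap/public/ping ICF service is active on the backend systems:

# Requests via the .99 hostname should reach E10; via the .101 hostname, E60
curl -k -s -o /dev/null -w "%{http_code}\n" https://e10webdisp:44300/sap/public/ping
curl -k -s -o /dev/null -w "%{http_code}\n" https://e60webdisp:44300/sap/public/ping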