<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Running SAP Applications on the Microsoft Platform articles</title>
    <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/bg-p/SAPApplications</link>
    <description>Running SAP Applications on the Microsoft Platform articles</description>
    <pubDate>Fri, 15 May 2026 14:44:44 GMT</pubDate>
    <dc:creator>SAPApplications</dc:creator>
    <dc:date>2026-05-15T14:44:44Z</dc:date>
    <item>
      <title>SAP on Azure Product Announcements Summary – SAP Sapphire 2026</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-sapphire-2026/ba-p/4517634</link>
      <description>&lt;H1&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/H1&gt;
&lt;P&gt;Today at SAP Sapphire, we announced a new wave of innovations deepening the Microsoft–SAP partnership – advancing RISE with SAP on Azure, our SAP S/4HANA integrations, and our shared AI platform. With more than three decades of co-engineering, Microsoft and SAP continue to help customers modernize their ERP estate and build new value on top of it.&lt;/P&gt;
&lt;P&gt;Below is a look at the latest product updates, alongside customer evidence of what is possible when SAP and Microsoft come together.&lt;/P&gt;
&lt;H2&gt;Customer Evidence&lt;/H2&gt;
&lt;H3&gt;AI: From ERP Data to Intelligence in the Flow of Work&lt;/H3&gt;
&lt;P&gt;&lt;A href="https://www.microsoft.com/en/customers/story/26034-kone-power-apps" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;KONE&lt;/STRONG&gt;&lt;/A&gt;&amp;nbsp;is running an AI-driven contract workflow on Power Platform that validates documents against SAP records and auto-creates contracts in SAP — processing 54,000+ contracts per year with a 33% reduction in handling time. To support their 3,000+ citizen developers, KONE developed an agent with&amp;nbsp;Microsoft Copilot Studio&amp;nbsp;that guides makers through building solutions, generating prompts and surfacing existing apps to avoid duplication.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;"Power Platform is enabling us to integrate highly effective AI models into our automation solutions and that is helping us streamline increasingly complex processes — efficiently and at scale."&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;— Lulu Zhang, Director, Head of Technology &amp;amp; Services, KONE&lt;/P&gt;
&lt;H3&gt;Security: Protecting the SAP Core&lt;/H3&gt;
&lt;P&gt;&lt;A href="https://www.microsoft.com/en/customers/story/26295-maire-microsoft-sentinel" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;MAIRE&lt;/STRONG&gt;&lt;/A&gt;, a global engineering group operating across 50 countries, deployed Microsoft Sentinel for SAP to secure its accounts payable environment — the heartbeat of over 10,000 employees. With 50+ active detection rules and cross-environment event correlation now automated, MAIRE has shifted from reactive incident response to continuous, AI-ready threat intelligence.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;"SAP generates an impressive amount of logs, and with the Microsoft solution, we are able to detect suspicious events before they can become a problem."&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;— Andrea Sgarlata, Identity Manager, Tecnimont Services, MAIRE Group&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.microsoft.com/en/customers/story/26155-cenibra-celulose-nipo-brasileira-sa-microsoft-entra-id-governance" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Cenibra&lt;/STRONG&gt;&lt;/A&gt;&amp;nbsp;replaced SAP Identity Management with Microsoft Entra ID Governance, integrating 80+ systems and achieving a 46% operational gain — with 60–70% of manual IAM effort projected to be eliminated as automation expands.&lt;/P&gt;
&lt;H3&gt;Running SAP at Scale: Migration as a Strategic Foundation&lt;/H3&gt;
&lt;P&gt;&lt;A href="https://www.microsoft.com/en/customers/story/26271-maersk-sap-on-azure" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Maersk&lt;/STRONG&gt;&lt;/A&gt;&amp;nbsp;migrated 500 SAP servers and a petabyte of data to Azure in six months — with near 100% uptime and zero incidents — and is now using Azure OpenAI with SAP to let teams query invoice and shipment data in natural language.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;"This wasn't just a migration. It was a mindset shift. We needed to move from managing infrastructure to driving engineering innovation."&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;— Roman Kulczykowski, Senior Director, SAP Technology Platform, Maersk&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We're pleased to share the product updates behind these outcomes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;From &lt;STRONG&gt;SAP Joule + Microsoft Copilot&lt;/STRONG&gt; to agent-to-agent workflows, SAP and Microsoft are turning SAP processes into reusable AI-powered building blocks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Fabric&lt;/STRONG&gt;’s SAP footprint just grew: SAP BDC Data Connect, Datasphere replication, and certified partners.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Sentinel for SAP:&lt;/STRONG&gt; Expanded SAP detections, richer SAP ETD cross-signal correlation, and upcoming LogServ/ASIM integration bring SAP telemetry natively into your XDR workflows.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;SAP Deployment Automation Framework&lt;/STRONG&gt; expands support for highly available SAP architectures with HANA scale-out and HSR capabilities, enabling GitHub-native deployments and centralized configuration management.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;SAP Testing Automation Framework&lt;/STRONG&gt; advances high availability validation with scale-out HANA testing, backup validation, and integrated configuration checks to enable continuous reliability assurance.&lt;/LI&gt;
&lt;LI&gt;Extended the &lt;STRONG&gt;Observability Dashboard&lt;/STRONG&gt; with additional infrastructure checks and introduced a reusable AIOps pattern to move from observability insights to governed operational action.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Let's dive into the summary of product updates and services.&lt;/P&gt;
&lt;H1&gt;&lt;STRONG&gt;Extend and Innovate &lt;/STRONG&gt;&lt;/H1&gt;
&lt;H2&gt;Copilot Studio &amp;amp; Power Platform&lt;/H2&gt;
&lt;H3&gt;Joule &amp;amp; Microsoft Copilot: Adoption and Enablement&lt;/H3&gt;
&lt;P&gt;The Joule and Microsoft 365 Copilot integration reached general availability in late 2025, and we now see hundreds of customers actively exploring and onboarding the solution. To accelerate adoption, SAP and Microsoft are delivering:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Updated onboarding guidance such as the &lt;A href="https://discovery-center.cloud.sap/missiondetail/4741/5025/" target="_blank" rel="noopener"&gt;SAP Discovery Center Mission – Integrate Joule and Microsoft 365 Copilot&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Dedicated SAP services to &lt;A href="https://me.sap.com/notes/3722273" target="_blank" rel="noopener"&gt;support&lt;/A&gt; customers getting started.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;First Agent-to-Agent (A2A) Scenarios with Nestlé&lt;/H3&gt;
&lt;P&gt;We are continuing to evolve the integration beyond chat-based experiences toward true agent interoperability. At SAP Sapphire, Nestlé is showcasing early Agent-to-Agent (A2A) scenarios, where:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SAP services are exposed via the agent gateway&lt;/LI&gt;
&lt;LI&gt;Copilot Studio acts as the orchestration layer, consuming Joule services using an open, vendor-neutral A2A protocol.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This marks an important step toward a multi-agent ecosystem across SAP and Microsoft.&lt;/P&gt;
&lt;H3&gt;Easier SAP Integration with Copilot Studio&lt;/H3&gt;
&lt;P&gt;Many SAP customers expose standard and custom APIs through SAP Business Technology Platform, connected via SAP Cloud Connector to SAP systems such as SAP S/4HANA or even older SAP ECC systems. Using SAP API Management, customers can already expose these SAP OData services today, and soon also MCP servers, which can then be consumed in Copilot Studio.&lt;/P&gt;
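&lt;P&gt;As a rough sketch of what consuming one of these exposed services looks like from the caller's side, the snippet below composes an OData query URL with the standard $select, $filter, and $top system query options. The service root and entity set used here are illustrative placeholders, not actual customer endpoints:&lt;/P&gt;

```python
from urllib.parse import quote

def build_odata_query(service_root: str, entity_set: str,
                      select=None, filter_expr=None, top=None) -> str:
    """Compose an OData query URL using the standard system query
    options. The endpoint and entity names passed in are assumptions
    for illustration, not real customer URLs."""
    parts = []
    if select:
        parts.append("$select=" + ",".join(select))
    if filter_expr:
        # percent-encode spaces and quotes inside the filter expression
        parts.append("$filter=" + quote(filter_expr))
    if top is not None:
        parts.append("$top=" + str(top))
    url = service_root.rstrip("/") + "/" + entity_set
    return url + "?" + "&".join(parts) if parts else url

# Hypothetical example against a sales-order style OData service:
url = build_odata_query(
    "https://api.example.com/odata/API_SALES_ORDER_SRV",
    "A_SalesOrder",
    select=["SalesOrder", "SoldToParty"],
    filter_expr="SalesOrder eq '1000'",
    top=5,
)
```

&lt;P&gt;The resulting URL would be issued with an OAuth bearer token from the configured identity provider; the same query shape applies whether the exposed service fronts SAP S/4HANA or an older SAP ECC system.&lt;/P&gt;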
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;More information about SAP with Microsoft can be found on &lt;A href="https://learn.microsoft.com/en-us/azure/sap/microsoft-ai/about-sap-with-microsoft-ai" target="_blank" rel="noopener"&gt;Microsoft Learn&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Microsoft Fabric&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;We continue to deepen the integration between Microsoft Fabric and SAP solutions, evolving our strategy to offer customers more options to leverage their SAP data in Fabric:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;We are expanding aligned integration options with Mirroring for SAP Datasphere, generally available since March 2026. This technology integrates SAP Datasphere replication flows into the mirroring capabilities of Microsoft Fabric, letting you seamlessly combine SAP’s data integration tooling with the power of Microsoft Fabric.&lt;/LI&gt;
&lt;LI&gt;In addition, we are collaborating closely with SAP to make &lt;A href="https://blog.fabric.microsoft.com/en-us/blog/29410" target="_blank" rel="noopener"&gt;SAP Business Data Cloud Connect for Microsoft Fabric&lt;/A&gt; available for customers in the second half of 2026. This will allow bi-directional, zero-copy sharing between SAP Business Data Cloud and Microsoft Fabric, significantly simplifying many use cases that previously required moving and managing copies of data.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Sentinel Solution for SAP&lt;/H2&gt;
&lt;P&gt;Microsoft Sentinel for SAP continues to expand coverage of the SAP core, SAP BTP, SAP LogServ and the broader SAP ecosystem — giving SOC teams broader, deeper, and more contextualized SAP signal inside their existing Microsoft XDR workflows.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/sentinel/sap/sap-btp-security-content#built-in-analytics-rules" target="_blank" rel="noopener"&gt;New SAP detections&lt;/A&gt; — catalog of out-of-the-box detection expanded to high profile targets such as Integration Suite, Build WorkZone, and Cloud Identity Services&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/sentinel/sap/sap-logserv-overview" target="_blank" rel="noopener"&gt;SAP Logserv&lt;/A&gt; roadmap — solution will allow re-use of Microsoft’s &lt;A href="https://learn.microsoft.com/azure/sentinel/normalization" target="_blank" rel="noopener"&gt;Advanced Security Information Model&lt;/A&gt; (ASIM) and other standard tables so customers and partners can profit from black-box detections apply existing XDR investments directly to their SAP telemetry.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/sap/solution-partner-overview#solutions-provided-by-sap-as-vendor" target="_blank" rel="noopener"&gt;SAP ETD&lt;/A&gt; correlation with Microsoft XDR — the SAP Enterprise Threat Detection solution now ships email artifacts alongside IP and host, enabling deeper cross-signal correlation across SAP and Microsoft Defender (previously limited to IP and host only)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The result: more out-of-the-box coverage, better re-use of existing Microsoft and partner detection investments, and richer correlation between SAP and the rest of the Microsoft Defender estate.&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Microsoft Entra&lt;/H2&gt;
&lt;P&gt;Microsoft Entra ID and Entra ID Governance extend identity lifecycle and entitlement management into SAP via integration with &lt;A href="https://learn.microsoft.com/en-us/entra/identity/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial" target="_blank" rel="noopener"&gt;SAP Cloud Identity Services (SCI)&lt;/A&gt;, &lt;A href="https://learn.microsoft.com/en-us/entra/id-governance/entitlement-management-sap-integration" target="_blank" rel="noopener"&gt;SAP Identity Access Governance (IAG)&lt;/A&gt;, and SAP Access Control (AC). Microsoft and SAP have significantly deepened their collaboration in identity governance — delivering an end-to-end solution that extends Microsoft Entra into SAP landscapes at enterprise scale.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;End-to-end integration with SAP Identity Access Governance (IAG) now available in public preview, enabling customers to:&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; Publish SAP business roles into Entra entitlement catalogs and assign SAP access through Entra access packages&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; Enforce approval workflows and Separation of Duties (SoD) policies natively&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; Integration with SAP IAG also supports environments still relying on SAP AC, providing a phased migration path toward cloud-first governance.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;General availability of the improved SAP Cloud Identity Services Connector in Microsoft Entra featuring SCIM 2.0 support, provisioning of Groups &amp;amp; Group Memberships and OAuth 2.0-based authentication replacing basic authentication&lt;/LI&gt;
&lt;LI&gt;Day-zero visibility through &lt;A href="https://learn.microsoft.com/en-us/entra/identity/app-provisioning/how-to-account-discovery" target="_blank" rel="noopener"&gt;account discovery&lt;/A&gt; allowing customers to correlate SAP accounts with Entra identities via SAP Cloud Identity Services and get immediate transparency into existing SAP identities. It also accelerates onboarding into governance workflows&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The result is a modern, cloud-based identity governance platform for SAP, combining Microsoft’s identity lifecycle automation with SAP-native compliance controls, and a clear migration path as SAP IDM approaches end of maintenance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Purview&lt;/H2&gt;
&lt;P&gt;Microsoft Purview allows uniform data governance and compliance across the enterprise including SAP sources. Purview released several notable updates for SAP&amp;nbsp;since the last edition:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/register-scan-sap-hana" target="_blank" rel="noopener"&gt;SAP Calculation View&lt;/A&gt; support&amp;nbsp;for metadata scans, relevant for HANA DB and a major customer ask is now generally available.&lt;/LI&gt;
&lt;LI&gt;Scoped scanning&amp;nbsp;(configure exactly which metadata to scan) for&amp;nbsp;ECC and S/4HANA is now in&amp;nbsp;Public Preview&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/register-scan-sap-bw" target="_blank" rel="noopener"&gt;BW/4HANA connector&lt;/A&gt; is also now generally available&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;Modern Authentication for SAP Integrations&lt;/H4&gt;
&lt;P&gt;As the ecosystem evolves away from legacy authentication models, Microsoft and SAP are enabling secure, cloud-native integrations by replacing Basic Authentication with OAuth 2.0-based patterns across key scenarios, avoiding long-lived shared credentials. The outcome: a modern, secure integration layer for SAP, aligned with Zero Trust principles and ready for AI-driven and API-based enterprise architectures.&lt;/P&gt;
&lt;H4&gt;Secure Email Integration: SAP ↔ Exchange Online&lt;/H4&gt;
&lt;P&gt;With the deprecation of Basic Authentication, SAP systems &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/exchange-online-integration-sap-email-outbound" target="_blank" rel="noopener"&gt;now integrate with Exchange Online using OAuth 2.0 and Microsoft Entra ID&lt;/A&gt; for outbound email scenarios. SAP ABAP systems authenticate using client credentials or certificate-based (JWT) flows, and communication is secured via SMTP OAuth 2.0, eliminating password-based authentication. The result is passwordless SAP outbound email, aligned between SAP and Microsoft.&lt;/P&gt;
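&lt;P&gt;For illustration, the token half of this flow is the standard Microsoft Entra ID client credentials request. The sketch below only builds the request that a mail-sending component would POST to the v2.0 token endpoint; the tenant and app values are placeholders, and the SAP-side SAPconnect configuration is out of scope here:&lt;/P&gt;

```python
def build_client_credentials_request(tenant_id: str, client_id: str,
                                     client_secret: str):
    """Build the Microsoft Entra ID v2.0 token request used by the
    client credentials grant. The returned form would be POSTed to
    the token URL (e.g. with requests.post) to obtain an access
    token for SMTP XOAUTH2 against Exchange Online."""
    token_url = ("https://login.microsoftonline.com/"
                 f"{tenant_id}/oauth2/v2.0/token")
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # .default requests the application permissions granted to the
        # app registration for the Exchange Online resource.
        "scope": "https://outlook.office365.com/.default",
    }
    return token_url, form

# Placeholder tenant and app id, for illustration only:
token_url, form = build_client_credentials_request(
    "contoso.onmicrosoft.com",
    "00000000-0000-0000-0000-000000000000",
    "<secret-or-certificate-assertion>")
```

&lt;P&gt;In the certificate-based variant, the shared secret is replaced by a signed JWT client assertion (the client_assertion and client_assertion_type form fields); the token endpoint and scope stay the same.&lt;/P&gt;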
&lt;H4&gt;Extending Modern Authentication to SAP SuccessFactors APIs&lt;/H4&gt;
&lt;P&gt;Beyond infrastructure scenarios, modern authentication is also being adopted across SAP SaaS integrations. In new integration patterns using OAuth-secured access to SAP SuccessFactors OData APIs, Microsoft Entra ID acts as the central identity provider and token issuer, enabling secure, governed API access without credential-based authentication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;&lt;STRONG&gt;SAP on Azure Software Products and Services &lt;/STRONG&gt;&lt;/H1&gt;
&lt;H2&gt;SAP Deployment and Testing Automation Framework&lt;/H2&gt;
&lt;P&gt;The first half of 2026 marks the most significant release cycle for both the SAP Deployment Automation Framework (SDAF) and the SAP Testing Automation Framework (STAF) since their inception. The latest releases deliver broad platform expansion, deeper high-availability coverage, and a matured testing capability that extends well beyond initial scope.&lt;/P&gt;
&lt;P&gt;Highlights at a glance:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SDAF now supports GitHub Actions as a first-class deployment path alongside Azure DevOps and CLI&lt;/LI&gt;
&lt;LI&gt;Azure App Configuration integration provides centralized, single-source-of-truth configuration management&lt;/LI&gt;
&lt;LI&gt;Deep investment in HANA scale-out with Pacemaker and HSR, including SAPHanaSR-angi support for SLES&lt;/LI&gt;
&lt;LI&gt;Platform coverage expanded to RHEL 10, Oracle Linux 9, and newer SLES releases.&lt;/LI&gt;
&lt;LI&gt;STAF adds scale-out HSR testing, and Azure Backup Testing integration for SAP HANA&lt;/LI&gt;
&lt;LI&gt;The Configuration Checks capability, a rewrite of the open-source Quality Checks tool, now ships natively within STAF, with new scheduling support for both HA functional tests and configuration checks&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;SAP Deployment Automation Framework (SDAF)&lt;/H2&gt;
&lt;P&gt;SDAF now supports GitHub Actions end-to-end, including automated workflow setup, container-based execution, and built-in secret management, providing a deployment experience on GitHub equivalent to Azure DevOps. Azure App Configuration integration centralizes deployment parameters across control planes and workload zones, eliminating parameter configuration drift across environments.&lt;/P&gt;
&lt;P&gt;High-availability infrastructure coverage has seen its deepest investment to date. HANA scale-out with Pacemaker and HSR now supports SAPHanaSR-angi on SLES, adds conditional resource movement based on instance name and Pacemaker version, and enhances replication stability with improved retry and error-clearing logic. Additional updates include Azure Files NFS encryption in transit, hardened Oracle Data Guard automation with idempotent post-processing and dynamic SID handling, and improved networking logic for both greenfield and brownfield scenarios.&lt;/P&gt;
&lt;H2&gt;SAP Testing Automation Framework (STAF)&lt;/H2&gt;
&lt;P&gt;STAF continues to expand its SAP workload validation coverage and automation capabilities, making it easier to validate high availability designs, schedule tests at scale, and verify backup and restore readiness in Azure.&lt;/P&gt;
&lt;P&gt;STAF has introduced three major capabilities in the past few months:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Expanded high availability validation for SAP HANA with scale-out HANA System Replication (HSR) support (including the SAPHanaSR-ScaleOut provider and updated HA test coverage for scale-out topologies)&lt;/LI&gt;
&lt;LI&gt;Test scheduling and run management via REST API and CLI (with containerized deployment improvements to simplify operating the service).&lt;/LI&gt;
&lt;LI&gt;Azure Backup validation and functional testing for HANA through a dedicated Ansible module that enables end-to-end backup discovery and restore workflows (including restore monitoring and cross-VM restore scenarios).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The Configuration Checks capability, integrated natively into STAF from the open-source Quality Checks tool previewed in November 2025, now includes enhanced telemetry with duration tracking, updated disk performance thresholds, and improved HTML reporting.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2&gt;Azure Center for SAP solutions Tools and Frameworks&lt;/H2&gt;
&lt;P&gt;We continue to enhance our scripts and supporting tools and frameworks outside the core product experience. These updates are designed to help customers and partners bridge the gap between evolving operational needs and available product capabilities.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;STRONG&gt;Observability Dashboard&lt;/STRONG&gt; has evolved into a more actionable operational view for Azure workload reviews, bringing security, network, and infrastructure signals into one place to improve visibility, reduce manual follow-ups, and support faster decision-making.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; The Security Dashboard now highlights key exposure and hygiene risks such as public inbound access, orphaned public IPs, storage accounts without Private Endpoint, and Basic tier load balancers.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; The Network Dashboard now includes VNet peering status, helping teams quickly validate connectivity posture alongside ExpressRoute, gateway, public IP SKU, UDR, subnet, and remote access checks. The Infrastructure Summary Dashboard helps identify configuration gaps such as VMs that support NVMe but are still using SCSI, failed VM extensions, and disabled Accelerated Networking.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; The extended dashboard also adds visibility into AFS subnet configuration, giving teams a clearer view of platform readiness and operational consistency across customer environments.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Operation Excellence AIOps Custom Agent&lt;/STRONG&gt;: As part of our AIOps work, we are exploring how AI can move beyond generic operational insights and help customers think differently about managing complex Azure workloads. The focus is on enabling customer-specific AI agents to address use cases that reflect real operational challenges, business priorities, and environment-specific patterns, rather than applying a one-size-fits-all model. By combining observability, automation, Azure resource insights, operational telemetry, and approval-driven actions, customers can identify risks earlier, reduce manual investigation effort, and accelerate decision-making across their estate. This approach creates a practical path for customers to experiment safely, address targeted operational scenarios, and shape AI-enabled operations around the needs of their own workloads, teams, and governance models. For more, see &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/sapapplications/from-observability-to-action-building-an-ai-powered-aiops-agent-for-customer-spe/4515611" target="_blank" rel="noopener" data-lia-auto-title="From Observability to Action: Building an AI-Powered AIOps Agent for Customer-Specific Operations" data-lia-auto-title-active="0"&gt;From Observability to Action: Building an AI-Powered AIOps Agent for Customer-Specific Operations&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To learn more, visit the Microsoft sessions at SAP Sapphire 2026 and check out our &lt;A href="https://learn.microsoft.com/en-gb/collections/7rysj5w2007dn?wt.mc_id=saponazurecta_collections_webpage_azuremktg_csainfra" target="_blank" rel="noopener"&gt;SAP on Azure learning page.&lt;/A&gt;&lt;/P&gt;
</description>
      <pubDate>Mon, 11 May 2026 13:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-sapphire-2026/ba-p/4517634</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2026-05-11T13:00:00Z</dc:date>
    </item>
    <item>
      <title>From Observability to Action: Building an AI-Powered AIOps Agent for Customer-Specific Operations</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/from-observability-to-action-building-an-ai-powered-aiops-agent/ba-p/4515611</link>
      <description>&lt;P data-start="612" data-end="909"&gt;Modern cloud operations are no longer just about collecting metrics and reacting to alerts. Customers are running increasingly complex, business-critical workloads where every environment has its own architecture, performance profile, operational constraints, approval process, and risk tolerance.&lt;/P&gt;
&lt;P data-start="911" data-end="1223"&gt;This means a generic monitoring dashboard is not enough. What customers demand is a controlled operating model that can turn platform signals into explainable recommendations, route those recommendations through the right approval path, execute only within agreed guardrails, and preserve a complete audit trail.&lt;/P&gt;
&lt;P data-start="1225" data-end="1517"&gt;In many customer environments, an &lt;STRONG&gt;observability dashboard&lt;/STRONG&gt; is used to review platform summaries, operational signals, health trends, and deep-dive assessments across critical Azure workloads. The next opportunity is to take one of those observations and turn it into a governed AIOps workflow.&lt;/P&gt;
&lt;P data-start="1519" data-end="1811"&gt;In this example, I use Premium SSD v2 performance tuning to show how a customer can begin with one measurable operational signal, apply clear policy, use AI to generate an explainable recommendation, request human approval, and then perform controlled execution using Azure-native automation.&lt;/P&gt;
&lt;P data-start="1813" data-end="1932"&gt;The goal is not to automate every operational scenario on day one. The goal is to establish a safe, repeatable pattern:&lt;/P&gt;
&lt;P data-start="1934" data-end="2005"&gt;&lt;STRONG data-start="1934" data-end="2005"&gt;Observe → Assess → Recommend → Approve → Execute → Validate → Learn&lt;/STRONG&gt;&lt;/P&gt;
&lt;P data-start="2007" data-end="2240"&gt;This pattern can later be extended to other operational scenarios such as VM rightsizing, backup failure triage, capacity risk detection, configuration drift, application health checks, and workload-aware performance recommendations.&lt;/P&gt;
&lt;H2 data-section-id="1vn7xl9" data-start="2242" data-end="2261"&gt;Why this matters&lt;/H2&gt;
&lt;P data-start="2263" data-end="2405"&gt;Most operations teams already have monitoring. The challenge is turning monitoring insight into safe, repeatable, governed operational action.&lt;/P&gt;
&lt;P data-start="2407" data-end="2755"&gt;For example, a platform team may observe that a disk repeatedly crosses an IOPS threshold. In a traditional model, an engineer reviews metrics, checks disk configuration, validates policy, raises a change, waits for approval, applies the update, and then validates the result. That process is reliable, but it can be manual, inconsistent, and slow.&lt;/P&gt;
&lt;P data-start="2757" data-end="3148"&gt;A customer-specific AI Operational Agent can help convert that process into a controlled workflow. The agent does not need to own the full change lifecycle from the beginning. A safer first step is for the agent to collect evidence, calculate utilisation, generate a recommendation, explain the rationale, request approval, and only then trigger execution through a governed automation path.&lt;/P&gt;
&lt;P data-start="3150" data-end="3377"&gt;Premium SSD v2 is a useful first scenario because disk size, IOPS, and throughput can be adjusted independently, making it a practical candidate for performance-efficiency automation where policy guardrails are clearly defined.&lt;/P&gt;
&lt;H2 data-section-id="1qv731" data-start="3379" data-end="3425"&gt;Scenario: Premium SSD v2 performance tuning&lt;/H2&gt;
&lt;P&gt;Premium SSD v2 is a useful first scenario because disk size, IOPS, and throughput can be adjusted independently, making it a practical candidate for performance-efficiency automation when used with appropriate guardrails. Microsoft documentation describes Premium SSD v2 as a managed disk type where performance can be configured flexibly, including IOPS and throughput characteristics. (&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types?" target="_blank" rel="noopener"&gt;Microsoft Learn&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;The objective is to identify tagged Premium SSD v2 disks, review recent performance, detect disks crossing configured utilisation thresholds, generate an explainable recommendation, and route the decision through a human-in-the-loop approval process before any change is applied.&lt;/P&gt;
&lt;P&gt;A practical reference implementation should support these outcomes:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Select only approved disks using tags.
Read disk IOPS and throughput from Azure Monitor Metrics.
Compare observed utilisation against policy thresholds.
Generate a recommendation, such as ScaleUpRecommended, ScaleDownCandidate, or NoChange.
Send a daily/weekly operational summary with threshold observations and charts.
Trigger approval only for actionable candidates.
Capture approval or rejection through Teams and/or email.
Execute immediately or schedule the change based on policy.
Record decisions, execution status, and audit data.
Validate post-change impact and feed the outcome back into the process.&lt;/LI-CODE&gt;
&lt;P&gt;This keeps the design focused. It does not try to build a generic “auto-fix everything” agent. It starts with one measurable operational scenario and builds a reusable pattern.&lt;/P&gt;
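&lt;P&gt;To make the threshold step concrete, the comparison described above can be expressed as a single deterministic function. The percentage thresholds below are illustrative policy values, not product defaults:&lt;/P&gt;

```python
def classify_disk(observed_iops: float, provisioned_iops: float,
                  scale_up_pct: float = 85.0,
                  scale_down_pct: float = 20.0) -> str:
    """Compare observed IOPS utilisation against policy thresholds
    and return one of the recommendation states named above.
    Threshold values are illustrative and set per customer policy."""
    utilisation = 100.0 * observed_iops / provisioned_iops
    if utilisation >= scale_up_pct:
        return "ScaleUpRecommended"
    if utilisation <= scale_down_pct:
        return "ScaleDownCandidate"
    return "NoChange"

# 4,500 observed IOPS on a 5,000 IOPS disk is 90% utilised:
print(classify_disk(4500, 5000))  # ScaleUpRecommended
```

&lt;P&gt;The same shape extends naturally to throughput: run the check per metric and act on the most urgent result.&lt;/P&gt;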
&lt;H2 data-section-id="1qv731" data-start="3379" data-end="3425"&gt;High-level architecture&lt;/H2&gt;
&lt;P&gt;A high-level architecture for the custom AIOps Agent scenario:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Reference Architecture: Premium SSD v2 Customer AIOps Agent Workflow&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-clear-both"&gt;The key design principle is that the agent can recommend, explain, and orchestrate, but the change path remains governed. Policy controls what can be changed, approval controls when it can be changed, and audit records why the change was made.&lt;/P&gt;
&lt;H2 data-section-id="z9snln" data-start="6619" data-end="6647"&gt;Model used in the example&lt;/H2&gt;
&lt;P data-start="6649" data-end="6884"&gt;In this reference implementation, the AI explanation layer uses an Azure OpenAI GPT-4o [gpt-4o, mini, gpt-4.1 or new version can be leveraged based on the requirement] deployment to generate the natural-language recommendation summary, risk statement, approval text, and daily operational summary.&lt;/P&gt;
&lt;P data-start="6886" data-end="7075"&gt;The important design point is that the model is not the only decision-maker. The recommendation is generated through a combination of deterministic policy logic and AI-assisted explanation.&lt;/P&gt;
&lt;P data-start="7077" data-end="7102"&gt;The recommended split is:&lt;/P&gt;
&lt;LI-CODE lang="textile"&gt;Deterministic policy engine:
- Discovers eligible disks.
- Reads Azure Monitor metrics.
- Calculates IOPS and throughput utilisation.
- Applies customer thresholds.
- Determines whether the disk is NoChange, ScaleUpRecommended, or ScaleDownCandidate.
- Enforces tags, policy caps, approval requirement, dry-run mode, and execution mode.

AI model:
- Converts technical evidence into a clear recommendation narrative.
- Explains why the recommendation was generated.
- Summarises impact, risk, and required approval.
- Produces email or Teams approval text.
- Generates daily or weekly operational summaries.
- Does not directly execute Azure changes.
- Does not override customer policy.&lt;/LI-CODE&gt;
&lt;P&gt;This separation is important for enterprise trust. The model helps operations teams understand the recommendation, but the actual eligibility, threshold crossing, and execution guardrails remain deterministic and auditable.&lt;/P&gt;
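&lt;P&gt;As an illustration of this split, the sketch below shows the explanation layer receiving the deterministic policy output as grounded evidence. The function and payload names are illustrative, and the Azure OpenAI call itself is shown only in outline, assuming the &lt;EM&gt;openai&lt;/EM&gt; Python package and an existing deployment.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import json

# Deterministic policy output goes in; the model only explains, it does not decide.
def build_explanation_messages(recommendation_payload):
    """Build a grounded chat prompt from the policy engine's output."""
    system = (
        "You are an Azure operations assistant. Use only the evidence in the "
        "JSON payload. Do not invent metrics and do not override the recommendation."
    )
    user = "Explain this disk recommendation:\n" + json.dumps(recommendation_payload, indent=2)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# The Azure OpenAI call itself, sketched (endpoint, key, and deployment are placeholders):
# from openai import AzureOpenAI
# client = AzureOpenAI(azure_endpoint="https://...", api_key="...", api_version="2024-02-01")
# reply = client.chat.completions.create(model="gpt-4o", messages=build_explanation_messages(payload))&lt;/LI-CODE&gt;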
&lt;H4&gt;Step 1: Select disks using tags&lt;/H4&gt;
&lt;P&gt;The agent should not scan and act on every disk in a subscription by default. A safer pattern is to require explicit tagging.&lt;/P&gt;
&lt;P&gt;Example tag model:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;az tag update `
  --resource-id "&amp;lt;disk-resource-id&amp;gt;" `
  --operation Merge `
  --tags `
    aiops-enabled=true `
    aiops-profile=database `
    aiops-approval-required=true `
    aiops-max-iops=5000 `
    aiops-max-mbps=250 `
    aiops-report-threshold-pct=80 `
    aiops-execution-mode=immediate-after-approval `
    aiops-apply-dry-run=true&lt;/LI-CODE&gt;
&lt;P&gt;Suggested meaning:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;aiops-enabled=true
Allows the disk to be included in the AIOps workflow.

aiops-profile=database
Groups disks by workload profile or operational policy.

aiops-approval-required=true
Requires human approval before execution.

aiops-max-iops
Defines the maximum IOPS policy cap.

aiops-max-mbps
Defines the maximum throughput policy cap.

aiops-report-threshold-pct
Defines when a disk should appear in the threshold report.

aiops-execution-mode
Controls whether approval leads to immediate or scheduled execution.

aiops-apply-dry-run=true
Allows safe validation without changing the disk.&lt;/LI-CODE&gt;
&lt;P&gt;For early testing, keep &lt;EM&gt;aiops-apply-dry-run=true&lt;/EM&gt;. Move to real execution only after the recommendation logic, approval flow, audit trail, and rollback process have been validated.&lt;/P&gt;
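&lt;P&gt;A minimal sketch of how the tag set above could be parsed into a typed policy object. The tag names follow the model above; the defaults are assumptions for illustration, not documented values.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative parser: converts the aiops-* tag strings into a typed policy object.
def parse_aiops_policy(tags):
    def flag(name, default=False):
        return str(tags.get(name, default)).lower() == "true"

    return {
        "enabled": flag("aiops-enabled"),
        "profile": tags.get("aiops-profile", "default"),
        "approval_required": flag("aiops-approval-required", True),   # safe default: require approval
        "max_iops": int(tags.get("aiops-max-iops", 0)),
        "max_mbps": int(tags.get("aiops-max-mbps", 0)),
        "report_threshold_pct": float(tags.get("aiops-report-threshold-pct", 80)),
        "execution_mode": tags.get("aiops-execution-mode", "immediate-after-approval"),
        "dry_run": flag("aiops-apply-dry-run", True),                 # safe default: dry-run
    }&lt;/LI-CODE&gt;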
&lt;H4&gt;Step 2: Discover eligible disks&lt;/H4&gt;
&lt;P&gt;The daily summary function can use Azure Resource Graph or Azure Resource Manager to discover tagged disks. The query should return only resources explicitly opted into the workflow.&lt;/P&gt;
&lt;P&gt;Example Resource Graph query:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;Resources
| where type =~ 'microsoft.compute/disks'
| where tostring(tags['aiops-enabled']) =~ 'true'
| where tostring(tags['aiops-profile']) =~ 'database'
| project id, name, resourceGroup, location, sku, properties, tags&lt;/LI-CODE&gt;
&lt;P&gt;This makes the design scalable. Adding a disk to the agent does not require changing workflow code. Operations teams apply the approved tag set, and the next scheduled run discovers the resource.&lt;/P&gt;
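&lt;P&gt;A small sketch of the discovery step: a helper builds the Resource Graph query for one profile, and the SDK call is outlined in comments, assuming the &lt;EM&gt;azure-identity&lt;/EM&gt; and &lt;EM&gt;azure-mgmt-resourcegraph&lt;/EM&gt; packages.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative discovery helper: builds the same Resource Graph query as above
# for a given workload profile.
def build_discovery_query(profile):
    return (
        "Resources "
        "| where type =~ 'microsoft.compute/disks' "
        "| where tostring(tags['aiops-enabled']) =~ 'true' "
        f"| where tostring(tags['aiops-profile']) =~ '{profile}' "
        "| project id, name, resourceGroup, location, sku, properties, tags"
    )

# Sketch of the SDK call (requires credentials and a subscription id):
# from azure.identity import DefaultAzureCredential
# from azure.mgmt.resourcegraph import ResourceGraphClient
# from azure.mgmt.resourcegraph.models import QueryRequest
# client = ResourceGraphClient(DefaultAzureCredential())
# result = client.resources(QueryRequest(subscriptions=[sub_id], query=build_discovery_query("database")))&lt;/LI-CODE&gt;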
&lt;H4&gt;Step 3: Collect disk metrics&lt;/H4&gt;
&lt;P&gt;For each selected disk, the agent queries Azure Monitor Metrics. Azure Monitor provides a Metrics REST API that lists metric values for a resource through the Microsoft.Insights/metrics endpoint. (&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/rest/api/monitor/metrics/list?view=rest-monitor-2023-10-01&amp;amp;tabs=HTTP" target="_blank" rel="noopener"&gt;Microsoft Learn&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;Example resource-level pattern:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;GET https://management.azure.com/&amp;lt;resource-id&amp;gt;/providers/Microsoft.Insights/metrics
    ?api-version=2023-10-01
    &amp;amp;timespan=&amp;lt;start-time&amp;gt;/&amp;lt;end-time&amp;gt;
    &amp;amp;interval=PT1H
    &amp;amp;metricnames=&amp;lt;metric-list&amp;gt;
    &amp;amp;aggregation=Average,Maximum&lt;/LI-CODE&gt;
&lt;P&gt;Typical disk metrics for this scenario include:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Composite Disk Read Operations/sec
Composite Disk Write Operations/sec
Composite Disk Read Bytes/sec
Composite Disk Write Bytes/sec&lt;/LI-CODE&gt;
&lt;P&gt;For larger estates, consider the Azure Monitor Metrics Batch API. Microsoft documents that the batch API can list metric values for multiple resources, which is useful when querying many resources in the same subscription, region, and resource type. (&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/rest/api/monitor/metrics-batch/batch?view=rest-monitor-2023-10-01&amp;amp;tabs=HTTP" target="_blank" rel="noopener"&gt;Microsoft Learn&lt;/A&gt;)&lt;/P&gt;
&lt;P data-start="10226" data-end="10414"&gt;The agent should collect metrics over a practical time window, such as the last 24 hours, and aggregate into time buckets. This avoids making recommendations based on a single short spike.&lt;/P&gt;
&lt;P data-start="10416" data-end="10433"&gt;Example approach:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Metric window: Last 24 hours
Aggregation: Average and Maximum
Bucket size: 5 minutes or 15 minutes
Signal required: Sustained threshold crossing across multiple buckets&lt;/LI-CODE&gt;
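&lt;P&gt;The sustained-signal rule above can be sketched as a small helper: count how many aggregation buckets crossed the threshold and require a minimum number of crossings before the disk is treated as actionable. The bucket count of three is an illustrative policy value.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sustained-signal check: a single spike in one bucket is not enough.
def count_sustained_crossings(bucket_values, provisioned, threshold_pct):
    crossings = 0
    for value in bucket_values:
        utilisation = (value / provisioned) * 100 if provisioned else 0
        if utilisation &amp;gt;= threshold_pct:
            crossings += 1
    return crossings

def is_sustained(bucket_values, provisioned, threshold_pct, min_buckets=3):
    return count_sustained_crossings(bucket_values, provisioned, threshold_pct) &amp;gt;= min_buckets&lt;/LI-CODE&gt;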
&lt;H4&gt;Step 4: Calculate utilisation&lt;/H4&gt;
&lt;P&gt;The agent calculates total observed IOPS and throughput, then compares those values against the current provisioned settings and customer-defined thresholds.&lt;/P&gt;
&lt;P&gt;Example calculations:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Total IOPS = Read IOPS + Write IOPS 
IOPS utilisation % = Peak Total IOPS / Provisioned IOPS × 100 
Total throughput MiB/s = Read MiB/s + Write MiB/s 
Throughput utilisation % = Peak Total MiB/s / Provisioned MBps × 100&lt;/LI-CODE&gt;
&lt;P&gt;Example output:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "diskName": "&amp;lt;disk-name&amp;gt;",
  "resourceGroup": "&amp;lt;resource-group&amp;gt;",
  "provisionedIops": 3000,
  "provisionedMbps": 125,
  "peakTotalIops": 3059.6,
  "iopsUtilisationPercent": 102.0,
  "peakTotalMiBps": 42.95,
  "throughputUtilisationPercent": 34.4,
  "thresholdObservation": "IOPS crossed the configured threshold",
  "recommendation": "ScaleUpRecommended",
  "approvalRequired": true
}&lt;/LI-CODE&gt;
&lt;H4&gt;Step 5: Generate a recommendation&lt;/H4&gt;
&lt;P data-start="11632" data-end="11830"&gt;The recommendation should be generated using transparent policy logic first. For a first implementation, avoid complex or opaque decision-making. Start with simple rules and make the reason visible.&lt;/P&gt;
&lt;P data-start="11832" data-end="11860"&gt;Example deterministic logic:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;If IOPS utilisation is above threshold: 
  Mark as threshold crossed. 

If throughput utilisation is above threshold: 
  Mark as threshold crossed. 

If threshold is crossed and policy allows a higher target: 
  Recommend scale-up. 

If utilisation remains low for a sustained period: 
  Mark as a scale-down candidate. 

Otherwise: 
  Recommend no change.&lt;/LI-CODE&gt;
&lt;P&gt;Example recommendation rule:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;def generate_disk_recommendation(
    peak_iops,
    provisioned_iops,
    peak_mbps,
    provisioned_mbps,
    threshold_pct,
    max_iops,
    max_mbps,
    sustained_bucket_count
):
    iops_utilisation = (peak_iops / provisioned_iops) * 100 if provisioned_iops else 0
    mbps_utilisation = (peak_mbps / provisioned_mbps) * 100 if provisioned_mbps else 0

    threshold_crossed = (
        iops_utilisation &amp;gt;= threshold_pct or
        mbps_utilisation &amp;gt;= threshold_pct
    )

    if threshold_crossed and sustained_bucket_count &amp;gt;= 3:
        target_iops = min(max_iops, int(provisioned_iops * 1.25))
        target_mbps = min(max_mbps, int(provisioned_mbps * 1.25))

        return {
            "recommendation": "ScaleUpRecommended",
            "reasonCode": "SustainedThresholdCrossing",
            "currentIops": provisioned_iops,
            "recommendedIops": target_iops,
            "currentMbps": provisioned_mbps,
            "recommendedMbps": target_mbps,
            "approvalRequired": True
        }

    return {
        "recommendation": "NoChange",
        "reasonCode": "WithinPolicyThreshold",
        "approvalRequired": False
    }&lt;/LI-CODE&gt;
&lt;P data-start="13436" data-end="13513"&gt;The output from this policy logic becomes the grounded input to the AI model.&lt;/P&gt;
&lt;P data-start="13515" data-end="13541"&gt;Example AI prompt pattern:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;You are an Azure operations assistant helping explain a disk performance recommendation.

Use only the evidence provided in the JSON payload.
Do not invent missing metrics.
Do not recommend a change if approvalRequired is false.
Explain the recommendation in a concise operational style.
Include:
1. Recommendation summary
2. Evidence
3. Risk or operational consideration
4. Approval request
5. Validation step after execution

Input JSON:
&amp;lt;recommendation_payload&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;Example AI-generated recommendation narrative:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;Recommendation: Scale up Premium SSD v2 performance for &amp;lt;disk-name&amp;gt;.

Reason: The disk crossed the configured IOPS utilisation threshold across multiple observed time buckets. Peak IOPS reached 3,596 against the current provisioned value of 3,000, representing 120% utilisation. Throughput utilisation remained within the configured policy range.

Proposed action: Increase provisioned IOPS within the approved customer policy cap. No throughput increase is recommended at this stage.

Risk consideration: The change should follow the approved maintenance or execution policy for this workload. Post-change validation should confirm that IOPS pressure has reduced and no negative workload signal is observed.

Approval required: Yes.&lt;/LI-CODE&gt;
&lt;P&gt;Example recommendation result:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{ 
"recommendation": "ScaleUpRecommended", 
"reason": "IOPS utilisation exceeded the configured threshold for multiple time buckets.", 
"approvalRequired": true, 
"executionMode": "immediate-after-approval", 
"dryRun": true, 
"executionBlockers": [ "ApprovalRequired" ] 
}&lt;/LI-CODE&gt;
&lt;P&gt;This is important for trust. Operations teams should be able to see why a recommendation was made and which policy guardrails apply.&lt;/P&gt;
&lt;H4&gt;Step 6: Send a daily operational summary&lt;/H4&gt;
&lt;P&gt;The daily summary workflow should run on a schedule, call the summary function, and send a concise email to the operations team.&lt;/P&gt;
&lt;P&gt;The email should include:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Eligible disk count 
Disks crossing threshold 
Actionable recommendations 
Approvals in the last 24 hours 
Rejections in the last 24 hours 
Scheduled actions 
Completed actions 
Failed actions&lt;/LI-CODE&gt;
&lt;P&gt;For the disk summary table, keep the content compact:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Disk 
Resource group 
Provisioned IOPS 
Provisioned throughput 
Peak total IOPS 
IOPS utilisation 
Peak total MiB/s 
Throughput utilisation 
Threshold observation 
Recommendation&lt;/LI-CODE&gt;
&lt;P&gt;Avoid putting every raw metric value into the email. Instead, attach or link charts only for disks that crossed the threshold, or for the top N disks by utilisation.&lt;/P&gt;
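&lt;P&gt;The chart-selection rule can be sketched as follows: include every disk that crossed its threshold, plus the top N by IOPS utilisation. The field names match the assessment payload shown earlier; the top-N value is a policy choice.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative selection of which disks get charts in the daily email.
def select_chart_disks(disks, top_n=5):
    crossed = [d for d in disks if d["thresholdCrossed"]]
    by_utilisation = sorted(disks, key=lambda d: d["iopsUtilisationPercent"], reverse=True)
    selected = {d["diskName"] for d in crossed} | {d["diskName"] for d in by_utilisation[:top_n]}
    return sorted(selected)&lt;/LI-CODE&gt;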
&lt;H4&gt;Step 7: Trigger approval only for actionable candidates&lt;/H4&gt;
&lt;P&gt;The summary function can return an &lt;EM&gt;approvalCandidates&lt;/EM&gt; array. The daily workflow should trigger the approval workflow only for those candidates.&lt;/P&gt;
&lt;P&gt;Example candidate payload:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{ 
"diskResourceId": "&amp;lt;disk-resource-id&amp;gt;", 
"diskName": "&amp;lt;disk-name&amp;gt;", 
"resourceGroup": "&amp;lt;resource-group&amp;gt;", 
"triggerSource": "DailySummaryThresholdDetection", 
"assessmentMode": "recommendation-only", 
"alertContext": { 
        "thresholdObservation": "IOPS crossed the configured threshold", 
        "peakTotalIops": 3059.6, 
        "iopsUtilisationPercent": 102.0
    } 
}&lt;/LI-CODE&gt;
&lt;P&gt;This avoids sending unnecessary approvals for resources that do not need action.&lt;/P&gt;
&lt;H4&gt;Step 8: Use human-in-the-loop approval&lt;/H4&gt;
&lt;P&gt;The approval workflow should perform a fresh assessment before asking for approval. This prevents stale decisions if metrics, tags, or policy have changed since the daily summary was generated.&lt;/P&gt;
&lt;P&gt;A practical design can support both:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Primary channel: Teams Adaptive Card 
Fallback channel: Email approval links&lt;/LI-CODE&gt;
&lt;P&gt;Microsoft Teams supports actions that post an adaptive card and wait for a response, allowing a workflow to pause until a user responds. (&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/connectors/teams/?tabs=text1%2Cdotnet" target="_blank" rel="noopener"&gt;Microsoft Learn&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;The Teams card should include:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Disk name 
Resource group 
Recommendation 
Reason 
Peak utilisation 
Execution mode 
Execution blockers 
Approve / Reject buttons 
Optional comments&lt;/LI-CODE&gt;
&lt;P&gt;The approval flow should use &lt;STRONG&gt;first-decision-wins&lt;/STRONG&gt; semantics:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;If Teams approval is received first: 
     Record the Teams decision. 
     Ignore later email link clicks. 

If email approval is received first: 
     Record the email decision. 
     Ignore later Teams responses. 

If a duplicate response is received: 
     Return AlreadyProcessed and do not execute again.&lt;/LI-CODE&gt;
&lt;P&gt;This avoids duplicate execution and gives a clean audit model.&lt;/P&gt;
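&lt;P&gt;The first-decision-wins rule can be sketched with a simple decision store. This is an in-memory illustration only; a production implementation would need an atomic conditional write (for example, an Azure Table Storage insert guarded by ETag semantics) so that two channels cannot both record a decision.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative first-decision-wins store.
_decisions = {}

def record_decision(assessment_id, status, approver, source):
    if assessment_id in _decisions:
        # A decision already exists: later Teams or email responses are ignored.
        return {"result": "AlreadyProcessed", "decision": _decisions[assessment_id]}
    _decisions[assessment_id] = {
        "approvalStatus": status,
        "approverEmail": approver,
        "approvalSource": source,
    }
    return {"result": "Recorded", "decision": _decisions[assessment_id]}&lt;/LI-CODE&gt;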
&lt;H4&gt;Step 9: Capture who approved or rejected&lt;/H4&gt;
&lt;P&gt;Every decision should be written to an audit store.&lt;/P&gt;
&lt;P&gt;Example approval record:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{ 
    "assessmentId": "&amp;lt;assessment-id&amp;gt;", 
    "diskResourceId": "&amp;lt;disk-resource-id&amp;gt;", 
    "approvalStatus": "Approved", 
    "approverEmail": "&amp;lt;approver@contoso.com&amp;gt;", 
    "approvalSource": "TeamsAdaptiveCard", 
    "comments": "Approved for the agreed maintenance window.", 
    "processedUtc": "&amp;lt;timestamp-utc&amp;gt;", 
    "executionStatus": "Scheduled" 
}&lt;/LI-CODE&gt;
&lt;P&gt;For production, prefer authenticated approval channels such as Teams Adaptive Cards or an authenticated approval workflow rather than relying only on email links with query-string identity.&lt;/P&gt;
&lt;H4&gt;Step 10: Execute immediately or schedule&lt;/H4&gt;
&lt;P&gt;After approval, the agent checks execution policy.&lt;/P&gt;
&lt;P&gt;Example immediate execution policy:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;aiops-execution-mode=immediate-after-approval 
aiops-apply-dry-run=true&lt;/LI-CODE&gt;
&lt;P&gt;Example scheduled execution policy:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;aiops-execution-mode=schedule-in-change-window 
aiops-change-window-utc=22:00-02:00 
aiops-apply-dry-run=true&lt;/LI-CODE&gt;
&lt;P&gt;Execution behaviour:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Approved + immediate-after-approval: 
    Execute now if dry-run is false. 
    Otherwise record a dry-run execution. 

Approved + schedule-in-change-window: 
    Store a scheduled action. 
    Execute when the approved window opens. 

Rejected: 
    Record rejection. 
    Do not execute. 

Duplicate response: 
    Return AlreadyProcessed.&lt;/LI-CODE&gt;
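&lt;P&gt;For scheduled execution, the change-window tag has to be parsed carefully because windows such as &lt;EM&gt;22:00-02:00&lt;/EM&gt; wrap past midnight. A minimal sketch, assuming the tag format shown above:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from datetime import time

# Illustrative parser for the aiops-change-window-utc tag.
def in_change_window(window, now_utc_time):
    start_raw, end_raw = window.split("-")
    start = time(*map(int, start_raw.split(":")))
    end = time(*map(int, end_raw.split(":")))
    if start &amp;lt;= end:
        # Same-day window, e.g. 09:00-17:00.
        return start &amp;lt;= now_utc_time &amp;lt; end
    # Window wraps past midnight, e.g. 22:00-02:00.
    return now_utc_time &amp;gt;= start or now_utc_time &amp;lt; end&lt;/LI-CODE&gt;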
&lt;P&gt;Premium SSD v2 performance should only be adjusted within documented limits and customer-defined policy. The workflow should validate the target IOPS and throughput before calling Azure Resource Manager. Microsoft documentation describes Premium SSD v2 performance characteristics and configuration considerations, including IOPS and throughput behaviour. (&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types?" target="_blank" rel="noopener"&gt;Microsoft Learn&lt;/A&gt;)&lt;/P&gt;
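&lt;P&gt;A minimal sketch of that validation step, combining the customer policy caps from the tags with a service floor. The floor values below (3,000 IOPS and 125 MB/s) reflect the Premium SSD v2 baseline as documented at the time of writing; always confirm current service limits against the Azure documentation before relying on them.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative guardrail check before any Azure Resource Manager update.
BASELINE_IOPS = 3000   # assumed Premium SSD v2 baseline; verify against current docs
BASELINE_MBPS = 125

def validate_target(target_iops, target_mbps, policy_max_iops, policy_max_mbps):
    blockers = []
    if target_iops &amp;lt; BASELINE_IOPS or target_mbps &amp;lt; BASELINE_MBPS:
        blockers.append("BelowServiceBaseline")
    if target_iops &amp;gt; policy_max_iops:
        blockers.append("ExceedsPolicyIopsCap")
    if target_mbps &amp;gt; policy_max_mbps:
        blockers.append("ExceedsPolicyMbpsCap")
    return {"allowed": not blockers, "executionBlockers": blockers}&lt;/LI-CODE&gt;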
&lt;H4&gt;Step 11: Use managed identity and least privilege&lt;/H4&gt;
&lt;P&gt;The Function App or workflow should use managed identity where possible. Azure Logic Apps supports managed identities for authenticating to Microsoft Entra-protected Azure resources, which avoids storing credentials or access tokens in workflow definitions. (&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/logic-apps/authenticate-with-managed-identity?tabs=consumption" target="_blank" rel="noopener"&gt;Microsoft Learn&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;A reference implementation needs permissions for:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Resource discovery 
Metric query 
Tag read 
Disk read 
Disk update, only if controlled execution is enabled 
Audit write&lt;/LI-CODE&gt;
&lt;P&gt;For a proof of concept, a broad role might be used temporarily in a test environment. For production, use a custom least-privilege role that allows only the specific read and update operations required.&lt;/P&gt;
&lt;H4&gt;Step 12: Store audit state&lt;/H4&gt;
&lt;P&gt;The agent should maintain an audit table or equivalent state store.&lt;/P&gt;
&lt;P&gt;Suggested fields:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;AssessmentId 
DiskResourceId 
DiskName 
ResourceGroup 
Recommendation 
ApprovalRequired 
ApprovalStatus 
ApproverEmail 
ApprovalSource 
ExecutionMode 
ExecutionStatus 
DryRun 
CreatedUtc 
ProcessedUtc 
ExecutionResult&lt;/LI-CODE&gt;
&lt;P&gt;This state is useful for:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Daily/Weekly reporting 
Duplicate decision prevention 
Scheduled execution 
Audit review 
Operational troubleshooting 
Post-change validation&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 13: Validate post-change impact&lt;/H4&gt;
&lt;P&gt;Execution is not the end of the workflow. The agent should validate whether the change produced the expected operational outcome.&lt;/P&gt;
&lt;P&gt;Post-change checks may include:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Was the disk update successful? 
Did provisioned IOPS or throughput change as expected? 
Did observed utilisation reduce? 
Did latency or queue depth improve? 
Was there any negative workload signal? 
Should the recommendation logic be adjusted?&lt;/LI-CODE&gt;
&lt;P&gt;This closes the loop:&lt;/P&gt;
&lt;P data-start="399" data-end="1027"&gt;&lt;STRONG&gt;Action → Validation → Learning → Improved recommendation&lt;/STRONG&gt;&lt;/P&gt;
&lt;H4&gt;Technical reference flow&lt;/H4&gt;
&lt;P&gt;A simplified end-to-end flow:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Scheduled workflow runs every 24 hours.
Workflow calls the summary function.
Function selects disks by approved tags.
Function queries Azure Monitor Metrics.
Function calculates utilisation and threshold crossings.
Deterministic policy logic generates the recommendation.
Azure OpenAI generates the explanation, summary, and approval text.
Daily summary email or Teams message is sent.
Approval workflow is triggered only for actionable candidates.
Approval workflow performs a fresh assessment.
Teams and/or email approval is sent.
First valid approval or rejection is recorded.
Approved actions are executed immediately or scheduled.
Audit state is updated.
Post-change validation is performed.
Next daily summary reports decisions and outcomes.&lt;/LI-CODE&gt;
&lt;H4&gt;Why this pattern is reusable&lt;/H4&gt;
&lt;P&gt;Premium SSD v2 tuning is only one example. The same pattern can be reused for other customer-specific operational scenarios:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;VM rightsizing recommendations
Disk latency investigation
Backup failure triage
Capacity threshold management
Configuration drift detection
Network health assessment
Application health checks
Workload-aware operational recommendations&lt;/LI-CODE&gt;
&lt;P&gt;The reusable pattern is:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Customer-specific signal
+ Policy-aware recommendation
+ AI-generated explanation
+ Human-in-the-loop approval
+ Controlled execution
+ Audit and feedback&lt;/LI-CODE&gt;
&lt;H4&gt;Test the flow (Start with Dry-Run)&lt;/H4&gt;
&lt;P&gt;1. Set disk tags for safe dry-run&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;$diskId = "/subscriptions/xxxxx-xx-xx-xxxxx/resourceGroups/jtrg-sap-prod/providers/Microsoft.Compute/disks/jtdisk-aiops-pssdv2-test-001"

az tag update `
  --resource-id $diskId `
  --operation Merge `
  --tags `
    aiops-enabled=true `
    aiops-profile=database `
    aiops-approval-required=true `
    aiops-report-threshold-pct=80 `
    aiops-target-utilisation-pct=70 `
    aiops-max-iops=5000 `
    aiops-max-mbps=250 `
    aiops-execution-mode=immediate-after-approval `
    aiops-apply-dry-run=true&lt;/LI-CODE&gt;
&lt;P&gt;2. Trigger the approval Logic App&lt;/P&gt;
&lt;P&gt;Replace &amp;lt;PASTE_LogicApp_AIOPS_APPROVAL_HTTP_TRIGGER_URL&amp;gt; with the HTTP trigger URL for your Logic App.&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;$logicAppUrl = "&amp;lt;PASTE_LogicApp_AIOPS_APPROVAL_HTTP_TRIGGER_URL&amp;gt;"

$payload = @{
    schemaId       = "aiops-premiumssd-v2-approval-test-v1"
    diskResourceId = "/subscriptions/xxxxx-xx-xx-xxxxxx/resourceGroups/jtrg-sap-prod/providers/Microsoft.Compute/disks/jtdisk-aiops-pssdv2-test-001"
    diskName       = "jtdisk-aiops-pssdv2-test-001"
    resourceGroup  = "jtrg-sap-prod"
    triggerSource  = "ManualApprovalWorkflowTest"
    assessmentMode = "recommendation-only"
    alertContext   = @{
        alertRule                    = "Manual test - Premium SSD v2 IOPS threshold"
        severity                     = "Sev3"
        monitorCondition             = "Fired"
        thresholdPercent             = 80
        peakTotalIops                = 3059.6
        peakTotalMiBps               = 42.95
        iopsUtilisationPercent       = 102.0
        throughputUtilisationPercent = 34.4
        thresholdObservation         = "Manual test: IOPS crossed 80% threshold"
    }
}

$response = Invoke-RestMethod `
  -Method Post `
  -Uri $logicAppUrl `
  -ContentType "application/json" `
  -Body ($payload | ConvertTo-Json -Depth 10)

$response | ConvertTo-Json -Depth 20&lt;/LI-CODE&gt;
&lt;P&gt;3. Expected result&lt;/P&gt;
&lt;P&gt;In the approval Logic App run history, you should see:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Run Disk Assessment → Succeeded
Send Approval Email → Succeeded
If Teams Approval Enabled → True
Post Teams Approval Card And Wait → Running / Waiting&lt;/LI-CODE&gt;
&lt;P data-start="2542" data-end="2563"&gt;4. Test real disk update only after dry-run works&lt;/P&gt;
&lt;P data-start="2542" data-end="2563"&gt;Change only this tag:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;az tag update `
  --resource-id $diskId `
  --operation Merge `
  --tags aiops-apply-dry-run=false&lt;/LI-CODE&gt;
&lt;P data-start="2683" data-end="2750"&gt;Then trigger the same payload again and approve the new Teams card.&lt;/P&gt;
&lt;P&gt;An email notification will also be sent, with approval handled on a first-approval basis, as discussed earlier.&lt;/P&gt;
&lt;P data-start="2752" data-end="2785"&gt;5. Validate the disk after approval&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;az disk show `
  --ids $diskId `
  --query "{name:name, iops:diskIOPSReadWrite, mbps:diskMBpsReadWrite, sku:sku.name}" `
  -o table&lt;/LI-CODE&gt;
&lt;P&gt;Expected audit record after real approval:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;{
  "approvalStatus": "Approved",
  "executionStatus": "Executed"
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Daily Summary email example&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Conclusion&lt;/H4&gt;
&lt;P data-start="22261" data-end="22516"&gt;Customer specific AIOps should not start with an expectation of full automation across every operational scenario. A safer and more sustainable approach is to start with one measurable signal, clear guardrails, human-in-the-loop approval, and controlled execution.&lt;/P&gt;
&lt;P data-start="22518" data-end="22899"&gt;Premium SSD v2 performance tuning is a practical first use case because it has clear metrics, configurable performance settings, and measurable post-change outcomes. By combining Azure Monitor, Azure Functions, Azure Logic Apps, Teams approval, Azure OpenAI, managed identity, and audit storage, platform teams can move from passive visibility to safe, governed operational action.&lt;/P&gt;
&lt;P data-start="22901" data-end="23119" data-is-last-node="" data-is-only-node=""&gt;The outcome is not just a disk tuning workflow. It is a reusable operating model for building customer-specific AI Operational Agents that improve performance efficiency while preserving control, compliance, and trust.&lt;/P&gt;</description>
      <pubDate>Fri, 08 May 2026 14:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/from-observability-to-action-building-an-ai-powered-aiops-agent/ba-p/4515611</guid>
      <dc:creator>jitendrasingh</dc:creator>
      <dc:date>2026-05-08T14:30:00Z</dc:date>
    </item>
    <item>
      <title>Aligning SAP application servers with the HANA primary zone on Azure (Public Preview) — Part 1</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/aligning-sap-application-servers-with-the-hana-primary-zone-on/ba-p/4490925</link>
      <description>&lt;P data-line="4"&gt;Many SAP on Azure customers improve resilience by deploying critical SAP tiers across two distinct Azure Availability Zones. A common pattern for the SAP application layer in cross-zone designs is Active/Passive:&lt;/P&gt;
&lt;UL data-line="6"&gt;
&lt;LI data-line="6"&gt;One set of SAP application servers is active in the same zone as the database primary.&lt;/LI&gt;
&lt;LI data-line="7"&gt;A second, identical set is passive in another zone, ready to be activated when needed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="9"&gt;This pattern is described in Microsoft guidance (and is often the starting point for cross-zone SAP discussions):&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-zones" target="_blank" rel="noopener" data-href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-zones"&gt;https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-zones&lt;/A&gt;&lt;/P&gt;
&lt;P data-line="11"&gt;The operational challenge is what happens after a failover.&lt;/P&gt;
&lt;P data-line="13"&gt;When the HANA primary moves to another zone, customers typically want the SAP application layer to follow so that the active application servers remain co-located with the database primary. In many environments, cross-zone network performance is well within acceptable ranges; however, for latency-sensitive workloads and specific regional topologies, keeping application and database tiers in the same zone can be a useful optimization. As always, validate with measurements for your own workload and region.&lt;/P&gt;
&lt;P data-line="15"&gt;This post introduces a new Pacemaker resource agent for SAP on Azure deployments, that helps automate the Active/Passive application-server alignment. The resource agent is currently in a public preview.&lt;/P&gt;
&lt;H2 data-line="17"&gt;Why align the SAP application tier?&lt;/H2&gt;
&lt;P data-line="19"&gt;In SAP HANA deployments on Azure, it's common to place the HANA primary and secondary VMs, as well as the SAP application VMs in different availability zones to improve resilience.&lt;/P&gt;
&lt;P data-line="21"&gt;SAP workloads are sensitive to latency, and while cross-zone latency is often acceptable for SAP workloads, it can become a factor in certain regional topologies - especially when application-to-database round trips are frequent. Keeping the active application tier in the same zone as the HANA primary reduces those cross-zone hops and helps make performance more consistent during steady state and after a failover.&lt;/P&gt;
&lt;P data-line="23"&gt;To achieve minimal latency, many customers choose to:&lt;/P&gt;
&lt;UL data-line="25"&gt;
&lt;LI data-line="25"&gt;Keep application servers deployed across zones for resiliency, and&lt;/LI&gt;
&lt;LI data-line="26"&gt;When the HANA primary changes zones, switch which application-server set is active, so it runs in the same zone as the primary.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="28"&gt;Doing this manually is possible, but it adds steps during an already time-sensitive event and increases the chance of running "active" application capacity in the wrong zone longer than intended. This pacemaker resource agent addresses this gap: it keeps the application tier aligned with the current HANA primary in a predictable way after failovers, while preserving an active/passive design across zones.&lt;/P&gt;
&lt;H2 data-line="30"&gt;Introducing the resource agent&lt;/H2&gt;
&lt;P data-line="32"&gt;Azure SAP Zone Resource Agent is a Pacemaker resource agent that aligns SAP application server VMs with the current HANA primary VM.&lt;/P&gt;
&lt;P data-line="34"&gt;At a high level, the agent:&lt;/P&gt;
&lt;UL data-line="36"&gt;
&lt;LI data-line="36"&gt;Detects where HANA primary is running (Azure Availability Zone, or a customer-provided logical group)&lt;/LI&gt;
&lt;LI data-line="37"&gt;Ensures the "same zone" SAP application server VMs are running.&lt;/LI&gt;
&lt;LI data-line="38"&gt;Starts SAP on those VMs.&lt;/LI&gt;
&lt;LI data-line="39"&gt;Deactivates or stops SAP on application servers in the other availability zone and optionally stops or deallocates the corresponding VMs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="41"&gt;This gives you an automated way to keep the application tier aligned with the database tier after failovers, while preserving the active/standby design across zones.&lt;/P&gt;
&lt;H2 data-line="43"&gt;Architecture at a glance&lt;/H2&gt;
&lt;P data-line="45"&gt;The resource agent is designed for an Active/Passive application tier across zones: Only one availability zone should have active application servers at any given time.&lt;/P&gt;
&lt;H2 data-line="49"&gt;What the agent automates&lt;/H2&gt;
&lt;UL data-line="51"&gt;
&lt;LI data-line="51"&gt;Automates application-server alignment to the HANA primary zone for Active/Passive application-tier designs.&lt;/LI&gt;
&lt;LI data-line="52"&gt;Provides clear operational choices (fast "passive mode" vs "stop/deallocate" for cost management)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-line="54"&gt;Supported placement patterns&lt;/H2&gt;
&lt;P data-line="56"&gt;This Resource Agent targets cross-zone SAP deployments for Active-Passive deployment patterns:&lt;/P&gt;
&lt;UL data-line="58"&gt;
&lt;LI data-line="58"&gt;Explicit zone assignment: where VM zone information is available (the agent can read the HANA primary VM's and application server VMs' zone and align the application tier accordingly).&lt;/LI&gt;
&lt;LI data-line="59"&gt;Zone placement achieved indirectly: in some architectures using Proximity Placement Groups (PPG) and Availability Sets, the deployment achieves cross-zone separation, but the VM zone metadata isn't available to the agent. In those cases, you can provide the zone/group as an input by mapping HANA VMs and application server VMs into logical groups.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-line="61"&gt;How it works (conceptually)&lt;/H2&gt;
&lt;P data-line="63"&gt;Think of Azure SAP Zone Alignment Resource Agent as an orchestration layer that sits alongside your existing Pacemaker-managed HANA high availability setup:&lt;/P&gt;
&lt;UL data-line="65"&gt;
&lt;LI data-line="65"&gt;Signal: Pacemaker knows which HANA node is primary.&lt;/LI&gt;
&lt;LI data-line="66"&gt;Decision: determine the target zone/group of the HANA primary.&lt;/LI&gt;
&lt;LI data-line="67"&gt;Action: interact with Azure to start/stop VMs and invoke SAP start/stop actions remotely.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="69"&gt;Under the hood, the agent uses:&lt;/P&gt;
&lt;UL data-line="71"&gt;
&lt;LI data-line="71"&gt;Azure Instance Metadata Service (IMDS) to understand VM context.&lt;/LI&gt;
&lt;LI data-line="72"&gt;Azure Resource Manager APIs to manage application VMs (start/stop/deallocate) and run remote commands.&lt;/LI&gt;
&lt;LI data-line="73"&gt;Standard SAP control operations to start/stop or make instances inactive.&lt;/LI&gt;
&lt;/UL&gt;
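&lt;P&gt;For instance, the agent's zone lookup corresponds to a simple IMDS call, which you can reproduce manually (this curl command is illustrative; it only works from inside an Azure VM, and non-zonal VMs return an empty value):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Query the Azure Instance Metadata Service (IMDS) for this VM's Availability Zone.
# Only reachable from inside an Azure VM; non-zonal VMs return an empty value.
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance/compute/zone?api-version=2021-02-01&amp;amp;format=text"&lt;/LI-CODE&gt;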
&lt;P data-line="75"&gt;You do not need to change how HANA failover is handled. HANA failover decisions remain exclusively managed by the SAPHANA-SR-* agents. This Resource Agent focuses solely on ensuring the application tier follows the database tier in a predictable and repeatable manner.&lt;/P&gt;
&lt;H2 data-line="77"&gt;Two operational modes: fast recovery vs cost optimization&lt;/H2&gt;
&lt;P data-line="79"&gt;Customers have different priorities during failovers. The preview supports two modes so you can choose the trade-off that best matches your operational goals.&lt;/P&gt;
&lt;H3 data-line="81"&gt;Option 1: Deactivate (passive mode)&lt;/H3&gt;
&lt;P data-line="83"&gt;In this mode, SAP instances in the non-primary zone are placed into an inactive/passive state (so they don't take new workload), while the VMs remain running.&lt;/P&gt;
&lt;P data-line="85"&gt;Best for: fastest "swing back" during failover&lt;/P&gt;
&lt;P data-line="87"&gt;Trade-off: the inactive-zone VMs are still running (no compute cost reduction)&lt;/P&gt;
&lt;H3 data-line="89"&gt;Option 2: Soft shutdown + stop/deallocate.&lt;/H3&gt;
&lt;P data-line="91"&gt;In this mode, the agent initiates a graceful stop of SAP Application Servers in the non-primary zone and then stops/deallocates the VMs.&lt;/P&gt;
&lt;P data-line="93"&gt;Best for: reducing compute usage of SAP application servers in the inactive zone in Pay as You Go model.&lt;/P&gt;
&lt;P data-line="95"&gt;Trade-off: reactivation takes longer (VM boot + SAP start)&lt;/P&gt;
&lt;H2 data-line="97"&gt;What you need (high level)&lt;/H2&gt;
&lt;P data-line="99"&gt;This Pacemaker resource agent assumes a specific but common topology:&lt;/P&gt;
&lt;UL data-line="101"&gt;
&lt;LI data-line="101"&gt;Two identical sets of SAP application server VMs (one set per zone/group)&lt;/LI&gt;
&lt;LI data-line="102"&gt;A Pacemaker cluster managing HANA system replication (primary/secondary)&lt;/LI&gt;
&lt;LI data-line="103"&gt;Application-side routing configured so users and jobs can run on either set when activated.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="105"&gt;Operationally, you'll also need:&lt;/P&gt;
&lt;UL data-line="107"&gt;
&lt;LI data-line="107"&gt;A managed identity with permissions to perform the required VM operations against the application VMs.&lt;/LI&gt;
&lt;LI data-line="108"&gt;Azure Linux Agent installed and running on the SAP application VMs (for remote command execution)&lt;/LI&gt;
&lt;/UL&gt;
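&lt;P&gt;As a sketch (identity, subscription, and resource group names below are placeholders), a user-assigned managed identity can be attached to the HANA cluster VMs and granted rights over the application servers' resource group with the Azure CLI; scope the role as narrowly as your required actions allow:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Attach a user-assigned managed identity to a HANA cluster VM (placeholder names).
az vm identity assign --resource-group hana-rg --name hanavm1 \
  --identities "/subscriptions/&amp;lt;sub-id&amp;gt;/resourceGroups/id-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/sap-zone-agent-id"

# Grant the identity rights to manage the application server VMs.
az role assignment create --assignee "&amp;lt;identity-client-id&amp;gt;" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/&amp;lt;sub-id&amp;gt;/resourceGroups/sap-app-rg"&lt;/LI-CODE&gt;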
&lt;P data-line="110"&gt;Note on operating system.&lt;/P&gt;
&lt;P data-line="112"&gt;This public preview focuses on SUSE Linux Enterprise Server (SLES), specifically SLES 15 SP5 and later.&lt;/P&gt;
&lt;H2 data-line="114"&gt;Current scope and limitations&lt;/H2&gt;
&lt;P data-line="116"&gt;This preview is intentionally focused so we can learn quickly and make improvements based on customer feedback.&lt;/P&gt;
&lt;UL data-line="118"&gt;
&lt;LI data-line="118"&gt;Supported: SAP ABAP systems on HANA scale-up running on SLES&lt;/LI&gt;
&lt;LI data-line="119"&gt;Not supported: SAP Java, HANA scale-out, multi-SID environments&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-line="121"&gt;Getting started&lt;/H2&gt;
&lt;P data-line="123"&gt;If you'd like to evaluate the preview, start here:&lt;/P&gt;
&lt;P data-line="125"&gt;The resource agent is available upstream in the ClusterLabs resource-agents &lt;A class="lia-external-url" href="https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/azure-sap-zone.in" target="_blank" rel="noopener"&gt;repository&lt;/A&gt;.&lt;/P&gt;
&lt;P data-line="127"&gt;For detailed technical guidance - including prerequisites, installation steps, configuration examples, and troubleshooting - see&amp;nbsp;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/sapapplications/azure-sap-zone-resource-agent-%E2%80%94-technical-deep-dive-part-2/4490935" target="_blank" rel="noopener" data-lia-auto-title="Part 2" data-lia-auto-title-active="0"&gt;Part 2&lt;/A&gt;&amp;nbsp;of this blog series.&lt;/P&gt;
&lt;P data-line="129"&gt;If you encounter issues or have suggestions, please file them via GitHub Issues so we can track and respond.&lt;/P&gt;
&lt;H2 data-line="131"&gt;What's next&lt;/H2&gt;
&lt;P data-line="133"&gt;This post covered the concepts, features, and operational modes of the Azure SAP Zone Resource Agent. In&amp;nbsp;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/sapapplications/azure-sap-zone-resource-agent-%E2%80%94-technical-deep-dive-part-2/4490935" target="_blank" rel="noopener" data-lia-auto-title="Part 2" data-lia-auto-title-active="0"&gt;Part 2&lt;/A&gt;&amp;nbsp;of this series, we go hands-on with the technical details - architecture deep dive, prerequisites, step-by-step installation, cluster configuration examples, and troubleshooting guidance.&lt;/P&gt;
&lt;H2 data-line="135"&gt;Public preview expectations&lt;/H2&gt;
&lt;P data-line="137"&gt;During public preview:&lt;/P&gt;
&lt;UL data-line="139"&gt;
&lt;LI data-line="139"&gt;This solution is provided as a Public Preview for evaluation and feedback.&lt;/LI&gt;
&lt;LI data-line="140"&gt;It is not covered by a formal support commitment.&lt;/LI&gt;
&lt;LI data-line="141"&gt;The design, configuration, and behaviors may evolve based on learnings.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="143"&gt;Because of that, we recommend using this in non-production environments while it is in preview.&lt;/P&gt;
&lt;P data-line="145"&gt;If you're interested in piloting the preview, your feedback will help shape what becomes generally available and supported.&lt;/P&gt;
&lt;H2 data-line="147"&gt;Disclaimer&lt;/H2&gt;
&lt;P data-line="149"&gt;This post describes a public preview capability. It is shared for informational purposes only and is subject to change. It is not a substitute for your organization's validation, testing, and operational readiness reviews.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 28 Apr 2026 14:15:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/aligning-sap-application-servers-with-the-hana-primary-zone-on/ba-p/4490925</guid>
      <dc:creator>sanoopt</dc:creator>
      <dc:date>2026-04-28T14:15:00Z</dc:date>
    </item>
    <item>
      <title>Azure SAP Zone Resource Agent (Public Preview) — Technical Deep Dive (Part 2)</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-sap-zone-resource-agent-public-preview-technical-deep-dive/ba-p/4490935</link>
      <description>&lt;H2 data-line="6"&gt;Overview&lt;/H2&gt;
&lt;P data-line="8"&gt;In&amp;nbsp;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/sapapplications/aligning-sap-application-servers-with-the-hana-primary-zone-on-azure-public-prev/4490925" target="_blank" rel="noopener" data-lia-auto-title="Part 1" data-lia-auto-title-active="0"&gt;&lt;STRONG&gt;Part 1&lt;/STRONG&gt;&lt;/A&gt;, we discussed why keeping the SAP application tier aligned with the HANA primary zone matters for latency-sensitive workloads, and how the Azure SAP Zone Resource Agent automates this alignment after failovers. In this post, we get into the specifics - how the agent is structured, what it needs to run, and how to set it up in your Pacemaker cluster.&lt;/P&gt;
&lt;P data-line="10"&gt;The&amp;nbsp;azure-sap-zone&amp;nbsp;resource agent is a Pacemaker resource agent designed to manage the alignment of SAP application Azure Virtual Machines (VMs) with the primary HANA Azure VM. This agent ensures that SAP application servers are started in the same Azure availability zone as the HANA primary VM to maintain high availability and optimal performance.&lt;/P&gt;
&lt;H3 data-line="12"&gt;Key Benefits&lt;/H3&gt;
&lt;UL data-line="13"&gt;
&lt;LI data-line="13"&gt;&lt;STRONG&gt;Reduced Latency&lt;/STRONG&gt;: Minimizes cross-zone network latency between application and database tiers&lt;/LI&gt;
&lt;LI data-line="14"&gt;&lt;STRONG&gt;High Availability&lt;/STRONG&gt;: Maintains SAP system availability during failover scenarios&lt;/LI&gt;
&lt;LI data-line="15"&gt;&lt;STRONG&gt;Automated Management&lt;/STRONG&gt;: Automatically handles VM and SAP instance lifecycle during zone transitions&lt;/LI&gt;
&lt;LI data-line="16"&gt;&lt;STRONG&gt;Cost Optimization&lt;/STRONG&gt;: Enables efficient resource utilization across availability zones&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-line="20"&gt;Architecture&lt;/H2&gt;
&lt;P data-line="22"&gt;The following diagram illustrates how the resource agent manages SAP application server alignment with the primary HANA database across Azure availability zones:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Architecture diagram: HANA primary/secondary and paired SAP application server sets across two availability zones]&lt;/EM&gt;&lt;/P&gt;
&lt;P data-line="79"&gt;&lt;STRONG&gt;Key Components:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL data-line="80"&gt;
&lt;LI data-line="80"&gt;&lt;STRONG&gt;HANA Cluster&lt;/STRONG&gt;: Primary and secondary HANA VMs are deployed in&amp;nbsp;&lt;STRONG&gt;separate availability zones&lt;/STRONG&gt;&amp;nbsp;with System Replication configured&lt;/LI&gt;
&lt;LI data-line="28"&gt;&lt;STRONG&gt;Pacemaker Cluster&lt;/STRONG&gt;: Runs across both zones with the azure-sap-zone resource agent deployed on both nodes. The SAP application server VMs are not Pacemaker cluster members - they are managed remotely by the agent via Azure APIs.&lt;/LI&gt;
&lt;LI data-line="82"&gt;&lt;STRONG&gt;Application Servers&lt;/STRONG&gt;: Identical sets of SAP application VMs deployed in both availability zones&lt;/LI&gt;
&lt;LI data-line="83"&gt;&lt;STRONG&gt;Azure Management API&lt;/STRONG&gt;: Used by the resource agent to control VM lifecycle and execute remote commands&lt;/LI&gt;
&lt;LI data-line="84"&gt;&lt;STRONG&gt;Managed Identity&lt;/STRONG&gt;: Provides authentication for Azure API operations&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="86"&gt;&lt;STRONG&gt;Current State (Zone 1 Primary):&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL data-line="87"&gt;
&lt;LI data-line="87"&gt;&lt;STRONG&gt;Zone 1&lt;/STRONG&gt;: HANA Primary is active, SAP application servers are&amp;nbsp;&lt;STRONG&gt;ACTIVE&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI data-line="88"&gt;&lt;STRONG&gt;Zone 2&lt;/STRONG&gt;: HANA Secondary is in standby, SAP application servers are&amp;nbsp;&lt;STRONG&gt;STANDBY&lt;/STRONG&gt;&amp;nbsp;(VMs may be running but SAP instances deactivated, or VMs stopped based on&amp;nbsp;stop_vms&amp;nbsp;parameter)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="90"&gt;&lt;STRONG&gt;Failover Scenario (Zone 2 becomes Primary):&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL data-line="91"&gt;
&lt;LI data-line="91"&gt;HANA failover occurs from Zone 1 to Zone 2&lt;/LI&gt;
&lt;LI data-line="92"&gt;Pacemaker detects the failover and the azure-sap-zone resource agent triggers&lt;/LI&gt;
&lt;LI data-line="93"&gt;Agent starts VMs and SAP instances in Zone 2 (same zone as new primary HANA)&lt;/LI&gt;
&lt;LI data-line="94"&gt;Agent stops/deactivates SAP instances and optionally stops VMs in Zone 1&lt;/LI&gt;
&lt;LI data-line="95"&gt;&lt;STRONG&gt;Result&lt;/STRONG&gt;: Zone 2 becomes the active zone with both HANA Primary and active SAP application servers&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2 data-line="105"&gt;How It Works&lt;/H2&gt;
&lt;H3 data-line="107"&gt;Background&lt;/H3&gt;
&lt;P data-line="108"&gt;In Azure deployments of SAP systems with scale-up HANA configurations, optimizing latency between SAP application servers and the HANA database server can significantly enhance performance. In typical zonal deployments, primary and secondary HANA servers are located in different availability zones, with SAP application servers distributed across these zones. In certain Azure regions, cross-zonal latency may be higher, affecting performance for processes involving significant data transfer between application and database tiers.&lt;/P&gt;
&lt;H3 data-line="110"&gt;Solution Overview&lt;/H3&gt;
&lt;P data-line="111"&gt;This resource agent addresses latency concerns by placing critical application servers in the same availability zone as the primary HANA database server. The solution provisions identical SAP application server VMs in both availability zones, with only one set active at any given time.&lt;/P&gt;
&lt;H3 data-line="113"&gt;Execution Workflow&lt;/H3&gt;
&lt;P data-line="114"&gt;During a database failover, the resource agent executes the following phases in sequence:&lt;/P&gt;
&lt;OL data-line="116"&gt;
&lt;LI data-line="116"&gt;&lt;STRONG&gt;start_vms_in_same_zone&lt;/STRONG&gt;: Initiates virtual machines in the same zone as the primary HANA VM&lt;/LI&gt;
&lt;LI data-line="117"&gt;&lt;STRONG&gt;wait_for_vms_in_same_zone_to_start&lt;/STRONG&gt;: Waits for VMs in the same zone to start successfully&lt;/LI&gt;
&lt;LI data-line="118"&gt;&lt;STRONG&gt;start_sap_in_same_zone&lt;/STRONG&gt;: Starts SAP instances in the same zone (parallel execution supported)&lt;/LI&gt;
&lt;LI data-line="119"&gt;&lt;STRONG&gt;wait_for_sap_in_same_zone_to_start&lt;/STRONG&gt;: Waits for SAP instances in the same zone to start successfully&lt;/LI&gt;
&lt;LI data-line="120"&gt;&lt;STRONG&gt;stop_sap_in_diff_zone&lt;/STRONG&gt;: Stops or deactivates SAP instances in different zones (behavior depends on&amp;nbsp;stop_vms&amp;nbsp;parameter)&lt;/LI&gt;
&lt;LI data-line="121"&gt;&lt;STRONG&gt;wait_for_sap_in_diff_zone_to_stop&lt;/STRONG&gt;: Waits for SAP instances in different zones to shut down (skipped when&amp;nbsp;stop_vms=false)&lt;/LI&gt;
&lt;LI data-line="122"&gt;&lt;STRONG&gt;stop_vms_in_diff_zone&lt;/STRONG&gt;: Stops VMs in different zones (skipped when&amp;nbsp;stop_vms=false)&lt;/LI&gt;
&lt;/OL&gt;
&lt;P data-line="124"&gt;The resource agent supports both&amp;nbsp;&lt;STRONG&gt;SAPHanaSR&lt;/STRONG&gt;&amp;nbsp;and&amp;nbsp;&lt;STRONG&gt;SAPHanaSR-angi&lt;/STRONG&gt;&amp;nbsp;(A Next Generation Interface) resource agents for HANA state detection.&lt;/P&gt;
&lt;P data-line="126"&gt;Each phase includes built-in timeout management (controlled by the wait_time parameter) and retry logic. The stop_vms parameter determines whether the agent fully stops and deallocates VMs or just deactivates SAP instances in the non-primary zone.&lt;/P&gt;
&lt;H2&gt;Cluster Attributes&lt;/H2&gt;
&lt;P&gt;The resource agent uses the following cluster node attributes to track execution state:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 88.3333%; height: 105px; border-width: 1px;"&gt;&lt;thead&gt;&lt;tr style="height: 35px;"&gt;&lt;th style="height: 35px;"&gt;Attribute&lt;/th&gt;&lt;th style="height: 35px;"&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;azure_sap_zone_current_phase&lt;/td&gt;&lt;td style="height: 35px;"&gt;Stores the current phase of execution&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;azure_sap_zone_phase_start_time&lt;/td&gt;&lt;td style="height: 35px;"&gt;Records the start time for each phase (used for timeout detection)&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Configuration Parameters&lt;/H2&gt;
&lt;P&gt;The following cluster resource parameters configure the resource agent's behavior:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="0.1" style="border-width: 0.1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th style="border-width: 0.1px;"&gt;Name&lt;/th&gt;&lt;th style="border-width: 0.1px;"&gt;Description&lt;/th&gt;&lt;th style="border-width: 0.1px;"&gt;Type&lt;/th&gt;&lt;th style="border-width: 0.1px;"&gt;Default&lt;/th&gt;&lt;th style="border-width: 0.1px;"&gt;Required&lt;/th&gt;&lt;th style="border-width: 0.1px;"&gt;Example&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;sid&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;SAP System ID (SID) name&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;-&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✓&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;S4H&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;hana_sid&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;HANA System ID (if different from SAP SID)&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;sid&amp;nbsp;value&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;HDB&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;hana_vm_zones&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Mapping of HANA VM name to logical zone group (optional; for non-zonal/PPG scenarios)&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;-&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;hanavm1:1,hanavm2:2&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;verbose&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Enable verbose logging&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;boolean&lt;/td&gt;&lt;td style="border-width: 
0.1px;"&gt;false&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;true&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;soft_shutdown_timeout&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Soft shutdown timeout (seconds). Used as the timeout argument for SAP stop operations when&amp;nbsp;stop_vms=true&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;integer&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;600&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;600&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;app_vm_names&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Comma-separated list of SAP application server VM names&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;-&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗*&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;sapapp01,sapapp02,sapapp03,sapapp04&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;app_vm_name_pattern&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Regex pattern to identify SAP application server VM names&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;-&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗*&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;sapapp.*&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;resource_group&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Azure resource group for SAP application servers&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;HANA VMs RG&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;sap-app-rg&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;hana_resource&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Name of the HANA 
resource in Pacemaker cluster&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;-&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✓&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;rsc_SAPHana_S4H_HDB00&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;client_id&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Client ID of user-assigned managed identity (optional for system identity)&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;-&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;a1b2c3d4-e5f6-7890-abcd-ef1234567890&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;stop_vms&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Stop VMs in different zones (true) or just deactivate SAP instances (false)&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;boolean&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;false&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;false&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;wait_before_stop_sap&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Wait time before stopping SAP instances in different zones (seconds)&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;integer&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;300&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;300&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;wait_time&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Wait time for phases to complete (seconds)&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;integer&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;600&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;600&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 
0.1px;"&gt;retry_count&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Azure API retry count&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;integer&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;3&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;3&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;retry_wait&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Wait time between retries (seconds)&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;integer&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;20&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;20&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 0.1px;"&gt;app_vm_zones&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;Mapping of app VM name to logical zone group (optional; for non-zonal/PPG scenarios)&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;string&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;-&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;✗*&lt;/td&gt;&lt;td style="border-width: 0.1px;"&gt;sapapp01:1,sapapp02:1,sapapp03:2,sapapp04:2&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: Provide at least one of&amp;nbsp;app_vm_names,&amp;nbsp;app_vm_name_pattern, or&amp;nbsp;app_vm_zones. If multiple are specified, the effective VM list is the union of:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;app_vm_names&amp;nbsp;(explicit list),&lt;/LI&gt;
&lt;LI&gt;VMs matching&amp;nbsp;app_vm_name_pattern&amp;nbsp;(when&amp;nbsp;app_vm_names&amp;nbsp;is not provided), and&lt;/LI&gt;
&lt;LI&gt;VM names present in&amp;nbsp;app_vm_zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If&amp;nbsp;app_vm_zones&amp;nbsp;is provided but neither&amp;nbsp;app_vm_names&amp;nbsp;nor&amp;nbsp;app_vm_name_pattern&amp;nbsp;are set, the agent treats&amp;nbsp;app_vm_zones&amp;nbsp;as the authoritative source of application VM names.&lt;/P&gt;
&lt;P&gt;If both&amp;nbsp;app_vm_names&amp;nbsp;and&amp;nbsp;app_vm_name_pattern&amp;nbsp;are set,&amp;nbsp;app_vm_names&amp;nbsp;is used (pattern matching is skipped).&amp;nbsp;app_vm_zones&amp;nbsp;is still merged in.&lt;/P&gt;
&lt;P&gt;app_vm_zones&amp;nbsp;is a&amp;nbsp;&lt;EM&gt;supplemental&lt;/EM&gt;&amp;nbsp;mapping primarily intended for non-zonal/PPG scenarios; it can be used just for the subset of VMs that have no Azure zone metadata.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Non-zonal/PPG Note&lt;/STRONG&gt;: In proximity placement group (PPG) or other non-zonal deployments, Azure VM metadata and ARM VM properties may not include an Availability Zone. In that case:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Set&amp;nbsp;hana_vm_zones&amp;nbsp;to map each HANA VM name to a logical group label (e.g.&amp;nbsp;hanavm1:1,hanavm2:2).&lt;/LI&gt;
&lt;LI&gt;Set&amp;nbsp;app_vm_zones&amp;nbsp;to map each SAP application VM (or just the subset missing zone metadata) to a logical group label. These are&amp;nbsp;&lt;EM&gt;logical&lt;/EM&gt;&amp;nbsp;labels used for alignment, not necessarily Azure Availability Zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Warning (zone/group mappings)&lt;/STRONG&gt;: Be very careful when setting&amp;nbsp;hana_vm_zones&amp;nbsp;and&amp;nbsp;app_vm_zones&amp;nbsp;(sometimes referred to as “hana zone” / “app VM zone” parameters). In deployments where Azure zone metadata is unavailable, these mappings&amp;nbsp;&lt;EM&gt;fully determine&lt;/EM&gt;&amp;nbsp;which application VMs are considered “same group” vs “different group”.&lt;/P&gt;
&lt;P&gt;If the grouping is wrong, the agent can take action on the wrong servers:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;With&amp;nbsp;stop_vms=false, it may deactivate (make passive) SAP instances on the wrong app VMs.&lt;/LI&gt;
&lt;LI&gt;With&amp;nbsp;stop_vms=true, it may soft-shutdown SAP and stop/deallocate the wrong app VMs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Double-check the VM name → group assignments and keep them consistent across the HANA and app tiers.&lt;/P&gt;
&lt;H3&gt;Parameter interactions and practical notes&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;hana_sid&amp;nbsp;is used when the HANA Pacemaker attributes are named using a different SID than the SAP application SID.&lt;/LI&gt;
&lt;LI&gt;When&amp;nbsp;stop_vms=true, the agent:
&lt;UL&gt;
&lt;LI&gt;waits&amp;nbsp;wait_before_stop_sap&amp;nbsp;seconds before initiating shutdown (to reduce churn during rapid failovers),&lt;/LI&gt;
&lt;LI&gt;calls&amp;nbsp;sapcontrol -function Stop &amp;lt;soft_shutdown_timeout&amp;gt;&amp;nbsp;on the "different-zone" app VMs (soft shutdown with a configurable timeout),&lt;/LI&gt;
&lt;LI&gt;waits for process&amp;nbsp;dispstatus&amp;nbsp;to become&amp;nbsp;GRAY&amp;nbsp;(stopped) before deallocating VMs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;When&amp;nbsp;stop_vms=false, the agent:
&lt;UL&gt;
&lt;LI&gt;calls&amp;nbsp;sapcontrol -function ABAPSetServerInactive&amp;nbsp;on the "different-zone" app VMs,&lt;/LI&gt;
&lt;LI&gt;leaves the SAP instances running but in&amp;nbsp;&lt;STRONG&gt;inactive/passive mode&lt;/STRONG&gt;, and&lt;/LI&gt;
&lt;LI&gt;does&amp;nbsp;&lt;STRONG&gt;not&lt;/STRONG&gt;&amp;nbsp;stop/deallocate the Azure VMs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Timeouts:
&lt;UL&gt;
&lt;LI&gt;Most phases use&amp;nbsp;wait_time.&lt;/LI&gt;
&lt;LI&gt;The stop/wait-for-stop window effectively needs to cover both&amp;nbsp;wait_time&amp;nbsp;and&amp;nbsp;soft_shutdown_timeout.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;app_vm_zones&amp;nbsp;format&lt;/H3&gt;
&lt;P&gt;Use a comma-separated mapping:&amp;nbsp;vm_name:group.&lt;/P&gt;
&lt;P&gt;Example:&amp;nbsp;app_vm_zones="sapapp01:1,sapapp02:1,sapapp03:2,sapapp04:2"&lt;/P&gt;
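&lt;P&gt;The grouping semantics of this mapping can be illustrated with plain text processing (a standalone sketch, not the agent's actual code): split the string on commas, then keep the VM names whose group matches the HANA primary's group.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Standalone illustration of the vm_name:group mapping semantics.
app_vm_zones="sapapp01:1,sapapp02:1,sapapp03:2,sapapp04:2"
target_group="2"   # group of the current HANA primary

echo "$app_vm_zones" | tr ',' '\n' | awk -F: -v g="$target_group" '$2 == g {print $1}'
# prints:
#   sapapp03
#   sapapp04&lt;/LI-CODE&gt;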
&lt;H3&gt;Start-time validation&lt;/H3&gt;
&lt;P&gt;If you provide&amp;nbsp;app_vm_zones&amp;nbsp;or&amp;nbsp;hana_vm_zones&amp;nbsp;in a deployment where Azure zone metadata&amp;nbsp;&lt;EM&gt;is&lt;/EM&gt; available, the agent validates on every start that the provided values match Azure. It will fail to start if they do not match.&lt;/P&gt;
&lt;H2&gt;Prerequisites&lt;/H2&gt;
&lt;H3&gt;Topology requirement (critical)&lt;/H3&gt;
&lt;P&gt;This solution assumes you have&amp;nbsp;&lt;STRONG&gt;two equivalent sets of SAP application server VMs&lt;/STRONG&gt;, one set placed/aligned with each HANA VM zone (or logical group in non-zonal/PPG deployments). Only one set is expected to be active at a time.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Zonal deployments&lt;/STRONG&gt;: provision the same application server capacity in each Availability Zone used by the HANA primary/secondary VMs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Non-zonal / PPG deployments&lt;/STRONG&gt;: provision two equivalent application server sets and map them consistently using&amp;nbsp;hana_vm_zones&amp;nbsp;and&amp;nbsp;app_vm_zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;“Equivalent/identical” here means the VMs are prepared to run the same SAP application workload (same SAP installation/SID/instance layout and configuration as applicable for your landscape), so the agent can start SAP on the “same-zone” set and deactivate/stop SAP on the “different-zone” set during failover.&lt;/P&gt;
&lt;H3&gt;SAP workload routing/groups (required)&lt;/H3&gt;
&lt;P&gt;To ensure workloads continue seamlessly when the active application-server set switches zones/groups, configure your SAP group/routing settings to include&amp;nbsp;&lt;STRONG&gt;both&lt;/STRONG&gt;&amp;nbsp;application server sets as appropriate for your landscape, including:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SAP logon groups:&amp;nbsp;SMLG&lt;/LI&gt;
&lt;LI&gt;RFC server groups:&amp;nbsp;RZ12&lt;/LI&gt;
&lt;LI&gt;Background/batch server groups:&amp;nbsp;SM61&lt;/LI&gt;
&lt;LI&gt;Spool server groups:&amp;nbsp;SPAD&lt;/LI&gt;
&lt;LI&gt;Update configuration/groups:&amp;nbsp;SM14&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;System Requirements&lt;/H3&gt;
&lt;H4&gt;Operating System Support&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SUSE Linux Enterprise Server (SLES)&lt;/STRONG&gt;: 15 SP5 and above&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;Network Requirements&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;HANA VMs must have outbound access to Azure API endpoints&lt;/LI&gt;
&lt;LI&gt;Required for VM management operations (start, stop, execute commands)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;Azure Linux VM Agent&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Must be installed on all SAP application server VMs&lt;/LI&gt;
&lt;LI&gt;Pre-installed on Azure Marketplace images&lt;/LI&gt;
&lt;LI&gt;Manual installation required for custom/non-Marketplace images&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/agent-linux" target="_blank" rel="noopener"&gt;Installation Guide&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;Python Environment&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Python 3.x&lt;/STRONG&gt;&amp;nbsp;installed on HANA cluster nodes&lt;/LI&gt;
&lt;LI&gt;Required Python packages: requests (all other imports are Python standard library)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Verification Command:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;python3 -c 'import os, sys, time, subprocess, re, requests, shlex, random; from typing import Dict, List, Optional'&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;HANA Resource Agent Compatibility&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SAPHanaSR&lt;/STRONG&gt;: Traditional SAP HANA System Replication resource agent&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;SAPHanaSR-angi&lt;/STRONG&gt;: SAP HANA System Replication A Next Generation Interface resource agent&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The azure-sap-zone resource agent automatically detects which HANA resource agent is in use and adapts accordingly.&lt;/P&gt;
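&lt;P&gt;A quick way to see which HANA resource agent a cluster uses is to inspect the configured primitives. As a rule of thumb on SUSE setups, classic SAPHanaSR configures an&amp;nbsp;ocf:suse:SAPHana&amp;nbsp;primitive while SAPHanaSR-angi configures&amp;nbsp;ocf:suse:SAPHanaController. A minimal sketch (the resource name in the sample line is made up):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# On a cluster node you could run: sudo crm configure show | grep 'ocf:suse:SAPHana'
# Sketch against a sample line (the resource name is hypothetical):
sample='primitive rsc_SAPHanaCon_HN1_HDB03 ocf:suse:SAPHanaController'
if echo "$sample" | grep -q 'SAPHanaController'; then
  echo "SAPHanaSR-angi"
else
  echo "classic SAPHanaSR"
fi&lt;/LI-CODE&gt;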
&lt;H3&gt;Azure Permissions&lt;/H3&gt;
&lt;P&gt;The resource agent requires either a user-assigned managed identity (via&amp;nbsp;client_id) or a system-assigned managed identity with specific Azure permissions:&lt;/P&gt;
&lt;H4&gt;Required Azure Role Actions&lt;/H4&gt;
&lt;LI-CODE lang="json"&gt;{
    "permissions": [
        {
            "actions": [
                "Microsoft.Compute/*/read",
                "Microsoft.Compute/virtualMachines/start/action",
                "Microsoft.Compute/virtualMachines/restart/action",
                "Microsoft.Compute/virtualMachines/powerOff/action",
                "Microsoft.Compute/virtualMachines/deallocate/action",
                "Microsoft.Compute/virtualMachines/runCommand/action",
                "Microsoft.Compute/virtualMachines/runCommands/read",
                "Microsoft.Compute/virtualMachines/runCommands/write"
            ],
            "notActions": [],
            "dataActions": [],
            "notDataActions": []
        }
    ]
}&lt;/LI-CODE&gt;
&lt;H4&gt;Identity Assignment Requirements&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;User-assigned managed identity must be assigned to both HANA servers&lt;/LI&gt;
&lt;LI&gt;Identity must have Virtual Machine Contributor role (or custom role with above actions)&lt;/LI&gt;
&lt;LI&gt;Role assignment scope: SAP application server VMs' resource group (recommended) or individual VMs&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Limitations&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Supported SAP Systems&lt;/STRONG&gt;: ABAP systems on HANA scale-up only&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Not Supported&lt;/STRONG&gt;: SAP JAVA, HANA scale-out, multi-SID environments&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Installation&lt;/H2&gt;
&lt;H3&gt;Step 1: Azure Configuration&lt;/H3&gt;
&lt;P&gt;Configure Azure resources using Azure CLI.&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" target="_blank" rel="noopener"&gt;Install Azure CLI&lt;/A&gt;&amp;nbsp;if not already available.&lt;/P&gt;
&lt;H4&gt;PowerShell Script for Azure Setup&lt;/H4&gt;
&lt;LI-CODE lang="powershell"&gt;# Define parameters - Update these values for your environment
$subscriptionId = "Your-Subscription-ID"
$hanaResourceGroup = "HANA-VMs-Resource-Group"
$hanaVMNames = @("hana-vm1", "hana-vm2")
$managedIdentityName = "sap-azure-zone-alignment"
$customAzureRole = "Azure SAP Zone Alignment"

# Resource group scope assignment (recommended)
$sapAppResourceGroup = "SAP-Application-Servers-Resource-Group"

# Alternative: Direct VM assignment
$sapAppVMNames = @("sap-app1", "sap-app2", "sap-app3", "sap-app4")

# Login to Azure
az login

# Verify Azure Linux Agent and run-command capability on application servers
$sapAppVMNames | ForEach-Object -ThrottleLimit $sapAppVMNames.Count -Parallel {
    $vmName = $_
    $result = az vm run-command invoke `
        --resource-group $using:sapAppResourceGroup `
        --name $vmName `
        --command-id RunShellScript `
        --scripts "systemctl is-active waagent" `
        --output json 2&amp;gt;&amp;amp;1 | ConvertFrom-Json
    $msg = $result.value[0].message
    if ($msg -match '\[stdout\]\s*active') {
        Write-Host "[$vmName] OK - waagent active, run-command working"
    } else {
        Write-Host "[$vmName] FAIL - unexpected output: $msg"
    }
}

# Create custom Azure role
$roleDefinition = @{
    Name = $customAzureRole
    IsCustom = $true
    Description = "Custom Azure role for sap-azure-zone pacemaker resource agent"
    Actions = @(
        "Microsoft.Compute/*/read",
        "Microsoft.Compute/virtualMachines/start/action",
        "Microsoft.Compute/virtualMachines/restart/action",
        "Microsoft.Compute/virtualMachines/powerOff/action",
        "Microsoft.Compute/virtualMachines/deallocate/action",
        "Microsoft.Compute/virtualMachines/runCommand/action",
        "Microsoft.Compute/virtualMachines/runCommands/read",
        "Microsoft.Compute/virtualMachines/runCommands/write"
    )
    NotActions = @()
    AssignableScopes = @("/subscriptions/$subscriptionId")
} | ConvertTo-Json -Depth 3

$roleDefinition | Out-File -FilePath "$env:TEMP\az-role.json" -Encoding utf8
az role definition create --role-definition "$env:TEMP\az-role.json"

# Recommendation: Use a user-assigned managed identity for authentication. 
# System-assigned managed identities are also supported; if you choose this option, 
# ensure that system-assigned managed identity is enabled on both HANA VMs and that 
# the required roles (listed below) are assigned to each system identity.

# Create user-assigned managed identity
$managedIdentityResourceId = az identity create `
    --resource-group $hanaResourceGroup `
    --name $managedIdentityName `
    --query id --output tsv

# Assign managed identity to HANA VMs
foreach ($vmName in $hanaVMNames) {
    az vm identity assign `
        --resource-group $hanaResourceGroup `
        --name $vmName `
        --identities $managedIdentityResourceId
}

# Alternative: Enable system-assigned managed identity (uncomment if preferred)
# foreach ($vmName in $hanaVMNames) {
#     az vm identity assign `
#         --resource-group $hanaResourceGroup `
#         --name $vmName
# }

# Assign role to managed identity (resource group scope)
$managedIdentityPrincipalId = az identity show `
    --resource-group $hanaResourceGroup `
    --name $managedIdentityName `
    --query principalId --output tsv

az role assignment create `
    --assignee-object-id $managedIdentityPrincipalId `
    --assignee-principal-type ServicePrincipal `
    --role $customAzureRole `
    --scope "/subscriptions/$subscriptionId/resourceGroups/$sapAppResourceGroup"

# Display the client ID (needed for cluster configuration)
Write-Host "Managed Identity Client ID:"
az identity show `
    --resource-group $hanaResourceGroup `
    --name $managedIdentityName `
    --query clientId --output tsv&lt;/LI-CODE&gt;
&lt;H3&gt;Step 2: Install Resource Agent&lt;/H3&gt;
&lt;H4&gt;Download and Install on Both HANA Cluster Nodes&lt;/H4&gt;
&lt;LI-CODE lang="powershell"&gt;# Download the resource agent script
curl -o azure-sap-zone.in https://raw.githubusercontent.com/ClusterLabs/resource-agents/refs/heads/main/heartbeat/azure-sap-zone.in

# Create the resource agent file
sudo cp azure-sap-zone.in /usr/lib/ocf/resource.d/heartbeat/azure-sap-zone

# Update the interpreter line
# Note: the downloaded file typically starts with the placeholder `#!@PYTHON@ -tt`.
# Replace it with your actual python3 path.
PYTHON3_PATH="$(command -v python3)"
echo "python3: ${PYTHON3_PATH}"
# Bash note: `!` triggers history expansion inside double-quotes, so use this quoting form.
sudo sed -i '1 s|^#!@PYTHON@ -tt$|#!'"${PYTHON3_PATH}"' -tt|' /usr/lib/ocf/resource.d/heartbeat/azure-sap-zone

# If you need to force a specific interpreter path, you can also do:
# sudo sed -i '1 s|^#!@PYTHON@ -tt$|#!/usr/bin/python3 -tt|' /usr/lib/ocf/resource.d/heartbeat/azure-sap-zone

# Convert line endings and set permissions
sudo dos2unix /usr/lib/ocf/resource.d/heartbeat/azure-sap-zone
sudo chmod +x /usr/lib/ocf/resource.d/heartbeat/azure-sap-zone

# Copy to secondary node (alternative: repeat above steps manually)
sudo scp /usr/lib/ocf/resource.d/heartbeat/azure-sap-zone &amp;lt;secondary-hana-vm&amp;gt;:/usr/lib/ocf/resource.d/heartbeat/&lt;/LI-CODE&gt;
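&lt;P&gt;To sanity-check the sed expression before touching the installed file, you can run it against the placeholder line on stdin; it should print the rewritten shebang:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Dry run of the shebang substitution (no files touched)
printf '%s\n' '#!@PYTHON@ -tt' | sed 's|^#!@PYTHON@ -tt$|#!/usr/bin/python3 -tt|'
# prints: #!/usr/bin/python3 -tt&lt;/LI-CODE&gt;
&lt;P&gt;Once the file is in place on both nodes,&amp;nbsp;sudo crm ra info ocf:heartbeat:azure-sap-zone&amp;nbsp;should display the agent's meta-data if the installation is correct.&lt;/P&gt;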
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Configuration&lt;/H2&gt;
&lt;H3&gt;Configuration Options&lt;/H3&gt;
&lt;P&gt;The resource agent provides two distinct behaviors for application servers in the&amp;nbsp;&lt;EM&gt;different&lt;/EM&gt;&amp;nbsp;zone/group.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Option&lt;/th&gt;&lt;th&gt;Setting&lt;/th&gt;&lt;th&gt;What happens to SAP&lt;/th&gt;&lt;th&gt;Do the “different-zone” servers take new users/jobs/sessions?&lt;/th&gt;&lt;th&gt;What happens to the Azure VMs&lt;/th&gt;&lt;th&gt;Typical trade-off&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;1) Deactivate (Passive mode)&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;stop_vms=false&lt;/td&gt;&lt;td&gt;SAP stays&amp;nbsp;&lt;STRONG&gt;running&lt;/STRONG&gt;, but the agent calls&amp;nbsp;sapcontrol -function ABAPSetServerInactive&amp;nbsp;to set the instance&amp;nbsp;&lt;STRONG&gt;inactive/passive&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;No&lt;/STRONG&gt;&amp;nbsp;— the server is kept out of service for new workload (e.g., new user logons, new batch/background work, and other new sessions)&lt;/td&gt;&lt;td&gt;VMs stay&amp;nbsp;&lt;STRONG&gt;running&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Fastest&lt;/STRONG&gt;&amp;nbsp;to make active again, but&amp;nbsp;&lt;STRONG&gt;no Azure compute cost savings&lt;/STRONG&gt;&amp;nbsp;for the inactive zone because the VMs keep running&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;2) Soft shutdown + stop/deallocate&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;stop_vms=true&lt;/td&gt;&lt;td&gt;Agent calls&amp;nbsp;sapcontrol -function Stop &amp;lt;soft_shutdown_timeout&amp;gt;&amp;nbsp;(graceful stop with a configurable timeout) and waits until the instance is stopped (dispstatus=GRAY)&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;No&lt;/STRONG&gt;&amp;nbsp;— during shutdown the instance is not available for new workload/sessions&lt;/td&gt;&lt;td&gt;After shutdown, VMs are&amp;nbsp;&lt;STRONG&gt;stopped and deallocated&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Slower&lt;/STRONG&gt;&amp;nbsp;to re-activate (VM boot + SAP start), 
but&amp;nbsp;&lt;STRONG&gt;can save costs&lt;/STRONG&gt;&amp;nbsp;in pay-as-you-go models by deallocating the inactive-zone VMs&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Notes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;soft_shutdown_timeout&amp;nbsp;controls how long SAP is given to stop gracefully.&lt;/LI&gt;
&lt;LI&gt;stop_vms=true&amp;nbsp;is the only mode where the agent will stop/deallocate VMs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Note on capacity:&lt;/STRONG&gt;&amp;nbsp;When using&amp;nbsp;stop_vms=true, deallocated VMs are not guaranteed to have capacity available when restarted. Consider using&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/capacity-reservation-overview" target="_blank" rel="noopener" data-href="https://learn.microsoft.com/en-us/azure/virtual-machines/capacity-reservation-overview"&gt;On-Demand Capacity Reservations (ODCR)&lt;/A&gt; or &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/capacity-reservation-create?tabs=portal1%2Capi1%2Capi2" target="_blank" rel="noopener"&gt;Capacity Reservation Groups &lt;/A&gt;to ensure VM sizes remain available in both zones. The resource agent does not manage capacity reservations — this is an infrastructure planning consideration.&lt;/P&gt;
&lt;H3&gt;Cluster Configuration Examples&lt;/H3&gt;
&lt;P data-line="409"&gt;The configuration has two parts:&lt;/P&gt;
&lt;OL data-line="410"&gt;
&lt;LI data-line="410"&gt;&lt;STRONG&gt;Create the primitive resource&lt;/STRONG&gt;&amp;nbsp;— choose one of the examples below (A–F) based on your deployment pattern.&lt;/LI&gt;
&lt;LI data-line="411"&gt;&lt;STRONG&gt;Create the clone and order constraint&lt;/STRONG&gt;&amp;nbsp;— this is required regardless of which example you use (see&amp;nbsp;&lt;A href="https://file+.vscode-resource.vscode-cdn.net/q%3A/git/generic/Blog_Part2_Technical_DeepDive.md#step-2-create-clone-and-order-constraint" target="_blank" rel="noopener" data-href="#step-2-create-clone-and-order-constraint"&gt;Step 2&lt;/A&gt;&amp;nbsp;below).&lt;/LI&gt;
&lt;/OL&gt;
&lt;P data-line="413"&gt;Note on monitor interval: the resource agent advertises a default monitor interval of 300 seconds in its meta-data. The examples below use a shorter interval (e.g. 10s) to detect failovers quickly; choose an interval appropriate for your environment.&lt;/P&gt;
&lt;H4 data-line="415"&gt;Step 1: Create the primitive resource&lt;/H4&gt;
&lt;P data-line="417"&gt;Choose the example that matches your deployment:&lt;/P&gt;
&lt;H4&gt;Example A: Zonal deployment (system-assigned managed identity) + explicit VM list&lt;/H4&gt;
&lt;P&gt;Use this when Azure Availability Zones are present and you want to provide an explicit list of application VMs.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;SLES (crmsh):&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;sudo crm configure primitive azure-sap-zone ocf:heartbeat:azure-sap-zone \
    params sid=&amp;lt;SAP_SID&amp;gt; \
           hana_resource=&amp;lt;HANA_CLUSTER_RESOURCE_NAME&amp;gt; \
           app_vm_names=&amp;lt;app_vm1,app_vm2,app_vm3,app_vm4&amp;gt; \
           stop_vms=false \
           wait_time=600 \
           verbose=true \
    meta failure-timeout=120s \
    op start start-delay=60s interval=0s timeout=360s \
    op monitor interval=10s timeout=360s \
    op stop timeout=10s interval=0s on-fail=ignore&lt;/LI-CODE&gt;
&lt;H4&gt;Example B: Zonal deployment (user-assigned managed identity) + VM name pattern&lt;/H4&gt;
&lt;P&gt;Use this when Azure Availability Zones are present and you want the agent to discover application VMs by name.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;SLES (crmsh):&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;sudo crm configure primitive azure-sap-zone ocf:heartbeat:azure-sap-zone \
    params sid=&amp;lt;SAP_SID&amp;gt; \
           hana_resource=&amp;lt;HANA_CLUSTER_RESOURCE_NAME&amp;gt; \
           app_vm_name_pattern=&amp;lt;REGEX_OR_PREFIX_PATTERN&amp;gt; \
           client_id=&amp;lt;MANAGED_IDENTITY_CLIENT_ID&amp;gt; \
           stop_vms=false \
           wait_time=600 \
           verbose=true \
    meta failure-timeout=120s \
    op start start-delay=60s interval=0s timeout=360s \
    op monitor interval=10s timeout=360s \
    op stop timeout=10s interval=0s on-fail=ignore&lt;/LI-CODE&gt;
&lt;H4&gt;Example C: Non-zonal / PPG deployment (logical grouping)&lt;/H4&gt;
&lt;P&gt;Use this when Azure zone metadata is missing (for example, proximity placement group or other non-zonal deployments).&lt;/P&gt;
&lt;P&gt;Key points:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Set&amp;nbsp;hana_vm_zones&amp;nbsp;to map each HANA VM name to a logical group label (for example&amp;nbsp;hanavm1:1,hanavm2:2).&lt;/LI&gt;
&lt;LI&gt;Set&amp;nbsp;app_vm_zones&amp;nbsp;to map each application VM name (or just the subset missing zone metadata) to a logical group label.&lt;/LI&gt;
&lt;LI&gt;If Azure later reports real zone data for those VMs, the agent validates on every start that your mapping matches Azure and fails if it does not.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;SLES (crmsh):&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;sudo crm configure primitive azure-sap-zone ocf:heartbeat:azure-sap-zone \
    params sid=&amp;lt;SAP_SID&amp;gt; \
           hana_resource=&amp;lt;HANA_CLUSTER_RESOURCE_NAME&amp;gt; \
           hana_vm_zones="&amp;lt;hana_vm1&amp;gt;:1,&amp;lt;hana_vm2&amp;gt;:2" \
           app_vm_zones="sapapp01:1,sapapp02:1,sapapp03:2,sapapp04:2" \
           stop_vms=false \
           wait_time=600 \
           verbose=true \
    meta failure-timeout=120s \
    op start start-delay=60s interval=0s timeout=360s \
    op monitor interval=10s timeout=360s \
    op stop timeout=10s interval=0s on-fail=ignore&lt;/LI-CODE&gt;
&lt;H4&gt;Example D: Mixed deployment (mostly zonal, a few VMs missing zone metadata)&lt;/H4&gt;
&lt;P&gt;Use this when most application VMs have Azure zone metadata, but a small subset does not. Provide the full VM list via&amp;nbsp;app_vm_names&amp;nbsp;(or discovery via&amp;nbsp;app_vm_name_pattern), and provide&amp;nbsp;app_vm_zones&amp;nbsp;only for the VMs that are missing zone metadata.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;SLES (crmsh):&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;sudo crm configure primitive azure-sap-zone ocf:heartbeat:azure-sap-zone \
    params sid=&amp;lt;SAP_SID&amp;gt; \
           hana_resource=&amp;lt;HANA_CLUSTER_RESOURCE_NAME&amp;gt; \
           app_vm_names=&amp;lt;app_vm1,app_vm2,app_vm3,...&amp;gt; \
           app_vm_zones="&amp;lt;nonzonal_vm_a&amp;gt;:1,&amp;lt;nonzonal_vm_b&amp;gt;:2" \
           stop_vms=false \
           wait_time=600 \
           verbose=true&lt;/LI-CODE&gt;
&lt;H4&gt;Example E: Zonal deployment with stop_vms=true (shutdown + deallocate different-zone VMs)&lt;/H4&gt;
&lt;P&gt;Use this when you want maximum cost optimization by shutting down and deallocating the application VMs in the non-primary zone.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;SLES (crmsh):&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;sudo crm configure primitive azure-sap-zone ocf:heartbeat:azure-sap-zone \
    params sid=&amp;lt;SAP_SID&amp;gt; \
           hana_resource=&amp;lt;HANA_CLUSTER_RESOURCE_NAME&amp;gt; \
           app_vm_names=&amp;lt;app_vm1,app_vm2,app_vm3,app_vm4&amp;gt; \
           stop_vms=true \
           wait_before_stop_sap=300 \
           soft_shutdown_timeout=600 \
           wait_time=600 \
           verbose=true \
    op start start-delay=60s interval=0s timeout=360s \
    op monitor interval=10s timeout=360s \
    op stop timeout=10s interval=0s on-fail=ignore&lt;/LI-CODE&gt;
&lt;H4&gt;Example F: HANA SID differs from SAP SID (hana_sid)&lt;/H4&gt;
&lt;P&gt;Use this when the HANA cluster uses a different SID, so the HANA Pacemaker attributes are named hana_&amp;lt;hana_sid&amp;gt;_*.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;SLES (crmsh):&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;sudo crm configure primitive azure-sap-zone ocf:heartbeat:azure-sap-zone \
    params sid=&amp;lt;SAP_SID&amp;gt; \
           hana_sid=&amp;lt;HANA_SID&amp;gt; \
           hana_resource=&amp;lt;HANA_CLUSTER_RESOURCE_NAME&amp;gt; \
           app_vm_names=&amp;lt;app_vm1,app_vm2,app_vm3,app_vm4&amp;gt; \
           stop_vms=false \
           verbose=true&lt;/LI-CODE&gt;
&lt;H4 data-line="538"&gt;Step 2: Create clone and order constraint&lt;/H4&gt;
&lt;P&gt;After creating the primitive resource using any of the examples above, run the following commands to create the clone resource and order constraint. This is&amp;nbsp;&lt;STRONG&gt;required&lt;/STRONG&gt; for all deployment patterns.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;SLES (crmsh):&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;# Create clone resource (runs on both nodes)
sudo crm configure clone cln_azure-sap-zone azure-sap-zone \
    meta clone-node-max=1 target-role=Started interleave=true

# Create order constraint (start after HANA resource)
sudo crm configure order ord_azure-sap-zone Mandatory: &amp;lt;HANA_CLONE_RESOURCE&amp;gt; cln_azure-sap-zone symmetrical=false&lt;/LI-CODE&gt;
&lt;H2&gt;Usage&lt;/H2&gt;
&lt;P&gt;After installation and configuration, the resource agent will automatically:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Monitor HANA primary location&lt;/STRONG&gt;: Detects which availability zone hosts the current HANA primary&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Manage application servers&lt;/STRONG&gt;: Starts/stops or activates/deactivates SAP application servers based on zone alignment&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Handle failover scenarios&lt;/STRONG&gt;: Automatically adjusts during HANA failover events&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;Manual Operations&lt;/H3&gt;
&lt;H4&gt;Enable Verbose Logging&lt;/H4&gt;
&lt;LI-CODE lang="shell"&gt;# SLES
sudo crm_resource --resource azure-sap-zone --set-parameter verbose --parameter-value true&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Resource Management&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;Put the resource into maintenance mode:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Maintenance mode stops Pacemaker from running any operations (start, stop, monitor) on the resource, which prevents the agent from taking action during planned maintenance.&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;# SLES — enable maintenance mode
sudo crm resource maintenance cln_azure-sap-zone on&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Resume resource management:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;# SLES — disable maintenance mode
sudo crm resource maintenance cln_azure-sap-zone off&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-line="660"&gt;Validating the Setup with a Test Failover&lt;/H2&gt;
&lt;P data-line="662"&gt;Once the resource agent is installed and configured, we recommend running through a test failover on a non-production system to confirm everything works end to end. The steps below walk you through the before, during, and after of a validation cycle.&lt;/P&gt;
&lt;H3 data-line="664"&gt;Step 1: Verify the resource agent is running&lt;/H3&gt;
&lt;P data-line="666"&gt;Before triggering a failover, confirm the resource agent is healthy and the cluster sees it on both nodes:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Check overall cluster status
sudo crm status

# Verify the azure-sap-zone clone is started on both nodes
sudo crm resource show cln_azure-sap-zone&lt;/LI-CODE&gt;
&lt;P data-line="666"&gt;You should see the clone resource running on both HANA cluster nodes.&lt;/P&gt;
&lt;H3 data-line="678"&gt;Step 2: Check the initial state&lt;/H3&gt;
&lt;P data-line="680"&gt;Record the current state so you can compare after the failover:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Which node is the HANA primary?
sudo crm status | grep -i "Masters\|Promoted"

# What zone/phase does the resource agent report?
sudo crm_attribute --name azure_sap_zone_current_phase --query --quiet --node $(hostname)

# Check application server VM power state (from Azure CLI, if available)
az vm list -g &amp;lt;SAP-App-Resource-Group&amp;gt; -d --query "[].{Name:name, PowerState:powerState, Zone:zones[0]}" -o table&lt;/LI-CODE&gt;
&lt;P&gt;At this point the phase should be all_phases_completed (if the agent has already aligned once) or Started / no_action_required depending on which node you are on.&lt;/P&gt;
&lt;H3 data-line="695"&gt;Step 3: Enable verbose logging (recommended)&lt;/H3&gt;
&lt;P data-line="697"&gt;Turn on verbose logging before the failover so you can trace every phase in detail:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;sudo crm_resource --resource azure-sap-zone --set-parameter verbose --parameter-value true&lt;/LI-CODE&gt;
&lt;H3 data-line="703"&gt;Step 4: Trigger a test HANA failover&lt;/H3&gt;
&lt;P data-line="705"&gt;&lt;STRONG&gt;Important&lt;/STRONG&gt;: Only perform this on a test/non-production system.&lt;/P&gt;
&lt;P data-line="705"&gt;You can trigger a controlled HANA takeover using standard Pacemaker commands. The exact method depends on your HANA resource agent:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Option A: Migrate the HANA primary to the secondary node
sudo crm resource move &amp;lt;HANA_CLONE_RESOURCE&amp;gt; &amp;lt;target-node&amp;gt; force

# After the move completes, clear the location constraint so Pacemaker can manage normally
sudo crm resource clear &amp;lt;HANA_CLONE_RESOURCE&amp;gt;&lt;/LI-CODE&gt;
&lt;P data-line="717"&gt;Alternatively, if your runbook uses&amp;nbsp;sr_takeover&amp;nbsp;or&amp;nbsp;SAPHanaSR&amp;nbsp;tools, follow your existing takeover procedure. The key point is that the HANA primary ends up on the other node/zone.&lt;/P&gt;
&lt;H3 data-line="719"&gt;Step 5: Monitor the resource agent's progress&lt;/H3&gt;
&lt;P&gt;After the failover, the resource agent on the new primary node will detect the zone change and begin executing its phases. You can watch it in real time:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Watch the phase attribute update (run on the new primary node)
watch -n 5 'crm_attribute --name azure_sap_zone_current_phase --query --quiet --node $(hostname)'&lt;/LI-CODE&gt;
&lt;P data-line="728"&gt;You should see the phase progress through:&lt;/P&gt;
&lt;OL data-line="729"&gt;
&lt;LI data-line="729"&gt;start_vms_in_same_zone&lt;/LI&gt;
&lt;LI data-line="730"&gt;wait_for_vms_in_same_zone_to_start&lt;/LI&gt;
&lt;LI data-line="731"&gt;start_sap_in_same_zone&lt;/LI&gt;
&lt;LI data-line="732"&gt;wait_for_sap_in_same_zone_to_start&lt;/LI&gt;
&lt;LI data-line="733"&gt;stop_sap_in_diff_zone&lt;/LI&gt;
&lt;LI data-line="734"&gt;wait_for_sap_in_diff_zone_to_stop&amp;nbsp;(only when&amp;nbsp;stop_vms=true)&lt;/LI&gt;
&lt;LI data-line="735"&gt;stop_vms_in_diff_zone&amp;nbsp;(only when&amp;nbsp;stop_vms=true)&lt;/LI&gt;
&lt;LI data-line="736"&gt;all_phases_completed&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3 data-line="738"&gt;Step 6: Validate the outcome&lt;/H3&gt;
&lt;P data-line="740"&gt;Once the phase reaches&amp;nbsp;all_phases_completed, verify that the application tier has been aligned correctly.&lt;/P&gt;
&lt;P data-line="742"&gt;&lt;STRONG&gt;Check application VMs in the same zone as the new HANA primary:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Verify VMs are running
az vm list -g &amp;lt;SAP-App-Resource-Group&amp;gt; -d --query "[].{Name:name, PowerState:powerState, Zone:zones[0]}" -o table

# Verify SAP instances are active (GREEN) — run on a same-zone app VM
sapcontrol -nr &amp;lt;instance_number&amp;gt; -function GetProcessList&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-line="752"&gt;All SAP processes on the same-zone VMs should show&amp;nbsp;dispstatus: GREEN.&lt;/P&gt;
&lt;P data-line="754"&gt;&lt;STRONG&gt;Check application VMs in the different zone:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL data-line="756"&gt;
&lt;LI data-line="756"&gt;If&amp;nbsp;stop_vms=false: the VMs should still be running, but SAP instances should be in&amp;nbsp;&lt;STRONG&gt;inactive/passive&lt;/STRONG&gt;&amp;nbsp;mode. You can verify this by checking logon groups (SMLG) or the server's active status.&lt;/LI&gt;
&lt;LI data-line="757"&gt;If&amp;nbsp;stop_vms=true: the VMs should be&amp;nbsp;&lt;STRONG&gt;stopped/deallocated&lt;/STRONG&gt; in the Azure portal or via&amp;nbsp;az vm list.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 data-line="759"&gt;Step 7: Review the logs&lt;/H3&gt;
&lt;P data-line="761"&gt;Check the Pacemaker log to confirm all phases executed without errors:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# View all agent activity
sudo grep -i 'azure-sap-zone' /var/log/pacemaker/pacemaker.log | tail -50

# Filter to only INFO/WARNING/ERROR messages (skip routine monitor noise)
sudo grep -iE 'azure-sap-zone.*(INFO|WARNING|ERROR):' /var/log/pacemaker/pacemaker.log | grep -v -iE "All phases|monitor: Started"&lt;/LI-CODE&gt;
&lt;P data-line="771"&gt;&lt;STRONG&gt;Example output (filtered):&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Apr 23 10:15:32 hanavm1 azure-sap-zone INFO: monitor: Started
Apr 23 10:15:32 hanavm1 azure-sap-zone INFO: Executing phase: start_vms_in_same_zone
Apr 23 10:15:35 hanavm1 azure-sap-zone INFO: Executing phase: wait_for_vms_in_same_zone_to_start
Apr 23 10:15:45 hanavm1 azure-sap-zone INFO: All VMs are started
Apr 23 10:15:45 hanavm1 azure-sap-zone INFO: Executing phase: start_sap_in_same_zone
Apr 23 10:15:45 hanavm1 azure-sap-zone INFO: Starting SAP on VMs: ['sapapp01', 'sapapp02']
Apr 23 10:16:02 hanavm1 azure-sap-zone INFO: Executing phase: wait_for_sap_in_same_zone_to_start
Apr 23 10:16:15 hanavm1 azure-sap-zone INFO: All SAP instances are started
Apr 23 10:16:15 hanavm1 azure-sap-zone INFO: Executing phase: stop_sap_in_diff_zone
Apr 23 10:16:15 hanavm1 azure-sap-zone INFO: Setting SAP instances to passive mode on VMs: ['sapapp03', 'sapapp04']
Apr 23 10:16:20 hanavm1 azure-sap-zone INFO: All phases have been executed successfully
Apr 23 10:16:20 hanavm1 azure-sap-zone INFO: monitor: Finished&lt;/LI-CODE&gt;
&lt;P data-line="771"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-line="771"&gt;Look for:&lt;/P&gt;
&lt;UL data-line="772"&gt;
&lt;LI data-line="772"&gt;&lt;STRONG&gt;Phase transitions&lt;/STRONG&gt;: confirm each phase started and completed in order&lt;/LI&gt;
&lt;LI data-line="773"&gt;&lt;STRONG&gt;No errors&lt;/STRONG&gt;: no&amp;nbsp;ERROR&amp;nbsp;or&amp;nbsp;FAIL&amp;nbsp;messages&lt;/LI&gt;
&lt;LI data-line="774"&gt;&lt;STRONG&gt;Timing&lt;/STRONG&gt;: note how long the full cycle took — this is your expected failover alignment time&lt;/LI&gt;
&lt;/UL&gt;
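&lt;P&gt;To turn the first and last timestamps from the log excerpt into an elapsed-time figure, a quick calculation (assuming GNU date, as on SLES) looks like:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Elapsed time between the first and last agent log lines in the example above
first=$(date -u -d '1970-01-01 10:15:32' +%s)
last=$(date -u -d '1970-01-01 10:16:20' +%s)
echo "$(( last - first )) seconds"
# prints: 48 seconds&lt;/LI-CODE&gt;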
&lt;H3 data-line="776"&gt;Step 8: Clean up&lt;/H3&gt;
&lt;P data-line="778"&gt;After validation, you can disable verbose logging to reduce log volume:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;sudo crm_resource --resource azure-sap-zone --set-parameter verbose --parameter-value false&lt;/LI-CODE&gt;
&lt;P&gt;If you triggered the failover using crm resource move, make sure the location constraint was cleared (Step 4) so Pacemaker can manage resources normally going forward.&lt;/P&gt;
&lt;H2&gt;Troubleshooting&lt;/H2&gt;
&lt;H3&gt;Common Issues&lt;/H3&gt;
&lt;H4&gt;1. Authentication Problems&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Verify managed identity is assigned to HANA VMs&lt;/LI&gt;
&lt;LI&gt;Check Azure role assignments&lt;/LI&gt;
&lt;LI&gt;Ensure proper permissions on target application server VMs&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;2. Network Connectivity&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Validate outbound access to Azure API endpoints&lt;/LI&gt;
&lt;LI&gt;Check firewall rules and network security groups&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;3. Azure Linux Agent Issues&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Verify agent status:&amp;nbsp;systemctl status waagent&lt;/LI&gt;
&lt;LI&gt;Check agent logs:&amp;nbsp;/var/log/waagent.log&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Log Analysis&lt;/H3&gt;
&lt;H4&gt;View Resource Agent Logs&lt;/H4&gt;
&lt;LI-CODE lang="shell"&gt;sudo grep -i 'azure-sap-zone' /var/log/pacemaker/pacemaker.log&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Common Log Patterns&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Phase transitions&lt;/STRONG&gt;: Look for "current_phase" changes&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;API errors&lt;/STRONG&gt;: Search for "Azure API" or "HTTP" error codes&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Timeout issues&lt;/STRONG&gt;: Check for "timeout" or "wait_time exceeded"&lt;/LI&gt;
&lt;/UL&gt;
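&lt;P&gt;To illustrate, the patterns above can be combined into a single search. The sample log lines below are hypothetical; on a real node, run the same grep against /var/log/pacemaker/pacemaker.log instead:&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;# Hypothetical sample entries illustrating the three patterns
printf '%s\n' \
  'node1 azure-sap-zone: current_phase=stop_passive_app_servers' \
  'node1 azure-sap-zone: Azure API call failed: HTTP 429' \
  'node1 azure-sap-zone: wait_time exceeded while starting an app server VM' |
grep -Eic 'current_phase|Azure API|HTTP [0-9]{3}|timeout|wait_time exceeded'
# Prints 3: one phase transition, one API error, one timeout&lt;/LI-CODE&gt;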
&lt;H3&gt;Performance Monitoring&lt;/H3&gt;
&lt;P&gt;Monitor the following metrics:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Phase execution times&lt;/STRONG&gt;: Should complete within configured&amp;nbsp;wait_time&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;API response times&lt;/STRONG&gt;: Azure API calls should be &amp;lt; 30 seconds&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;VM startup times&lt;/STRONG&gt;: Application server boot time affects total failover duration&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;FAQ&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Q: Can I use this with SAP Java systems?&lt;/STRONG&gt;&amp;nbsp;A: No, this resource agent currently only supports SAP ABAP systems on HANA scale-up configurations.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q: What HANA resource agents are supported?&lt;/STRONG&gt;&amp;nbsp;A: The agent supports both SAPHanaSR and SAPHanaSR-angi (A Next Generation Interface) resource agents and automatically detects which one is in use.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q: What happens if an application server VM fails to start?&lt;/STRONG&gt;&amp;nbsp;A: The resource agent will retry based on the&amp;nbsp;retry_count&amp;nbsp;parameter and eventually fail the phase if the VM doesn't start within the&amp;nbsp;wait_time.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q: Can I run this in a multi-SID environment?&lt;/STRONG&gt;&amp;nbsp;A: No, multi-SID environments are not currently supported.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q: Can I use different SIDs for SAP and HANA?&lt;/STRONG&gt;&amp;nbsp;A: Yes, use the&amp;nbsp;hana_sid&amp;nbsp;parameter if your HANA SID differs from your SAP SID.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q: Can I use system-assigned managed identity instead of user-assigned?&lt;/STRONG&gt;&amp;nbsp;A: Yes, simply omit the&amp;nbsp;client_id&amp;nbsp;parameter and ensure system-assigned managed identity is enabled on both HANA VMs with appropriate permissions.&lt;/P&gt;
&lt;H2&gt;Important Notes&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Resource State Management&lt;/STRONG&gt;&amp;nbsp;Upon completion, the cluster attribute&amp;nbsp;azure_sap_zone_current_phase&amp;nbsp;is set to&amp;nbsp;all_phases_completed. The resource agent will not take further action until restarted.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Maintenance Operations&lt;/STRONG&gt; When performing maintenance on application servers in different zones (e.g., patching), put the resource in maintenance mode to prevent the agent from taking action:&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;# SLES
sudo crm resource maintenance cln_azure-sap-zone on&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Zone Alignment&lt;/STRONG&gt;&amp;nbsp;This solution requires identical SAP application server VMs in both availability zones. Only one set should be active at any time.&lt;/P&gt;
&lt;H2 data-line="738"&gt;Wrapping up&lt;/H2&gt;
&lt;P data-line="740"&gt;With the information in this post, you should have what you need to evaluate the Azure SAP Zone Resource Agent in your environment - from setting up the managed identity and permissions, to installing the agent, configuring the cluster, and troubleshooting common issues. If you haven't already, we recommend reading&amp;nbsp;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/sapapplications/aligning-sap-application-servers-with-the-hana-primary-zone-on-azure-public-prev/4490925" target="_blank" rel="noopener" data-lia-auto-title="Part 1" data-lia-auto-title-active="0"&gt;Part 1&lt;/A&gt;&amp;nbsp;for an introduction to the concepts and features behind this solution.&lt;/P&gt;
&lt;P data-line="742"&gt;We welcome your feedback during this public preview. If you encounter issues or have suggestions, please file them via GitHub Issues on the&amp;nbsp;&lt;A href="https://github.com/ClusterLabs/resource-agents" target="_blank" rel="noopener" data-href="https://github.com/ClusterLabs/resource-agents"&gt;ClusterLabs resource-agents repository&lt;/A&gt;.&lt;/P&gt;
&lt;H2 data-line="746"&gt;Public preview expectations&lt;/H2&gt;
&lt;P data-line="748"&gt;During public preview:&lt;/P&gt;
&lt;UL data-line="749"&gt;
&lt;LI data-line="749"&gt;This solution is provided as a Public Preview for evaluation and feedback.&lt;/LI&gt;
&lt;LI data-line="750"&gt;It is not covered by a formal support commitment.&lt;/LI&gt;
&lt;LI data-line="751"&gt;The design, configuration, and behaviors may evolve based on learnings.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="753"&gt;Because of that, we recommend using this in non-production environments while it is in preview.&lt;/P&gt;
&lt;P data-line="755"&gt;If you're interested in piloting the preview, your feedback will help shape what becomes generally available and supported.&lt;/P&gt;
&lt;H2 data-line="757"&gt;Disclaimer&lt;/H2&gt;
&lt;P data-line="759"&gt;This post describes a public preview capability. It is shared for informational purposes only and is subject to change. It is not a substitute for your organization's validation, testing, and operational readiness reviews.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 28 Apr 2026 14:15:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-sap-zone-resource-agent-public-preview-technical-deep-dive/ba-p/4490935</guid>
      <dc:creator>sanoopt</dc:creator>
      <dc:date>2026-04-28T14:15:00Z</dc:date>
    </item>
    <item>
      <title>SAP + Microsoft 365: A Unified AI Experience That Works Where You Work</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-microsoft-365-a-unified-ai-experience-that-works-where-you/ba-p/4480380</link>
      <description>&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Coauthors: Angel Zhu, Senior Product Manager, M365 Copilot Agent Ecosystem and Christoph Ruehle, Principal Product Manager, Joule/SAP Business AI at SAP&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We’ve all experienced it: you’re reviewing data, a message pings, an email comes in, and suddenly you’re juggling inboxes, chats, and screens just to finish one workflow. Those small interruptions feel routine — yet employees switch between apps over 1,200 times per day, adding up to weeks of lost productivity each year &lt;EM&gt;(estimate based on productivity and context-switching research, Harvard Business Review, 2022)&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;It’s not the work that’s slowing us down. It’s the constant friction of moving between where the data lives and where communication happens.&lt;/P&gt;
&lt;P&gt;That’s why, building on SAP and Microsoft’s long-standing partnership, the new bi-directional integration between Joule and Microsoft 365 Copilot —&amp;nbsp;&lt;STRONG&gt;now generally available &lt;/STRONG&gt;— is designed to help people stay focused and get more done in the tools they already use.&lt;/P&gt;
&lt;P&gt;By bringing SAP business context and Microsoft 365 collaboration together into a unified experience, work can now keep flowing wherever it starts. Customers are already seeing an impact.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;“The integration of Microsoft&amp;nbsp;365&amp;nbsp;Copilot and Joule is central to Vodafone’s vision of creating an end-to-end, AI-enabled user experience,”&amp;nbsp;&lt;STRONG&gt;said Andrea Schiavi, Vodafone, AI Product Lead&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;"By unifying SAP business data with the Microsoft 365 productivity tools, we will create a seamless, intelligent workflow that accelerates decision making and boosts productivity. It will elevate our agentic assistant, AskHR, with shared context across apps so it can guide employees more effectively and resolve tasks faster.”&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;One AI that follows the work — not the other way around&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;When you’re in Microsoft 365,&amp;nbsp;&lt;STRONG&gt;Copilot&lt;/STRONG&gt;&amp;nbsp;can access SAP processes and data through Joule.&lt;BR /&gt;When you’re in SAP,&amp;nbsp;&lt;STRONG&gt;Joule&lt;/STRONG&gt;&amp;nbsp;can draw on information from Microsoft 365 through Copilot. No switching apps, just continuity.&lt;/P&gt;
&lt;P&gt;For example, a finance manager working in Microsoft Excel can use Microsoft 365 Copilot and Joule to review a purchase order directly in the spreadsheet. &amp;nbsp;Joule retrieves real-time SAP business data behind the scenes, enabling a confident decision — without opening another screen.&lt;/P&gt;
&lt;P&gt;Later, reviewing that purchase order inside&amp;nbsp;SAP, the same finance manager can ask&amp;nbsp;Joule&amp;nbsp;whether any recent&amp;nbsp;emails or Teams messages&amp;nbsp;contain information that should be considered before confirming the approval. Copilot brings that context into SAP automatically, so nothing gets missed.&lt;/P&gt;
&lt;P&gt;Wherever the task begins,&amp;nbsp;&lt;STRONG&gt;the right assistant shows up with the context needed to finish it&lt;/STRONG&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How it works&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;In Microsoft 365 Copilot&lt;/STRONG&gt; — within Teams, Word, PowerPoint, Excel, and OneNote: ask &lt;STRONG&gt;Joule&lt;/STRONG&gt;&amp;nbsp;to instantly access SAP insights and complete SAP tasks — with prompts, feedback, and citations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;In SAP apps&lt;/STRONG&gt;: ask&amp;nbsp;&lt;STRONG&gt;Joule&lt;/STRONG&gt;&amp;nbsp;your question; when Microsoft 365 context is relevant, Joule automatically routes to&amp;nbsp;&lt;STRONG&gt;Microsoft 365 Copilot&lt;/STRONG&gt;&amp;nbsp;— bringing in data from email, Teams, SharePoint, or OneDrive.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;No new interface to learn. The right assistant answers based on where you are.&amp;nbsp;&lt;A href="https://aka.ms/AgentShowcaseSAP" target="_blank" rel="noopener"&gt;See Joule and Microsoft 365 Copilot integration in action!&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why this matters and why it’s here now&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Work has evolved. SAP remains the trusted source of business truth, and Microsoft 365 is where collaboration happens. Bringing AI assistance across both environments simply helps people work the way they already do — with fewer interruptions, faster decisions, and the confidence that every action is grounded in the right business and communication context.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get started today&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To begin using the Joule and Microsoft 365 Copilot integration, start by downloading the &lt;A href="https://marketplace.microsoft.com/en-us/product/office/wa200008645?tab=overview" target="_blank" rel="noopener"&gt;SAP Joule app from Microsoft Marketplace&lt;/A&gt; and configuring the connection between SAP Cloud Identity Services on SAP BTP and Microsoft Entra.&lt;/P&gt;
&lt;P&gt;You'll need Joule Base and a Microsoft 365 license to enable the integration.&amp;nbsp;&lt;STRONG&gt;A Microsoft 365 Copilot license is required only when using Copilot skills within Joule&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;To learn more about the Joule and Microsoft 365 Copilot integration, visit the &lt;A href="https://discovery-center.cloud.sap/ai-feature/4dfa3fea-c5d2-40e3-959d-317b07b6b64e/" target="_blank" rel="noopener"&gt;SAP Discovery Center&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Feb 2026 18:26:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-microsoft-365-a-unified-ai-experience-that-works-where-you/ba-p/4480380</guid>
      <dc:creator>ssanjay27</dc:creator>
      <dc:date>2026-02-09T18:26:20Z</dc:date>
    </item>
    <item>
      <title>M-Series Sets a New Remote Storage on Mbv4 Demo</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/m-series-sets-a-new-remote-storage-on-mbv4-demo/ba-p/4470773</link>
      <description>&lt;P&gt;As we released &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/memory-optimized/mbsv3-series?tabs=sizebasic" target="_blank" rel="noopener"&gt;Mbv3 Series (Mbsv3 and Mbdsv3 Series)&lt;/A&gt;last year, demonstrating &lt;STRONG&gt;650K IOPS and 10GBps throughput&lt;/STRONG&gt; of remote disk storage with Premium SSD v2 and Ultra Disk on our 2-socket Azure Boost platform—already a major milestone for cloud performance, now, we’re thrilled to announce to the community another leap forward with the coming &lt;STRONG&gt;Standard_M304bs_4_v4&lt;/STRONG&gt; VM size from new Mbv4 Series on the&amp;nbsp;&lt;STRONG data-processed="true"&gt;6th generation Intel® Xeon® Scalable processors, &lt;/STRONG&gt;which delivers &lt;STRONG data-start="213" data-end="232"&gt;20% higher IOPS&lt;/STRONG&gt; and &lt;STRONG data-start="237" data-end="259"&gt;60% more bandwidth &lt;STRONG&gt;throughput &lt;/STRONG&gt;&lt;/STRONG&gt;than Mbv3, reaching &lt;STRONG data-start="270" data-end="283"&gt;780K IOPS&lt;/STRONG&gt; and &lt;STRONG data-start="288" data-end="310"&gt;16 GBps throughput &lt;/STRONG&gt;on remote storage. Below is the demo result from our lab for this new VM series,&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P data-start="107" data-end="477"&gt;The new &lt;STRONG&gt;Mbv4 series&lt;/STRONG&gt;, built on Azure Boost, delivers exceptional IOPS and throughput to power the most &lt;STRONG&gt;mission-critical enterprise workloads&lt;/STRONG&gt;. As part of the Azure M-series portfolio, Mbv4 is designed for memory-intensive and storage–intensive workloads, making it an ideal choice for relational databases, large-scale analytics, and other mission-critical data workloads. The Standard_M304bs_4_v4 preview is coming soon. Stay tuned to this blog site for the latest updates and announcements.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Nov 2025 21:44:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/m-series-sets-a-new-remote-storage-on-mbv4-demo/ba-p/4470773</guid>
      <dc:creator>MingJiong_Zhang</dc:creator>
      <dc:date>2025-11-20T21:44:41Z</dc:date>
    </item>
    <item>
      <title>Designing, Migrating and Managing a 15+1-Node SAP BW Scale-Out Landscape on Microsoft Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/designing-migrating-and-managing-a-15-1-node-sap-bw-scale-out/ba-p/3715003</link>
      <description>&lt;P class="lia-align-justify"&gt;This blog outlines the implementation of SAP BW Scale-Out with 15+1 nodes using virtual machines on the Azure platform, representing one of the early and pioneering examples of SAP BW at this scale on a hyperscale public cloud.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;It also highlights the technical considerations and work carried out by the Microsoft Customer &amp;amp; Partners team to understand and validate the performance characteristics of SAP BW, both on-premises and on Azure.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The underlying platform used memory-optimised Mv2-Series virtual machines to support large in-memory databases and demanding workloads. Specifically, the landscape comprised 16 × M416s_v2 (416 vCPU / 5.7 GiB memory), architected across:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Database nodes&lt;/LI&gt;
&lt;LI&gt;(A)SCS nodes&lt;/LI&gt;
&lt;LI&gt;Application server nodes&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;The customer had previously operated SAP HANA on-premises with 20 nodes [18+2 scale-out] and decided to move critical business systems—including SAP BW, SAP Warehouse Management and SAP IQ (Near-Line Storage)—to Azure as part of a data centre exit.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;As part of this migration, the partner proposed the modernisation of selected business processes to take advantage of Azure-native architecture components and improve the end-user experience, including:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Increased system availability by deploying the SAP system across Azure zones within the region to enhance the availability SLA.&lt;/LI&gt;
&lt;LI&gt;Automated failover of access points using Azure Standard Load Balancer.&lt;/LI&gt;
&lt;LI&gt;Optimisation of the scale-out setup using Azure NetApp Files.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;The move to Azure not only delivered high availability, it also improved how database connectivity is managed, removing the dependency on DNS for routing to the HANA database.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Implementing SAP BW Scale-Out for Very Large Databases (VLD) can be time-consuming, especially where there is limited prior experience at this scale. The architecture required careful review of critical design aspects, and the target design was tested in a lower environment to validate all technical tests. During this process, important insights were gained, particularly around Load Balancer configuration and the fine-tuning needed to align with customer business expectations.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;1. Source Landscape: On-Premises (AS-IS)&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;The on-premises SAP BW landscape operated with 20 nodes [18+2] to support OLAP workloads.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For any migration programme, it is essential to understand the database growth pattern to inform a 3–5-year growth plan. Unlike on-premises environments, the cloud does not require pre-allocation of infrastructure for a fixed growth trajectory. However, understanding growth trends and resource usage remains vital to:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Design an appropriate target solution.&lt;/LI&gt;
&lt;LI&gt;Avoid subsequent minor or major projects just to handle unexpected growth.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;For this system:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;The average peak consumption was assumed to be approximately 800,000 SAPS.&lt;/LI&gt;
&lt;LI&gt;The overall measured peak reached around 993,480 SAPS.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;Workspace is another critical factor when projecting memory usage and growth for SAP systems. Typically, customers estimate workspace at 50% or 70% on top of the data and code footprint.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;2. Key AS-IS Information&lt;/H2&gt;
&lt;H3 class="lia-align-justify"&gt;2.1 User Load&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Concurrent users must be taken into account, along with any expected changes as part of the migration roadmap. Additional capacity should be considered if significant changes in user load are anticipated on the target platform.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The User Activity view (diagram not shown here) illustrates how users interact with the system over time:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Total Users – total number of users who logged on during one week.&lt;/LI&gt;
&lt;LI&gt;Active Users – users who performed more than 400 transaction steps in one week.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="lia-align-justify"&gt;2.2 System Performance&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Monitoring the size and growth of the HANA database is crucial for ongoing system stability and performance.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;2.3 Log Throughput Requirement / Usage on the On-Premises System&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;One of the critical design considerations is the performance of the on-premises system, as this becomes the baseline for designing the target environment.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;3. Target Architecture: 15+1 Scale-Out on Azure (To-Be)&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;The target architecture is based on SAP BW Scale-Out with 15+1 nodes in each Azure zone:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;SAP BW Scale-Out 15+1 using M416s_v2 in Zone 1.&lt;/LI&gt;
&lt;LI&gt;SAP BW Scale-Out 15+1 using M416s_v2 in Zone 2.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-align-justify"&gt;To provide a consistent access experience across zones:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;A zone-redundant Standard Load Balancer is used to maintain a single point of access.&lt;/LI&gt;
&lt;LI&gt;Two frontend IP addresses are configured, with two backend pools for Production and DR nodes respectively.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 class="lia-align-justify"&gt;4. Key Target Design Consideration for Scale-Out&lt;/H2&gt;
&lt;H3 class="lia-align-justify"&gt;4.1 SAP HANA Cloud Measurement Tool (HCMT)&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;SAP mandates execution of the HANA Cloud Measurement Tool (HCMT) as part of validating the configuration.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;4.2 Compute – SAP on Azure Certification&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;The Azure compute platform (hardware) is required to have a valid SAP HANA hardware certification at the point of deployment. The selected SKU must be listed in the Certified and Supported SAP HANA® Hardware Directory.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Microsoft Cloud offers multiple SKUs that are certified by SAP to run SAP and HANA workloads. The same directory maintains certified hardware SKUs across all providers. SAP HANA, as an in-memory database, must meet specific certification criteria to be supported by SAP. The Microsoft engineering team works closely with SAP to bring new SKUs into the list of Microsoft Cloud hardware supported for HANA workloads.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For HANA workloads, M-Series (Mv1 &amp;amp; Mv2) are preferred SKUs, though Microsoft also offers E-Series and HLI [HANA Large Instances], which are supported for HANA workloads.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this design:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Compute: 16 × M416s_v2 in Zone 1 and 16 × M416s_v2 in Zone 2.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="lia-align-justify"&gt;4.3 Network Considerations&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Network configuration at both the OS and virtual network (VNET) layers plays a critical role in achieving the required throughput and latency between components. For Scale-Out, there must be additional focus on host-based routing and the selection of the right NIC for HSR (HANA System Replication).&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The Load Balancer configuration is designed to maintain the same logical hostname for applications and third parties connecting to the database, regardless of the database location across zones.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;There are key differences between Scale-Up and Scale-Out:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;In Scale-Up, the entire database runs on a single VM/SKU.&lt;/LI&gt;
&lt;LI&gt;In Scale-Out, the database is split and distributed across multiple VMs/SKUs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;The communication between VMs/SKUs in a scale-out configuration directly affects database performance. It is therefore highly recommended to have a dedicated NIC and subnet to support internode traffic.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Another key consideration is compute-to-storage communication. To ensure direct connectivity from compute to storage, host-based routing is recommended and is one of the design aspects to meet SAP HANA KPI targets during HCMT execution.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Different connection types can be summarised as:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;Internode traffic (scale-out communication)&lt;/STRONG&gt; – recommended to use a dedicated NIC and subnet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Compute–storage traffic&lt;/STRONG&gt; – should use host-based routing to reach storage directly.&lt;/LI&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;Client connection / user traffic.&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Additional traffic, depending on configuration, which can be merged with either client or internode traffic based on system requirements during peak periods and should be reviewed during performance and stress testing before go-live.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4 class="lia-align-justify"&gt;4.3.1 Network layout&lt;/H4&gt;
&lt;P class="lia-align-justify"&gt;Three subnets created:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;One for Client Network (NIC)&lt;/LI&gt;
&lt;LI&gt;One for Inter-Node communication &amp;amp; HSR&lt;/LI&gt;
&lt;LI&gt;One for Storage Network (NIC)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;Additionally, a delegated Azure NetApp Files (ANF) subnet.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;4.4 Azure Standard Load Balancer with Scale-Out Design&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;A special requirement from the customer was to manage third-party connections to the HANA database across zones.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;To ensure seamless connectivity from third-party systems regardless of the active zone:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;An Azure Standard Load Balancer is configured in front of the scale-out nodes across zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;This Load Balancer:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Handles connections to the HANA database.&lt;/LI&gt;
&lt;LI&gt;Supports the DR failover scenario, maintaining connectivity across zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="lia-align-justify"&gt;4.5 Storage Considerations&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Storage selection is simplified by the fact that only Azure NetApp Files is supported for scale-out configurations with a standby node. Scale-out without a standby node provides more options.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In all cases, storage must be configured to achieve the required IOPS and throughput without driving cost up unnecessarily.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For scale-out with a standby node, Azure NetApp Files is the only supported storage as of June 2022. Alongside this, Azure NetApp Files provides several features that should be carefully evaluated in the target design.&lt;/P&gt;
&lt;H4 class="lia-align-justify"&gt;4.5.1 ANF Storage Tier&lt;/H4&gt;
&lt;P class="lia-align-justify"&gt;Ultra / Premium tiers are used as appropriate.&lt;/P&gt;
&lt;H4 class="lia-align-justify"&gt;4.5.2 ANF Features&lt;/H4&gt;
&lt;P class="lia-align-justify"&gt;Application Volume Group (AVG): The Application Volume Group for SAP HANA enables customers and partners to deploy all volumes required to install and operate an SAP HANA database according to best practices in a single, optimised workflow. It includes the use of Proximity Placement Group (PPG) with VMs to achieve automated, low-latency deployments.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Manual QoS: With manual QoS volumes, customers and partners do not need to overprovision volume quota to achieve higher throughput, because throughput can be assigned to each volume independently. Total available throughput is defined at the capacity pool level and depends on the size and type of storage.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Dynamic Tiering: Azure NetApp Files provides three performance tiers: Standard, Premium and Ultra. Dynamic Tiering allows customers and partners to use a higher service level for better performance or a lower service level for cost optimisation without waiting time.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;ANF Files Backup: The ANF backup feature allows customers and partners to offload Azure NetApp Files snapshots to Azure Blob Storage in a fast and cost-effective way, further protecting data from accidental deletion.&lt;/P&gt;
&lt;H4 class="lia-align-justify"&gt;4.5.3 Selected Layout&lt;/H4&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;ANF Ultra tier selected for HANA data, log and shared mount points.&lt;/LI&gt;
&lt;LI&gt;ANF Premium selected to host offline transaction log backups.&lt;/LI&gt;
&lt;LI&gt;Azure Blob Storage used to store AzAcSnap HANA data snapshots and offline transaction log backups.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="lia-align-justify"&gt;4.6 Backup and Restore Approach&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Backup and restore runtime can be a critical blocker if it does not meet business RPO/RTO requirements.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;With Azure NetApp Files, most customers implement the AzAcSnap tool to manage HANA database snapshots, followed by AzCopy to transfer snapshots to Blob Storage for long-term retention.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this design:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;AzAcSnap is defined as the primary backup, running every 6 hours.&lt;/LI&gt;
&lt;LI&gt;The first backup is transferred to Blob Storage for long-term retention.&lt;/LI&gt;
&lt;LI&gt;Transaction log backups run every 15 minutes and are written to an ANF Premium mount.&lt;/LI&gt;
&lt;LI&gt;AzCopy jobs then transfer these backups to Blob for long-term retention.&lt;/LI&gt;
&lt;LI&gt;A dedicated server is used to manage AzCopy transfers from ANF volumes to Blob Storage as a temporary measure until ANF Files backup is available in the required region.&lt;/LI&gt;
&lt;/UL&gt;
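&lt;P&gt;The schedule above can be sketched as a crontab on the dedicated backup server. This is a hypothetical example; the SID, paths, snapshot prefix, retention count and storage account are placeholders:&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;# AzAcSnap HANA data snapshot every 6 hours (placeholder prefix and retention)
0 */6 * * * azacsnap -c backup --volume data --prefix bw_data --retention 9
# Transfer transaction log backups from the ANF Premium mount to Blob Storage (placeholder paths)
30 * * * * azcopy copy "/hana/logbackups/&amp;lt;SID&amp;gt;/*" "https://&amp;lt;account&amp;gt;.blob.core.windows.net/logbackups" --recursive&lt;/LI-CODE&gt;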
&lt;H3 class="lia-align-justify"&gt;4.6 Run Operations and Monitoring&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Run, or Business As Usual (BAU), marks the transition from the migration programme to the support project. It is critical that monitoring and configuration are in place to capture alerts and collect sufficient logs for investigation during issue resolution.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Key elements include:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Proper configuration of monitoring and kdump to ensure logs and dumps are available to analyse unforeseen issues related to the OS or SKU.&lt;/LI&gt;
&lt;LI&gt;Use of Zabbix together with Azure Monitor for Virtual Machines for ongoing monitoring.&lt;/LI&gt;
&lt;LI&gt;Kdump configured and enabled on all VMs to capture critical information for troubleshooting unexpected issues.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 class="lia-align-justify"&gt;5. New: Azure NetApp Files Flexible service level (2025 update)&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;Since this project was originally designed, Azure NetApp Files has introduced a new Flexible service level, which is particularly relevant for SAP BW and SAP HANA workloads on Azure.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;5.1 What is the Flexible service level?&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;The Flexible service level is a new Azure NetApp Files throughput service level that decouples throughput from capacity. It is available for new manual QoS capacity pools. You configure pool throughput (MiB/s) and capacity (TiB) independently instead of being bound to a fixed MiB/s-per-TiB ratio.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This makes it easier to right-size storage for high-throughput, low-capacity workloads (for example, HANA log volumes) and high-capacity, moderate-throughput workloads (for example, BW cold data or shared file systems).&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;5.2 128 MiB/s baseline throughput at no extra charge&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;A key benefit of the Flexible service level is the included baseline throughput:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;The minimum throughput you can assign to a Flexible capacity pool is 128 MiB/s, regardless of pool size.&lt;/LI&gt;
&lt;LI&gt;The first 128 MiB/s of throughput is included in the service level—often referred to as the baseline throughput—and is available at no additional performance surcharge.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;In practice, this means every Flexible capacity pool you create automatically includes 128 MiB/s of throughput, and you only pay for any additional throughput configured beyond that baseline.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;5.3 Throughput scaling for demanding workloads&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;With Flexible service level, throughput can scale significantly. The maximum throughput is documented as up to 640 MiB/s per TiB per pool, with an upper bound defined as 5 × 128 MiB/s × pool size (TiB). Throughput can be increased when needed (for example, during peak loads or migration cutovers) and reduced later, subject to a documented cool-down period between downward adjustments.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This flexibility is especially useful for SAP BW and SAP HANA systems with variable peak and off-peak windows, and for migration phases where temporary higher throughput is required for data loads, initial syncs or cutovers, with the option to optimise cost afterwards.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;6. Where to learn more&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-service-levels?utm_source=chatgpt.com" target="_blank" rel="noopener"&gt;Service levels for Azure NetApp Files – detailed description of Standard, Premium, Ultra and Flexible service levels, including throughput formulas and examples&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-netapp-files/whats-new?utm_source=chatgpt.com" target="_blank" rel="noopener"&gt;What’s new in Azure NetApp Files – latest feature announcements and regional availability updates&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-service-levels#flexible-examples" target="_blank" rel="noopener"&gt;Service levels for Azure NetApp Files | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-solution-architectures#sap-hana" target="_blank" rel="noopener"&gt;Azure NetApp Files solution architectures for SAP HANA – reference architectures and best practices for SAP HANA on ANF&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 Dec 2025 11:36:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/designing-migrating-and-managing-a-15-1-node-sap-bw-scale-out/ba-p/3715003</guid>
      <dc:creator>jitendrasingh</dc:creator>
      <dc:date>2025-12-04T11:36:18Z</dc:date>
    </item>
    <item>
      <title>Azure delivers the first cloud VM with Intel Xeon 6 and CXL memory - now in Private Preview</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-delivers-the-first-cloud-vm-with-intel-xeon-6-and-cxl/ba-p/4470067</link>
<description>&lt;P&gt;Intel released their new Intel Xeon 6 6500/6700 series processors with P-cores this year. Intel Xeon 6 processors deliver outstanding performance and scalability for transactional and analytical workloads, with scale-up capacities of up to 64TB of memory.&lt;/P&gt;
&lt;P&gt;In addition, Intel Xeon 6 supports the new &lt;A href="https://community.intel.com/t5/Blogs/Tech-Innovation/Data-Center/Breaking-the-Memory-Wall-with-Compute-Express-Link-CXL/post/1594848" target="_blank" rel="noopener"&gt;Compute Express Link (CXL) &lt;/A&gt;standard that enables memory expansion to &lt;STRONG&gt;accommodate larger data sets in a cost-effective manner&lt;/STRONG&gt;. CXL Flat Memory Mode is a unique Intel Xeon 6 capability that enhances the ability to right-size the compute-to-memory ratio and improve scalability without sacrificing performance. This enhanced ability can help run SAP S/4HANA more efficiently and help enable greater flexibility for configurations so they can better align with business needs and improve the total cost of ownership.&lt;/P&gt;
&lt;P&gt;In collaboration with SAP and Intel, Microsoft is delighted to announce the private preview of CXL technology on the Azure M-series family of VMs. We believe that, combined with the advancements in the new Intel Xeon 6 processors, it can tackle the challenge of managing the growing volume of data in SAP software, meet the increased demand for faster compute performance and reduce overall TCO.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-20"&gt;Stefan Bäuerle, SVP, Head of BTP, HANA &amp;amp; Persistency at SAP noted:&lt;/SPAN&gt;&lt;/STRONG&gt; &lt;BR /&gt;&lt;EM&gt;“Intel Xeon 6 helps deliver system scalability to support the growing demand for high-performance computing and growing database capacity among SAP customers.”&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-20"&gt;Elyse Ge Hylander, Senior Director, Azure SAP Compute stated:&lt;/SPAN&gt;&lt;/STRONG&gt; &lt;BR /&gt;&lt;EM&gt;“At Microsoft, we are continually exploring new technological innovations to improve our customer experience. We are thrilled about the potential of Intel’s new Xeon 6 processors with CXL and Flat Memory Mode. This is a big step forward to deliver the next-level performance, reliability, and scalability to meet the growing demands of our customers.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-20"&gt;&lt;STRONG&gt;Bill Pearson, Vice President of Data Center and Artificial Intelligence at Intel states:&lt;/STRONG&gt;&lt;/SPAN&gt; &lt;BR /&gt;&lt;EM&gt;“Intel Xeon 6 represents a significant advancement for Intel, opening up exciting business opportunities to strengthen our collaboration with Microsoft Azure and SAP. The innovative instance architecture featuring CXL Flat Memory Mode is designed to enhance cost efficiency and performance optimization for SAP software and SAP customers.”&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you are interested in joining our CXL private preview in Azure, contact&amp;nbsp;&lt;A href="mailto:Mseries_CXL_Preview@microsoft.com" target="_blank" rel="noopener"&gt;Mseries_CXL_Preview@microsoft.com&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;###&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Co-author:&lt;/STRONG&gt; &lt;SPAN data-teams="true"&gt;Phyllis Ng - Senior Director of Hardware Strategic Planning (Memory and Storage) - Microsoft&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Nov 2025 22:20:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-delivers-the-first-cloud-vm-with-intel-xeon-6-and-cxl/ba-p/4470067</guid>
      <dc:creator>Elyse_Ge_Hylander</dc:creator>
      <dc:date>2025-11-20T22:20:32Z</dc:date>
    </item>
    <item>
      <title>SAP on Azure Product Announcements Summary – SAP TechEd 2025</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-teched-2025/ba-p/4465383</link>
      <description>&lt;P&gt;Today at SAP TechEd 2025, we are excited to share the next evolution of the Microsoft-SAP partnership. Building on decades of collaboration, we continue to advance RISE with SAP on Azure and deepen integrations with SAP S/4HANA Cloud public edition. Our latest innovations deliver enhanced security for SAP and non-SAP workloads, while unified analytics and AI-driven Copilot experiences empower customers to make smarter decisions.&lt;/P&gt;
&lt;P&gt;These advancements are designed to help customers accelerate their digital transformation, drive operational excellence, and unlock new business value.&lt;/P&gt;
&lt;H4&gt;Customer Spotlight: Medline&lt;/H4&gt;
&lt;P&gt;Medline’s SAP transformation on Microsoft Azure is &lt;A href="https://www.microsoft.com/en/customers/story/25243-medline-azure" target="_blank" rel="noopener"&gt;fueling new levels of agility and intelligence across its operations with SAP on Azure&lt;/A&gt;. The company’s migration boosted system resilience, improved key SAP workload transaction times by more than 80% and enabled real-time collaboration and predictive analytics for clinicians and business users - laying the groundwork to extend these insights through Copilot and Azure AI.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM class="lia-align-center"&gt;“When we partnered on the migration, it ushered in a completely new way in which Microsoft and Medline work together. It became a partnership, with the cloud migration becoming a stepping stone to bigger and brighter, more business-outcome–driven engagements.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM class="lia-align-center"&gt;— Jason Kaley, SVP, IT Operations &amp;amp; Architecture, Medline &lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H4&gt;Customer Spotlight: Commerz Real&lt;/H4&gt;
&lt;P&gt;Commerz Real, a German financial services firm specializing in real estate, infrastructure, and leasing, &lt;A href="https://www.microsoft.com/en/customers/story/25126-commerz-real-rise-with-sap" target="_blank" rel="noopener"&gt;modernized its SAP infrastructure by migrating its complete SAP landscape to SAP RISE on Azure&lt;/A&gt;. Built to address stringent regulatory, security, and performance demands, the platform delivers high scalability, real-time monitoring, and faster, more stable operations.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;“The decision to use Microsoft Azure was a deliberate one. In the past, security concerns and strict regulatory requirements kept us from moving SAP to the cloud. Today we say: If you don’t do that, you won’t survive in the market.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;— Nadine Felderer, Head of SAP Services, Commerz Real&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are pleased to announce additional SAP with Microsoft product updates and details to further help customers innovate on the most trusted cloud for SAP.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Bi-directional&lt;/STRONG&gt; agent-to-agent communication between &lt;STRONG&gt;Microsoft Copilot and SAP Joule&lt;/STRONG&gt;, plus enterprise-ready SAP API enablement for AI through &lt;STRONG&gt;MCP in Azure API Management&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;General availability of our &lt;STRONG&gt;agentless Sentinel for SAP&lt;/STRONG&gt; data connector, with significantly simpler onboarding through SAP Integration Suite.&lt;/LI&gt;
&lt;LI&gt;SAP &lt;STRONG&gt;S/4HANA Cloud public edition&lt;/STRONG&gt; support released for our Sentinel Solution for SAP.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Entra ID&lt;/STRONG&gt; advances SAP identity governance with new OAuth 2.0 support, &lt;STRONG&gt;SAP IAG&lt;/STRONG&gt; integration preview, and expanded &lt;STRONG&gt;SAP Access Control migration&lt;/STRONG&gt; for unified, secure access.&lt;/LI&gt;
&lt;LI&gt;Advanced support for &lt;STRONG&gt;High Availability with&lt;/STRONG&gt; &lt;STRONG&gt;SAP ASE (Sybase) database backup&lt;/STRONG&gt; on Azure Backup.&lt;/LI&gt;
&lt;LI&gt;SAP Deployment Automation Framework now supports &lt;STRONG&gt;highly available scale-out architectures with HANA System Replication&lt;/STRONG&gt; for large-scale resilient configurations.&lt;/LI&gt;
&lt;LI&gt;SAP Testing Automation Framework enhances high availability testing with &lt;STRONG&gt;offline Pacemaker cluster validation for RHEL/SUSE&lt;/STRONG&gt; and native Linux-based Quality Checks validation tooling.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enhanced SAP Inventory and Observability Dashboard&lt;/STRONG&gt; that reduces operational risk and supports production-ready SAP systems, along with a customizable Windows Quality Checks PowerShell template.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's dive into the summary details of product updates and services.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Extend, Innovate, and Secure&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H3&gt;&lt;STRONG&gt;Copilot Studio &lt;/STRONG&gt;&lt;STRONG&gt;and SAP Joule &lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Since the release of the Joule and Copilot integration earlier this year, we have seen great interest and adoption among customers and partners. The Joule-as-host integration is planned for release later this year. See &lt;A href="https://help.sap.com/docs/joule/integrating-joule-with-sap/integrating-joule-with-microsoft-365-copilot" target="_blank" rel="noopener"&gt;Integrating Joule with Microsoft 365 Copilot | SAP Help Portal&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For customers on their journey towards RISE and GROW, we also worked with the Azure API Management team to enable the exposure of SAP OData services from your SAP systems as an MCP server, which can then be consumed in Copilot using Microsoft Copilot Studio. This lets end users interact with their SAP system through any OData service. For more details, check out &lt;A href="https://learn.microsoft.com/en-us/azure/api-management/export-rest-mcp-server" target="_blank" rel="noopener"&gt;Expose REST API in API Management as MCP server&lt;/A&gt; and &lt;A href="https://www.youtube.com/watch?v=69L4UBLdi3g" target="_blank" rel="noopener"&gt;Copilot + SAP: Azure API Management, MCP and SAP OData&lt;/A&gt;.&lt;/P&gt;
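As an illustration of the kind of request such an MCP server fronts (not the MCP protocol itself), an OData query is composed from standard system query options such as $filter and $top; the host and service path below are placeholders, not a real endpoint:

```python
import urllib.parse


def build_odata_query(base_url, entity_set, flt=None, top=None):
    """Compose an OData query URL using standard system query options."""
    params = {}
    if flt:
        params["$filter"] = flt   # OData filter expression
    if top:
        params["$top"] = str(top) # limit the number of returned entities
    # Keep '$' literal in the query option names; encode spaces etc. as %XX.
    query = urllib.parse.urlencode(params, safe="$", quote_via=urllib.parse.quote)
    return f"{base_url.rstrip('/')}/{entity_set}" + (f"?{query}" if query else "")


# Placeholder host and service; SAP systems typically expose services under
# /sap/opu/odata/sap/<SERVICE_NAME>/ (service name here is invented).
url = build_odata_query(
    "https://sap.example.com/sap/opu/odata/sap/ZSALES_SRV",
    "SalesOrderSet",
    flt="GrossAmount gt 1000",
    top=5,
)
```

An MCP tool generated by API Management would issue requests of this shape on the user's behalf, with authentication handled by the gateway.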
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;To simplify the integration and help customers and partners get started faster, we are releasing preconfigured Copilot Studio Agent that can orchestrate over other agents like SAP, Fabric and Microsoft 365. Customers can use these agents out of the box or use them as a foundation to extend and build their own Copilot Agents.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Microsoft Security for SAP&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Security is being&amp;nbsp;&lt;STRONG&gt;reengineered for the AI era&lt;/STRONG&gt;&amp;nbsp;-&amp;nbsp;moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Attackers think in graphs -&amp;nbsp;Microsoft does too.&amp;nbsp;We are&amp;nbsp;bringing relationship-aware context to Microsoft Security suite -&amp;nbsp;so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SAP S/4HANA Cloud public edition&lt;/STRONG&gt; &lt;A href="https://aka.ms/s4-pc-sentinel-release-blog" target="_blank" rel="noopener"&gt;Add-on&lt;/A&gt; for Microsoft Sentinel for SAP (preview): Enables deep, native integration of SAP telemetry with Sentinel, bringing advanced threat detection, investigation, and response to SAP workloads running in the cloud.&lt;/LI&gt;
&lt;LI&gt;Microsoft Sentinel for SAP &lt;A href="https://aka.ms/agentless-ga-blog" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Agentless Data Connector&lt;/STRONG&gt;&lt;/A&gt;: Now generally available, the agentless connector significantly simplifies deployment while delivering secure, high-fidelity ingestion of SAP audit and application logs into Sentinel.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Expanded Security Guidance&lt;/STRONG&gt;: Enhanced guidance for Microsoft Defender, Ransomware Protection, and Cyber Defense for SAP, helping customers implement best practices for hardening SAP environments and responding to evolving threats.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost-Efficient Long-Term Log Storage&lt;/STRONG&gt;: Organizations can now take advantage of &lt;A href="https://learn.microsoft.com/azure/sentinel/datalake/sentinel-lake-overview" target="_blank" rel="noopener"&gt;Sentinel Data Lake&lt;/A&gt; to retain SAP logs for 12 years at scale for compliance (NIS2, DORA) and forensic use cases - at a fraction of traditional storage costs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Purview&lt;/STRONG&gt; is shipping its most-requested feature updates for our existing SAP connectors (SNC mode support in preview, CDS view support, and scoped metadata scanning), plus a new connector for BW/4HANA.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;SAP has reiterated that SAP Identity Management (SAP IDM) reaches end of maintenance at the end of 2027 and is collaborating with Microsoft so customers can migrate identity scenarios to Microsoft Entra ID as the recommended successor.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Provisioning backbone in place&lt;/STRONG&gt;: Microsoft Entra released &lt;A href="https://learn.microsoft.com/entra/identity/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial" target="_blank" rel="noopener"&gt;new features&lt;/A&gt; for the built‑in connector for SAP Cloud Identity Services (CIS) to support authentication with OAuth 2.0, and provisioning of groups to streamline authorization management in downstream SAP targets like SAP S/4HANA and SAP BTP, enabling HR‑driven, &lt;A href="https://community.sap.com/t5/technology-blog-posts-by-members/identity-and-access-management-with-microsoft-entra-part-iii-successfactors/ba-p/14233747" target="_blank" rel="noopener"&gt;end‑to‑end identity lifecycles&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Private Preview: Microsoft Entra Integration with SAP IAG&lt;/STRONG&gt;: The private preview for Microsoft Entra integration with SAP Identity Access Governance (IAG) is now underway. Selected customers are testing Entra ID Governance access packages that include SAP IAG roles as resources, routing of access approvals through SAP IAG, and provisioning of roles across both systems. &lt;A href="https://forms.cloud.microsoft/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-KNzaa8WIhKvUH7PBDqQsJUOTNWS0owTk5TTU9LVVM2UE1YRkdRV0NJOS4u&amp;amp;route=shorturl" target="_blank" rel="noopener"&gt;Sign-Up here&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enhanced Integration Scope with SAP Access Control (AC): &lt;/STRONG&gt;Driven by direct customer feedback, Microsoft and SAP are expanding the migration and integration scope to include SAP Access Control (AC). This enhancement will enable comprehensive access management, risk analysis, and policy enforcement on-premises, leveraging Microsoft Entra’s governance capabilities for improved security and compliance.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Together, these innovations give customers end-to-end visibility and protection across SAP landscapes—spanning public cloud, hybrid, and on-premises deployments.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;SAP on Azure Software Products and Services&amp;nbsp;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H3&gt;&lt;STRONG&gt;Azure Backup for SAP&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;We are committed to expanding backup support for additional SAP workloads. Following the &lt;A href="https://techcommunity.microsoft.com/blog/SAPApplications/sap-on-azure-product-announcements-summary-%E2%80%93-sap-sapphire-2025/4415281" target="_blank" rel="noopener"&gt;general availability of ASE backup&lt;/A&gt;, we have further enhanced its capabilities with the introduction of high availability configuration support. This enhancement delivers automatic backup support for SAP systems set up with Replication Server, ensuring seamless protection after failover or failback events without the need for manual intervention. As a result, users benefit from immediate and continuous data protection, along with a simplified restore process using a single backup chain.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have expanded our Snapshot backup capability for SAP HANA by adding Recovery Services Vault support. This helps customers store their snapshot backups with long-term retention while gaining protection from ransomware attacks. Vault support brings capabilities like immutability, soft delete, and multi-user authorization to further safeguard the data.&lt;/P&gt;
&lt;P&gt;We have also launched the preview of “Scale-out” configuration support for SAP HANA streaming backup, expanding our overall topology coverage.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;SAP Deployment Automation Framework&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;We are releasing updates to the SAP Deployment Automation Framework (SDAF) and SAP Testing Automation Framework (STAF) that expand testing coverage, improve reliability, and provide additional deployment flexibility for SAP environments on Azure.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;SAP Deployment Automation Framework (SDAF)&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;SDAF deployment and configuration scenarios now include scale-out architectures with HANA System Replication (HSR). This enhancement addresses resiliency requirements for large-scale deployments requiring multi-node scale-out configurations with built-in replication capabilities.&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;SDAF now supports GitHub Actions in addition to existing deployment methods including Azure DevOps pipelines, CLI scripts, and the WebApp interface. Organizations using GitHub for source control and infrastructure management can now deploy and manage SAP environments using their existing workflows and tooling preferences.&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;SAP Testing Automation Framework (STAF)&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;STAF now supports offline validation for SAP Pacemaker clusters. This capability enables testing of resource agent failover mechanisms without executing live cluster operations, reducing risk during validation cycles and allowing for pre-deployment verification of high availability configurations.&lt;/P&gt;
&lt;P&gt;The high availability testing suite has been updated to include SAPHanaSr-ANGI tests, ensuring compatibility with SUSE Linux Enterprise Server 15 and SAP HANA 2.0 SP5 environments. This update addresses the requirements of organizations running current SAP HANA releases on modern SUSE distributions.&lt;/P&gt;
&lt;P&gt;Configuration Checks, now in preview, is a rewrite of the open-source Quality Checks tool, integrated as a native capability within STAF. It validates SAP on Azure installations against Microsoft reference architecture and configuration guidance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Azure Center and Azure Monitor for SAP Solutions&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;We are pleased to share that Azure Center for SAP solutions (ACSS) is now available in &lt;STRONG&gt;Italy North&lt;/STRONG&gt;, providing end-to-end SAP workload management to more customers across Europe.&lt;/P&gt;
&lt;P&gt;Additionally, Azure Monitor for SAP solutions (AMS) is now available in&lt;STRONG&gt; Italy North.&lt;/STRONG&gt; AMS continues to help SAP customers reliably monitor their mission-critical workloads on Azure with comprehensive insights.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get started:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/center-sap-solutions/overview" target="_blank" rel="noopener"&gt;Azure Center for SAP solutions | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/monitor/about-azure-monitor-sap-solutions" target="_blank" rel="noopener"&gt;What is Azure Monitor for SAP solutions? | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/ACSSPortal" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Azure Center for SAP solutions Tools and Frameworks&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;We have refreshed our &lt;STRONG&gt;SAP on Azure Well-Architected Framework&lt;/STRONG&gt; and the accompanying &lt;STRONG&gt;SAP on Azure Assessment&lt;/STRONG&gt; to reflect the latest platform guidance. The update aligns with recent Azure innovations—including VMSS Flex, Premium SSD v2, Capacity Reservation Groups, Mv3-series, and NVMe-based SKUs—so architects and admins can plan and deploy with current best practices. The assessment is also now surfaced on the main Assessments hub for easier access and can be used as a repeatable checkpoint throughout your SAP deployment lifecycle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Quality Checks (PowerShell) for Windows:&lt;/STRONG&gt; We have published &lt;A href="https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/SAPOnAzureWindowsChecks" target="_blank" rel="noopener"&gt;a lightweight, read-only script&lt;/A&gt; for customers running SAP on Windows and SQL Server on Microsoft Azure. It performs post-provisioning health checks and outputs a color-coded HTML report plus JSON. Use it as a baseline template—customize the thresholds to your environment, and feel free to contribute enhancements to cover your configuration requirements.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Observability Dashboard:&lt;/STRONG&gt; Based on customer feedback, we have expanded the dashboard to surface design-impacting signals for running &lt;STRONG&gt;specialized workloads on Azure&lt;/STRONG&gt;. It now offers Overview, Security, Networking, and Inventory views, plus extended reports for managers and hands-on engineers. Updates make it easier to review VM redundancy, spot orphaned resources, see Capacity Reservation Groups with their associated VMs in the primary region, and count Public IPs on the Basic SKU—helping you stay on top of infrastructure hygiene and avoid unsupported configurations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;SAP + Microsoft Co-Innovations&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Microsoft and SAP are always working on new solutions to help our customers adapt and grow their businesses in areas including AI, Business Suite, Data, Cloud ERP, Security, and SAP BTP. Recently, we started a new era of Agentic AIOps collaboration between SAP and Microsoft with a fully orchestrated multi-agent ecosystem for mission-critical workloads. Please &lt;A class="lia-external-url" href="https://www.sap.com/resources/sap-and-microsoft-lead-aiops-revolution" target="_blank" rel="noopener"&gt;check out this blog&lt;/A&gt; to learn more.&lt;/P&gt;
      <pubDate>Tue, 04 Nov 2025 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-teched-2025/ba-p/4465383</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-11-04T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Evolving SAP Testing on Azure: What’s New in the SAP Testing Automation Framework</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/evolving-sap-testing-on-azure-what-s-new-in-the-sap-testing/ba-p/4465802</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We're announcing the general availability (GA) of the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/automation/testing-framework-architecture" target="_blank" rel="noopener"&gt;SAP Testing Automation Framework (STAF)&lt;/A&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;an &lt;A class="lia-external-url" href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;open-source&lt;/A&gt; orchestration tool that automates validation of SAP deployments on Microsoft Azure.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With this release, &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;High Availability (HA) function testing&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is now available for SAP HANA scale-up databases and SAP Central Services (ASCS/ERS) running in two-node Pacemaker clusters. In addition, we're introducing&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Configuration Checks (in preview) &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;that helps in validating SAP system configurations against Azure best practices across infrastructure, storage, OS parameters, and cluster resources.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;How STAF Works?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;STAF uses a hub-and-spoke architecture where a centralized management server orchestrates validation across your SAP landscape using ansible. The framework code resides on a management server that connects remotely to SAP systems via SSH. System-specific details (hostnames, credentials, topology) are stored in configuration files on the management server or in source-controlled configuration repository (in case of SDAF). This separation means no agents or framework components need to be installed on SAP virtual machines.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Setting up workspace configurations is a one-time activity. Once defined, the same configuration can be used repeatedly for pre-go-live validation, change verification, or periodic compliance audits without reconfiguration.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;High Availability Functional Testing&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;High Availability functional testing executes controlled failure scenarios to verify that Pacemaker clusters, HANA System Replication, and SAP Central Services respond correctly when failures occur. Each test follows a pattern: capture baseline state, inject failure, monitor cluster reaction, validate failover, and restore to stable state.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Failure Scenarios Included&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;HANA Database: &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;The functional test cases include&amp;nbsp;Indexserver&amp;nbsp;kill, network isolation, node crashes, storage freezes (in case of ANF filesystem), SBD fencing events etc. These test cases are part of the guidance outlined in the official document. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;SAP Central Services:&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;The function test cases include A&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;SCS/ERS process termination, message server failures, network isolation, and planned resource migration etc. These test cases are part of the guidance outlined in the official document. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Key Metrics Captured&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For every test, the framework records:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Failure detection time: Duration from failure injection to Pacemaker detecting the issue through resource agent monitor operations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Fencing duration: Time from failure detection to successful node isolation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Failover completion time: End-to-end duration from failure injection to full-service availability on the secondary node. For HANA, this includes takeover decision, log replay, and&amp;nbsp;indexserver&amp;nbsp;startup. For ASCS/ERS, it includes enqueue table replication and service restart.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
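Given a timeline of event timestamps for a test run, the three metrics reduce to simple differences. A minimal sketch; the event names are illustrative, not the framework's actual log schema:

```python
# Derive the three reported metrics from event timestamps (seconds since
# test start). Event names are illustrative, not the real log schema.
def compute_metrics(events: dict) -> dict:
    return {
        # Injection until Pacemaker's monitor operation notices the failure.
        "failure_detection_time": events["failure_detected"] - events["failure_injected"],
        # Detection until the faulty node is successfully isolated.
        "fencing_duration": events["node_fenced"] - events["failure_detected"],
        # End-to-end: injection until the service is fully available again.
        "failover_completion_time": events["service_available"] - events["failure_injected"],
    }
```

For example, a failure detected 4.5 s after injection, fenced at 12 s, with service restored at 95 s, yields a detection time of 4.5 s, a fencing duration of 7.5 s, and a failover completion time of 95 s.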
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Reports include pass/fail status, detailed timelines with millisecond-level precision, diagnostic logs for troubleshooting failed scenarios, and execution logs for troubleshooting framework failures.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;Offline Validation of High Availability Configuration&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Offline validation in STAF enables assessment of SAP HANA and Central Services high availability cluster configurations without requiring a live SSH connection to production systems. By comparing exported cluster information base (CIB) XML files from each node with best practices, STAF delivers non-intrusive validation, making it ideal for environments with restricted connectivity. To learn more about setup and usage, check out the documentation &lt;A class="lia-external-url" href="https://github.com/Azure/sap-automation-qa/blob/main/docs/HA_OFFLINE_VALIDATION.md" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
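Conceptually, offline validation parses the exported CIB XML and compares cluster and resource parameters against best-practice values. Here is a minimal sketch using only Python's standard library; the single checked parameter is one example, not STAF's full rule set:

```python
# Minimal offline check: look up a cluster property in an exported CIB XML
# dump and compare it to a best-practice value. One illustrative rule only.
import xml.etree.ElementTree as ET

def check_cib_parameter(cib_xml: str, name: str, expected: str) -> dict:
    root = ET.fromstring(cib_xml)
    for nvpair in root.iter("nvpair"):
        if nvpair.get("name") == name:
            actual = nvpair.get("value")
            return {"parameter": name, "expected": expected, "actual": actual,
                    "status": "PASS" if actual == expected else "FAIL"}
    return {"parameter": name, "expected": expected, "actual": None,
            "status": "NOT_FOUND"}

# Truncated example of a CIB export (real files come from `cibadmin --query`).
cib = """<cib><configuration><crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <nvpair id="opt-stonith" name="stonith-enabled" value="true"/>
  </cluster_property_set>
</crm_config></configuration></cib>"""
print(check_cib_parameter(cib, "stonith-enabled", "true")["status"])  # PASS
```

Because the check runs entirely against exported files, it needs no connectivity to the cluster nodes, which is what makes this mode suitable for restricted environments.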
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Configuration Checks (Preview)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Configuration checks provide read-only validation of SAP system settings against documented Azure best practices and SAP Notes. The framework executes Azure infrastructure validations using Azure CLI from the management server, while OS and SAP-level checks use SSH connections to target systems. Integrating data from the management server and SAP virtual machine inspections enable comprehensive validation of the infrastructure deployed to run SAP systems.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;STAF automates validation of SAP infrastructure on Azure by checking VM, storage, and network configurations using Azure CLI, and ensures compliance with best practices for HANA and Azure Files/ANF. It also verifies OS/kernel parameters, SAP profile settings, high availability cluster configurations, and database-specific settings for both HANA and Db2, providing a comprehensive compliance check across all critical system layers.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Findings are categorized as Passed, Failed, Critical, Warning, or Informational with reference links to SAP Notes and Azure documentation for remediation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Getting Started with STAF&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;STAF is fully open source, available on&amp;nbsp;&lt;/SPAN&gt;&lt;A class="lia-external-url" href="https://github.com/Azure/sap-automation-qa" target="_blank" rel="noopener"&gt;GitHub&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;, providing complete transparency into test logic, Ansible playbooks, and Python modules. The repository includes comprehensive setup documentation, sample workspace configurations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Setup Options&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;For standalone SAP systems, follow the &lt;/SPAN&gt;&lt;A class="lia-external-url" href="https://github.com/Azure/sap-automation-qa/blob/main/docs/SETUP.MD" target="_blank" rel="noopener"&gt;Setup Guide&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;to configure a management server with the framework code and workspace definitions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;For environments already using SAP Deployment Automation Framework (SDAF), STAF integrates seamlessly into existing pipelines via the &lt;/SPAN&gt;&lt;A class="lia-external-url" href="https://github.com/Azure/sap-automation-qa/blob/main/docs/SDAF_INTEGRATION.md" target="_blank" rel="noopener"&gt;SDAF Integration Guide&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The framework supports SLES and RHEL distributions across multiple HA configurations: HANA scale-up, SAP Central Services with ENSA1/ENSA2, Azure Fence Agent or SBD-based fencing, and supports storage options including Azure Managed Disks, Azure Files, and Azure NetApp Files. For SLES environments, both SAPHanaSR and SAPHanaSR-angi topologies are supported.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Community and Contributions&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Contributions are welcome. Whether it's reporting issues, suggesting new test scenarios, or submitting pull requests, community feedback helps improve the framework for all SAP on Azure users. Visit the &lt;A class="lia-external-url" href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;GitHub repository&lt;/A&gt; to explore the code, review existing issues, or open new ones. For questions or discussions, engage through GitHub Issues.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;BR /&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;Start validating SAP systems today and ensure clusters are ready when it matters most.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 04 Nov 2025 06:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/evolving-sap-testing-on-azure-what-s-new-in-the-sap-testing/ba-p/4465802</guid>
      <dc:creator>DevanshJain</dc:creator>
      <dc:date>2025-11-04T06:00:00Z</dc:date>
    </item>
    <item>
      <title>SAP Business Data Cloud Now Available on Microsoft Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-business-data-cloud-now-available-on-microsoft-azure/ba-p/4460551</link>
<description>&lt;P&gt;We’re thrilled to announce that&amp;nbsp;&lt;STRONG&gt;SAP Business Data Cloud (SAP BDC), including SAP Databricks,&lt;/STRONG&gt; is now available on &lt;STRONG&gt;Microsoft Azure&lt;/STRONG&gt;, marking a major milestone in our strategic partnership with SAP and Databricks and our commitment to empowering customers with cutting-edge Data &amp;amp; AI capabilities.&lt;/P&gt;
&lt;P&gt;SAP BDC is a fully managed SaaS solution designed to unify, govern, and activate SAP and third-party data for advanced analytics and AI-driven decision-making. Customers can now &lt;STRONG&gt;deploy SAP BDC on Azure in US East, US West and Europe West,&lt;/STRONG&gt; with additional regions coming soon, and unlock transformative insights from their enterprise data with the scale, security, and performance of Microsoft’s trusted cloud platform.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Why SAP BDC on Azure Is a Game-Changer for Data &amp;amp; AI&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Deploying SAP BDC on Azure enables organizations to accelerate their &lt;STRONG&gt;Data &amp;amp; AI initiatives&lt;/STRONG&gt; by modernizing their SAP Business Warehouse systems and leveraging a modern data architecture that includes SAP HANA Cloud, data lake files and connectivity to Microsoft technology. Whether it’s building AI-powered intelligent applications, enabling semantically rich data products, or driving predictive analytics, SAP BDC on Azure provides the foundation for scalable, secure, and context-rich decision-making. &lt;BR /&gt;Running SAP BDC workloads on &lt;STRONG&gt;Microsoft Azure&lt;/STRONG&gt; unlocks the full potential of enterprise data by integrating SAP systems with non-SAP data using Microsoft’s powerful &lt;STRONG&gt;Data &amp;amp; AI services&lt;/STRONG&gt; - enabling customers to build intelligent applications grounded in critical business context.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why Azure is an Ideal Platform for Running SAP BDC&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Microsoft Azure stands out as a leading cloud platform for hosting SAP solutions, including SAP BDC. Azure’s global infrastructure, high-performance networking, and powerful Data &amp;amp; AI capabilities make it an ideal foundation for large-scale SAP workloads. When organizations face complex data environments and need seamless interoperability across tools, Azure’s resilient backbone and enterprise-grade services provide the scalability and reliability essential for building a robust SAP data architecture.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Under the Hood: SAP Databricks in SAP BDC is Powered by Azure Databricks&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;A key differentiator of SAP BDC on Azure is that &lt;STRONG&gt;SAP Databricks&lt;/STRONG&gt;, a core component of BDC, runs on &lt;STRONG&gt;Azure Databricks&lt;/STRONG&gt;—Microsoft’s first-party service.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Databricks is a fully managed first party service making Microsoft Azure the optimal cloud for running Databricks workloads. &lt;/STRONG&gt;It uniquely offers:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Native integration with Microsoft Entra ID&lt;/STRONG&gt; for seamless access control.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Optimized performance with Power BI&lt;/STRONG&gt;, delivering unmatched analytics speed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enterprise-grade security and compliance&lt;/STRONG&gt;, inherent to Azure’s first-party services.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Joint engineering and unified support&lt;/STRONG&gt; from Microsoft and Databricks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Zero-copy data sharing&lt;/STRONG&gt; between SAP BDC and Azure Databricks, enabling frictionless collaboration across platforms.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This deep integration ensures that customers benefit from the full power of Azure’s AI, analytics, and governance capabilities while running SAP workloads.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Expanding Global Reach: What’s Next&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;While SAP BDC is now live in three Azure regions (US East, US West, and Europe West), we’re just getting started. Over the next few months, availability will expand to additional Azure regions such as Brazil and Canada.&lt;/P&gt;
&lt;P&gt;For the remaining regions, a continuously updated roadmap can be found on the &lt;A href="https://roadmaps.sap.com/board?range=2025Q3-2026Q3&amp;amp;q=azure&amp;amp;PRODUCT=73555000100800004851#Q3%202025" target="_blank" rel="noopener"&gt;SAP Roadmap Explorer website&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Final Thoughts&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This launch reinforces Microsoft Azure’s longstanding partnership with SAP, backed by over 30 years of trusted partnership and co-innovation. With SAP BDC now available on Azure, customers can confidently modernize their data estate, unlock AI-driven insights, and drive business transformation at scale.&lt;/P&gt;
&lt;P&gt;Stay tuned as we continue to expand availability and bring even more Data &amp;amp; AI innovations to our joint customers over the next few months.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Oct 2025 23:08:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-business-data-cloud-now-available-on-microsoft-azure/ba-p/4460551</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-10-09T23:08:48Z</dc:date>
    </item>
    <item>
      <title>Announcing Public Preview for Business Process Solutions</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/announcing-public-preview-for-business-process-solutions/ba-p/4453658</link>
<description>&lt;P&gt;In today’s AI-powered enterprises, success hinges on access to reliable, unified business information. Whether you are deploying AI-augmented workflows or fully autonomous agentic solutions, one thing is clear: trusted, consistent data is the fuel that drives intelligent outcomes. Yet in many organizations, data remains fragmented across best-of-breed applications – creating blind spots in cross-functional processes and throwing roadblocks in the path of automation. Microsoft is dedicated to tackling these challenges, delivering a unified data foundation that accelerates AI adoption, simplifies automation, and reduces risk – empowering businesses to unlock the full potential of unified data analytics and agentic intelligence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;Our new solution offers cross-functional insights across previously siloed environments and includes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Prebuilt data models for enterprise business applications in Microsoft Fabric&lt;/LI&gt;
&lt;LI&gt;Source system data mappings and transformations&lt;/LI&gt;
&lt;LI&gt;Prebuilt dashboards and reports in Power BI&lt;/LI&gt;
&lt;LI&gt;Prebuilt AI Agents in Copilot Studio (coming soon)&lt;/LI&gt;
&lt;LI&gt;Integrated Security and Compliance&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By unifying Microsoft’s Fabric and AI solutions, we can rapidly accelerate transformation and de-risk AI rollout through repeatable, reliable, prebuilt solutions.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Functional Scope&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Our new solution currently supports a set of business applications and functional areas, enabling organizations to break down silos and drive actionable insights across their core processes. The platform covers key domains such as:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Finance:&lt;/STRONG&gt; Delivers a comprehensive view of financial performance, integrating data from general ledger, accounts receivable, and accounts payable systems. This enables finance teams to analyze trends, monitor compliance, and optimize cash flow management all from within Power BI. The associated Copilot agent provides not only access to this data via natural language but will also enable financial postings.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sales:&lt;/STRONG&gt; Provides a complete perspective on customers’ opportunity-to-cash journeys, from initial opportunity through invoicing and payment, via Power BI reports and dashboards. The associated Copilot agent can help improve revenue forecasting by connecting structured ERP and CRM data with unstructured data from Microsoft 365, while also tracking sales pipeline health and identifying bottlenecks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Procurement:&lt;/STRONG&gt; Supports strategic procurement and supplier management, consolidating purchase orders, goods receipts, and vendor invoicing data into a complete spend dashboard. This empowers procurement teams to optimize sourcing strategies, manage supplier risk, and control spend.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Manufacturing&lt;/STRONG&gt; (coming soon): Will extend coverage to manufacturing and production processes, enabling organizations to optimize resource allocation and monitor production efficiency.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Each item within Business Process Solutions is delivered as a complete, business-ready offering. These models are thoughtfully designed to ensure that organizations can move seamlessly from raw data to actionable execution. Key features include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Facts and Dimensions:&lt;/STRONG&gt; Each model is structured to capture both transactional details (facts) and contextual information (dimensions), supporting granular analysis and robust reporting across business processes.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Transformations:&lt;/STRONG&gt; Built-in transformations automatically prepare data for reporting and analytics, making it compatible with Microsoft Fabric. For example, when a business user needs to compare sales results from Europe, Asia, and North America, the solution transformations handle currency conversion behind the scenes. This ensures that results are consistent across regions, making analysis straightforward and reliable—without the need for manual intervention or complex configuration.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Insight to Action:&lt;/STRONG&gt; Customers will be able to leverage prebuilt Copilot Agents within Business Process Solutions to turn insight into action. These agents are deeply integrated not only with Microsoft Fabric and Microsoft Teams, but also connected source applications, enabling users to take direct, contextual actions across systems based on real-time insights. By connecting unstructured data sources such as emails, chats, and documents from Microsoft 365 apps, the agents can provide a holistic and contextualized view to support smarter decisions.&lt;/LI&gt;
&lt;/UL&gt;
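The currency-conversion behavior described above can be illustrated with a small sketch. The rates and record layout here are hypothetical; the actual solution performs this normalization inside its prebuilt Fabric transformations:

```python
# Hypothetical sketch of behind-the-scenes currency normalization: convert
# regional sales amounts into one reporting currency so results from Europe,
# Asia, and North America are directly comparable. Rates are illustrative.
RATES_TO_USD = {"USD": 1.0, "EUR": 1.08, "JPY": 0.0067}

def to_reporting_currency(rows):
    converted = []
    for row in rows:
        rate = RATES_TO_USD[row["currency"]]
        converted.append({**row, "amount_usd": round(row["amount"] * rate, 2)})
    return converted
```

Because the conversion happens in the transformation layer, report authors work against a single normalized column instead of re-implementing rate logic in every dashboard.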
&lt;P&gt;With embedded triggers and intelligent agents, automated responses can be initiated based on new insights, streamlining decision-making and enabling proactive, data-driven operations. Ultimately, this will empower teams not just to understand what is happening at a holistic level, but also to take faster, smarter actions with greater confidence.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Authorizations:&lt;/STRONG&gt; Data models are tailored to respect organizational security and access policies, ensuring that sensitive information is protected and only accessible to authorized users. The same user credential principles apply to the Copilot agents when interacting with/updating the source system in the user-context.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Behind the scenes, the solution automatically provisions the required objects and infrastructure to build the data warehouse, removing the usual complexity of bringing data together. It guarantees consistency and reliability, so organizations can focus on extracting value from their data rather than managing technical details.&amp;nbsp; This reliable data foundation serves as one of the key informants of the agentic business processes.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Accelerated Insights with Prebuilt Analytics&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Building on these robust data models, Business Process Solutions offer a suite of prebuilt Power BI reports tailored to common business processes. These reports provide immediate access to key metrics and trends, such as financial performance, sales effectiveness, and procurement efficiency.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Designed for rapid deployment, they allow organizations to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Start analyzing data from day one, without lengthy setup or customization.&lt;/LI&gt;
&lt;LI&gt;Adapt existing reports for your organization’s exact business needs.&lt;/LI&gt;
&lt;LI&gt;Demonstrate best practices for leveraging data models in analytics and decision-making.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This approach accelerates time-to-value and also empowers users to explore new analytical scenarios and drive continuous improvement.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Extensibility and Customization&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Every organization is unique and our new solution is designed to support this, allowing you to adapt analytics and data models to fit your specific processes and requirements. You can customize scope items, bring in your own tables and views, integrate new data sources as your business evolves, and combine data across Microsoft Fabric for deeper insights.&lt;/P&gt;
&lt;P&gt;Similarly, the associated agents will be customizable from Copilot Studio to adapt to your specific Enterprise apps configuration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;This flexibility ensures that, no matter how your organization operates, Business Process Solutions helps you unlock the full value of your data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Data integration&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Business Process Solutions uses the same connectivity options as Microsoft Fabric and Copilot Studio but goes further by embedding best practices that make integration simpler and more effective. We recognize that no single pattern can address the diverse needs of all business applications. We also understand that many businesses have already invested in data extraction tools, which is why our solution supports a wide range of options, from native connectivity to third-party tools that bring specialized capabilities to the table. With Business Process Solutions, we ensure data can be accessed in a reliable, high-performance way, whether working with massive volumes or complex data structures.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;STRONG&gt;Getting started&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If your organization is ready to unlock the value of unified analytics, getting started is simple. Just send us a request using the form at:&lt;STRONG&gt; &lt;A class="lia-external-url" href="https://aka.ms/JoinBusAnalyticsPreview" target="_blank" rel="noopener"&gt;https://aka.ms/JoinBusAnalyticsPreview&lt;/A&gt;&lt;/STRONG&gt;. Our team will guide you through the next steps and help you begin your journey.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Sep 2025 08:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/announcing-public-preview-for-business-process-solutions/ba-p/4453658</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-09-16T08:30:00Z</dc:date>
    </item>
    <item>
      <title>New Mbv3 Size, Standard_M416bs_v3, General Availability</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/new-mbv3-size-standard-m416bs-v3-general-availability/ba-p/4439103</link>
<description>&lt;P&gt;Since we launched the memory-optimized M-series family with high remote storage performance, the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/memory-optimized/mbsv3-series?tabs=sizebasic" target="_blank" rel="noopener"&gt;Mb family&lt;/A&gt;, in September 2024, there has been strong demand for more compute capacity, especially from healthcare organizations. Today, we're excited to expand the Mbv3 portfolio to better support large-scale, mission-critical database workloads—especially for healthcare organizations operating EHR (Electronic Health Records) databases on Azure.&amp;nbsp;&lt;STRONG&gt;We’re pleased to announce the general availability of the new Mbv3 size, Standard_M416bs_v3.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This new SKU offers increased vCPU cores along with enhanced memory and remote storage performance, making it ideal for high-performance database scenarios that require consistent throughput, scalability, and reliability.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key features &lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;STRONG&gt;Mbv3 series&lt;/STRONG&gt; is based on 4th-generation Intel® Xeon® Scalable processors, scales to workloads requiring up to 4 TB of memory, and uses the NVMe interface for higher remote disk storage performance.&lt;/LI&gt;
&lt;LI&gt;This newly added Mbv3 size, &lt;STRONG&gt;Standard_M416bs_v3,&lt;/STRONG&gt; offers 416 vCPUs, more than double the vCPU count of the largest previously available Mbv3 VM.&lt;/LI&gt;
&lt;LI&gt;The&amp;nbsp;&lt;STRONG&gt;Standard_M416bs_v3 &lt;/STRONG&gt;offers high remote storage performance with up to 550,000 IOPS and 10 GBps of remote disk storage bandwidth.&lt;/LI&gt;
&lt;LI&gt;The increased remote storage performance of &lt;STRONG&gt;Mbv3 series&lt;/STRONG&gt; is ideal for storage throughput-intensive workloads such as relational databases and data analytics applications.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Mbsv3 series (NVMe)&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-color-21" border="1" style="width: 1050px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Size&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;vCPU&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Memory: GiB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max data disks&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max uncached Premium SSD: IOPS/Throughput(MBps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max uncached Ultra Disk and Premium SSD V2 disk:&amp;nbsp; IOPS/Throughput(MBps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max NICs&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max network bandwidth (Mbps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Standard_M416bs_v3&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;416&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;3800&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;64&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;240,000/8,000&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;550,000/10,000&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;8&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;50,000&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Regional Availability and Pricing&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The VMs are now available in &lt;STRONG&gt;Central US, East US 2, East US, and West US 2&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;For pricing details, please refer to &lt;A href="https://azure.microsoft.com/en-us/pricing/calculator/?ef_id=_k_Cj0KCQjw8eTFBhCXARIsAIkiuOxQ3dml1dnlGWy0a26qmaYbOoIzRnMJX1gG7r8_njzM15tFwmULQSAaAmkeEALw_wcB_k_&amp;amp;OCID=AIDcmm5edswduu_SEM__k_Cj0KCQjw8eTFBhCXARIsAIkiuOxQ3dml1dnlGWy0a26qmaYbOoIzRnMJX1gG7r8_njzM15tFwmULQSAaAmkeEALw_wcB_k_&amp;amp;gad_source=1&amp;amp;gad_campaignid=21496728177&amp;amp;gbraid=0AAAAADcJh_sfrlWAE2kw1CU2A9NHUh47y&amp;amp;gclid=Cj0KCQjw8eTFBhCXARIsAIkiuOxQ3dml1dnlGWy0a26qmaYbOoIzRnMJX1gG7r8_njzM15tFwmULQSAaAmkeEALw_wcB" target="_blank" rel="noopener"&gt;Pricing Calculator | Microsoft Azure&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 05 Sep 2025 01:20:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/new-mbv3-size-standard-m416bs-v3-general-availability/ba-p/4439103</guid>
      <dc:creator>MingJiong_Zhang</dc:creator>
      <dc:date>2025-09-05T01:20:32Z</dc:date>
    </item>
    <item>
      <title>Backup SAP Oracle Databases Using Azure VM Backup Snapshots</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/backup-sap-oracle-databases-using-azure-vm-backup-snapshots/ba-p/4408055</link>
      <description>&lt;P&gt;This blog article provides a comprehensive step-by-step guide for backing up SAP Oracle databases using Azure VM backup snapshots, ensuring data safety and integrity.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Installation of CIFS Utilities:&lt;/STRONG&gt; The process begins with the installation of cifs-utils on Oracle Linux, which is the recommended OS for running Oracle databases in the cloud.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Setting Up Environment Variables:&lt;/STRONG&gt; Users are instructed to define necessary environment variables for resource group and storage account names.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Creating SMB Credentials:&lt;/STRONG&gt; The guide explains how to create a folder for SMB credentials and retrieve the storage account key, emphasizing the need for appropriate permissions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Mounting SMB File Share:&lt;/STRONG&gt; Instructions are provided for checking the accessibility of the storage account and mounting the SMB file share, which will serve as a backup location for archived logs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Preparing Oracle Database for Backup:&lt;/STRONG&gt; Users must place the Oracle database in hot backup mode to ensure a consistent backup while allowing ongoing transactions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Initiating Snapshot Backup:&lt;/STRONG&gt; Once the VM backup is configured, users can initiate a snapshot backup to capture the state of the virtual machine, including the Oracle database.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Restoration Process:&lt;/STRONG&gt; The document outlines the steps for restoring the Oracle database from the backup, including updating IP addresses and starting the database listener.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Final Steps and Verification:&lt;/STRONG&gt; Users are encouraged to verify the configuration and ensure that all necessary backups are completed successfully, including the SMB file share.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 30 Jun 2025 20:35:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/backup-sap-oracle-databases-using-azure-vm-backup-snapshots/ba-p/4408055</guid>
      <dc:creator>Vamshi Polasa</dc:creator>
      <dc:date>2025-06-30T20:35:42Z</dc:date>
    </item>
    <item>
      <title>Moving Linux and Windows from SCSI to NVMe with one easy command</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/moving-linux-and-windows-from-scsi-to-nvme-with-one-easy-command/ba-p/4427954</link>
      <description>&lt;H1&gt;Introduction&lt;/H1&gt;
&lt;P&gt;In the ever-evolving world of cloud computing, maximizing performance and efficiency is crucial for businesses leveraging virtual machines (VMs) on platforms like Microsoft Azure, especially for high I/O workloads like SAP on Azure or database applications. One significant upgrade that can yield substantial performance improvements is converting your Azure VM from a SCSI (Small Computer System Interface) disk setup to NVMe (Non-Volatile Memory Express) using Azure Boost. This blog post will guide you through the process of making this conversion and explore the numerous advantages of NVMe over SCSI.&lt;/P&gt;
&lt;P&gt;In previous posts, you had to prepare the operating system yourself, which was a complex process on both Linux and Windows.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now you can move to NVMe with one simple command, and the script takes care of everything, including preparing your operating system.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Advantages of Azure Boost&lt;/H2&gt;
&lt;P&gt;Azure Boost is a powerful enhancement tool for Azure VMs, offering the following advantages:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Accelerated Disk Performance&lt;/STRONG&gt;: Azure Boost optimizes disk I/O operations, significantly increasing the speed and efficiency of your VM's storage.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Seamless Integration&lt;/STRONG&gt;: Easily integrates with existing Azure infrastructure, allowing for a smooth transition and immediate performance benefits.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost-Effective Optimization&lt;/STRONG&gt;: By enhancing the performance of existing VMs, Azure Boost helps reduce the need for more expensive hardware upgrades or additional resources.&lt;/LI&gt;
&lt;/OL&gt;
&lt;img /&gt;
&lt;H2&gt;What is changing for your VM?&lt;/H2&gt;
&lt;P&gt;Changing the host interface from SCSI to NVMe does not change the remote storage (OS disk or data disks), but it does change the way the operating system sees the disks. How the devices appear depends on the VM size, with v6 SKUs now also having up to 4 temporary disks exposed via NVMe.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table style="width: 100%;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&amp;nbsp;&lt;/td&gt;&lt;td&gt;SCSI enabled VM&lt;/td&gt;&lt;td&gt;NVMe enabled VM (v5 and Mv3)&lt;/td&gt;&lt;td&gt;NVMe enabled v6&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;OS disk&lt;/td&gt;&lt;td&gt;/dev/sda&lt;/td&gt;&lt;td&gt;/dev/nvme0n1&lt;/td&gt;&lt;td&gt;/dev/nvme0n1&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Temp Disk&lt;/td&gt;&lt;td&gt;/dev/sdb&lt;/td&gt;&lt;td&gt;/dev/sda&lt;/td&gt;&lt;td&gt;/dev/nvme1n1&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;First Data Disk&lt;/td&gt;&lt;td&gt;/dev/sdc&lt;/td&gt;&lt;td&gt;/dev/nvme0n2&lt;/td&gt;&lt;td&gt;/dev/nvme0n2&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the following sections, we'll provide a step-by-step guide to converting your Azure VM from SCSI to NVMe using Azure Boost, ensuring you can take full advantage of these performance improvements and maintain a competitive edge in the cloud computing landscape.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Preparing your virtual machine (VM) for the move from SCSI to NVMe&lt;/H2&gt;
&lt;P&gt;To migrate from SCSI to NVMe and benefit from higher performance, some prerequisites need to be in place:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Your Azure VM generation needs to be V2. You can check this, for example, on your VM in the Azure portal.&lt;BR /&gt;&lt;img&gt;Check VM generation on Azure Portal&lt;/img&gt;&lt;/LI&gt;
&lt;LI&gt;Windows
&lt;OL&gt;
&lt;LI&gt;On Windows, third-party software such as antivirus tools can influence the behavior after the migration. If you see a bluescreen, convert back to SCSI and try disabling your antivirus/security solution&lt;/LI&gt;
&lt;LI&gt;When you run e.g. a v6 VM you can get up to 4 temp disks; all of them will be RAW and not preformatted with NTFS&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;Linux
&lt;OL&gt;
&lt;LI&gt;Previously you could identify the data disks using LUN IDs in /dev/disk/azure/scsi1/lunX. After the migration to NVMe those udev rules are no longer valid. You can install the azure-vm-utils package or manually deploy a udev rule available on GitHub&lt;/LI&gt;
&lt;LI&gt;When you run e.g. a v6 VM you can get up to 4 temp disks, all of them RAW; you can use e.g. cloud-init to run initialization of those disks every time the operating system starts&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
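&lt;P&gt;As a hedged sketch of the temp disk initialization mentioned above, a helper like the following could be dropped into a cloud-init per-boot location. The path and device name are assumptions (v6 temp disks typically appear as /dev/nvme1n1 and up; verify on your SKU):&lt;/P&gt;

```shell
# Hypothetical per-boot helper (e.g. /var/lib/cloud/scripts/per-boot/temp-disk.sh):
# the RAW temp disk loses its contents on deallocation, so re-create the
# filesystem and re-mount it on every start.
prepare_temp_disk() {
  dev="$1"; mnt="$2"
  if [ ! -b "$dev" ]; then
    echo "skip: $dev is not a block device"
    return 0
  fi
  # Only format when the disk carries no filesystem signature yet
  blkid "$dev" >/dev/null 2>/dev/null || mkfs.xfs -f "$dev"
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
}
# Usage on a v6 VM: prepare_temp_disk /dev/nvme1n1 /mnt/resource
```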
&lt;LI-SPOILER label="IMPORTANT"&gt;
&lt;P&gt;Conversion from VMs with tempdisk (e.g. Standard_D4ds_v5) to Intel or AMD v6 SKUs (e.g. Standard_D4ds_v6) is currently not supported. The only possible migration is through disk snapshots.&lt;/P&gt;
&lt;P&gt;You can convert VMs without tempdisk (e.g. Standard_D4s_v5) to v6 SKUs.&lt;/P&gt;
&lt;/LI-SPOILER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Prepare your environment (required for local PowerShell)&lt;/H2&gt;
&lt;P&gt;When running PowerShell locally, you need to make sure all requirements are installed and configured.&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-ExecutionPolicy Unrestricted&lt;/LI-CODE&gt;
&lt;P&gt;Install PowerShell modules for Azure&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Install-Module Az -Force&lt;/LI-CODE&gt;
&lt;P&gt;Download the script:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Azure/SAP-on-Azure-Scripts-and-Utilities/refs/heads/main/Azure-NVMe-Utils/Azure-NVMe-Conversion.ps1" -OutFile "Azure-NVMe-Conversion.ps1"
&lt;/LI-CODE&gt;
&lt;P&gt;Log on to Azure and select the correct subscription:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Connect-AzAccount
Select-AzSubscription -Subscription [Your-Subscription-Id]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Migrate your VM&lt;/H2&gt;
&lt;LI-SPOILER label="IMPORTANT"&gt;
&lt;P&gt;You can always revert the migration back to SCSI&lt;/P&gt;
&lt;/LI-SPOILER&gt;
&lt;P&gt;To migrate your VM, you need to know some parameters:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;ResourceGroupName &lt;/STRONG&gt;of the VM you want to convert&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;VMName &lt;/STRONG&gt;of the VM you want to convert&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;NewControllerType &lt;/STRONG&gt;will be SCSI or NVMe&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;VMSize &lt;/STRONG&gt;is the new VM SKU, can also be the same SKU if it supports both Controller Types&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Optional Parameters:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;StartVM &lt;/STRONG&gt;automatically starts the VM after the migration&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;WriteLogFile &lt;/STRONG&gt;stores the output in a local file&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;IgnoreSKUCheck &lt;/STRONG&gt;does not check if the required VM Size is available in the region/zone&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;FixOperatingSystemSettings &lt;/STRONG&gt;automatically prepares your Windows or Linux system using Azure RunCommands&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;IgnoreOSCheck &lt;/STRONG&gt;does not run the OS check; the VM can be shut down, but you need to make sure your VM is prepared&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Sample Command and Output&lt;/H2&gt;
&lt;LI-CODE lang="powershell"&gt;PS /home/philipp&amp;gt; ./NVMe-Conversion.ps1 -ResourceGroupName testrg -VMName testvm -NewControllerType NVMe -VMSize Standard_E4bds_v5 -StartVM -FixOperatingSystemSettings                                          
00:00 - INFO      - Starting script Azure-NVMe-Conversion.ps1
00:00 - INFO      - Script started at 06/27/2025 15:41:39
00:00 - INFO      - Script version: 2025062704
00:00 - INFO      - Script parameters:
00:00 - INFO      -   ResourceGroupName -&amp;gt; testrg
00:00 - INFO      -   VMName -&amp;gt; testvm
00:00 - INFO      -   NewControllerType -&amp;gt; NVMe
00:00 - INFO      -   VMSize -&amp;gt; Standard_E4bds_v5
00:00 - INFO      -   StartVM -&amp;gt; True
00:00 - INFO      -   FixOperatingSystemSettings -&amp;gt; True
00:00 - INFO      - Script Version 2025062704                                                                           
00:00 - INFO      - Module Az.Compute is installed and the version is correct.
00:00 - INFO      - Module Az.Accounts is installed and the version is correct.
00:00 - INFO      - Module Az.Resources is installed and the version is correct.
00:00 - INFO      - Connected to Azure subscription name: AG-GE-CE-PHLEITEN
00:00 - INFO      - Connected to Azure subscription ID: 232b6759-xxxx-yyyy-zzzz-757472230e6c
00:00 - INFO      - VM testvm found in Resource Group testrg
00:01 - INFO      - VM testvm is running
00:01 - INFO      - VM testvm is running Linux
00:01 - INFO      - VM testvm is running SCSI
00:02 - INFO      - Running in Azure Cloud Shell
00:02 - INFO      - Authentication token is a SecureString
00:02 - INFO      - Authentication token received
00:02 - INFO      - Getting available SKU resources
00:02 - INFO      - This might take a while ...
00:06 - INFO      - VM SKU Standard_E4bds_v5 is available in zone 1
00:06 - INFO      - Resource disk support matches between original VM size and new VM size.
00:06 - INFO      - Found VM SKU - Checking for Capabilities
00:06 - INFO      - VM SKU has supported capabilities
00:06 - INFO      - VM supports NVMe
00:06 - INFO      - Pre-Checks completed
00:06 - INFO      - Entering Linux OS section
00:37 - INFO      -    Script output: Enable succeeded: 
00:37 - INFO      -    Script output: [stdout]
00:37 - INFO      -    Script output: [INFO] Operating system detected: sles
00:37 - INFO      -    Script output: [INFO] Checking if NVMe driver is included in initrd/initramfs...
00:37 - INFO      -    Script output: [INFO] NVMe driver found in initrd/initramfs.
00:37 - INFO      -    Script output: [INFO] Checking nvme_core.io_timeout parameter...
00:37 - INFO      -    Script output: [INFO] nvme_core.io_timeout is set to 240.
00:37 - INFO      -    Script output: [INFO] Checking /etc/fstab for deprecated device names...
00:37 - INFO      -    Script output: [INFO] /etc/fstab does not contain deprecated device names.
00:37 - INFO      -    Script output: 
00:37 - INFO      -    Script output: [stderr]
00:37 - INFO      -    Script output: 
00:37 - INFO      - Errors: 0 - Warnings: 0 - Info: 7
00:37 - INFO      - Shutting down VM testvm
01:18 - INFO      - VM testvm stopped
01:18 - INFO      - Checking if VM is stopped and deallocated
01:19 - INFO      - Setting OS Disk capabilities for testvm_OsDisk_1_165411276cbe459097929b981eb9b3e2 to new Disk Controller Type to NVMe
01:19 - INFO      - generated URL for OS disk update:
01:19 - INFO      - https://management.azure.com/subscriptions/232b6759-xxxx-yyyy-zzzz-757472230e6c/resourceGroups/testrg/providers/Microsoft.Compute/disks/testvm_OsDisk_1_165411276cbe459097929b981eb9b3e2?api-version=2023-04-02
01:19 - INFO      - OS Disk updated
01:19 - INFO      - Setting new VM Size from Standard_E4s_v3 to Standard_E4bds_v5 and Controller to NVMe
01:19 - INFO      - Updating VM testvm
01:54 - INFO      - VM testvm updated
01:54 - INFO      - Start after update enabled for VM testvm
01:54 - INFO      - Waiting for 15 seconds before starting the VM
02:09 - INFO      - Starting VM testvm
03:31 - INFO      - VM testvm started
03:31 - INFO      - As the virtual machine got started using the script you can check the operating system now
03:31 - INFO      - If you have any issues after the conversion you can revert the changes by running the script with the old settings
03:31 - IMPORTANT - Here is the command to revert the changes:
03:31 - INFO      -    .\Azure-NVMe-Conversion.ps1 -ResourceGroupName testrg -VMName testvm -NewControllerType SCSI -VMSize Standard_E4s_v3 -StartVM
03:31 - INFO      - Script ended at 06/27/2025 15:45:11
03:31 - INFO      - Exiting
PS /home/philipp&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Reverting back&lt;/H2&gt;
&lt;P&gt;At the end of its run, the script prints a PowerShell command that reverts your VM back to SCSI:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;.\Azure-NVMe-Conversion.ps1 -ResourceGroupName testrg -VMName testvm -NewControllerType SCSI -VMSize Standard_E4s_v3 -StartVM&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Manually preparing Windows&lt;/H2&gt;
&lt;P&gt;To manually prepare Windows you just need to run one command, which configures the NVMe driver to start early in the boot process.&lt;/P&gt;
&lt;LI-SPOILER label="IMPORTANT"&gt;
&lt;P&gt;Windows re-evaluates the required drivers on every boot. If you set the NVMe driver to the correct state, reboot while still on SCSI, and then check again, the driver will have been moved back to a later start during boot.&lt;/P&gt;
&lt;/LI-SPOILER&gt;&lt;LI-CODE lang="powerquery"&gt;sc.exe config stornvme start=boot&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Manually preparing Linux&lt;/H2&gt;
&lt;P&gt;To manually prepare Linux you need to make sure that:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;the NVMe drivers are part of initrd/initramfs&lt;/LI&gt;
&lt;LI&gt;the NVMe I/O timeout is set to 240 seconds (nvme_core.io_timeout=240) in the grub configuration&lt;/LI&gt;
&lt;LI&gt;/etc/fstab contains no references to plain device names (e.g. /dev/sda) or old udev rule entries (e.g. /dev/disk/azure/scsi1/lun0)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Please refer to your Linux provider's documentation on how to adjust the required settings.&lt;/P&gt;
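&lt;P&gt;The checks above can be sketched as shell commands. This is illustrative only; the exact tooling and file locations vary by distribution:&lt;/P&gt;

```shell
# 1. NVMe driver present in initrd/initramfs (tooling varies per distribution)
lsinitrd 2>/dev/null | grep -i nvme || echo "check with your initrd tooling (e.g. lsinitramfs)"

# 2. NVMe I/O timeout on the kernel command line
grep -o "nvme_core.io_timeout=[0-9]*" /etc/default/grub

# 3. No deprecated device references in /etc/fstab
check_fstab() {
  if grep -Eq "/dev/sd[a-z]|/dev/disk/azure/scsi" "$1"; then
    echo "deprecated device names found - switch to UUID or NVMe-aware udev paths"
  else
    echo "fstab OK"
  fi
}
check_fstab /etc/fstab
```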
      <pubDate>Mon, 30 Jun 2025 18:08:43 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/moving-linux-and-windows-from-scsi-to-nvme-with-one-easy-command/ba-p/4427954</guid>
      <dc:creator>phleiten</dc:creator>
      <dc:date>2025-06-30T18:08:43Z</dc:date>
    </item>
    <item>
      <title>Azure Files NFS Encryption In Transit for SAP on Azure Systems</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-files-nfs-encryption-in-transit-for-sap-on-azure-systems/ba-p/4426918</link>
      <description>&lt;P&gt;Azure Files NFS volumes now support&amp;nbsp;&lt;A href="https://aka.ms/nfs/EiT/Announcement" target="_blank" rel="noopener"&gt;encryption in-transit &lt;/A&gt;&amp;nbsp;via TLS. With this enhancement, Azure Files NFS v4.1 offers the robust security that modern enterprises require, without compromising performance by ensuring all traffic between clients and servers is fully encrypted. Now Azure Files NFS data can be encrypted end-to-end: at rest, in transit, and across the network.&lt;/P&gt;
&lt;P&gt;Using&amp;nbsp;&lt;A href="https://www.stunnel.org/" target="_blank" rel="noopener"&gt;Stunnel&lt;/A&gt;, an open-source TLS wrapper, Azure Files encrypts the TCP stream between the NFS client and Azure Files with strong encryption using AES-GCM, without needing Kerberos. This ensures data confidentiality while eliminating the need for complex setups or external authentication systems like Active Directory.&lt;/P&gt;
&lt;P&gt;The&amp;nbsp;&lt;A href="https://github.com/Azure/AZNFS-mount" target="_blank" rel="noopener"&gt;AZNFS&lt;/A&gt;&amp;nbsp;utility package simplifies encrypted mounts by installing and setting up Stunnel on the client (Azure VMs). The AZNFS mount helper mounts the NFS shares with TLS support. The mount helper initializes dedicated stunnel client process for each storage account’s IP address. The stunnel client process listens on a local port for inbound traffic and then redirects encrypted nfs client traffic to the 2049 port where NFS server is listening on.&lt;/P&gt;
&lt;P&gt;The AZNFS package runs a background job called&amp;nbsp;&lt;EM&gt;aznfswatchdog.&lt;/EM&gt; It ensures that stunnel processes are running for each storage account and cleans up after all shares from the storage account are unmounted. If for some reason a stunnel process is terminated unexpectedly, the watchdog process restarts it.&lt;/P&gt;
&lt;P&gt;For more details, refer to the following document:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?tabs=azure-portal%2CSUSE" target="_blank" rel="noopener"&gt;How to encrypt data in transit for NFS shares&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;Availability in Azure Regions&lt;/H3&gt;
&lt;P&gt;All regions that support Azure Premium Files now support encryption in transit.&lt;/P&gt;
&lt;H3&gt;Supported Linux releases&lt;/H3&gt;
&lt;P&gt;For SAP on Azure environments, Azure Files NFS Encryption in Transit (EiT) is available for the following operating system releases.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SLES for SAP 15 SP4 onwards&lt;/LI&gt;
&lt;LI&gt;RHEL for SAP 8.6 onwards &lt;EM&gt;(EiT is currently not supported for file systems managed by Pacemaker clusters on RHEL.)&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Refer to&amp;nbsp;&lt;A href="https://me.sap.com/notes/1928533" target="_blank" rel="noopener"&gt;SAP Note 1928533&lt;/A&gt; for Operating system supportability for SAP on Azure systems.&lt;/P&gt;
&lt;H3&gt;How to deploy Encryption in Transit (EiT) for Azure Files NFS Shares&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Refer to the SAP on Azure deployment planning guide about &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/planning-guide-storage-azure-files" target="_blank" rel="noopener"&gt;Using Azure Premium Files NFS and SMB for SAP workload&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;As described in the planning guide, for SAP workloads, following are the supported uses of Azure Files NFS shares and EiT can be used for all the scenarios:&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;sapmnt volume for distributed SAP systems&lt;/LI&gt;
&lt;LI&gt;transport directory for SAP landscape&lt;/LI&gt;
&lt;LI&gt;/hana/shared for HANA scale-out. Review carefully the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/hana-vm-operations-storage#considerations-for-the-hana-shared-file-system" target="_blank" rel="noopener"&gt;considerations for sizing&amp;nbsp;&lt;STRONG&gt;/hana/shared&lt;/STRONG&gt;&lt;/A&gt;, as appropriately sized&amp;nbsp;&lt;STRONG&gt;/hana/shared &lt;/STRONG&gt;volume contributes to system's stability&lt;/LI&gt;
&lt;LI&gt;file interface between your SAP landscape and other applications&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Deploy the Azure Files NFS storage account. Refer to the standard documentation for creating the Azure Files storage account, file share and private endpoint.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/storage-files-quick-create-use-linux" target="_blank" rel="noopener"&gt;Create an NFS Azure file share&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&amp;nbsp; &amp;nbsp; Note : We can enforce EiT for all the file shares in the Azure Storage account by enabling ‘&lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?branch=pr-en-us-300015&amp;amp;tabs=azure-portal%2CSUSE#enforce-encryption-in-transit" target="_blank" rel="noopener"&gt;secure transfer required&lt;/A&gt;’ option.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Deploy the mount helper (AZNFS) package on the Linux VM.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Follow the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?branch=pr-en-us-300015&amp;amp;tabs=azure-portal%2CSUSE#step-1-check-aznfs-mount-helper-package-installation" target="_blank" rel="noopener"&gt;instructions&lt;/A&gt; for your Linux distribution to install the package.&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Create the directories to mount the file shares.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;EM&gt;mkdir -p &amp;lt;full path of the directory&amp;gt;&lt;/EM&gt;&lt;/P&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;Mount the NFS File share.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Refer to &lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?branch=pr-en-us-300015&amp;amp;tabs=azure-portal%2CSUSE#step-2-mount-the-nfs-file-share" target="_blank" rel="noopener"&gt;the section&lt;/A&gt; for mounting the Azure Files NFS EiT file share in Linux VMs.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;To mount the file share permanently by adding the mount commands in ‘/etc/fstab’.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;vi /etc/fstab

sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1  aznfs noresvport,vers=4,minorversion=1,sec=sys,_netdev  0  0

# Mount the file systems

mount -a&lt;/LI-CODE&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp;&amp;nbsp; File systems mentioned above are an example to explain the mount command syntax.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; When adding nfs mount entry to /etc/fstab, the fstype is "nfs". However, to use AZNFS mount helper and EiT, we need to use the fstype as "aznfs" which is not known to the Operating System, so at boot time the server tries to mount these entries before the watchdog is active, and they may fail. Users should always add "_netdev" option to their /etc/fstab entries to make sure shares are mounted on reboot only after the required services (like network) are active.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; We can add “notls” option in the mount command, if we don’t want to use the EiT but just want to use AZNFS mount helper to mount the file system. Also , we cannot mix EiT and no-EiT methods for different file systems using Azure Files NFS in the same Azure VM. Mount commands may fail to mount the file systems if EiT and no-EiT methods are used in the same VM&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; Mount helper supports private-endpoint based connections for Azure Files NFS EiT.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; If SAP VM is &lt;A href="https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/linux/custom-dns-configuration-for-azure-linux-vms?tabs=SLES" target="_blank" rel="noopener"&gt;custom domain joined&lt;/A&gt;, then we can use custom DNS FQDN OR &amp;nbsp;short names for file share in the ‘/etc/fstab’ as its defined in the DNS. To verify the hostname resolution, check using ‘nslookup &amp;lt;hostname&amp;gt;’ and ‘getent host &amp;lt;hostname&amp;gt;’ commands.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;Mount the NFS File share as pacemaker cluster resource for SAP Central Services.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;In high availability setup of SAP Central Services, we may use file system as a resource in pacemaker cluster and it needs to be mounted using pacemaker cluster command. In the pacemaker commands to setup file system as cluster resource, we need to change the mount type to ‘&lt;STRONG&gt;aznfs&lt;/STRONG&gt;’ from ‘&lt;STRONG&gt;nfs&lt;/STRONG&gt;’. Also it’s recommended to use ‘&lt;STRONG&gt;_netdev&lt;/STRONG&gt;’ in the options parameter.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Following are the SAP Central Services setup scenarios in which Azure Files NFS is used as pacemaker resource agent, and we can use Azure Files NFS EiT.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-nfs-azure-files?tabs=lb-portal%2Censa2" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NW on &lt;STRONG&gt;SLES&lt;/STRONG&gt; with NFS on Azure Files&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-nfs-azure-files?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NW on &lt;STRONG&gt;RHEL&lt;/STRONG&gt; with NFS on Azure Files&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For&amp;nbsp;&lt;STRONG&gt;SUSE Linux&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;SUSE 15 SP4 (for SAP) and higher releases recognise the ‘aznfs’ as file system type in the pacemaker resource agent.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;SUSE recommends using &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-nfs-simple-mount?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;simple mount approach&lt;/A&gt; for high availability setup of SAP Central services, in which all file systems are mounted using ‘/etc/fstab’ only.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For &lt;STRONG&gt;RHEL Linux&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;RHEL 8.6 (for SAP) and higher releases will be recognising ‘aznfs’ as file system type in pacemaker resource agent.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;At the time of writing the blog, ‘aznfs’ as file system type is not yet recognised by the FileSystem resource agent(RS) on RHEL, hence this setup can’t be used at this moment.&amp;nbsp;&lt;/P&gt;
&lt;OL start="7"&gt;
&lt;LI&gt;For SAP HANA scale-out with HSR setup&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;We can use Azure Files NFS EiT for SAP HANA scale-out with HSR setup as described in the below docs.&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse?tabs=lb-portal#mount-the-shared-file-systems-azure-files-nfs" target="_blank" rel="noopener"&gt;SAP HANA scale-out with HSR and Pacemaker on &lt;STRONG&gt;SLES&lt;/STRONG&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel?tabs=lb-portal#mount-the-shared-file-systems-azure-files-nfs" target="_blank" rel="noopener"&gt;SAP HANA scale-out with HSR and Pacemaker on &lt;STRONG&gt;RHEL&lt;/STRONG&gt; &lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;We need to mount ‘/hana/shared’ File system with EiT by defining the filesystem type as ‘&lt;STRONG&gt;aznfs&lt;/STRONG&gt;’ in ‘/etc/fstab’. Also it’s recommended to use ‘&lt;STRONG&gt;_netdev&lt;/STRONG&gt;’ in the options parameter.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For&amp;nbsp;&lt;STRONG&gt;SUSE Linux&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;In the &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse?tabs=lb-portal%2Csaphanasr-scaleout#create-sap-hana-cluster-resources" target="_blank" rel="noopener"&gt;Create File system resource&lt;/A&gt; section with SAP HANA high availability &amp;nbsp;“SAPHanaSR-ScaleOut” package, in which we create a dummy file system cluster resource, which will monitor and report failures for ‘/hana/shared’ file system, we can continue to follow the steps as it is in the above document with ‘fstype=nfs4’. ‘/hana/shared’ file system will still be using EiT as defined in ‘/etc/fstab’.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For SAP HANA high availability “SAPHanaSR-angi”, there are no further actions needed to use Azure File NFS EiT.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For &lt;STRONG&gt;RHEL Linux&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;In the &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel?tabs=lb-portal#create-file-system-resources" target="_blank" rel="noopener"&gt;Create File system resource&lt;/A&gt; section, we can replace the file system type to ‘aznfs’ from ‘nfs’ in the pacemaker resource configuration for ‘/hana/shared’&amp;nbsp; file systems.&lt;/P&gt;
&lt;OL start="8"&gt;
&lt;LI&gt;Validation of in-transit data Encryption for Azure Files NFS.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Refer to &lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?branch=pr-en-us-300015&amp;amp;tabs=azure-portal%2CSUSE#step-3--verify-that-the-in-transit-data-encryption-succeeded" target="_blank" rel="noopener"&gt;Verify that the in-transit data encryption succeeded&lt;/A&gt; section to check and confirm if EiT is successfully working.&lt;/P&gt;
&lt;H3&gt;Summary&lt;/H3&gt;
&lt;P&gt;Go ahead with EiT! Simplified deployment of encryption in transit for Azure Files Premium NFS (locally redundant storage / zone-redundant storage) will strengthen the security footprint of production and non-production SAP on Azure environments.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jun 2025 16:40:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-files-nfs-encryption-in-transit-for-sap-on-azure-systems/ba-p/4426918</guid>
      <dc:creator>AnjanBanerjee</dc:creator>
      <dc:date>2025-06-30T16:40:58Z</dc:date>
    </item>
    <item>
      <title>SAP Web Dispatcher on Linux with High Availability Setup on Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-web-dispatcher-on-linux-with-high-availability-setup-on/ba-p/4413219</link>
      <description>&lt;H2&gt;1. Introduction&lt;/H2&gt;
&lt;P&gt;The SAP Web Dispatcher component is used for load balancing SAP HTTP(S) web traffic among the SAP application servers. It acts as a reverse proxy and is the entry point for HTTP(S) requests into the SAP environment, which consists of one or more SAP NetWeaver systems.&lt;/P&gt;
&lt;P&gt;This blog provides detailed guidance about setting up high availability of standalone SAP Web Dispatcher on Linux operating system on Azure. There are different options to set up high availability for SAP Web Dispatcher.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Active/Passive High Availability Setup&lt;/STRONG&gt; using a Linux pacemaker cluster (SUSE or Red Hat) with a virtual IP/hostname defined in Azure Load Balancer.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Active/Active High Availability Setup&lt;/STRONG&gt; by deploying multiple parallel instances of SAP Web Dispatcher across different Azure Virtual Machines (running either SUSE or Red Hat) and distributing traffic using Azure Load Balancer.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We will walk through the configuration steps for both high availability scenarios in this blog.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;2. Active/Passive HA Setup of SAP Web Dispatcher&lt;/H2&gt;
&lt;H3&gt;2.1. System Design&lt;/H3&gt;
&lt;P&gt;The following is the high-level &lt;A href="https://learn.microsoft.com/en-us/azure/architecture/guide/sap/sap-s4hana" target="_blank" rel="noopener"&gt;architecture diagram of an HA SAP production environment on Azure&lt;/A&gt;. The standalone SAP Web Dispatcher (WD) HA setup is highlighted in the SAP architecture design.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In this active/passive node design, the primary node of the SAP Web Dispatcher receives the users' requests and forwards (and load balances) them to the backend SAP application servers. If the primary node becomes unavailable, the Linux Pacemaker cluster fails the SAP Web Dispatcher over to the secondary node. Users connect to the SAP Web Dispatcher using the virtual hostname (FQDN) and virtual IP address defined in the Azure Load Balancer. The Azure Load Balancer health probe port is activated by the Pacemaker cluster on the primary node, so all user connections to the virtual IP/hostname are redirected by the Azure Load Balancer to the active SAP Web Dispatcher.&lt;/P&gt;
&lt;P&gt;Also, SAP Help documentation describes this HA architecture as “&lt;A href="https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/683d6a1797a34730a6e005d1e8de6f22/489a9a6b48c673e8e10000000a42189b.html?locale=en-US" target="_blank" rel="noopener"&gt;High Availability of SAP Web Dispatcher with External HA Software&lt;/A&gt;”.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The following are the advantages of the active/passive SAP WD setup.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The Linux Pacemaker cluster continuously monitors the active SAP WD node and the services running on it. In any error scenario, the active node is fenced by the Pacemaker cluster and the secondary node is made active, keeping the service available around the clock.&lt;/LI&gt;
&lt;LI&gt;Error detection and start/stop of SAP WD are fully automated. It is easier to define an application-level SLA when Pacemaker manages SAP WD. Azure provides a VM-level SLA of 99.99% if VMs are deployed across Availability Zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We need the following components to set up an HA SAP Web Dispatcher on Linux.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A pair of SAP Certified VMs on Azure with supported Linux Operating System. Cross Availability Zone deployment is recommended for higher VM level SLA.&lt;/LI&gt;
&lt;LI&gt;Azure Files (Premium) for the ‘sapmnt’ NFS share, which will be available/mounted on both SAP Web Dispatcher VMs.&lt;/LI&gt;
&lt;LI&gt;Azure Load Balancer for configuring virtual IP and hostname (in DNS) of the SAP Web Dispatcher.&lt;/LI&gt;
&lt;LI&gt;A Linux Pacemaker cluster configured across both VMs.&lt;/LI&gt;
&lt;LI&gt;Installation of SAP Web Dispatcher on both VMs with the same SID and system number. It is recommended to use the latest version of SAP Web Dispatcher.&lt;/LI&gt;
&lt;LI&gt;Configure the pacemaker resource agent for SAP Web Dispatcher application.&lt;/LI&gt;
&lt;/UL&gt;
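&lt;P&gt;As a quick sketch of the name resolution this design relies on: if a DNS server is not used, the node names and the virtual hostname can be maintained in /etc/hosts on both VMs. The virtual IP 10.50.60.45 and the hostnames are the example values used in this blog; the node IP addresses below are hypothetical placeholders.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# /etc/hosts on both nodes (node IP addresses are hypothetical placeholders)
10.50.60.41   webdisp01      # Node 1
10.50.60.42   webdisp02      # Node 2
10.50.60.45   eitwebdispha   # virtual IP/hostname defined in Azure Load Balancer&lt;/LI-CODE&gt;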
&lt;H3&gt;2.2.&amp;nbsp; &amp;nbsp;Deployment Steps&lt;/H3&gt;
&lt;P&gt;This section provides detailed steps for the HA active/passive SAP Web Dispatcher deployment for both supported Linux operating systems (SUSE and Red Hat). Please refer to &lt;A href="https://me.sap.com/notes/1928533" target="_blank" rel="noopener"&gt;SAP Note 1928533&lt;/A&gt; for SAP on Azure certified VMs, SAPS values, and supported operating system versions for the SAP environment.&lt;/P&gt;
&lt;P&gt;In the steps below, ‘&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES&lt;/STRONG&gt;&lt;/SPAN&gt;’ applies to the SLES operating system and ‘&lt;SPAN class="lia-text-color-8"&gt;&lt;STRONG&gt;For RHEL&lt;/STRONG&gt;&lt;/SPAN&gt;’ applies to the RHEL operating system. If no operating system is mentioned for a step, it applies to both.&lt;/P&gt;
&lt;P&gt;Also, the following items are prefixed with:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;[A]:&lt;/STRONG&gt; Applicable to all nodes.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;[1]:&lt;/STRONG&gt; Applicable to only node 1.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;[2]:&lt;/STRONG&gt; Applicable to only node 2.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Deploy the VMs (of the desired SKU) in the availability zones and choose the operating system image as SLES/RHEL for SAP. The following VM names are used in this blog:&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;Node1: webdisp01&lt;/LI&gt;
&lt;LI&gt;Node2: webdisp02&lt;/LI&gt;
&lt;LI&gt;Virtual Hostname: eitwebdispha&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Follow the standard SAP on Azure documentation for the base Pacemaker setup of the SAP Web Dispatcher VMs. We can use either an SBD device or the Azure fence agent for fencing in the Pacemaker cluster.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES: &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-pacemaker?tabs=msi" target="_blank" rel="noopener"&gt;Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(186, 55, 42);"&gt;For RHEL:&amp;nbsp;&lt;/STRONG&gt;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-pacemaker?tabs=msi" target="_blank" rel="noopener"&gt;Set up Pacemaker on Red Hat Enterprise Linux in Azure&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;The rest of the setup steps are derived from the SAP ASCS/ERS HA setup documents and the SUSE/RHEL blogs on SAP WD setup listed below. It's highly recommended to read the following documents.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES:&lt;/STRONG&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A style="background-color: rgb(255, 255, 255); font-style: normal; font-weight: 400;" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-nfs-azure-files?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.suse.com/c/yes-sap-web-dispatcher-high-availability-on-premise-and-cloud/?_gl=1*1pxwa0d*_gcl_au*NjM1Nzc3ODQ0LjE3NDE1OTI4NjU.*_ga*ODI1MzcxODg2LjE3NDE1OTI4NjQ.*_ga_JEVBS2XFKK*MTc0Mzk5NjAzOS4xMy4xLjE3NDQwMDI4MjEuNTkuMC4w" target="_blank" rel="noopener"&gt;SUSE Blog: SAP Web Dispatcher High Availability on Cloud with SUSE Linux.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL:&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-nfs-azure-files?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;High availability for SAP NetWeaver on VMs on RHEL with NFS on Azure Files&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://access.redhat.com/articles/6962925" target="_blank" rel="noopener"&gt;RHEL Blog: How to manage standalone SAP Web Dispatcher instances using the RHEL HA Add-On - Red Hat Customer Portal&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Deploy the Azure standard load balancer for defining the virtual IP of the SAP Web Dispatcher. In this example, the following setup is used in deployment.&lt;BR /&gt;&lt;BR /&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 134px; border-width: 1px;"&gt;&lt;colgroup&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr style="height: 27px;"&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Frontend IP&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Backend Pool&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Health Probe Port&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Load Balancing Rule&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 107px;"&gt;&lt;td style="height: 107px;"&gt;
&lt;P&gt;10.50.60.45&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Web Dispatcher)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 107px;"&gt;Node 1 &amp;amp; Node 2 VMs&lt;/td&gt;&lt;td style="height: 107px;"&gt;62320 (set probeThreshold=2)&lt;/td&gt;&lt;td style="height: 107px;"&gt;
&lt;P&gt;HA Port: Enable&lt;/P&gt;
&lt;P&gt;Floating IP: Enable&lt;/P&gt;
&lt;P&gt;Idle Timeout: 30 mins&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;BR /&gt;Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the “net.ipv4.tcp_timestamps” OS parameter to '0'. For details, see &lt;A href="https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-custom-probe-overview" target="_blank" rel="noopener"&gt;Load Balancer health probes&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;Run the following command to set this parameter; to set the value permanently, add or update the parameter in /etc/sysctl.conf.&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo sysctl net.ipv4.tcp_timestamps=0&lt;/LI-CODE&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless you perform additional configuration to allow routing to public endpoints. For details on how to achieve outbound connectivity, see&amp;nbsp;&lt;/SPAN&gt;&lt;A style="background-color: rgb(255, 255, 255); font-style: normal; font-weight: 400;" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-standard-load-balancer-outbound-connections" target="_blank" rel="noopener"&gt;Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;Configure NFS for ‘sapmnt’ and the SAP WD instance filesystem on Azure Files. Deploy &lt;/SPAN&gt;the Azure Files storage account (ZRS) and create file shares for ‘sapmnt’ and the SAP WD instance (/usr/sap/SID/Wxx). Connect them to the VNet of the SAP VMs using a private endpoint.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES: &lt;/STRONG&gt;&lt;/SPAN&gt;Refer to the&amp;nbsp;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-nfs-azure-files?tabs=lb-portal%2Censa1#deploy-azure-files-storage-account-and-nfs-shares" target="_blank" rel="noopener"&gt;Deploy an Azure Files storage account and NFS shares&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; section for detailed steps.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL: &lt;/STRONG&gt;&lt;/SPAN&gt;Refer to the &lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-nfs-azure-files?tabs=lb-portal%2Censa1#deploy-azure-files-storage-account-and-nfs-shares" target="_blank" rel="noopener"&gt;Deploy an Azure Files storage account and NFS shares&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; section for detailed steps.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Mount NFS volumes.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-11"&gt;[A] For SLES:&lt;/SPAN&gt; &lt;/STRONG&gt;&lt;SPAN class="lia-text-color-21"&gt;NFS client and other resources come pre-installed.&lt;BR /&gt;&lt;/SPAN&gt;&lt;STRONG&gt; &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;[A] For RHEL:&lt;/STRONG&gt;&lt;/SPAN&gt; Install the NFS Client and other resources.&lt;BR /&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="bash"&gt;sudo yum -y install nfs-utils resource-agents resource-agents-sap&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Mount the NFS file system on both VMs.&amp;nbsp;&lt;/SPAN&gt;Create shared directories.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo mkdir -p /sapmnt/WD1 
sudo mkdir -p /usr/sap/WD1/W00

sudo chattr +i /sapmnt/WD1 
sudo chattr +i /usr/sap/WD1/W00&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A] &lt;/STRONG&gt;Mount the file system that will not be controlled by the Pacemaker cluster.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;echo "sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-sapmnt /sapmnt/WD1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 2" &amp;gt;&amp;gt; /etc/fstab

mount -a&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Prepare for SAP Web Dispatcher HA Installation.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;[A]&lt;SPAN class="lia-text-color-11"&gt; For SUSE:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; Install the latest version of the SUSE connector.&lt;BR /&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="bash"&gt;sudo zypper install sap-suse-cluster-connector&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Set up host name resolution (including virtual hostname).&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;We can either use a DNS server or modify /etc/hosts on all nodes.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Configure the SWAP file. Edit ‘/etc/waagent.conf’ file and change the following parameters.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;ResourceDisk.Format=y 
ResourceDisk.EnableSwap=y 
ResourceDisk.SwapSizeMB=2000&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Restart the agent to activate the change.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo service waagent restart&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A] &lt;SPAN class="lia-text-color-13"&gt;For RHEL: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN class="lia-text-color-21"&gt;Based on RHEL OS version follow SAP Notes.&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;SAP Note 2002167 for RHEL 7.x&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;SAP Note 2772999 for RHEL 8.x&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;SAP Note 3108316 for RHEL 9.x&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Create the SAP WD instance Filesystem, virtual IP, and probe port resources for SAP Web Dispatcher.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;[1] &lt;SPAN class="lia-text-color-11"&gt;For SUSE:&lt;/SPAN&gt;&lt;/STRONG&gt;&amp;nbsp;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;# Keep node 2 in standby 
sudo crm node standby webdisp02 

# Configure file system, virtual IP, and probe resource 
sudo crm configure primitive fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-su-usrsap' directory='/usr/sap/WD1/W00' fstype='nfs' options='noresvport,vers=4,minorversion=1,sec=sys' \ 
op start timeout=60s interval=0 \ 
op stop timeout=60s interval=0 \ 
op monitor interval=20s timeout=40s 

sudo crm configure primitive vip_WD1_W00 IPaddr2 \ 
params ip=10.50.60.45 \ 
op monitor interval=10 timeout=20 

sudo crm configure primitive nc_WD1_W00 azure-lb port=62320 \ 
op monitor timeout=20s interval=10 

sudo crm configure group g-WD1_W00 fs_WD1_W00 nc_WD1_W00 vip_WD1_W00&lt;/LI-CODE&gt;
&lt;P&gt;Make sure that all the resources in the cluster are in started status and running on Node 1. Check the status using the command ‘&lt;EM&gt;crm status’&lt;/EM&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;[1] &lt;SPAN class="lia-text-color-13"&gt;For RHEL:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Keep node 2 in standby 
sudo pcs node standby webdisp02 

# Create file system, virtual IP, probe resource 
sudo pcs resource create fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-rh-usrsap' \ 
directory='/usr/sap/WD1/W00' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \ 
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \ 
--group g-WD1_W00 

sudo pcs resource create vip_WD1_W00 IPaddr2 \ 
ip=10.50.60.45 \ 
--group g-WD1_W00 

sudo pcs resource create nc_WD1_W00 azure-lb port=62320 \ 
--group g-WD1_W00&lt;/LI-CODE&gt;
&lt;P&gt;Make sure that all the resources in the cluster are in started status and running on Node 1. Check the status using the command ‘&lt;EM&gt;pcs status’&lt;/EM&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[1]&lt;/STRONG&gt; Install SAP Web Dispatcher on the first Node.
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL:&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;Allow access to SWPM. This rule is not permanent. If you reboot the machine, you should run the command again.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo firewall-cmd --zone=public --add-port=4237/tcp&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Run the SWPM.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;./sapinst SAPINST_USE_HOSTNAME=&amp;lt;virtual hostname&amp;gt;&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Enter the virtual hostname and Instance number.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Provide the S/4 HANA message server details for backend connections.&lt;/LI&gt;
&lt;LI&gt;Continue with SAP Web Dispatcher installation.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Check the status of SAP WD.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[1] &lt;/STRONG&gt;Stop the SAP WD and disable the systemd service. This step is only required&amp;nbsp;if the SAP startup framework is managed by systemd, as per &lt;A href="https://me.sap.com/notes/3115048" target="_blank" rel="noopener"&gt;SAP Note&amp;nbsp;3115048&lt;/A&gt;.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;# login as sidadm user 
sapcontrol -nr 00 -function Stop 

# login as root user 
systemctl disable SAPWD1_00.service&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;[1]&lt;/STRONG&gt; Move the filesystem, virtual IP, and probe port resources for SAP Web Dispatcher to the second node.&lt;/SPAN&gt;&lt;EM&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/EM&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES:&lt;/STRONG&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="bash"&gt;sudo crm node online webdisp02 
sudo crm node standby webdisp01&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&amp;nbsp;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;sudo pcs node unstandby webdisp02 
sudo pcs node standby webdisp01&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;NOTE: Before proceeding to the next steps, check that resources successfully moved to Node 2.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;[2] &lt;/STRONG&gt;Setup SAP Web Dispatcher on the second Node.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;To set up SAP WD on Node 2, copy the following files and directories from Node 1 to Node 2, and perform the other tasks on Node 2 as described below.&lt;/LI&gt;
&lt;LI&gt;Note: Ensure that the permissions, owners, and group names of all copied items are the same on Node 2 as on Node 1. Before copying, save a backup of the existing files on Node 2.&lt;/LI&gt;
&lt;LI&gt;Files to copy&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;# For SLES and RHEL 
/usr/sap/sapservices 
/etc/systemd/system/SAPWD1_00.service 
/etc/polkit-1/rules.d/10-SAPWD1-00.rules 
/etc/passwd 
/etc/shadow 
/etc/group 

# For RHEL 
/etc/gshadow&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Folders to copy&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;# After copying, rename the hostname in the environment file names. 
/home/wd1adm 
/home/sapadm 

/usr/sap/ccms 
/usr/sap/tmp&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Create the 'SYS' directory in the /usr/sap/WD1 folder
&lt;UL&gt;
&lt;LI&gt;Create all subdirectories and soft links as available in Node 1.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[2] &lt;/STRONG&gt;Install the saphostagent&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;Extract the SAPHOSTAGENT.SAR file&lt;/LI&gt;
&lt;LI&gt;Run the command to install it&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;./saphostexec -install&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Check if SAP hostagent is running successfully&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;/usr/sap/hostctrl/exe/saphostexec -status&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[2]&lt;/STRONG&gt; Start SAP WD on node 2 and check the status&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sapcontrol -nr 00 -function StartService WD1 
sapcontrol -nr 00 -function Start 
sapcontrol -nr 00 -function GetProcessStatus&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[1] &lt;SPAN class="lia-text-color-11"&gt;For SLES:&lt;/SPAN&gt;&lt;/STRONG&gt; Update the instance profile&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;vi /sapmnt/WD1/profile/WD1_W00_wd1webdispha 

# Add the following lines. 
service/halib = $(DIR_EXECUTABLE)/saphascriptco.so 
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A] &lt;/STRONG&gt;Configure SAP users after the installation&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo usermod -aG haclient wd1adm&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Configure the keepalive parameter; to set the value permanently, add the parameter in /etc/sysctl.conf.&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo sysctl net.ipv4.tcp_keepalive_time=300&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Create the SAP Web Dispatcher resource in the cluster.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo crm configure property maintenance-mode="true" 

sudo crm configure primitive rsc_sap_WD1_W00 SAPInstance \ 
op monitor interval=11 timeout=60 on-fail=restart \ 
params InstanceName=WD1_W00_wd1webdispha \ 
START_PROFILE="/usr/sap/WD1/SYS/profile/WD1_W00_wd1webdispha" \ 
AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" 

sudo crm configure modgroup g-WD1_W00 add rsc_sap_WD1_W00 

sudo crm node online webdisp01 

sudo crm configure property maintenance-mode="false"&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo pcs property set maintenance-mode=true 

sudo pcs resource create rsc_sap_WD1_W00 SAPInstance \ 
InstanceName=WD1_W00_wd1webdispha START_PROFILE="/sapmnt/WD1/profile/WD1_W00_wd1webdispha" \ 
AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" \ 
op monitor interval=20 on-fail=restart timeout=60 \ 
--group g-WD1_W00

sudo pcs node unstandby webdisp01 

sudo pcs property set maintenance-mode=false&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;[A] &lt;/SPAN&gt;&lt;SPAN class="lia-text-color-13"&gt;For RHEL:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; Add firewall rules for SAP Web Dispatcher and Azure load balancer health probe ports on both nodes.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="bash"&gt;sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp --permanent 
sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Verify that the SAP Web Dispatcher cluster is running successfully.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Check the "Insights" blade of the Azure Load Balancer in the portal. It shows that connections are redirected to one of the nodes.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Check that the backend S/4 HANA connection is working using the SAP Web Dispatcher Administration link.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Run the sapwebdisp configuration check.&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sapwebdisp pf=/sapmnt/WD1/profile/WD1_W00_wd1webdispha -checkconfig&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Test the cluster setup&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES&lt;/STRONG&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse?tabs=lb-portal%2Censa1#test-the-cluster-setup" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NetWeaver on SLES (for ASCS/ERS Cluster) &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;We can run the following test cases (from the above link), which are applicable to the SAP WD component.
&lt;UL&gt;
&lt;LI&gt;Test HAGetFailoverConfig and HACheckFailoverConfig&lt;/LI&gt;
&lt;LI&gt;Manually migrate the SAP Web Dispatcher resource&lt;/LI&gt;
&lt;LI&gt;Test HAFailoverToNode&lt;/LI&gt;
&lt;LI&gt;Simulate node crash&lt;/LI&gt;
&lt;LI&gt;Blocking network communication&lt;/LI&gt;
&lt;LI&gt;Test manual restart of SAP WD instance&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL&lt;/STRONG&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel?tabs=lb-portal%2Censa1#test-the-cluster-setup" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NetWeaver on RHEL (for ASCS/ERS Cluster) &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;We can run the following test cases (from the above link), which are applicable to the SAP WD component.
&lt;UL&gt;
&lt;LI&gt;Manually migrate the SAP Web Dispatcher resource&lt;/LI&gt;
&lt;LI&gt;Simulate a node crash&lt;/LI&gt;
&lt;LI&gt;Blocking network communication&lt;/LI&gt;
&lt;LI&gt;Kill the SAP WD process&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
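&lt;P&gt;As a wrap-up of the active/passive deployment above, the finished cluster can be sanity checked with the commands already used in the steps. This is a minimal sketch; SID WD1, instance number 00, and the profile path are the example values from this blog.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Cluster state: all g-WD1_W00 resources should be Started on one node
sudo crm status   # SLES
sudo pcs status   # RHEL

# As wd1adm on the active node: the sapwebdisp process should be GREEN
sapcontrol -nr 00 -function GetProcessStatus

# Validate the Web Dispatcher configuration against its profile
sapwebdisp pf=/sapmnt/WD1/profile/WD1_W00_wd1webdispha -checkconfig&lt;/LI-CODE&gt;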
&lt;H2&gt;3. Active/Active HA Setup of SAP Web Dispatcher&lt;/H2&gt;
&lt;H3&gt;3.1. System Design&lt;/H3&gt;
&lt;P&gt;In this active/active setup of SAP Web Dispatcher (WD), we deploy and run parallel standalone WD instances on individual VMs in a share-nothing design, each with a different SID. To connect to the SAP Web Dispatcher, users use a single virtual hostname (FQDN)/IP defined as the front-end IP of the Azure Load Balancer. The virtual IP to hostname/FQDN mapping needs to be maintained in AD/DNS. Incoming traffic is distributed to either WD by the Azure internal load balancer. No operating system cluster setup is required in this scenario. This architecture can be deployed on either Linux or Windows operating systems.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In the ILB configuration,&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode?tabs=azure-portal#configure-distribution-mode" target="_blank" rel="noopener"&gt;session persistence settings&lt;/A&gt; ensure that a user’s successive requests are always routed from the Azure Load Balancer to the&amp;nbsp;&lt;U&gt;same&lt;/U&gt; WD as long as it is active and ready to receive connections.&lt;/P&gt;
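&lt;P&gt;As a hedged sketch, the distribution mode can also be set from the Azure CLI on the load-balancing rule; the resource group, load balancer, and rule names below are hypothetical placeholders (see the linked distribution-mode documentation for the authoritative steps):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Hypothetical resource names; adjust to your environment
az network lb rule update \
  --resource-group my-sap-rg \
  --lb-name webdisp-ilb \
  --name webdisp-rule \
  --load-distribution SourceIP   # keep a client's successive requests on the same WD&lt;/LI-CODE&gt;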
&lt;img /&gt;
&lt;P&gt;Also, SAP Help documentation describes this HA architecture as “&lt;A href="https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/683d6a1797a34730a6e005d1e8de6f22/489a9a6b48c673e8e10000000a42189b.html?locale=en-US" target="_blank" rel="noopener"&gt;High availability with several parallel Web Dispatchers&lt;/A&gt;”.&lt;/P&gt;
&lt;P&gt;The following are the advantages of the active-active SAP WD setup.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Simpler design: no need to set up an operating system cluster.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Two WD instances are available to handle the requests and distribute the workload.&lt;/LI&gt;
&lt;LI&gt;If one of the nodes fails, the load balancer forwards requests to the other node and stops sending requests to the failed node, keeping the SAP WD setup highly available.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We need the following components to set up an active/active SAP Web Dispatcher on Linux.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A pair of SAP Certified VMs on Azure with supported Linux Operating System. Cross Availability Zone deployment is recommended for higher VM level SLA.&lt;/LI&gt;
&lt;LI&gt;An Azure managed disk of the required size on each VM to create file systems for ‘sapmnt’ and ‘/usr/sap’.&lt;/LI&gt;
&lt;LI&gt;Azure Load Balancer for configuring virtual IP and hostname (in DNS) of the SAP Web Dispatcher.&lt;/LI&gt;
&lt;LI&gt;Installation of SAP Web Dispatcher on both the VMs with different SID. It is recommended to use the latest version of SAP Web Dispatcher.&lt;/LI&gt;
&lt;/UL&gt;
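&lt;P&gt;Creating the local file systems on the managed data disk (required by the deployment steps that follow) can be sketched as below, assuming one data disk carved up with LVM. The device name /dev/sdc, the volume sizes, and the SID WD2 are hypothetical placeholders:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Hypothetical device name, sizes, and SID; adjust to your environment
sudo pvcreate /dev/sdc
sudo vgcreate vg_sap /dev/sdc
sudo lvcreate -L 50G -n lv_usrsap vg_sap
sudo lvcreate -L 10G -n lv_sapmnt vg_sap
sudo mkfs.xfs /dev/vg_sap/lv_usrsap
sudo mkfs.xfs /dev/vg_sap/lv_sapmnt
sudo mkdir -p /usr/sap /sapmnt/WD2
echo "/dev/vg_sap/lv_usrsap /usr/sap    xfs defaults 0 2" | sudo tee -a /etc/fstab
echo "/dev/vg_sap/lv_sapmnt /sapmnt/WD2 xfs defaults 0 2" | sudo tee -a /etc/fstab
sudo mount -a&lt;/LI-CODE&gt;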
&lt;H3&gt;3.2. Deployment Steps&lt;/H3&gt;
&lt;P&gt;This section provides detailed steps for the HA active/active SAP Web Dispatcher deployment for both supported Linux operating systems (SUSE and Red Hat). Please refer to &lt;A href="https://me.sap.com/notes/1928533" target="_blank" rel="noopener"&gt;SAP Note 1928533&lt;/A&gt; for SAP on Azure certified VMs, SAPS values, and supported operating system versions for the SAP environment.&lt;/P&gt;
&lt;H4&gt;3.2.1. For &lt;SPAN class="lia-text-color-11"&gt;SUSE&lt;/SPAN&gt; and &lt;SPAN class="lia-text-color-13"&gt;RHEL &lt;/SPAN&gt;Linux&lt;/H4&gt;
&lt;OL&gt;
&lt;LI&gt;Deploy the VMs (of the desired SKU) across the availability zones and choose a SUSE/RHEL for SAP operating system image. Add a managed data disk to each VM and create the ‘/usr/sap’ and ‘/sapmnt/&amp;lt;SID&amp;gt;’ filesystems on it.&lt;/LI&gt;
&lt;LI&gt;Install the SAP Web Dispatcher using SAP SWPM on both VMs. The two SAP WD instances are completely independent of each other and should have separate SIDs.&lt;/LI&gt;
&lt;LI&gt;Perform the &lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://help.sap.com/doc/saphelp_nw73/7.3.16/en-US/48/997375ec0973e9e10000000a42189b/content.htm?no_cache=true" target="_blank" rel="noopener"&gt;basic configuration check&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; for both SAP Web Dispatchers using &lt;/SPAN&gt;&lt;EM style="color: rgb(30, 30, 30);"&gt;“sapwebdisp pf=&amp;lt;profile&amp;gt; -checkconfig”. &lt;/EM&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Also check that the SAP WD Admin URL works for both WDs.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Deploy an Azure Standard Load Balancer to define the virtual IP of the SAP Web Dispatcher. The following setup is used as a reference in this deployment.&lt;BR /&gt;&lt;BR /&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 54px; border-width: 1px;"&gt;&lt;colgroup&gt;&lt;col style="width: 25%" /&gt;&lt;col style="width: 25%" /&gt;&lt;col style="width: 25%" /&gt;&lt;col style="width: 25%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr style="height: 27px;"&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Front-end IP&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Backend Pool&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Health Probe Port&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Load Balancing Rule&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 27px;"&gt;&lt;td style="height: 27px;"&gt;
&lt;P&gt;10.50.60.99&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Web Dispatcher)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 27px;"&gt;Node1 &amp;amp; Node2 VM&lt;/td&gt;&lt;td style="height: 27px;"&gt;
&lt;P&gt;Protocol: HTTPS&lt;/P&gt;
&lt;P&gt;Port: 44300 (WD https port)&lt;/P&gt;
&lt;P&gt;Path: /sap/public/icman/ping&lt;/P&gt;
&lt;P&gt;Interval: 5 seconds&lt;/P&gt;
&lt;P&gt;(set probeThreshold=2 using Azure CLI)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 27px;"&gt;
&lt;P&gt;Port &amp;amp; Backend Port: 44300&lt;/P&gt;
&lt;P&gt;Floating IP: Disable,&lt;/P&gt;
&lt;P&gt;TCP Reset: Disable,&lt;/P&gt;
&lt;P&gt;Idle Timeout: Max (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;BR /&gt;The icman/ping service verifies that the SAP Web Dispatcher is successfully connected to the backend SAP S/4HANA or SAP ERP application servers. This check is also part of the &lt;A href="https://help.sap.com/doc/saphelp_nw73/7.3.16/en-US/48/997375ec0973e9e10000000a42189b/content.htm?no_cache=true" target="_blank" rel="noopener"&gt;basic configuration check&lt;/A&gt; of SAP Web Dispatcher using &lt;EM&gt;“sapwebdisp pf=&amp;lt;profile&amp;gt; -checkconfig”&lt;/EM&gt;.&lt;BR /&gt;With an HTTP(S)-based health probe, the load balancer directs connections to an SAP WD node only while the connection between that SAP WD and the S/4HANA or ERP application servers is working.&lt;BR /&gt;If the backend environment is a Java-based SAP system, ‘icman/ping’ is not available and an HTTP(S) path can’t be used in the health probe. In that case, use a TCP-based health probe (protocol value ‘tcp’) with an SAP WD TCP port (such as port 8000) in the health probe configuration.&lt;BR /&gt;In this setup, HTTPS port 44300 is used as the port and backend-port value because it is the only port used by the incoming/source URL. If multiple ports need to be allowed in the incoming URL, enable ‘HA Ports’ in the load-balancing rule instead of specifying individual ports.&lt;BR /&gt;Note: As per&amp;nbsp;&lt;A href="https://me.sap.com/notes/2941769" target="_blank" rel="noopener"&gt;SAP Note 2941769&lt;/A&gt;, the SAP Web Dispatcher parameter &lt;EM&gt;wdisp/filter_internal_uris=FALSE&lt;/EM&gt; must be set. Also verify that the icman/ping URL works for both SAP Web Dispatchers using their actual hostnames.&lt;BR /&gt;Define the front-end IP (virtual IP) to hostname mapping in DNS or the /etc/hosts file.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Check that the Azure Load Balancer is routing traffic to both WDs. In the ‘Insights’ section of the Azure Load Balancer, the connection health to the VMs should be green.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Validate that the SAP Web Dispatcher URL is accessible using the virtual hostname.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Perform high availability tests for the SAP WD.&lt;/LI&gt;
&lt;LI&gt;Stop the first SAP WD and verify that WD connections keep working.&lt;/LI&gt;
&lt;LI&gt;Start the first WD again, stop the second WD, and verify that WD connections keep working.&lt;/LI&gt;
&lt;LI&gt;Simulate a node crash on each of the WD VMs and verify that WD connections keep working.&lt;/LI&gt;
&lt;/OL&gt;
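&lt;P&gt;As an illustration, the load balancer objects from the reference table above could be created with Azure CLI. This is a minimal sketch, not a definitive implementation: the resource group, load balancer, frontend, pool, probe and rule names are placeholder assumptions, and parameter availability (for example the probe ‘--threshold’ option) may vary with the Azure CLI version.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Health probe: HTTPS GET against icman/ping on the WD HTTPS port
# (interval 5 seconds, probeThreshold=2)
az network lb probe create \
  --resource-group rg-sapwd --lb-name lb-sapwd --name probe-wd \
  --protocol Https --port 44300 --path /sap/public/icman/ping \
  --interval 5 --threshold 2

# Load balancing rule: frontend/backend port 44300, floating IP disabled,
# idle timeout at the 30-minute maximum
az network lb rule create \
  --resource-group rg-sapwd --lb-name lb-sapwd --name rule-wd-https \
  --protocol Tcp --frontend-port 44300 --backend-port 44300 \
  --frontend-ip-name fe-sapwd --backend-pool-name pool-sapwd \
  --probe-name probe-wd --floating-ip false --idle-timeout 30&lt;/LI-CODE&gt;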
&lt;H3&gt;3.3. SAP Web Dispatcher (active/active) for Multiple Systems&lt;/H3&gt;
&lt;P&gt;We can use one SAP WD (active/active) pair to connect to multiple backend SAP systems rather than setting up a separate SAP WD pair for each backend SAP environment.&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Based on the unique URL of the incoming request (a different virtual hostname/FQDN and/or port of the SAP WD), the user request is directed to either SAP WD, which then determines the backend system and redirects and load-balances the request.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;The following SAP documents describe the design and the SAP-specific configuration steps for this scenario.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://help.sap.com/docs/ABAP_PLATFORM_BW4HANA/683d6a1797a34730a6e005d1e8de6f22/b0ebfa88e9164d26bdf1d21a7ef6fc25.html" target="_blank" rel="noopener"&gt;SAP Web Dispatcher for Multiple Systems&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://help.sap.com/docs/ABAP_PLATFORM_BW4HANA/683d6a1797a34730a6e005d1e8de6f22/c5ec466f5544409982c7d3ca29ce1ad3.html" target="_blank" rel="noopener"&gt;One SAP Web Dispatcher, Two Systems: Configuration Example&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In an Azure environment, the SAP Web Dispatcher architecture looks as follows.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;We can deploy this setup by defining an Azure Standard Load Balancer with multiple front-end IPs attached to one backend pool of SAP WD VMs, and by configuring a health probe and load-balancing rules to associate them.&lt;/P&gt;
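&lt;P&gt;As a sketch under the same naming assumptions (all resource, VNet and subnet names are placeholders), the second front-end IP and its floating-IP-enabled rule could be added with Azure CLI; a matching rule with ‘--floating-ip true’ is also needed for the first front-end IP, and exact parameter names may vary with the Azure CLI version.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Additional frontend IP (10.50.60.101) on the existing Standard Load Balancer
az network lb frontend-ip create \
  --resource-group rg-sapwd --lb-name lb-sapwd --name fe-sapwd-e60 \
  --private-ip-address 10.50.60.101 \
  --vnet-name vnet-sap --subnet subnet-sap

# Rule for the second frontend, floating IP enabled, sharing the backend
# pool and the TCP health probe (port 8000) with the first rule
az network lb rule create \
  --resource-group rg-sapwd --lb-name lb-sapwd --name rule-wd-e60 \
  --protocol Tcp --frontend-port 44300 --backend-port 44300 \
  --frontend-ip-name fe-sapwd-e60 --backend-pool-name pool-sapwd \
  --probe-name probe-wd-tcp --floating-ip true --idle-timeout 30&lt;/LI-CODE&gt;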
&lt;P&gt;When configuring Azure Load Balancer with multiple frontend IPs pointing to the same backend pool/port, floating IP must be enabled for each load balancing rule. If floating IP is not enabled on the first rule, Azure won’t allow the configuration of additional rules with different frontend IPs on the same backend port. Refer to the article &lt;A href="https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-multivip-overview" target="_blank" rel="noopener"&gt;Multiple frontends - Azure Load Balancer&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;With floating IPs enabled on multiple load balancing rules, the frontend IPs must be added to the network interface (e.g., eth0) on both SAP Web Dispatcher VMs.&lt;/P&gt;
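&lt;P&gt;For a quick validation, the front-end IPs can be added to eth0 with the ‘ip’ command; addresses added this way do not survive a reboot, so the distribution-specific persistence steps referenced in the deployment steps below are still required. A sketch, assuming the /26 subnet of the reference setup:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Add both virtual IPs as secondary addresses on eth0 (run as root on each
# SAP WD VM); this alone is NOT persistent across reboots
ip addr add 10.50.60.99/26 dev eth0
ip addr add 10.50.60.101/26 dev eth0

# Verify the secondary addresses are present
ip addr show dev eth0

# On RHEL with NetworkManager, the addresses can be made persistent, e.g.:
#   nmcli connection modify eth0 +ipv4.addresses "10.50.60.99/26,10.50.60.101/26"
#   nmcli connection up eth0&lt;/LI-CODE&gt;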
&lt;H4&gt;3.3.1. Deployment Steps&lt;/H4&gt;
&lt;OL&gt;
&lt;LI&gt;Deploy the VMs (of the desired SKU) across the availability zones and choose a SUSE/RHEL for SAP operating system image. Add a managed data disk to each VM and create the ‘/usr/sap’ and ‘/sapmnt/&amp;lt;SID&amp;gt;’ filesystems on it.&lt;/LI&gt;
&lt;LI&gt;Install the SAP Web Dispatcher using SAP SWPM on both VMs. The two SAP WD instances are completely independent of each other and should have separate SIDs.&lt;/LI&gt;
&lt;LI&gt;Deploy an Azure Standard Load Balancer with the configuration below.&lt;BR /&gt;&lt;BR /&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 333px; border-width: 1px;"&gt;&lt;colgroup&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 24.8826%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr style="height: 27px;"&gt;&lt;td style="height: 27px;"&gt;Front-end IP&lt;/td&gt;&lt;td style="height: 27px;"&gt;Backend Pool&lt;/td&gt;&lt;td style="height: 27px;"&gt;Health Probe Port&lt;/td&gt;&lt;td style="height: 27px;"&gt;Load Balancing Rule&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 153px;"&gt;&lt;td style="height: 153px;"&gt;
&lt;P&gt;10.50.60.99&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID &lt;STRONG&gt;E10&lt;/STRONG&gt;)&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2" style="height: 306px;"&gt;Node1 &amp;amp; Node2 VMs&lt;/td&gt;&lt;td rowspan="2" style="height: 306px;"&gt;
&lt;P&gt;Protocol: TCP&lt;/P&gt;
&lt;P&gt;Port: 8000 (WD tcp port)&lt;/P&gt;
&lt;P&gt;Interval: 5 seconds&lt;/P&gt;
&lt;P&gt;(set probeThreshold=2 using Azure CLI)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 153px;"&gt;
&lt;P&gt;Protocol: TCP&lt;/P&gt;
&lt;P&gt;Port &amp;amp; Backend Port: 44300&lt;/P&gt;
&lt;P&gt;Floating IP: Enable,&lt;/P&gt;
&lt;P&gt;TCP Reset: Disable,&lt;/P&gt;
&lt;P&gt;Idle Timeout: Max (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 153px;"&gt;&lt;td style="height: 153px;"&gt;
&lt;P&gt;10.50.60.101&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Web Dispatcher for redirection to S/4 SID or Fiori &lt;STRONG&gt;E60&lt;/STRONG&gt;)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 153px;"&gt;
&lt;P&gt;Protocol: TCP&lt;/P&gt;
&lt;P&gt;Port &amp;amp; Backend Port: 44300&lt;/P&gt;
&lt;P&gt;Floating IP: Enable,&lt;/P&gt;
&lt;P&gt;TCP Reset: Disable,&lt;/P&gt;
&lt;P&gt;Idle Timeout: Max (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
As described above, we define 2 front-end IPs, 2 load-balancing rules, 1 back-end pool, and 1 health probe.&amp;nbsp;&lt;BR /&gt;In this setup, HTTPS port 44300 is used as the port and backend-port value because it is the only port used by the incoming/source URL. If multiple ports need to be allowed in the incoming URL, enable ‘HA Ports’ in the load-balancing rule instead of specifying individual ports.&lt;BR /&gt;Define the front-end IP (virtual IP) to hostname mapping in DNS or the /etc/hosts file.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Add both virtual IPs to the network interface of the SAP WD VMs. Make sure the additional IPs are added persistently and do not disappear after a VM reboot.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-11"&gt;For SLES&lt;/SPAN&gt;&lt;/STRONG&gt;, refer to the “alternative workaround” section in &lt;A href="https://www.suse.com/support/kb/doc/?id=000021188" target="_blank" rel="noopener"&gt;Automatic Addition of Secondary IP Addresses in Azure&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL&lt;/STRONG&gt;&lt;/SPAN&gt;, refer to the solution using the “nmcli” command in &lt;A href="https://learn.redhat.com/t5/Platform-Linux/How-to-add-multiple-IP-range-in-RHEL9/m-p/38413#M2210" target="_blank" rel="noopener"&gt;How to add multiple IP range in RHEL9&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Displaying the "ip addr show" for SAP WD VM1:&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;&amp;gt;&amp;gt;ip addr show
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 60:45:bd:73:bd:14 brd ff:ff:ff:ff:ff:ff
    inet 10.50.60.87/26 brd 10.50.60.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6245:bdff:fe73:bd14/64 scope link
       valid_lft forever preferred_lft forever&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Displaying the "ip addr show" for SAP WD VM2:&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;&amp;gt;&amp;gt; ip addr show
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 60:45:bd:73:b1:92 brd ff:ff:ff:ff:ff:ff
    inet 10.50.60.93/26 brd 10.50.60.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6245:bdff:fe73:b192/64 scope link
       valid_lft forever preferred_lft forever&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Update the instance profile of both SAP WDs.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;#-----------------------------------------------------------------------
# Back-end system configuration
#-----------------------------------------------------------------------
wdisp/system_0 = SID=E10, MSHOST=e10ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.99:*
wdisp/system_1 = SID=E60, MSHOST=e60ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.101:*&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;Stop and start the SAP WD on VM1 and VM2.&lt;/LI&gt;
&lt;LI&gt;Note: With the above SRCSRV parameter values, only incoming requests via “.99 (or its hostname)” for E10 or “.101 (or its hostname)” for E60 are sent to the SAP backend environment. If requests using the SAP WD’s actual IP or hostname should also reach the SAP backend systems, add those IPs or hostnames (separated by semicolons) to the value of the SRCSRV parameter.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Perform the basic configuration check for both SAP Web Dispatchers using &lt;EM&gt;“sapwebdisp pf=&amp;lt;profile&amp;gt; -checkconfig”&lt;/EM&gt;. Also check that the SAP WD Admin URL works for both WDs.&lt;/LI&gt;
&lt;LI&gt;In the Azure Portal, in the ‘Insights’ section of the Azure Load Balancer, we can see that the connection status to the SAP WD VMs is healthy.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Thu, 05 Jun 2025 03:54:07 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-web-dispatcher-on-linux-with-high-availability-setup-on/ba-p/4413219</guid>
      <dc:creator>AnjanBanerjee</dc:creator>
      <dc:date>2025-06-05T03:54:07Z</dc:date>
    </item>
    <item>
      <title>SAP on Azure Product Announcements Summary – SAP Sapphire 2025</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-sapphire-2025/ba-p/4415281</link>
      <description>&lt;H3&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Today at Sapphire, we made an array of exciting &lt;A href="https://aka.ms/sapphire25blog" target="_blank" rel="noopener"&gt;announcements&lt;/A&gt; that strengthen the Microsoft-SAP partnership. I'd like to share additional details that complement these announcements as well as give updates on further product innovation. With over three decades of close collaboration and co-innovation with SAP, we continue to deliver RISE with SAP on Azure and integrations with SAP S/4HANA Public Cloud, allowing customers to innovate with services from both SAP BTP and Microsoft. Our new integrations enhance security through multi-layer cloud protection for SAP and non-SAP workloads, while our AI and Copilot platform provides unified analytics to improve decision-making for customers.&lt;/P&gt;
&lt;P&gt;Samsung C&amp;amp;T's Engineering &amp;amp; Construction Group is a leader in both the domestic and international construction industries. It recently embarked on an ERP cloud transformation with RISE with SAP on Azure, enhancing its existing ERP system, which is optimized for the local environment, to support &lt;A href="https://www.microsoft.com/en/customers/story/23265-samsung-c-and-t-power-bi" target="_blank" rel="noopener"&gt;the global business expansion by transitioning to RISE with SAP on Azure&lt;/A&gt;.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM class="lia-align-left"&gt;“Samsung C&amp;amp;T’s successful transition to RISE with SAP on Azure serves as a best practice for other Samsung Group affiliates considering cloud-based ERP adoption. It also demonstrates that even highly localized operations can be integrated into a cloud-based environment that supports global standards.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM class="lia-align-left"&gt;Aidan Nam, Former Vice President, Corporate System Team, Samsung C&amp;amp;T&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;SAP on Azure also offers AI, Data, and Security solutions that enhance customers' investments and help unlock valuable information stored within ERP systems. When Danfoss, a global leader in energy-efficient solutions, began searching for new security tools for business-critical SAP infrastructure, it quickly &lt;A href="https://www.microsoft.com/en/customers/story/22786-danfoss-microsoft-sentinel" target="_blank" rel="noopener"&gt;leveraged Microsoft Sentinel solution for SAP applications&lt;/A&gt;, to find potential malicious activity and deploy multilayered protection around its expanding core infrastructure thereby achieving scalable security visibility.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;“With Microsoft Sentinel and the Microsoft Sentinel solution for SAP applications, we’ve centralized our security logs and gained a single pane of glass with which we can monitor our SAP systems,”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Kevin Cai, IT Specialist in the Security Operations Center at Danfoss&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;We are pleased to announce additional SAP on Azure product updates and details to further help customers innovate on the most trusted cloud for SAP.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Simplified onboarding of SAP BTP estate with the &lt;STRONG&gt;new agentless data connector&lt;/STRONG&gt; for Microsoft Sentinel Solution for SAP&lt;/LI&gt;
&lt;LI&gt;Microsoft Defender for Endpoint for SAP applications is now &lt;STRONG&gt;fully SAP HANA aware &lt;/STRONG&gt;offering unparalleled &lt;STRONG&gt;protection for SAP S/4HANA&lt;/STRONG&gt; environments.&lt;/LI&gt;
&lt;LI&gt;Public Preview of &lt;STRONG&gt;SAP OData as a knowledge source&lt;/STRONG&gt; making it easy to add content from SAP systems to Copilot Studio.&lt;/LI&gt;
&lt;LI&gt;The new storage and memory optimized &lt;STRONG&gt;Medium Memory Mbv3 VM Series&lt;/STRONG&gt; (Mbsv3 and Mbdsv3) is now&lt;STRONG&gt; SAP certified&lt;/STRONG&gt;, delivering compute capabilities with IOPS performance of up to 650K.&lt;/LI&gt;
&lt;LI&gt;The&amp;nbsp;&lt;STRONG&gt;Mv3 Very High Memory series&lt;/STRONG&gt; now features an expanded range of SAP-certified VM sizes, spanning from 24TB to 32TB of memory and scaling up to 1,792 vCPUs.&lt;/LI&gt;
&lt;LI&gt;General Availability of &lt;STRONG&gt;SAP ASE (Sybase) database backup &lt;/STRONG&gt;support on Azure Backup.&lt;/LI&gt;
&lt;LI&gt;SAP Deployment Automation Framework now supports validation of SAP deployments on Azure with public preview of&amp;nbsp;&lt;STRONG&gt;SAP Testing Automation Framework (STAF),&lt;/STRONG&gt; automating high availability testing process to ensure SAP systems reliability and availability.&lt;/LI&gt;
&lt;LI&gt;The&amp;nbsp;&lt;STRONG&gt;Inventory Checks&lt;/STRONG&gt; for SAP Workbook in Azure Center for SAP Tools now comes with &lt;STRONG&gt;New Dashboards&lt;/STRONG&gt; for Enhanced Visibility.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Let's dive into the summary of product updates and services.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Extend and Innovate &lt;/STRONG&gt;&lt;/H3&gt;
&lt;H5&gt;&lt;STRONG&gt;Microsoft Sentinel Solution for SAP&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Business applications pose a unique security challenge with highly sensitive information that can make them prime targets for attacks. Attackers can compromise newly discovered unprotected SAP systems within three hours. Microsoft offers best in class security solutions support for SAP business applications with Microsoft Sentinel.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The new&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/microsoft-sentinel-for-sap-new-security-content-goes-beyond-agentless-%F0%9F%9A%80/4407911" target="_blank" rel="noopener"&gt;agentless&lt;/A&gt; data connector is our first party solution that re-uses customers’ SAP BTP estate for drastically simplified onboarding. In addition, new strategic third-party solutions have been added to the Microsoft Sentinel content hub by SAP SE and other ISVs making Sentinel the most effective SIEM for SAP workloads:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://community.sap.com/t5/enterprise-resource-planning-blogs-by-sap/sap-logserv-integration-with-microsoft-sentinel-for-sap-rise-customers-is/ba-p/14085387" target="_blank" rel="noopener"&gt;SAP LogServ&lt;/A&gt;: RISE, addon for SAP ECS internal logs -infra, database, etc. (Generally Available)&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://community.sap.com/t5/enterprise-resource-planning-blogs-by-sap/sap-enterprise-threat-detection-cloud-edition-joins-forces-with-microsoft/ba-p/13942075" target="_blank" rel="noopener"&gt;SAP Enterprise Threat Detection&lt;/A&gt; (Preview)&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azuremarketplace.microsoft.com/de-de/marketplace/apps/securitybridge1647511278080.securitybridge-sentinel-app-1" target="_blank" rel="noopener"&gt;SecurityBridge&lt;/A&gt; (Generally Available)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H5&gt;&lt;STRONG&gt;Microsoft Defender for Endpoint for SAP applications&amp;nbsp; &lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We are thrilled to announce a major milestone made possible through the deep collaboration between SAP and Microsoft: Microsoft Defender for Endpoint (MDE) is now the first NextGen antivirus solution that is SAP HANA aware. This joint innovation allows organizations like &lt;A href="https://www.microsoft.com/en/customers/story/18744-cofco-intl-microsoft-defender" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;COFCO International&lt;/STRONG&gt;&lt;/A&gt; to protect their SAP landscapes seamlessly and securely, without disruption.&lt;/P&gt;
&lt;P&gt;This groundbreaking capability sets MDE apart in the cybersecurity landscape, offering unparalleled protection for SAP S/4HANA environments — all without interfering with mission-critical operations.&lt;BR /&gt;Thanks to close engineering collaboration, MDE has been carefully trained to recognize SAP HANA binaries and data files. Specialized detection training ensures MDE accurately identifies these critical components and treats them as known, trusted entities — combining world-class cybersecurity with SAP-native awareness.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;API Management&amp;nbsp;&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;SAP Principal Propagation (for simplicity often also referred to as SSO) is the gold standard for app integration – especially when it comes to 3rd party apps such as Microsoft Power Platform. We proudly announce that SSO is now password-less with Azure. Microsoft Entra ID Managed Identity works seamlessly with SAP workloads such as RISE, SuccessFactors and more. Cut your maintenance efforts for SAP SSO in half and become more secure in doing so.&lt;/P&gt;
&lt;P&gt;Find more details on &lt;A href="https://community.sap.com/t5/technology-blogs-by-members/sap-principal-propagation-without-secrets-how-managed-identity-in-apim/ba-p/14091769" target="_blank" rel="noopener"&gt;this blog&lt;/A&gt;.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Teams&lt;/STRONG&gt;&lt;STRONG&gt; Integration&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;In addition to the availability of the &lt;U&gt;SAP Joule agent in Teams and Copilot&lt;/U&gt;, the “classic” integration of Teams with products like SAP S/4HANA Public Cloud is available as well. What started as “Share links to the business context (apps) in chats” has now evolved to Adaptive Card-based Loop components, Chat, Voice and Video call integrations in contact cards, and To Dos in Teams.&lt;/P&gt;
&lt;P&gt;Users of SAP S/4HANA Public Cloud can stay in their flow of work and access their business-critical data from within SAP S/4HANA Public Cloud or connected Teams applications.&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;&lt;STRONG&gt;Copilot Studio – SAP OData Support in Knowledge Sources&amp;nbsp;&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Knowledge Sources in Copilot Studio enhance generative answers by using data from Power Platform, Dynamics 365, websites, and external systems. This enables agents to offer relevant information and insights to customers.&lt;/P&gt;
&lt;P&gt;Today, we announce the Public Preview of SAP OData as a new knowledge source. Customers and partners can now add content from SAP systems to Copilot Studio. Users can query the latest status of Sales Orders in SAP S/4HANA, view pending Invoices from ECC, or query information about employees from SAP SuccessFactors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;All you need to do is connect to the relevant SAP OData services as a knowledge source in Copilot Studio. Copilot Studio will not duplicate the data but analyze the data structure and create the relevant queries on demand whenever a user asks a related question. The user context is always kept ensuring roles and permissions in the SAP system are taken into account.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Head over to&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/knowledge-graph-connections#supported-enterprise-data-sources-using-microsoft-graph-connectors-preview" target="_blank" rel="noopener"&gt;product documentation&lt;/A&gt; to read more and get started. &amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;New SAP Certified Compute&lt;/STRONG&gt;&lt;STRONG&gt; and Storage &lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Thousands of organizations today trust the Azure M-series virtual machines to run some of their largest mission-critical SAP workloads, including SAP HANA.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Very High Memory Mv3 VM Series&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We are excited to unveil updates to our Mv3 Very High Memory (VHM) series with the addition of a 24TB VM, a testament to our ongoing commitment to innovation. Building on our past successes, this series integrates customer insights and industry advancements to deliver unmatched performance and efficiency. It features advanced capabilities for diverse workloads, powered by the 4th generation Intel® Xeon® Platinum 8490H processors, which offer faster processing speeds and better price-performance. You can learn more about our new Mv3 VHM series at this&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/memory-optimized/mdsv3-very-high-memory-series" target="_blank" rel="noopener"&gt;link&lt;/A&gt;. Below is a summary of the recently released Mv3 VHM SKUs.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;(New) Standard_M896ixds_24_v3:&lt;/STRONG&gt; Designed for S/4HANA workloads, with 896 cores and SMT disabled for optimal SAP performance. It is SAP certified for OLTP (S/4HANA) Scale-Up/4-node Scale-Out, and OLAP (BW4H) Scale-Up operations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Standard_M896ixds_32_v3:&lt;/STRONG&gt; Designed for S/4HANA workloads, with 896 cores and SMT disabled for optimal SAP performance. It is SAP certified for OLTP (S/4HANA) Scale-Up/4-node Scale-Out, and OLAP (BW4H) Scale-Up operations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Standard_M1792ixds_32_v3: &lt;/STRONG&gt;Designed for S/4HANA workloads, with 1792 cores. It is SAP certified for OLTP (S/4HANA) Scale-Up/2-node Scale-Out, and OLAP (BW4H) Scale-Up operations.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The new VM Size provides robust memory and CPU power, ensuring exceptional handling of large-scale in-memory databases. With 200 Gbps bandwidth and adaptable storage options such as Premium Disk and Azure NetApp Files (ANF), these VMs deliver speed and flexibility for SAP HANA configurations.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Medium Memory Mbv3 VM Series&amp;nbsp; &lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;The new Mbv3 series (Mbsv3 and Mbdsv3), released in September 2024 and featuring both storage-optimized and memory-optimized variants, is SAP-certified compute as of March 2025. The new Mbv3 VMs are based on the 4th generation Intel® Xeon® Scalable processors, scale for workloads up to 4TB, and use an NVMe interface for higher remote disk storage performance. The series offers up to 650,000 IOPS and a 5x improvement in network throughput over the previous M-series families, plus up to 10GBps of remote disk storage bandwidth, a 2.5x improvement in remote storage bandwidth.&lt;/P&gt;
&lt;P&gt;Details of SAP Certified Compute Mbv3 VMs are here&amp;nbsp;&lt;A href="https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;iaas;ve:24&amp;amp;sort=Latest%20Certification&amp;amp;sortDesc=true&amp;amp;id=s:3067" target="_blank" rel="noopener"&gt;link&lt;/A&gt;.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;SAP on Azure Software Products and Services&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;H5&gt;&lt;STRONG&gt;Azure Backup for SAP&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We are pleased to announce the general availability of backup support for SAP ASE databases running on Azure virtual machines using&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/backup/backup-overview" target="_blank" rel="noopener"&gt;Azure Backup&lt;/A&gt;.&amp;nbsp;SAP ASE databases are mission-critical workloads that require a low recovery point objective (RPO) and a fast recovery time objective (RTO). This backup service offers zero-infrastructure backup and restore of SAP ASE databases with Azure Backup enterprise management capabilities.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Key benefits of SAP ASE database backup&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;15-minute RPO&lt;/STRONG&gt; with point-in-time recovery capability.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Striping to increase backup throughput&lt;/STRONG&gt; between the ASE virtual machine (VM) and the Recovery Services vault.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Support for cost-effective backup policies&lt;/STRONG&gt; and ASE native compression to lower backup storage costs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multiple database restore options&lt;/STRONG&gt;, including Alternate Location Restore (system refresh), Original Location Restore, and Restore as Files.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Recovery Services vault&lt;/STRONG&gt; security capabilities such as immutability, soft delete, and multi-user authorization.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;&lt;STRONG&gt;SAP Testing Automation Framework (STAF)&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;While deployment automation frameworks like the SAP Deployment Automation Framework (SDAF) have streamlined system implementation, the critical testing phase has largely remained a manual bottleneck – until now. We are introducing the SAP Testing Automation Framework (STAF), a new framework (currently in public preview) that automates high-availability (HA) testing for SAP deployments on Azure. STAF currently focuses on testing HA configurations for SAP HANA and SAP Central Services. Importantly, STAF is a cross-distribution solution supporting both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL), reflecting our commitment to serve the diverse SAP on Azure customer base.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;STAF uses a modular architecture, with Ansible for orchestration and custom modules for validation. It helps ensure business continuity by validating configurations and recovery mechanisms before systems go live, reducing risk, boosting efficiency, and supporting compliance with standards.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can start leveraging its capabilities today by visiting the project on GitHub at &lt;A href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;https://github.com/azure/sap-automation-qa&lt;/A&gt;.&amp;nbsp;To learn more about the framework, see our blog post: &lt;A href="https://techcommunity.microsoft.com/blog/sapapplications/empowering-sap-on-azure-with-the-sap-testing-automation-framework-staf/4411976" target="_blank" rel="noopener"&gt;Introducing SAP Testing Automation Framework (STAF)&lt;/A&gt;.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Azure Center for SAP solutions Tools and Frameworks&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We are pleased to introduce three new dashboards for &lt;STRONG&gt;Azure Inventory Checks for SAP&lt;/STRONG&gt;, enhancing visibility into Azure infrastructure and security. These dashboards offer a more structured, visual approach to monitoring health and compliance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here are the new dashboards at a glance:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Summary Dashboard&lt;/STRONG&gt;: Offers a snapshot of your Azure landscape with results from 21 key infrastructure checks critical for SAP workloads. It highlights your environment’s readiness and identifies areas needing attention.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Extended Report Dashboard&lt;/STRONG&gt;: Presents the Inventory Checks for SAP in a user-friendly dashboard layout, with enhanced navigation and filtering.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;AzSecurity Dashboard&lt;/STRONG&gt;: Presents 10 key Azure security checks to provide insight into configurations and identify vulnerabilities, helping ensure compliance and safety.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;These dashboards transform raw data into actionable insights, allowing customers to quickly assess SAP infrastructure on Azure, identify misconfigurations, track improvements, and prepare confidently for audits and reviews.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;SAP + Microsoft Co-Innovations&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Microsoft and SAP continually innovate to facilitate business transformation for our customers. This year, we are strengthening our partnership in several areas, including Business Suite, AI, data, cloud ERP, security, and SAP BTP. Please &lt;A href="https://aka.ms/sapphire25blog" target="_blank" rel="noopener"&gt;check out our blog&lt;/A&gt; to learn more about the significant announcements we are making this year at SAP Sapphire.&lt;/P&gt;
      <pubDate>Tue, 20 May 2025 14:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-sapphire-2025/ba-p/4415281</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-05-20T14:00:00Z</dc:date>
    </item>
    <item>
      <title>Introducing the SAP Testing Automation Framework: Elevating SAP System Testing on Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/introducing-the-sap-testing-automation-framework-elevating-sap/ba-p/4411976</link>
      <description>&lt;P&gt;In today’s fast-paced digital landscape, ensuring that enterprise systems perform flawlessly is non-negotiable. As businesses increasingly rely on SAP systems to run their critical operations, testing becomes an essential pillar of operational excellence. Traditionally, SAP system testing has been manual, time-consuming, and prone to gaps. Addressing these critical aspects, Microsoft has introduced the SAP Testing Automation Framework (STAF), an open-source orchestration tool developed to validate SAP deployments on Microsoft Azure. It enables you to assess system configurations against SAP on Azure best practices and to automate various testing scenarios, with an initial focus on high availability (HA) testing.&lt;/P&gt;
&lt;H1&gt;What is the SAP Testing Automation Framework?&lt;/H1&gt;
&lt;P&gt;The SAP Testing Automation Framework is an open-source orchestration tool engineered to validate SAP deployments on the Microsoft Azure platform. Its core purpose is to help customers ensure their SAP systems run smoothly by proactively identifying potential issues. It achieves this by simulating system failures, verifying that configurations adhere to best practices, and automating the entire testing process to save time and improve accuracy.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The framework is built on a modular, configuration‑as‑code model using standard tools. The tests are defined in version‑controlled Ansible playbooks, and custom Python modules handle in‑depth checks of both your SAP systems and Azure resources.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Features:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Configuration Validation: It checks whether the configurations of SAP HANA scale-up or SAP Central Services align with established SAP on Azure best practices and guidelines.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;High Availability Functional Testing: It simulates multiple test cases to ensure that the failover mechanisms are effective. With SAP HANA databases and SAP Central Services as prime examples, this testing validates that when a component fails, the system can gracefully recover without disruption. This helps identify potential issues during new system deployments.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Detailed Reporting: The framework collects detailed telemetry from SAP components and test execution where it captures event sequences, detection timings, failover durations, and system responses. It compiles this data into a comprehensive HTML report with clear pass/fail outcomes and timestamps. Optionally, you can stream logs to Azure Log Analytics or Data Explorer.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Extensible and Pipeline-ready: All framework operations, Ansible playbooks, and custom python modules are defined as code, making them ideal for integration with CI/CD pipelines. You can invoke STAF immediately after your deployment and installation step via &lt;A href="https://learn.microsoft.com/en-us/azure/sap/automation/deployment-framework" target="_blank" rel="noopener"&gt;SAP Deployment Automation Framework&lt;/A&gt;, running comprehensive HA tests before promoting changes.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
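&lt;P&gt;To make the configuration-as-code model above concrete, a failover test in the spirit of these playbooks could be sketched as follows. This is an illustrative sketch only – the play, task, and variable names are assumptions for illustration, not STAF's actual modules; see the repository for the real playbooks.&lt;/P&gt;

```yaml
# Illustrative sketch of an HA failover test expressed as an Ansible play.
# Task and variable names are hypothetical, not taken from STAF itself.
- name: Simulate a HANA index server crash and validate failover
  hosts: hana_primary
  become: true
  tasks:
    - name: Capture cluster state before injecting the failure
      ansible.builtin.command: crm_mon --output-as=xml
      register: pre_failure_state
      changed_when: false

    - name: Kill the HANA index server process to simulate a failure
      ansible.builtin.command: pkill -9 hdbindexserver

    - name: Wait until the cluster promotes the secondary node
      ansible.builtin.command: crm_mon --output-as=xml
      register: post_failure_state
      until: "'Promoted' in post_failure_state.stdout"
      retries: 30
      delay: 10
      changed_when: false
```

A real run would additionally record timestamps around each step so that detection time and failover duration can be reported, as the framework's HTML report does.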
&lt;H1&gt;SAP System High Availability Functional Testing&lt;/H1&gt;
&lt;P&gt;The initial and most prominent capability of the SAP Testing Automation Framework is its comprehensive high-availability (HA) functional testing for critical SAP components hosted on Microsoft Azure. The framework targets scenarios involving an SAP HANA scale-up database and SAP Central Services (ASCS/SCS) deployed in a two-node cluster on SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL), providing a cross-distribution solution for our diverse SAP on Azure customer base. For supported configurations, see the&amp;nbsp;&lt;A href="https://github.com/Azure/sap-automation-qa/blob/main/docs/HIGH_AVAILABILITY.md#supported-configurations" target="_blank" rel="noopener"&gt;support matrix&lt;/A&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The SAP Testing Automation Framework employs a systematic approach to validate the robustness of an SAP system's HA setup. It verifies the configuration and captures the entire sequence of events in an HA scenario: from initial failure detection, through isolation of the faulty component (including fencing), to resource migration, service recovery on the standby node, and, implicitly, the consistency of data upon successful recovery. Upon completion of the test run, STAF compiles the results into a clear, HTML-based report that details configuration compliance checks and functional test outcomes, complete with timestamps and pass/fail statuses. The report also includes logs from /var/log/messages to provide context.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;HTML report after completion of the run&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;This comprehensive validation of the HA process flow is fundamental to building confidence in the resilience of the SAP system on Azure.&lt;/P&gt;
&lt;H1&gt;Getting Started with the SAP Testing Automation Framework&lt;/H1&gt;
&lt;P&gt;The SAP Testing Automation Framework is available as an open-source project on GitHub for the community to use and contribute to. You can find the code and documentation in the official repository: &lt;A href="https://github.com/Azure/sap-automation-qa" target="_blank" rel="noopener"&gt;Azure/sap-automation-qa&lt;/A&gt;. The project is currently in public preview, so feedback and contributions are welcome to help improve its capabilities.&lt;/P&gt;
&lt;P&gt;To start using the framework, you have a couple of options depending on your environment:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SAP system manually deployed (not using SDAF):&lt;/STRONG&gt; If you want to validate the high availability of a manually configured SAP system, you can run the framework in &lt;A href="https://github.com/Azure/sap-automation-qa/blob/main/docs/HIGH_AVAILABILITY.md" target="_blank" rel="noopener"&gt;standalone mode&lt;/A&gt;. This involves deploying a management server (for example, an Ubuntu VM that orchestrates the tests), configuring it with details of your SAP landscape (cluster nodes, IPs, and so on), and then executing the provided playbooks or scripts to run the HA tests. The repository provides guidance on configuring the necessary variables and running the test scenarios for a Pacemaker cluster environment.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Integration with Deployment Pipelines:&lt;/STRONG&gt; For those who already use automated deployment tools like the &lt;A href="https://github.com/Azure/sap-automation-qa/blob/main/docs/SDAF_INTEGRATION.md" target="_blank" rel="noopener"&gt;SAP Deployment Automation Framework (SDAF)&lt;/A&gt; for Azure, the testing framework can integrate directly into those pipelines. The framework is designed as a natural extension to SDAF, so it can leverage the same configuration context and Azure resources defined during deployment. This allows you to embed HA testing into your continuous delivery process: every time you deploy or update an SAP environment, the pipeline can automatically run the HA tests and surface any issues before you hand the system over to end users or application teams.&lt;/LI&gt;
&lt;/UL&gt;
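&lt;P&gt;In standalone mode, the management server needs an inventory describing the cluster under test. As a hypothetical sketch (the file name and keys below are illustrative – consult the repository documentation for the actual format), such an inventory might look like:&lt;/P&gt;

```yaml
# hosts.yaml (hypothetical) -- inventory the management server uses to
# reach the two-node Pacemaker cluster; host names and keys are illustrative.
X01_DB:
  hosts:
    x01dhdb00l01:
      ansible_host: 10.0.0.5   # primary HANA node
      ansible_user: azureadm
    x01dhdb00l02:
      ansible_host: 10.0.0.6   # secondary HANA node
      ansible_user: azureadm
  vars:
    sap_sid: X01
    database_high_availability: true
```

Once such details are in place, the provided playbooks can be executed against the landscape to run the HA test scenarios.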
&lt;H1&gt;Call to Action &amp;amp; Community Engagement&lt;/H1&gt;
&lt;P&gt;The &lt;A href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;SAP Testing Automation Framework&lt;/A&gt;, currently in public preview, is a significant step forward in reducing misconfigurations and manual effort in highly available SAP deployments on Azure. We encourage you to explore the framework, share your feedback, and contribute. During the public preview, we recommend using the framework on new greenfield production high-availability deployments that are not yet live, or in non-production environments.&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Appendix and References&amp;nbsp;&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-pacemaker?tabs=msi" target="_blank" rel="noopener"&gt;Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-pacemaker?tabs=msi" target="_blank" rel="noopener"&gt;Set up Pacemaker on RHEL in Azure | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;Azure Virtual Machines HA for SAP NW on RHEL | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NetWeaver on SLES | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;Azure/sap-automation-qa: This is the repository supporting the quality assurance for SAP systems running on Azure.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/sap-automation" target="_blank" rel="noopener"&gt;Azure/sap-automation: This is the repository supporting the SAP deployment automation framework on Azure&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 19 May 2025 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/introducing-the-sap-testing-automation-framework-elevating-sap/ba-p/4411976</guid>
      <dc:creator>hdamecharla</dc:creator>
      <dc:date>2025-05-19T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Join Microsoft at SAP Sapphire 2025</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/join-microsoft-at-sap-sapphire-2025/ba-p/4412561</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I’m thrilled to be back at SAP Sapphire this year alongside my colleagues from Microsoft! Sapphire is an event I always look forward to as it provides a great opportunity to celebrate the successes of our customers and partners as well as share big announcements and product updates. Whether you’re joining us for pre-day events, engaging in sessions during Sapphire, or enjoying the networking opportunities, there’s something for everyone. Read on to learn more about what’s in store:&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Previewing exciting innovation for SAP on the Microsoft Cloud&lt;/STRONG&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Data &amp;amp; AI: &lt;/STRONG&gt;At Sapphire, we will be sharing key updates on SAP Business Data Cloud (BDC) on Azure and how you can use Azure Databricks with SAP BDC. We will also be sharing the progress on the joint integration between Microsoft Copilot and SAP Joule, helping accelerate business outcomes and increase end-user productivity.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SAP BTP on Azure:&lt;/STRONG&gt; Together with SAP, we are ensuring our customers can use the latest SAP Business Technology Platform (BTP) services on Microsoft Azure in their preferred regions. We are excited to share the &lt;STRONG&gt;launch of two new datacenter regions for SAP BTP on Azure – Canada (Toronto) and China (Hebei).&lt;/STRONG&gt; With this announcement, SAP BTP is now available in 10 Azure datacenter regions, including Brazil, launched late last year. Thanks to incredible demand from our joint customers, SAP has also added several additional BTP services on Azure. New services include SAP Build Apps, SAP Build Code, SAP AI Core, and Joule. You can view all the existing BTP services and regions on Azure on the&amp;nbsp;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://discovery-center.cloud.sap/serviceCatalog?provider=azure" target="_blank" rel="noopener"&gt;SAP Discovery Center&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;. &lt;/SPAN&gt;To stay up to date on future plans for service and region roll-out, visit the &lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://roadmaps.sap.com/board?range=CURRENT-LAST&amp;amp;PRODUCT=73555000100800002141#Q2%202025" target="_blank" rel="noopener"&gt;SAP roadmap explorer&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;RISE with SAP Customer Spotlight – Nestlé&lt;/STRONG&gt;: This multinational organization, with over 2,000 brands in 188 countries, has operations that are as large as they are complex. &lt;A href="https://aka.ms/Nestle-CustomerStory" target="_blank" rel="noopener"&gt;Nestlé executed one of the largest RISE with SAP migrations in the world on Azure&lt;/A&gt;, building a future-ready enterprise that leverages AI-driven solutions. Their need for a platform that could deliver innovation and reliability at scale, along with robust infrastructure, made Azure the clear choice. To hear more about their transformation story, make sure to attend the session at Sapphire: &lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1740696227505001qZXo" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Nestlé’s journey from SAP on-premises to RISE with SAP on Microsoft Azure&lt;/STRONG&gt;&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;STRONG&gt;Join us at SAP Sapphire 2025&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H4&gt;&lt;STRONG&gt;Sessions you don’t want to miss &lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;We’re bringing a dynamic lineup of 10 in-person sessions across both Orlando and Madrid, featuring insights from Microsoft and SAP experts. Don’t miss the chance to dive into the latest on RISE with SAP, SAP Business Suite, SAP BTP, and Data and AI on the Microsoft Cloud—plus hear real-world stories from customers who are already driving results through the Microsoft and SAP partnership. Register now using the links below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 990.8px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 30.8px;"&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Session&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Number&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Date and Time&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 106.8px;"&gt;&lt;td style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1740696227505001qZXo" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Nestlé’s journey from SAP on-premises to RISE with SAP on Microsoft Azure&lt;/STRONG&gt;&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;PAR1165&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;Wednesday, May 21&lt;/P&gt;
&lt;P&gt;2:30pm-2:50pm EDT&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 106.8px;"&gt;&lt;td style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1742450677193001vppF" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Microsoft Federal’s cloud landscape transformation with SAP NS2&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;SER2695&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;Tuesday, May 20&lt;/P&gt;
&lt;P&gt;4:30pm-4:50pm EDT&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 106.8px;"&gt;&lt;td style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1740696228308001qtbo" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Unlock innovation for SAP ERP with AI, SAP BTP, and more on Microsoft Azure&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;PAR1166&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;Wednesday, May 21&lt;/P&gt;
&lt;P&gt;2:00pm-2:20pm EDT&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 96.8px;"&gt;&lt;td style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1742246241963001scz1" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Accelerating procurement transformation with SAP Ariba solutions&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;SPM2624&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;Wednesday, May 21&lt;/P&gt;
&lt;P&gt;2:00pm-2:20pm EDT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 96.8px;"&gt;&lt;td style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1742228070485001rgfC" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Joule and Microsoft 365 Copilot: AI-enabled productivity in action&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;BAI2594&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;Tuesday, May 20&lt;/P&gt;
&lt;P&gt;11:30am-11:50am EDT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 106.8px;"&gt;&lt;td style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/sm25/catalog-inperson/page/catalog/session/1740696746850001DVWn" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;(Madrid) - Modernizing the SAP Software Landscape at ANDRITZ&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;PAR1307&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;Wednesday, May 28&lt;/P&gt;
&lt;P&gt;11:30am - 11:50am CEST&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 30.8px;"&gt;&lt;td colspan="3" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;ASUG Pre-Day Sessions&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 142.8px;"&gt;&lt;td style="height: 142.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1736768115693001NSF3" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Harnessing SAP's AI Innovations: Joule, Generative AI, and Business AI&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 142.8px; padding: 2px;"&gt;
&lt;P&gt;ASUG104&lt;/P&gt;
&lt;P&gt;Location:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;S320GH&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 142.8px; padding: 2px;"&gt;
&lt;P&gt;Monday, May 19&lt;/P&gt;
&lt;P&gt;1:00pm-5:00pm EDT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 30.8px;"&gt;&lt;td style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;ASUG Power Peer Group&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 134.8px;"&gt;&lt;td style="height: 134.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1743428714791001xLJL" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Unlock the value of SAP BTP: Lessons learned from ASUG members&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 134.8px; padding: 2px;"&gt;
&lt;P&gt;BTP3093&lt;/P&gt;
&lt;P&gt;Location:&lt;/P&gt;
&lt;P&gt;ASUG Booth Theater&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 134.8px; padding: 2px;"&gt;
&lt;P&gt;Tuesday, May 20&lt;/P&gt;
&lt;P&gt;2:00pm-2:40pm EDT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H4&gt;&lt;STRONG&gt;Celebration night!&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;We are excited to be the&amp;nbsp;&lt;STRONG&gt;exclusive sponsor of the celebration night concert&lt;/STRONG&gt;, which is always a highlight at Sapphire. The evening will feature two special performances by the Zac Brown Band at the American Garden Theatre at Epcot®, scheduled for 8:15 PM and 9:45 PM. &amp;nbsp;Come celebrate with us!&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Come find us at our booth!&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Microsoft and SAP are at the forefront of AI transformation and are excited to showcase the interoperability of our AI agents at Sapphire. The video below shows a sneak peek of what’s possible, but if you’d like to learn more, come talk to our Microsoft subject matter experts on-site, who can answer your questions and help foster connections.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Find us at Booth #409 in Orlando and Booth #9.333 in Madrid&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Networking Events&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Beyond the sessions and booth experiences, our partners are hosting special social and networking events you can join:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://meetpwc.cventevents.com/event/PwC-at-SAP-Sapphire-2025/Home1?RefId=Personal%20Invites&amp;amp;rt=2s4G8WpRGEGtwthWCjzUGw" target="_blank" rel="noopener"&gt;Home - PwC at SAP Sapphire 2025&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://info.lemongrasscloud.com/rsvp-lemongrass-blue-martini-sapphire" target="_blank" rel="noopener"&gt;RSVP: Lemongrass Invites You for Cocktails and Apps!&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Finfo.syntax.com%2Fsap%2Fevent%2Fsapphire%2Fhowl-at-the-moon%2Fmay-2025%2Fregistration&amp;amp;data=05%7C02%7CSanjay.Satheesh%40microsoft.com%7Ce3afe2d1ff2542b32f3f08dd83acd7f2%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638811500622217955%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=AeWvHY8f4%2BcvoKvzAsdEw8OlKG2qVpjo%2B2YczHmzVL4%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Syntax Annual Sapphire Party&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.ibm.com/events/reg/flow/ibm/3ypan0mb/createaccount/page/contactInfo" target="_blank" rel="noopener"&gt;IBM Sapphire Client Appreciation reception&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We are looking forward to a great Sapphire and I hope to see you there!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 13 May 2025 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/join-microsoft-at-sap-sapphire-2025/ba-p/4412561</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-05-13T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

