<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>rss.livelink.threads-in-node</title>
    <link>https://techcommunity.microsoft.com/t5/sap-on-microsoft/ct-p/SAPonMicrosoft</link>
    <description>rss.livelink.threads-in-node</description>
    <pubDate>Fri, 24 Apr 2026 20:04:33 GMT</pubDate>
    <dc:creator>SAPonMicrosoft</dc:creator>
    <dc:date>2026-04-24T20:04:33Z</dc:date>
    <item>
      <title>SAP + Microsoft 365: A Unified AI Experience That Works Where You Work</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-microsoft-365-a-unified-ai-experience-that-works-where-you/ba-p/4480380</link>
      <description>&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Coauthors: Angel Zhu, Senior Product Manager, M365 Copilot Agent Ecosystem and Christoph Ruehle, Principal Product Manager, Joule/SAP Business AI at SAP&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We’ve all experienced it: you’re reviewing data, a message pings, an email comes in, and suddenly you’re juggling inboxes, chats, and screens just to finish one workflow. Those small interruptions feel routine — yet employees switch between apps over 1,200 times per day, adding up to weeks of lost productivity each year &lt;EM&gt;(estimate based on productivity and context-switching research, Harvard Business Review, 2022)&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;It’s not the work that’s slowing us down. It’s the constant friction of moving between where the data lives and where communication happens.&lt;/P&gt;
&lt;P&gt;That’s why, building on SAP and Microsoft’s long-standing partnership, the new bi-directional integration between Joule and Microsoft 365 Copilot —&amp;nbsp;&lt;STRONG&gt;now generally available &lt;/STRONG&gt;— is designed to help people stay focused and get more done in the tools they already use.&lt;/P&gt;
&lt;P&gt;By bringing SAP business context and Microsoft 365 collaboration together into a unified experience, work can now keep flowing wherever it starts. Customers are already seeing an impact.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;“The integration of Microsoft&amp;nbsp;365&amp;nbsp;Copilot and Joule is central to Vodafone’s vision of creating an end-to-end, AI-enabled user experience,”&amp;nbsp;&lt;STRONG&gt;said Andrea Schiavi, Vodafone, AI Product Lead&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;"By unifying SAP business data with the Microsoft 365 productivity tools, we will create a seamless, intelligent workflow that accelerates decision making and boosts productivity. It will elevate our agentic assistant, AskHR, with shared context across apps so it can guide employees more effectively and resolve tasks faster.”&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;One AI that follows the work — not the other way around&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;When you’re in Microsoft 365,&amp;nbsp;&lt;STRONG&gt;Copilot&lt;/STRONG&gt;&amp;nbsp;can access SAP processes and data through Joule.&lt;BR /&gt;When you’re in SAP,&amp;nbsp;&lt;STRONG&gt;Joule&lt;/STRONG&gt;&amp;nbsp;can draw on information from Microsoft 365 through Copilot. No switching apps, just continuity.&lt;/P&gt;
&lt;P&gt;For example, a finance manager working in Microsoft Excel can use Microsoft 365 Copilot and Joule to review a purchase order directly in the spreadsheet. Joule retrieves real-time SAP business data behind the scenes, enabling a confident decision — without opening another screen.&lt;/P&gt;
&lt;P&gt;Later, reviewing that purchase order inside&amp;nbsp;SAP, the same finance manager can ask&amp;nbsp;Joule&amp;nbsp;whether any recent&amp;nbsp;emails or Teams messages&amp;nbsp;contain information that should be considered before confirming the approval. Copilot brings that context into SAP automatically, so nothing gets missed.&lt;/P&gt;
&lt;P&gt;Wherever the task begins,&amp;nbsp;&lt;STRONG&gt;the right assistant shows up with the context needed to finish it&lt;/STRONG&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How it works&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;In Microsoft 365 Copilot&lt;/STRONG&gt; — within Teams, Word, PowerPoint, Excel, and OneNote: ask&amp;nbsp;&lt;STRONG&gt;Joule&lt;/STRONG&gt;&amp;nbsp;to instantly access SAP insights and complete SAP tasks — with prompts, feedback, and citations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;In SAP apps&lt;/STRONG&gt;: ask&amp;nbsp;&lt;STRONG&gt;Joule&lt;/STRONG&gt;&amp;nbsp;your question; when Microsoft 365 context is relevant, Joule automatically routes to&amp;nbsp;&lt;STRONG&gt;Microsoft 365 Copilot&lt;/STRONG&gt;&amp;nbsp;— bringing in data from email, Teams, SharePoint, or OneDrive.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;No new interface to learn. The right assistant answers based on where you are.&amp;nbsp;&lt;A href="https://aka.ms/AgentShowcaseSAP" target="_blank" rel="noopener"&gt;See Joule and Microsoft 365 Copilot integration in action!&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why this matters and why it’s here now&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Work has evolved. SAP remains the trusted source of business truth, and Microsoft 365 is where collaboration happens. Bringing AI assistance across both environments simply helps people work the way they already do — with fewer interruptions, faster decisions, and the confidence that every action is grounded in the right business and communication context.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get started today&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To begin using the Joule and Microsoft 365 Copilot integration, start by downloading the &lt;A href="https://marketplace.microsoft.com/en-us/product/office/wa200008645?tab=overview" target="_blank" rel="noopener"&gt;SAP Joule app from Microsoft Marketplace&lt;/A&gt; and configuring the connection between SAP Cloud Identity Services on SAP BTP and Microsoft Entra.&lt;/P&gt;
&lt;P&gt;You'll need Joule Base and a Microsoft 365 license to enable the integration.&amp;nbsp;&lt;STRONG&gt;A Microsoft 365 Copilot license is required only when using Copilot skills within Joule.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To learn more about the Joule and Microsoft 365 Copilot integration, visit the &lt;A href="https://discovery-center.cloud.sap/ai-feature/4dfa3fea-c5d2-40e3-959d-317b07b6b64e/" target="_blank" rel="noopener"&gt;SAP Discovery Center&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Feb 2026 18:26:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-microsoft-365-a-unified-ai-experience-that-works-where-you/ba-p/4480380</guid>
      <dc:creator>ssanjay27</dc:creator>
      <dc:date>2026-02-09T18:26:20Z</dc:date>
    </item>
    <item>
      <title>M-Series Sets a New Remote Storage on Mbv4 Demo</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/m-series-sets-a-new-remote-storage-on-mbv4-demo/ba-p/4470773</link>
      <description>&lt;P&gt;As we released &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/memory-optimized/mbsv3-series?tabs=sizebasic" target="_blank" rel="noopener"&gt;Mbv3 Series (Mbsv3 and Mbdsv3 Series)&lt;/A&gt;last year, demonstrating &lt;STRONG&gt;650K IOPS and 10GBps throughput&lt;/STRONG&gt; of remote disk storage with Premium SSD v2 and Ultra Disk on our 2-socket Azure Boost platform—already a major milestone for cloud performance, now, we’re thrilled to announce to the community another leap forward with the coming &lt;STRONG&gt;Standard_M304bs_4_v4&lt;/STRONG&gt; VM size from new Mbv4 Series on the&amp;nbsp;&lt;STRONG data-processed="true"&gt;6th generation Intel® Xeon® Scalable processors, &lt;/STRONG&gt;which delivers &lt;STRONG data-start="213" data-end="232"&gt;20% higher IOPS&lt;/STRONG&gt; and &lt;STRONG data-start="237" data-end="259"&gt;60% more bandwidth &lt;STRONG&gt;throughput &lt;/STRONG&gt;&lt;/STRONG&gt;than Mbv3, reaching &lt;STRONG data-start="270" data-end="283"&gt;780K IOPS&lt;/STRONG&gt; and &lt;STRONG data-start="288" data-end="310"&gt;16 GBps throughput &lt;/STRONG&gt;on remote storage. Below is the demo result from our lab for this new VM series,&amp;nbsp;&lt;/P&gt;
&lt;P data-start="107" data-end="477"&gt;The new &lt;STRONG&gt;Mbv4 series&lt;/STRONG&gt;, built on Azure Boost, delivers exceptional IOPS and throughput to power the most &lt;STRONG&gt;mission-critical enterprise workloads&lt;/STRONG&gt;. As part of the Azure M-series portfolio, Mbv4 is designed for memory-intensive and storage–intensive workloads, making it an ideal choice for relational databases, large-scale analytics, and other mission-critical data workloads. The Standard_M304bs_4_v4 preview is coming soon. Stay tuned to this blog site for the latest updates and announcements.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Nov 2025 21:44:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/m-series-sets-a-new-remote-storage-on-mbv4-demo/ba-p/4470773</guid>
      <dc:creator>MingJiong_Zhang</dc:creator>
      <dc:date>2025-11-20T21:44:41Z</dc:date>
    </item>
    <item>
      <title>Designing, Migrating and Managing a 15+1-Node SAP BW Scale-Out Landscape on Microsoft Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/designing-migrating-and-managing-a-15-1-node-sap-bw-scale-out/ba-p/3715003</link>
      <description>&lt;P class="lia-align-justify"&gt;This blog outlines the implementation of SAP BW Scale-Out with 15+1 nodes using virtual machines on the Azure platform, representing one of the early and pioneering examples of SAP BW at this scale on a hyperscale public cloud.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;It also highlights the technical considerations and work carried out by the Microsoft Customer &amp;amp; Partners team to understand and validate the performance characteristics of SAP BW, both on-premises and on Azure.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The underlying platform used memory-optimised Mv2-Series virtual machines to support large in-memory databases and demanding workloads. Specifically, the landscape comprised 16 × M416s_v2 (416 vCPU / 5.7 GiB memory), architected across:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Database nodes&lt;/LI&gt;
&lt;LI&gt;(A)SCS nodes&lt;/LI&gt;
&lt;LI&gt;Application server nodes&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;The customer had previously operated SAP HANA on-premises with 20 nodes [18+2 scale-out] and decided to move critical business systems—including SAP BW, SAP Warehouse Management and SAP IQ (Near-Line Storage)—to Azure as part of a data centre exit.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;As part of this migration, the partner proposed the modernisation of selected business processes to take advantage of Azure-native architecture components and improve the end-user experience, including:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Increased system availability by deploying the SAP system across Azure zones within the region to enhance the availability SLA.&lt;/LI&gt;
&lt;LI&gt;Automated failover of access points using Azure Standard Load Balancer.&lt;/LI&gt;
&lt;LI&gt;Optimisation of the scale-out setup using Azure NetApp Files.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;The move to Azure not only delivered high availability, it also improved how database connectivity is managed, removing the dependency on DNS for routing to the HANA database.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Implementing SAP BW Scale-Out for Very Large Databases (VLD) can be time-consuming, especially where there is limited prior experience at this scale. The architecture required careful review of critical design aspects, and the target design was tested in a lower environment to validate all technical tests. During this process, important insights were gained, particularly around Load Balancer configuration and the fine-tuning needed to align with customer business expectations.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;1. Source Landscape: On-Premises (AS-IS)&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;The on-premises SAP BW landscape operated with 20 nodes [18+2] to support OLAP workloads.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For any migration programme, it is essential to understand the database growth pattern to inform a 3–5-year growth plan. Unlike on-premises environments, the cloud does not require pre-allocation of infrastructure for a fixed growth trajectory. However, understanding growth trends and resource usage remains vital to:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Design an appropriate target solution.&lt;/LI&gt;
&lt;LI&gt;Avoid subsequent minor or major projects just to handle unexpected growth.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;For this system:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;The average peak consumption was assumed to be approximately 800,000 SAPS.&lt;/LI&gt;
&lt;LI&gt;The overall measured peak reached around 993,480 SAPS.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;Workspace is another critical factor when projecting memory usage and growth for SAP systems. Typically, customers estimate workspace at 50% or 70% on top of the data and code footprint.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;2. Key AS-IS Information&lt;/H2&gt;
&lt;H3 class="lia-align-justify"&gt;2.1 User Load&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Concurrent users must be taken into account, along with any expected changes as part of the migration roadmap. Additional capacity should be considered if significant changes in user load are anticipated on the target platform.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The User Activity view (diagram not shown here) illustrates how users interact with the system over time:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Total Users – total number of users who logged on during one week.&lt;/LI&gt;
&lt;LI&gt;Active Users – users who performed more than 400 transaction steps in one week.&lt;/LI&gt;
&lt;/UL&gt;
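&lt;P class="lia-align-justify"&gt;The two definitions above can be expressed directly in code (a hypothetical illustration; the per-user data shown is invented):&lt;/P&gt;

```python
def user_activity(weekly_steps, active_threshold=400):
    """Classify users for one week: total logged-on users vs. active users.

    Users with more than `active_threshold` transaction steps count as active.
    """
    total_users = len(weekly_steps)
    active_users = sum(1 for steps in weekly_steps.values() if steps > active_threshold)
    return total_users, active_users

# Hypothetical week of per-user transaction steps.
week = {"u1": 1200, "u2": 35, "u3": 401, "u4": 400}
print(user_activity(week))  # (4, 2) — u4 is exactly at the threshold, so not active
```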
&lt;H3 class="lia-align-justify"&gt;2.2 System Performance&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Monitoring the size and growth of the HANA database is crucial for ongoing system stability and performance.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;2.3 Log Throughput Requirement / Usage on the On-Premises System&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;One of the critical design considerations is the performance of the on-premises system, as this becomes the baseline for designing the target environment.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;3. Target Architecture: 15+1 Scale-Out on Azure (To-Be)&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;The target architecture is based on SAP BW Scale-Out with 15+1 nodes in each Azure zone:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;SAP BW Scale-Out 15+1 using M416s_v2 in Zone 1.&lt;/LI&gt;
&lt;LI&gt;SAP BW Scale-Out 15+1 using M416s_v2 in Zone 2.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;To provide a consistent access experience across zones:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;A zone-redundant Standard Load Balancer is used to maintain a single point of access.&lt;/LI&gt;
&lt;LI&gt;Two frontend IP addresses are configured, with two backend pools for Production and DR nodes respectively.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 class="lia-align-justify"&gt;4. Key Target Design Consideration for Scale-Out&lt;/H2&gt;
&lt;H3 class="lia-align-justify"&gt;4.1 SAP HANA Cloud Measurement Tool (HCMT)&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;SAP mandates execution of the HANA Cloud Measurement Tool (HCMT) as part of validating the configuration.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;4.2 Compute – SAP on Azure Certification&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;The Azure compute platform (hardware) is required to have a valid SAP HANA hardware certification at the point of deployment. The selected SKU must be listed in the Certified and Supported SAP HANA® Hardware Directory.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Microsoft Cloud offers multiple SKUs that are certified by SAP to run SAP and HANA workloads. The same directory maintains certified hardware SKUs across all providers. SAP HANA, as an in-memory database, must meet specific certification criteria to be supported by SAP. The Microsoft engineering team works closely with SAP to bring new SKUs into the list of Microsoft Cloud hardware supported for HANA workloads.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For HANA workloads, M-Series (Mv1 &amp;amp; Mv2) are preferred SKUs, though Microsoft also offers E-Series and HLI [HANA Large Instances], which are supported for HANA workloads.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this design:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Compute: 16 × M416s_v2 in Zone 1 and 16 × M416s_v2 in Zone 2.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="lia-align-justify"&gt;4.3 Network Considerations&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Network configuration at both the OS and virtual network (VNET) layers plays a critical role in achieving the required throughput and latency between components. For Scale-Out, there must be additional focus on host-based routing and the selection of the right NIC for HSR (HANA System Replication).&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The Load Balancer configuration is designed to maintain the same logical hostname for applications and third parties connecting to the database, regardless of the database location across zones.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;There are key differences between Scale-Up and Scale-Out:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;In Scale-Up, the entire database runs on a single VM/SKU.&lt;/LI&gt;
&lt;LI&gt;In Scale-Out, the database is split and distributed across multiple VMs/SKUs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;The communication between VMs/SKUs in a scale-out configuration directly affects database performance. It is therefore highly recommended to have a dedicated NIC and subnet to support internode traffic.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Another key consideration is compute-to-storage communication. To ensure direct connectivity from compute to storage, host-based routing is recommended and is one of the design aspects to meet SAP HANA KPI targets during HCMT execution.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Different connection types can be summarised as:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;Internode traffic (scale-out communication)&lt;/STRONG&gt; – recommended to use a dedicated NIC and subnet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Compute–storage traffic&lt;/STRONG&gt; – should use host-based routing to reach storage directly.&lt;/LI&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;Client connection / user traffic.&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Additional traffic, depending on configuration, which can be merged with either client or internode traffic based on system requirements during peak periods and should be reviewed during performance and stress testing before go-live.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4 class="lia-align-justify"&gt;4.3.1 Network layout&lt;/H4&gt;
&lt;P class="lia-align-justify"&gt;Three subnets created:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;One for Client Network (NIC)&lt;/LI&gt;
&lt;LI&gt;One for Inter-Node communication &amp;amp; HSR&lt;/LI&gt;
&lt;LI&gt;One for Storage Network (NIC)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;Additionally, a delegated Azure NetApp Files (ANF) subnet.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;4.4 Azure Standard Load Balancer with Scale-Out Design&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;A special requirement from the customer was to manage third-party connections to the HANA database across zones.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;To ensure seamless connectivity from third-party systems regardless of the active zone:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;An Azure Standard Load Balancer is configured in front of the scale-out nodes across zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;This Load Balancer:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Handles connections to the HANA database.&lt;/LI&gt;
&lt;LI&gt;Supports the DR failover scenario, maintaining connectivity across zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="lia-align-justify"&gt;4.5 Storage Considerations&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Storage selection is simplified by the fact that only Azure NetApp Files is supported for scale-out configurations with a standby node. Scale-out without a standby node provides more options.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In all cases, storage must be configured to achieve the required IOPS and throughput without driving cost up unnecessarily.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For scale-out with a standby node, Azure NetApp Files is the only supported storage as of June 2022. Alongside this, Azure NetApp Files provides several features that should be carefully evaluated in the target design.&lt;/P&gt;
&lt;H4 class="lia-align-justify"&gt;4.5.1 ANF Storage Tier&lt;/H4&gt;
&lt;P class="lia-align-justify"&gt;Ultra / Premium tiers are used as appropriate.&lt;/P&gt;
&lt;H4 class="lia-align-justify"&gt;4.5.2 ANF Features&lt;/H4&gt;
&lt;P class="lia-align-justify"&gt;Application Volume Group (AVG): The Application Volume Group for SAP HANA enables customers and partners to deploy all volumes required to install and operate an SAP HANA database according to best practices in a single, optimised workflow. It includes the use of Proximity Placement Group (PPG) with VMs to achieve automated, low-latency deployments.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Manual QoS: With manual QoS volumes, customers and partners do not need to overprovision volume quota to achieve higher throughput, because throughput can be assigned to each volume independently. Total available throughput is defined at the capacity pool level and depends on the size and type of storage.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Dynamic Tiering: Azure NetApp Files provides three performance tiers: Standard, Premium and Ultra. Dynamic Tiering allows customers and partners to use a higher service level for better performance or a lower service level for cost optimisation without waiting time.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;ANF Files Backup: The ANF backup feature allows customers and partners to offload Azure NetApp Files snapshots to Azure Blob Storage in a fast and cost-effective way, further protecting data from accidental deletion.&lt;/P&gt;
&lt;H4 class="lia-align-justify"&gt;4.5.3 Selected Layout&lt;/H4&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;ANF Ultra tier selected for HANA data, log and shared mount points.&lt;/LI&gt;
&lt;LI&gt;ANF Premium selected to host offline transaction log backups.&lt;/LI&gt;
&lt;LI&gt;Azure Blob Storage used to store AzAcSnap HANA data snapshots and offline transaction log backups.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 class="lia-align-justify"&gt;4.6 Backup and Restore Approach&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Backup and restore runtime can be a critical blocker if it does not meet business RPO/RTO requirements.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;With Azure NetApp Files, most customers implement the AzAcSnap tool to manage HANA database snapshots, followed by AzCopy to transfer snapshots to Blob Storage for long-term retention.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this design:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;AzAcSnap is defined as the primary backup, running every 6 hours.&lt;/LI&gt;
&lt;LI&gt;The first backup is transferred to Blob Storage for long-term retention.&lt;/LI&gt;
&lt;LI&gt;Transaction log backups run every 15 minutes and are written to an ANF Premium mount.&lt;/LI&gt;
&lt;LI&gt;AzCopy jobs then transfer these backups to Blob for long-term retention.&lt;/LI&gt;
&lt;LI&gt;A dedicated server is used to manage AzCopy transfers from ANF volumes to Blob Storage as a temporary measure until ANF Files backup is available in the required region.&lt;/LI&gt;
&lt;/UL&gt;
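&lt;P class="lia-align-justify"&gt;The cadence above implies a bounded worst-case RPO, which can be sanity-checked with a small calculation (illustrative only; actual recovery objectives depend on successful, recoverable backups):&lt;/P&gt;

```python
SNAPSHOT_INTERVAL_HOURS = 6    # AzAcSnap primary backup cadence
LOG_BACKUP_INTERVAL_MIN = 15   # transaction log backup cadence

snapshots_per_day = 24 // SNAPSHOT_INTERVAL_HOURS           # 4 snapshots per day
log_backups_per_day = (24 * 60) // LOG_BACKUP_INTERVAL_MIN  # 96 log backups per day

# Worst-case data loss is bounded by the log backup interval,
# assuming each log backup completes and is transferred successfully.
worst_case_rpo_min = LOG_BACKUP_INTERVAL_MIN

print(snapshots_per_day, log_backups_per_day, worst_case_rpo_min)  # 4 96 15
```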
&lt;H3 class="lia-align-justify"&gt;4.6 Run Operations and Monitoring&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Run, or Business As Usual (BAU), marks the transition from the migration programme to the support project. It is critical that monitoring and configuration are in place to capture alerts and collect sufficient logs for investigation during issue resolution.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Key elements include:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Proper configuration of monitoring and kdump to ensure logs and dumps are available to analyse unforeseen issues related to the OS or SKU.&lt;/LI&gt;
&lt;LI&gt;Use of Zabbix together with Azure Monitor for Virtual Machines for ongoing monitoring.&lt;/LI&gt;
&lt;LI&gt;Kdump configured and enabled on all VMs to capture critical information for troubleshooting unexpected issues.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 class="lia-align-justify"&gt;5. New: Azure NetApp Files Flexible service level (2025 update)&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;Since this project was originally designed, Azure NetApp Files has introduced a new Flexible service level, which is particularly relevant for SAP BW and SAP HANA workloads on Azure.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;5.1 What is the Flexible service level?&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;The Flexible service level is a new Azure NetApp Files throughput service level that decouples throughput from capacity. It is available for new manual QoS capacity pools. You configure pool throughput (MiB/s) and capacity (TiB) independently instead of being bound to a fixed MiB/s-per-TiB ratio.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This makes it easier to right-size storage for high-throughput, low-capacity workloads (for example, HANA log volumes) and high-capacity, moderate-throughput workloads (for example, BW cold data or shared file systems).&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;5.2 128 MiB/s baseline throughput at no extra charge&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;A key benefit of the Flexible service level is the included baseline throughput:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;The minimum throughput you can assign to a Flexible capacity pool is 128 MiB/s, regardless of pool size.&lt;/LI&gt;
&lt;LI&gt;The first 128 MiB/s of throughput is included in the service level—often referred to as the baseline throughput—and is available at no additional performance surcharge.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;In practice, this means every Flexible capacity pool you create automatically includes 128 MiB/s of throughput, and you only pay for any additional throughput configured beyond that baseline.&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;5.3 Throughput scaling for demanding workloads&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;With Flexible service level, throughput can scale significantly. The maximum throughput is documented as up to 640 MiB/s per TiB per pool, with an upper bound defined as 5 × 128 MiB/s × pool size (TiB). Throughput can be increased when needed (for example, during peak loads or migration cutovers) and reduced later, subject to a documented cool-down period between downward adjustments.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This flexibility is especially useful for SAP BW and SAP HANA systems with variable peak and off-peak windows, and for migration phases where temporary higher throughput is required for data loads, initial syncs or cutovers, with the option to optimise cost afterwards.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;6. Where to learn more&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-service-levels?utm_source=chatgpt.com" target="_blank" rel="noopener"&gt;Service levels for Azure NetApp Files – detailed description of Standard, Premium, Ultra and Flexible service levels, including throughput formulas and examples&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-netapp-files/whats-new?utm_source=chatgpt.com" target="_blank" rel="noopener"&gt;What’s new in Azure NetApp Files – latest feature announcements and regional availability updates&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-service-levels#flexible-examples" target="_blank" rel="noopener"&gt;Service levels for Azure NetApp Files | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-solution-architectures#sap-hana" target="_blank" rel="noopener"&gt;Azure NetApp Files solution architectures for SAP HANA – reference architectures and best practices for SAP HANA on ANF&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 Dec 2025 11:36:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/designing-migrating-and-managing-a-15-1-node-sap-bw-scale-out/ba-p/3715003</guid>
      <dc:creator>jitendrasingh</dc:creator>
      <dc:date>2025-12-04T11:36:18Z</dc:date>
    </item>
    <item>
      <title>Azure delivers the first cloud VM with Intel Xeon 6 and CXL memory - now in Private Preview</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-delivers-the-first-cloud-vm-with-intel-xeon-6-and-cxl/ba-p/4470067</link>
      <description>&lt;P&gt;Intel released their new Intel Xeon 6 6500/6700 series processor with P-cores this year. Intel Xeon 6 processors provide performance and scalability by delivering outstanding performance for transactional and analytical workloads and provide scale-up capacities of up to 64TB of memory.&lt;/P&gt;
&lt;P&gt;In addition, Intel Xeon 6 supports the new &lt;A href="https://community.intel.com/t5/Blogs/Tech-Innovation/Data-Center/Breaking-the-Memory-Wall-with-Compute-Express-Link-CXL/post/1594848" target="_blank" rel="noopener"&gt;Compute Express Link (CXL) &lt;/A&gt;standard that enables memory expansion to &lt;STRONG&gt;accommodate larger data sets in a cost-effective manner&lt;/STRONG&gt;. CXL Flat Memory Mode is a unique Intel Xeon 6 capability that enhances the ability to right-size the compute-to-memory ratio and improve scalability without sacrificing performance. This enhanced ability can help run SAP S/4HANA more efficiently and help enable greater flexibility for configurations so they can better align with business needs and improve the total cost of ownership.&lt;/P&gt;
&lt;P&gt;In collaboration with SAP and Intel, Microsoft is delighted to announce the private preview of CXL technology on the Azure M-series family of VMs. We believe that, combined with the advancements in the new Intel Xeon 6 processors, it can tackle the challenges of managing the growing volume of data in SAP software, meet the increased demand for faster compute performance, and reduce overall TCO.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-20"&gt;Stefan Bäuerle, SVP, Head of BTP, HANA &amp;amp; Persistency at SAP noted:&lt;/SPAN&gt;&lt;/STRONG&gt; &lt;BR /&gt;&lt;EM&gt;“Intel Xeon 6 helps deliver system scalability to support the growing demand for high-performance computing and growing database capacity among SAP customers.”&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-20"&gt;Elyse Ge Hylander, Senior Director, Azure SAP Compute stated:&lt;/SPAN&gt;&lt;/STRONG&gt; &lt;BR /&gt;&lt;EM&gt;“At Microsoft, we are continually exploring new technological innovations to improve our customer experience. We are thrilled about the potential of Intel’s new Xeon 6 processors with CXL and Flat Memory Mode. This is a big step forward to deliver the next-level performance, reliability, and scalability to meet the growing demands of our customers.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-20"&gt;&lt;STRONG&gt;Bill Pearson, Vice President of Data Center and Artificial Intelligence at Intel states:&lt;/STRONG&gt;&lt;/SPAN&gt; &lt;BR /&gt;&lt;EM&gt;“Intel Xeon 6 represents a significant advancement for Intel, opening up exciting business opportunities to strengthen our collaboration with Microsoft Azure and SAP. The innovative instance architecture featuring CXL Flat Memory Mode is designed to enhance cost efficiency and performance optimization for SAP software and SAP customers.”&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you are interested in joining our CXL private preview in Azure, contact&amp;nbsp;&lt;A href="mailto:Mseries_CXL_Preview@microsoft.com" target="_blank" rel="noopener"&gt;Mseries_CXL_Preview@microsoft.com&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;###&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Co-author:&lt;/STRONG&gt; &lt;SPAN data-teams="true"&gt;Phyllis Ng - Senior Director of Hardware Strategic Planning (Memory and Storage) - Microsoft&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Nov 2025 22:20:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-delivers-the-first-cloud-vm-with-intel-xeon-6-and-cxl/ba-p/4470067</guid>
      <dc:creator>Elyse_Ge_Hylander</dc:creator>
      <dc:date>2025-11-20T22:20:32Z</dc:date>
    </item>
    <item>
      <title>SAP on Azure Product Announcements Summary – SAP TechEd 2025</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-teched-2025/ba-p/4465383</link>
      <description>&lt;P&gt;Today at SAP TechEd 2025, we are excited to share the next evolution of the Microsoft-SAP partnership. Building on decades of collaboration, we continue to advance RISE with SAP on Azure and deepen integrations with SAP S/4HANA Cloud public edition. Our latest innovations deliver enhanced security for SAP and non-SAP workloads, while unified analytics and AI-driven Copilot experiences empower customers to make smarter decisions.&lt;/P&gt;
&lt;P&gt;These advancements are designed to help customers accelerate their digital transformation, drive operational excellence, and unlock new business value.&lt;/P&gt;
&lt;H4&gt;Customer Spotlight: Medline&lt;/H4&gt;
&lt;P&gt;Medline’s SAP transformation on Microsoft Azure is &lt;A href="https://www.microsoft.com/en/customers/story/25243-medline-azure" target="_blank" rel="noopener"&gt;fueling new levels of agility and intelligence across its operations with SAP on Azure&lt;/A&gt;. The company’s migration boosted system resilience, improved key SAP workload transaction times by more than 80% and enabled real-time collaboration and predictive analytics for clinicians and business users - laying the groundwork to extend these insights through Copilot and Azure AI.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM class="lia-align-center"&gt;“When we partnered on the migration, it ushered in a completely new way in which Microsoft and Medline work together. It became a partnership, with the cloud migration becoming a stepping stone to bigger and brighter, more business-outcome–driven engagements.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM class="lia-align-center"&gt;— Jason Kaley, SVP, IT Operations &amp;amp; Architecture, Medline &lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H4&gt;Customer Spotlight: Commerz Real&lt;/H4&gt;
&lt;P&gt;Commerz Real, a German financial services firm specializing in real estate, infrastructure, and leasing, &lt;A href="https://www.microsoft.com/en/customers/story/25126-commerz-real-rise-with-sap" target="_blank" rel="noopener"&gt;modernized its SAP infrastructure by migrating its complete SAP landscape to SAP RISE on Azure&lt;/A&gt;. Built to address stringent regulatory, security, and performance demands, the platform delivers high scalability, real-time monitoring, and faster, more stable operations.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;“The decision to use Microsoft Azure was a deliberate one. In the past, security concerns and strict regulatory requirements kept us from moving SAP to the cloud. Today we say: If you don’t do that, you won’t survive in the market.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;— Nadine Felderer, Head of SAP Services, Commerz Real&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are pleased to announce additional SAP with Microsoft product updates and details to further help customers innovate on the most trusted cloud for SAP.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Bi-directional&lt;/STRONG&gt; agent-to-agent communication between &lt;STRONG&gt;Microsoft Copilot and SAP Joule&lt;/STRONG&gt;, and enterprise-ready SAP API enablement for AI through &lt;STRONG&gt;MCP in Azure API Management&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;General availability of our &lt;STRONG&gt;agentless Sentinel for SAP&lt;/STRONG&gt; data connector, with significantly simpler onboarding through SAP Integration Suite.&lt;/LI&gt;
&lt;LI&gt;SAP released the &lt;STRONG&gt;S/4HANA Cloud public edition&lt;/STRONG&gt; add-on for our Sentinel solution for SAP.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Entra ID&lt;/STRONG&gt; advances SAP identity governance with new OAuth 2.0 support, &lt;STRONG&gt;SAP IAG&lt;/STRONG&gt; integration preview, and expanded &lt;STRONG&gt;SAP Access Control migration&lt;/STRONG&gt; for unified, secure access.&lt;/LI&gt;
&lt;LI&gt;Advanced support for &lt;STRONG&gt;high availability with SAP ASE (Sybase) database backup&lt;/STRONG&gt; in Azure Backup.&lt;/LI&gt;
&lt;LI&gt;SAP Deployment Automation Framework now supports &lt;STRONG&gt;highly available scale-out architectures with HANA System Replication&lt;/STRONG&gt; for large-scale resilient configurations.&lt;/LI&gt;
&lt;LI&gt;SAP Testing Automation Framework enhances high availability testing with &lt;STRONG&gt;offline Pacemaker cluster validation for RHEL/SUSE&lt;/STRONG&gt; and native, Linux-based quality-check validation tools.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enhanced SAP Inventory and Observability Dashboard&lt;/STRONG&gt; that reduces operational risk and supports production-ready SAP systems, along with a customizable Windows Quality Checks PowerShell template.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's dive into the details of these product updates and services.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Extend, Innovate, and Secure&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H3&gt;&lt;STRONG&gt;Copilot Studio &lt;/STRONG&gt;&lt;STRONG&gt;and SAP Joule &lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Since the release of the Joule and Copilot integration earlier this year, we have seen great interest and adoption among customers and partners. Joule as a host integration is planned for release later this year. &amp;nbsp;&lt;A href="https://help.sap.com/docs/joule/integrating-joule-with-sap/integrating-joule-with-microsoft-365-copilot" target="_blank" rel="noopener"&gt;Integrating Joule with Microsoft 365 Copilot | SAP Help Portal&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For customers on their journey towards RISE and GROW, we also worked with the Azure API Management team to enable the exposure of SAP OData services from your SAP systems as an MCP server, which can then be consumed in Copilot using Microsoft Copilot Studio. This enables end-users to interact with their SAP system based on any OData service. For more details, check out &lt;A href="https://learn.microsoft.com/en-us/azure/api-management/export-rest-mcp-server" target="_blank" rel="noopener"&gt;Expose REST API in API Management as MCP server &lt;/A&gt;and &lt;A href="https://www.youtube.com/watch?v=69L4UBLdi3g" target="_blank" rel="noopener"&gt;Copilot + SAP: Azure API Management, MCP and SAP OData&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;To simplify the integration and help customers and partners get started faster, we are releasing preconfigured Copilot Studio Agent that can orchestrate over other agents like SAP, Fabric and Microsoft 365. Customers can use these agents out of the box or use them as a foundation to extend and build their own Copilot Agents.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Microsoft Security for SAP&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Security is being&amp;nbsp;&lt;STRONG&gt;reengineered for the AI era&lt;/STRONG&gt;&amp;nbsp;-&amp;nbsp;moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Attackers think in graphs -&amp;nbsp;Microsoft does too.&amp;nbsp;We are&amp;nbsp;bringing relationship-aware context to Microsoft Security suite -&amp;nbsp;so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SAP S/4HANA Cloud public edition&lt;/STRONG&gt; &lt;A href="https://aka.ms/s4-pc-sentinel-release-blog" target="_blank" rel="noopener"&gt;Add-on&lt;/A&gt; for Microsoft Sentinel for SAP (preview): Enables deep, native integration of SAP telemetry with Sentinel, bringing advanced threat detection, investigation, and response to SAP workloads running in the cloud.&lt;/LI&gt;
&lt;LI&gt;Microsoft Sentinel for SAP &lt;A href="https://aka.ms/agentless-ga-blog" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Agentless Data Connector&lt;/STRONG&gt;&lt;/A&gt;: Now generally available, the agentless connector significantly simplifies deployment while delivering secure, high-fidelity ingestion of SAP audit and application logs into Sentinel.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Expanded Security Guidance&lt;/STRONG&gt;: Enhanced guidance for Microsoft Defender, Ransomware Protection, and Cyber Defense for SAP, helping customers implement best practices for hardening SAP environments and responding to evolving threats.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost-Efficient Long-Term Log Storage&lt;/STRONG&gt;: Organizations can now take advantage of &lt;A href="https://learn.microsoft.com/azure/sentinel/datalake/sentinel-lake-overview" target="_blank" rel="noopener"&gt;Sentinel Data Lake&lt;/A&gt; to retain SAP logs for 12 years at scale for compliance (NIS2, DORA) and forensic use cases - at a fraction of traditional storage costs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Purview &lt;/STRONG&gt;is shipping its most-requested feature updates for our existing SAP connectors (SNC mode support in preview, CDS view support, and scoped metadata scanning) and a new connector for BW/4HANA.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;SAP has reiterated the end of maintenance for SAP Identity Management (SAP IDM) by the end of 2027 and is collaborating with Microsoft so customers can migrate identity scenarios to Microsoft Entra ID as the recommended successor.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Provisioning backbone in place&lt;/STRONG&gt;: Microsoft Entra released &lt;A href="https://learn.microsoft.com/entra/identity/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial" target="_blank" rel="noopener"&gt;new features&lt;/A&gt; for the built‑in connector for SAP Cloud Identity Services (CIS) to support authentication with OAuth 2.0, and provisioning of groups to streamline authorization management in downstream SAP targets like SAP S/4HANA and SAP BTP, enabling HR‑driven, &lt;A href="https://community.sap.com/t5/technology-blog-posts-by-members/identity-and-access-management-with-microsoft-entra-part-iii-successfactors/ba-p/14233747" target="_blank" rel="noopener"&gt;end‑to‑end identity lifecycles&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Private Preview: Microsoft Entra Integration with SAP IAG&lt;/STRONG&gt;: The private preview for Microsoft Entra integration with SAP Identity Access Governance (IAG) is now underway. Selected customers are testing Entra ID Governance access packages that include SAP IAG roles as resources, routing of access approvals through SAP IAG, and provisioning of roles across both systems. &lt;A href="https://forms.cloud.microsoft/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-KNzaa8WIhKvUH7PBDqQsJUOTNWS0owTk5TTU9LVVM2UE1YRkdRV0NJOS4u&amp;amp;route=shorturl" target="_blank" rel="noopener"&gt;Sign-Up here&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enhanced Integration Scope with SAP Access Control (AC): &lt;/STRONG&gt;Driven by direct customer feedback, Microsoft and SAP are expanding the migration and integration scope to include SAP Access Control (AC). This enhancement will enable comprehensive access management, risk analysis, and policy enforcement on-premises, leveraging Microsoft Entra’s governance capabilities for improved security and compliance.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Together, these innovations give customers end-to-end visibility and protection across SAP landscapes—spanning public cloud, hybrid, and on-premises deployments.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;SAP on Azure Software Products and Services&amp;nbsp;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H3&gt;&lt;STRONG&gt;Azure Backup for SAP&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;We are committed to expanding backup support for additional SAP workloads. Following the &lt;A href="https://techcommunity.microsoft.com/blog/SAPApplications/sap-on-azure-product-announcements-summary-%E2%80%93-sap-sapphire-2025/4415281" target="_blank" rel="noopener"&gt;general availability of ASE backup&lt;/A&gt;, we have further enhanced its capabilities with the introduction of high availability configuration support. This enhancement delivers automatic backup support for SAP systems set up with Replication Server, ensuring seamless protection after failover or failback events without the need for manual intervention. As a result, users benefit from immediate and continuous data protection, along with a simplified restore process using a single backup chain.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have expanded our snapshot backup capability for SAP HANA by adding Recovery Services Vault support. This helps customers store their snapshot backups with long-term retention while gaining protection from ransomware attacks. Vault support brings capabilities such as immutability, soft-delete enablement, and multi-user authorization to further safeguard the data.&lt;/P&gt;
&lt;P&gt;We have also launched the preview of scale-out configuration support for SAP HANA streaming backup, expanding our overall topology support.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;SAP Deployment Automation Framework&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;We are releasing updates to the SAP Deployment Automation Framework (SDAF) and SAP Testing Automation Framework (STAF) that expand testing coverage, improve reliability, and provide additional deployment flexibility for SAP environments on Azure.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;SAP Deployment Automation Framework (SDAF)&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;SDAF deployment and configuration scenarios now include scale-out architectures with HANA System Replication (HSR). This enhancement addresses resiliency requirements for large-scale deployments requiring multi-node scale-out configurations with built-in replication capabilities.&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;SDAF now supports GitHub Actions in addition to existing deployment methods including Azure DevOps pipelines, CLI scripts, and the WebApp interface. Organizations using GitHub for source control and infrastructure management can now deploy and manage SAP environments using their existing workflows and tooling preferences.&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;SAP Testing Automation Framework (STAF)&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;STAF now supports offline validation for SAP Pacemaker clusters. This capability enables testing of resource agent failover mechanisms without executing live cluster operations, reducing risk during validation cycles and allowing for pre-deployment verification of high availability configurations.&lt;/P&gt;
&lt;P&gt;The high availability testing suite has been updated to include SAPHanaSr-ANGI tests, ensuring compatibility with SUSE Linux Enterprise Server 15 and SAP HANA 2.0 SP5 environments. This update addresses the requirements of organizations running current SAP HANA releases on modern SUSE distributions.&lt;/P&gt;
&lt;P&gt;Configuration Checks, now in preview, is a rewrite of the open-source Quality Checks tool, integrated as a native capability within STAF. This tool validates SAP on Azure installations against Microsoft reference architecture and configuration guidance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Azure Center and Azure Monitor for SAP Solutions&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;We are pleased to share that Azure Center for SAP solutions (ACSS) is now available in &lt;STRONG&gt;Italy North&lt;/STRONG&gt;, providing end-to-end SAP workload management to more customers across Europe.&lt;/P&gt;
&lt;P&gt;Additionally, Azure Monitor for SAP solutions (AMS) is now available in&lt;STRONG&gt; Italy North.&lt;/STRONG&gt; AMS continues to help SAP customers reliably monitor their mission-critical workloads on Azure with comprehensive insights.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get started:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/center-sap-solutions/overview" target="_blank" rel="noopener"&gt;Azure Center for SAP solutions | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/monitor/about-azure-monitor-sap-solutions" target="_blank" rel="noopener"&gt;What is Azure Monitor for SAP solutions? | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/ACSSPortal" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Azure Center for SAP solutions Tools and Frameworks&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;We have refreshed our &lt;STRONG&gt;SAP on Azure Well-Architected Framework&lt;/STRONG&gt; and the accompanying &lt;STRONG&gt;SAP on Azure Assessment&lt;/STRONG&gt; to reflect the latest platform guidance. The update aligns with recent Azure innovations—including VMSS Flex, Premium SSD v2, Capacity Reservation Groups, Mv3-series, and NVMe-based SKUs—so architects and admins can plan and deploy with current best practices. The assessment is also now surfaced on the main Assessments hub for easier access and can be used as a repeatable checkpoint throughout your SAP deployment lifecycle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Quality Checks (PowerShell) for Windows:&lt;/STRONG&gt; We have published &lt;A href="https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/SAPOnAzureWindowsChecks" target="_blank" rel="noopener"&gt;a lightweight, read-only script&lt;/A&gt; for customers running SAP on Windows and SQL Server on Microsoft Azure. It performs post-provisioning health checks and outputs a color-coded HTML report plus JSON. Use it as a baseline template—customize the thresholds to your environment, and feel free to contribute enhancements to cover your configuration requirements.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Observability Dashboard:&lt;/STRONG&gt; Based on customer feedback, we have expanded the dashboard to surface design-impacting signals for running &lt;STRONG&gt;specialized workloads on Azure&lt;/STRONG&gt;. It now offers Overview, Security, Networking, and Inventory views, plus extended reports for managers and hands-on engineers. Updates make it easier to review VM redundancy, spot orphaned resources, see Capacity Reservation Groups with their associated VMs in the primary region, and count Public IPs on the Basic SKU—helping you stay on top of infrastructure hygiene and avoid unsupported configurations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;SAP + Microsoft Co-Innovations&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Microsoft and SAP are always working on new solutions to help our customers adapt and grow their businesses in areas including AI, Business Suite, Data, Cloud ERP, Security, and SAP BTP, among others. Recently, we started a new era of agentic AIOps collaboration between SAP and Microsoft, with a fully orchestrated multi-agent ecosystem for mission-critical workloads. Please &lt;A class="lia-external-url" href="https://www.sap.com/resources/sap-and-microsoft-lead-aiops-revolution" target="_blank" rel="noopener"&gt;check out this blog&lt;/A&gt; to learn more.&lt;/P&gt;
      <pubDate>Tue, 04 Nov 2025 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-teched-2025/ba-p/4465383</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-11-04T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Evolving SAP Testing on Azure: What’s New in the SAP Testing Automation Framework</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/evolving-sap-testing-on-azure-what-s-new-in-the-sap-testing/ba-p/4465802</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We're announcing the general availability (GA) of the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/automation/testing-framework-architecture" target="_blank" rel="noopener"&gt;SAP Testing Automation Framework (STAF)&lt;/A&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;an &lt;A class="lia-external-url" href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;open-source&lt;/A&gt; orchestration tool that automates validation of SAP deployments on Microsoft Azure.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With this release, &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;High Availability (HA) function testing&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is now available for SAP HANA scale-up databases and SAP Central Services (ASCS/ERS) running in two-node Pacemaker clusters. In addition, we're introducing&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Configuration Checks (in preview) &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;that helps in validating SAP system configurations against Azure best practices across infrastructure, storage, OS parameters, and cluster resources.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;How STAF Works?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;STAF uses a hub-and-spoke architecture where a centralized management server orchestrates validation across your SAP landscape using ansible. The framework code resides on a management server that connects remotely to SAP systems via SSH. System-specific details (hostnames, credentials, topology) are stored in configuration files on the management server or in source-controlled configuration repository (in case of SDAF). This separation means no agents or framework components need to be installed on SAP virtual machines.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Setting up workspace configurations is a one-time activity. Once defined, the same configuration can be used repeatedly for pre-go-live validation, change verification, or periodic compliance audits without reconfiguration.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;High Availability Functional Testing&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;High Availability functional testing executes controlled failure scenarios to verify that Pacemaker clusters, HANA System Replication, and SAP Central Services respond correctly when failures occur. Each test follows a pattern: capture baseline state, inject failure, monitor cluster reaction, validate failover, and restore to stable state.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Failure Scenarios Included&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;HANA Database: &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;The functional test cases include&amp;nbsp;Indexserver&amp;nbsp;kill, network isolation, node crashes, storage freezes (in case of ANF filesystem), SBD fencing events etc. These test cases are part of the guidance outlined in the official document. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;SAP Central Services:&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;The function test cases include A&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;SCS/ERS process termination, message server failures, network isolation, and planned resource migration etc. These test cases are part of the guidance outlined in the official document. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Key Metrics Captured&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For every test, the framework records:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Failure detection time: Duration from failure injection to Pacemaker detecting the issue through resource agent monitor operations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Fencing duration: Time from failure detection to successful node isolation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Failover completion time: End-to-end duration from failure injection to full-service availability on the secondary node. For HANA, this includes takeover decision, log replay, and&amp;nbsp;indexserver&amp;nbsp;startup. For ASCS/ERS, it includes enqueue table replication and service restart.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Reports include pass/fail status, detailed timelines with millisecond-level precision, diagnostic logs for troubleshooting failed scenarios, and execution logs for troubleshooting framework failures.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;Offline Validation of High Availability Configuration&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Offline validation in STAF enables assessment of SAP HANA and Central Services high availability cluster configurations without requiring a live SSH connection to production systems. By comparing exported cluster information base (CIB) XML files from each node with best practices, STAF delivers non-intrusive validation, making it ideal for environments with restricted connectivity. To learn more about setup and usage, check out the documentation &lt;A class="lia-external-url" href="https://github.com/Azure/sap-automation-qa/blob/main/docs/HA_OFFLINE_VALIDATION.md" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
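As a rough sketch of the idea, assuming the standard Pacemaker CIB layout (where `nvpair` elements carry cluster properties), an offline check can compare an exported CIB against a recommended baseline without touching the live cluster. The property names and expected values below are illustrative only; consult the STAF documentation for its actual rule set:

```python
import xml.etree.ElementTree as ET

# Illustrative expectations only -- not STAF's real baseline.
EXPECTED_PROPERTIES = {"stonith-enabled": "true", "concurrent-fencing": "true"}

def validate_cib(cib_xml: str) -> dict:
    """Compare cluster properties in an exported CIB XML string against
    expected values; returns {property: 'Passed' | 'Failed ...' | 'Missing'}."""
    root = ET.fromstring(cib_xml)
    # Collect every name/value pair defined anywhere in the CIB.
    found = {nv.get("name"): nv.get("value") for nv in root.iter("nvpair")}
    results = {}
    for name, expected in EXPECTED_PROPERTIES.items():
        actual = found.get(name)
        if actual is None:
            results[name] = "Missing"
        elif actual == expected:
            results[name] = "Passed"
        else:
            results[name] = f"Failed (got {actual!r}, expected {expected!r})"
    return results
```

Because the input is just the exported XML, the same check runs identically on a workstation with no SSH access to the cluster nodes.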
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Configuration Checks (Preview)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Configuration checks provide read-only validation of SAP system settings against documented Azure best practices and SAP Notes. The framework executes Azure infrastructure validations using Azure CLI from the management server, while OS and SAP-level checks use SSH connections to target systems. Integrating data from the management server and SAP virtual machine inspections enable comprehensive validation of the infrastructure deployed to run SAP systems.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;STAF automates validation of SAP infrastructure on Azure by checking VM, storage, and network configurations using Azure CLI, and ensures compliance with best practices for HANA and Azure Files/ANF. It also verifies OS/kernel parameters, SAP profile settings, high availability cluster configurations, and database-specific settings for both HANA and Db2, providing a comprehensive compliance check across all critical system layers.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Findings are categorized as Passed, Failed, Critical, Warning, or Informational with reference links to SAP Notes and Azure documentation for remediation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Getting Started with STAF&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;STAF is fully open source, available on&amp;nbsp;&lt;/SPAN&gt;&lt;A class="lia-external-url" href="https://github.com/Azure/sap-automation-qa" target="_blank" rel="noopener"&gt;GitHub&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;, providing complete transparency into test logic, Ansible playbooks, and Python modules. The repository includes comprehensive setup documentation, sample workspace configurations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Setup Options&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;For standalone SAP systems, follow the &lt;/SPAN&gt;&lt;A class="lia-external-url" href="https://github.com/Azure/sap-automation-qa/blob/main/docs/SETUP.MD" target="_blank" rel="noopener"&gt;Setup Guide&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;to configure a management server with the framework code and workspace definitions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;For environments already using SAP Deployment Automation Framework (SDAF), STAF integrates seamlessly into existing pipelines via the &lt;/SPAN&gt;&lt;A class="lia-external-url" href="https://github.com/Azure/sap-automation-qa/blob/main/docs/SDAF_INTEGRATION.md" target="_blank" rel="noopener"&gt;SDAF Integration Guide&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
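&lt;P&gt;&lt;EM&gt;For the standalone path, the first step is simply cloning the repository onto the management server; the repository URL is the one linked above, but the follow-up commands and folder names below are an illustration only — the authoritative steps are in the Setup Guide.&lt;/EM&gt;&lt;/P&gt;

```shell
# Hedged sketch: getting STAF onto a management server.
# The repo URL comes from this article; subsequent configuration steps
# are documented in the repository's Setup Guide, not here.
git clone https://github.com/Azure/sap-automation-qa.git
cd sap-automation-qa

# Inspect the setup documentation and sample workspace definitions
# mentioned above (exact folder layout may differ from this illustration):
ls docs
```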
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The framework supports SLES and RHEL distributions across multiple HA configurations: HANA scale-up, SAP Central Services with ENSA1/ENSA2, and Azure Fence Agent or SBD-based fencing, with storage options including Azure Managed Disks, Azure Files, and Azure NetApp Files. For SLES environments, both SAPHanaSR and SAPHanaSR-angi topologies are supported.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Community and Contributions&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Contributions are welcome. Whether it's reporting issues, suggesting new test scenarios, or submitting pull requests, community feedback helps improve the framework for all SAP on Azure users. Visit the &lt;A class="lia-external-url" href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;GitHub repository&lt;/A&gt; to explore the code, review existing issues, or open new ones. For questions or discussions, engage through GitHub Issues.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;BR /&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;Start validating SAP systems today and ensure clusters are ready when it matters most.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 04 Nov 2025 06:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/evolving-sap-testing-on-azure-what-s-new-in-the-sap-testing/ba-p/4465802</guid>
      <dc:creator>DevanshJain</dc:creator>
      <dc:date>2025-11-04T06:00:00Z</dc:date>
    </item>
    <item>
      <title>SAP Business Data Cloud Now Available on Microsoft Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-business-data-cloud-now-available-on-microsoft-azure/ba-p/4460551</link>
      <description>&lt;P&gt;We’re thrilled to announce that&amp;nbsp;&lt;STRONG&gt;SAP Business Data Cloud (SAP BDC) including SAP Databricks&lt;/STRONG&gt; is now available on &lt;STRONG&gt;Microsoft Azure &lt;/STRONG&gt;marking a major milestone in our strategic partnership with SAP and Databricks and our commitment to empowering customers with cutting-edge Data &amp;amp; AI capabilities.&lt;/P&gt;
&lt;P&gt;SAP BDC is a fully managed SaaS solution designed to unify, govern, and activate SAP and third-party data for advanced analytics and AI-driven decision-making. Customers can now &lt;STRONG&gt;deploy SAP BDC on Azure in US East, US West and Europe West,&lt;/STRONG&gt; with additional regions coming soon, and unlock transformative insights from their enterprise data with the scale, security, and performance of Microsoft’s trusted cloud platform.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Why SAP BDC on Azure Is a Game-Changer for Data &amp;amp; AI&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Deploying SAP BDC on Azure enables organizations to accelerate their &lt;STRONG&gt;Data &amp;amp; AI initiatives&lt;/STRONG&gt; by modernizing their SAP Business Warehouse systems and leveraging a modern data architecture that includes SAP HANA Cloud, data lake files and connectivity to Microsoft technology. Whether it’s building AI-powered intelligent applications, enabling semantically rich data products, or driving predictive analytics, SAP BDC on Azure provides the foundation for scalable, secure, and context-rich decision-making. &lt;BR /&gt;Running SAP BDC workloads on &lt;STRONG&gt;Microsoft Azure&lt;/STRONG&gt; unlocks the full potential of enterprise data by integrating SAP systems with non-SAP data using Microsoft’s powerful &lt;STRONG&gt;Data &amp;amp; AI services&lt;/STRONG&gt; - enabling customers to build intelligent applications grounded in critical business context.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why Azure is an Ideal Platform for Running SAP BDC&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Microsoft Azure stands out as a leading cloud platform for hosting SAP solutions, including SAP BDC. Azure’s global infrastructure, high-performance networking, and powerful Data &amp;amp; AI capabilities make it an ideal foundation for large-scale SAP workloads. When organizations face complex data environments and need seamless interoperability across tools, Azure’s resilient backbone and enterprise-grade services provide the scalability and reliability essential for building a robust SAP data architecture.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Under the Hood: SAP Databricks in SAP BDC is Powered by Azure Databricks&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;A key differentiator of SAP BDC on Azure is that &lt;STRONG&gt;SAP Databricks&lt;/STRONG&gt;, a core component of BDC, runs on &lt;STRONG&gt;Azure Databricks&lt;/STRONG&gt;—Microsoft’s first-party service.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Databricks is a fully managed, first-party service, making Microsoft Azure the optimal cloud for running Databricks workloads.&lt;/STRONG&gt; It uniquely offers:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Native integration with Microsoft Entra ID&lt;/STRONG&gt; for seamless access control.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Optimized performance with Power BI&lt;/STRONG&gt;, delivering unmatched analytics speed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enterprise-grade security and compliance&lt;/STRONG&gt;, inherent to Azure’s first-party services.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Joint engineering and unified support&lt;/STRONG&gt; from Microsoft and Databricks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Zero-copy data sharing&lt;/STRONG&gt; between SAP BDC and Azure Databricks, enabling frictionless collaboration across platforms.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This deep integration ensures that customers benefit from the full power of Azure’s AI, analytics, and governance capabilities while running SAP workloads.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Expanding Global Reach: What’s Next&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;While SAP BDC is now live in three Azure regions (US East, US West, and Europe West), we’re just getting started. Over the next few months, availability will expand to additional Azure regions such as Brazil and Canada.&lt;/P&gt;
&lt;P&gt;For the remaining regions, a continuously updated roadmap can be found on the &lt;A href="https://roadmaps.sap.com/board?range=2025Q3-2026Q3&amp;amp;q=azure&amp;amp;PRODUCT=73555000100800004851#Q3%202025" target="_blank" rel="noopener"&gt;SAP Roadmap Explorer website&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Final Thoughts&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This launch reinforces Microsoft Azure’s longstanding relationship with SAP, backed by over 30 years of trusted partnership and co-innovation. With SAP BDC now available on Azure, customers can confidently modernize their data estate, unlock AI-driven insights, and drive business transformation at scale.&lt;/P&gt;
&lt;P&gt;Stay tuned as we continue to expand availability and bring even more Data &amp;amp; AI innovations to our joint customers over the next few months.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Oct 2025 23:08:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-business-data-cloud-now-available-on-microsoft-azure/ba-p/4460551</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-10-09T23:08:48Z</dc:date>
    </item>
    <item>
      <title>Announcing Public Preview for Business Process Solutions</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/announcing-public-preview-for-business-process-solutions/ba-p/4453658</link>
      <description>&lt;P&gt;In today’s AI powered enterprises, success hinges on access to reliable, unified business information. Whether you are deploying AI-augmented workflows or fully autonomous agentic solutions, one thing is clear: trusted, consistent data is the fuel that drives intelligent outcomes. Yet in many organizations, data remains fragmented across best of breed applications – creating blind spots in cross-functional processes and throwing roadblocks in the path of automation. Microsoft is dedicated to tackle these challenges, delivering a unified data foundation that accelerates AI adoption, simplifies automation and reduces risk – empowering businesses to unlock the full potential of unified data analytics and agentic intelligence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;Our new solution offers cross-functional insights across previously siloed environments and includes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Prebuilt data models for enterprise business applications in Microsoft Fabric&lt;/LI&gt;
&lt;LI&gt;Source system data mappings and transformations&lt;/LI&gt;
&lt;LI&gt;Prebuilt dashboards and reports in Power BI&lt;/LI&gt;
&lt;LI&gt;Prebuilt AI Agents in Copilot Studio (coming soon)&lt;/LI&gt;
&lt;LI&gt;Integrated Security and Compliance&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By unifying Microsoft’s Fabric and AI solutions, we can rapidly accelerate transformation and de-risk AI rollout through repeatable, reliable, prebuilt solutions.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Functional Scope&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Our new solution currently supports a set of business applications and functional areas, enabling organizations to break down silos and drive actionable insights across their core processes. The platform covers key domains such as:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Finance:&lt;/STRONG&gt; Delivers a comprehensive view of financial performance, integrating data from general ledger, accounts receivable, and accounts payable systems. This enables finance teams to analyze trends, monitor compliance, and optimize cash flow management all from within Power BI. The associated Copilot agent provides not only access to this data via natural language but will also enable financial postings.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sales:&lt;/STRONG&gt; Provides a complete perspective on customers’ opportunity-to-cash journeys, from initial opportunity through invoicing and payment, via Power BI reports and dashboards. The associated Copilot agent can help improve revenue forecasting by connecting structured ERP and CRM data with unstructured data from Microsoft 365, as well as track sales pipeline health and identify bottlenecks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Procurement:&lt;/STRONG&gt; Supports strategic procurement and supplier management, consolidating purchase orders, goods receipts, and vendor invoicing data into a complete spend dashboard. This empowers procurement teams to optimize sourcing strategies, manage supplier risk, and control spend.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Manufacturing (coming soon):&lt;/STRONG&gt; Will extend coverage to manufacturing and production processes, enabling organizations to optimize resource allocation and monitor production efficiency.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Each item within Business Process Solutions is delivered as a complete, business-ready offering. These models are thoughtfully designed to ensure that organizations can move seamlessly from raw data to actionable execution. Key features include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Facts and Dimensions:&lt;/STRONG&gt; Each model is structured to capture both transactional details (facts) and contextual information (dimensions), supporting granular analysis and robust reporting across business processes.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Transformations:&lt;/STRONG&gt; Built-in transformations automatically prepare data for reporting and analytics, making it compatible with Microsoft Fabric. For example, when a business user needs to compare sales results from Europe, Asia, and North America, the solution transformations handle currency conversion behind the scenes. This ensures that results are consistent across regions, making analysis straightforward and reliable—without the need for manual intervention or complex configuration.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Insight to Action:&lt;/STRONG&gt; Customers will be able to leverage prebuilt Copilot Agents within Business Process Solutions to turn insight into action. These agents are deeply integrated not only with Microsoft Fabric and Microsoft Teams, but also connected source applications, enabling users to take direct, contextual actions across systems based on real-time insights. By connecting unstructured data sources such as emails, chats, and documents from Microsoft 365 apps, the agents can provide a holistic and contextualized view to support smarter decisions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;With embedded triggers and intelligent agents, automated responses can be initiated based on new insights, streamlining decision-making and enabling proactive, data-driven operations. Ultimately, this will empower teams not just to understand what is happening at a holistic level, but also to take faster, smarter actions with greater confidence.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Authorizations:&lt;/STRONG&gt; Data models are tailored to respect organizational security and access policies, ensuring that sensitive information is protected and only accessible to authorized users. The same user credential principles apply to the Copilot agents when interacting with/updating the source system in the user-context.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Behind the scenes, the solution automatically provisions the required objects and infrastructure to build the data warehouse, removing the usual complexity of bringing data together. It guarantees consistency and reliability, so organizations can focus on extracting value from their data rather than managing technical details.&amp;nbsp; This reliable data foundation serves as one of the key informants of the agentic business processes.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Accelerated Insights with Prebuilt Analytics&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Building on these robust data models, Business Process Solutions offers a suite of prebuilt Power BI reports tailored to common business processes. These reports provide immediate access to key metrics and trends, such as financial performance, sales effectiveness, and procurement efficiency.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Designed for rapid deployment, they allow organizations to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Start analyzing data from day one, without lengthy setup or customization.&lt;/LI&gt;
&lt;LI&gt;Adapt existing reports for your organization’s exact business needs.&lt;/LI&gt;
&lt;LI&gt;Demonstrate best practices for leveraging data models in analytics and decision-making.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This approach accelerates time-to-value and also empowers users to explore new analytical scenarios and drive continuous improvement.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Extensibility and Customization&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Every organization is unique and our new solution is designed to support this, allowing you to adapt analytics and data models to fit your specific processes and requirements. You can customize scope items, bring in your own tables and views, integrate new data sources as your business evolves, and combine data across Microsoft Fabric for deeper insights.&lt;/P&gt;
&lt;P&gt;Similarly, the associated agents will be customizable from Copilot Studio to adapt to your specific Enterprise apps configuration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;This flexibility ensures that, no matter how your organization operates, Business Process Solutions helps you unlock the full value of your data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Data integration&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Business Process Solutions uses the same connectivity options as Microsoft Fabric and Copilot Studio, but goes further by embedding best practices that make integration simpler and more effective. We recognize that no single pattern can address the diverse needs of all business applications. We also understand that many businesses have already invested in data extraction tools, which is why our solution supports a wide range of options, from native connectivity to third-party options that bring specialized capabilities to the table. With Business Process Solutions, we ensure data can be interacted with in a reliable, high-performance way, whether working with massive volumes or complex data structures.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Getting started&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If your organization is ready to unlock the value of unified analytics, getting started is simple. Just send us a request using the form at:&lt;STRONG&gt; &lt;A class="lia-external-url" href="https://aka.ms/JoinBusAnalyticsPreview" target="_blank" rel="noopener"&gt;https://aka.ms/JoinBusAnalyticsPreview&lt;/A&gt;&lt;/STRONG&gt;. Our team will guide you through the next steps and help you begin your journey.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Sep 2025 08:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/announcing-public-preview-for-business-process-solutions/ba-p/4453658</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-09-16T08:30:00Z</dc:date>
    </item>
    <item>
      <title>New Mbv3 Size, Standard_M416bs_v3, General Availability</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/new-mbv3-size-standard-m416bs-v3-general-availability/ba-p/4439103</link>
      <description>&lt;P&gt;As we launched memory-optimized M-series family with high remote storage performance,&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/memory-optimized/mbsv3-series?tabs=sizebasic" target="_blank" rel="noopener"&gt;Mb Family&lt;/A&gt;, in Sep ’24, there’s been strong demand for more computing optimization, especially from healthcare organizations. Today, we're excited to expand the Mbv3 portfolio to better support large-scale, mission-critical database workloads—especially for healthcare organizations operating EHR (Electronic Health Records) database on Azure.&amp;nbsp;&lt;STRONG&gt;We’re pleased to announce the general availability of the new Mbv3 size, Standard_M416bs_v3.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This new SKU offers increased vCPU cores along with enhanced memory and remote storage performance, making it ideal for high-performance database scenarios that require consistent throughput, scalability, and reliability.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key features &lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;STRONG&gt;Mbv3 series&lt;/STRONG&gt; is based on 4th-generation Intel® Xeon® Scalable processors, scales to workloads requiring up to 4 TB of memory, and uses an NVMe interface for higher remote disk storage performance.&lt;/LI&gt;
&lt;LI&gt;The VM size newly added to the Mbv3 series, &lt;STRONG&gt;Standard_M416bs_v3,&lt;/STRONG&gt; offers 416 vCPUs, more than double the vCPU count of the largest previously launched Mbv3 VM.&lt;/LI&gt;
&lt;LI&gt;The&amp;nbsp;&lt;STRONG&gt;Standard_M416bs_v3 &lt;/STRONG&gt;offers high remote storage performance with up to 550,000 IOPS and 10 GBps of remote disk storage bandwidth.&lt;/LI&gt;
&lt;LI&gt;The increased remote storage performance of &lt;STRONG&gt;Mbv3 series&lt;/STRONG&gt; is ideal for storage throughput-intensive workloads such as relational databases and data analytics applications.&lt;/LI&gt;
&lt;/UL&gt;
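&lt;P&gt;&lt;EM&gt;As a hedged illustration of how you might check for and deploy the new size with Azure CLI (resource names, region, and image below are placeholders, not prescriptions):&lt;/EM&gt;&lt;/P&gt;

```shell
# Check whether Standard_M416bs_v3 is offered in a target region
# (resource names and region here are illustrative placeholders):
az vm list-sizes --location eastus \
  --query "[?name=='Standard_M416bs_v3']" -o table

# Create a VM with the new size; replace the image reference and names
# with values appropriate for your environment.
az vm create \
  --resource-group my-ehr-rg \
  --name ehr-db-vm \
  --location eastus \
  --size Standard_M416bs_v3 \
  --image MyOsImageReference \
  --admin-username azureadmin \
  --generate-ssh-keys
```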
&lt;P&gt;&lt;STRONG&gt;Mbsv3 series (NVMe)&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-color-21" border="1" style="width: 1050px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Size&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;vCPU&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Memory: GiB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max data disks&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max uncached Premium SSD: IOPS/Throughput(MBps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max uncached Ultra Disk and Premium SSD V2 disk:&amp;nbsp; IOPS/Throughput(MBps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max NICs&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Max network bandwidth (Mbps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Standard_M416bs_v3&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;416&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;3800&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;64&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;240,000/8,000&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;550,000/10,000&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;8&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;50,000&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Regional Availability and Pricing&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The VMs are now available in &lt;STRONG&gt;Central US, East US 2, East US, West US 2.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For pricing details, please refer to the &lt;A href="https://azure.microsoft.com/en-us/pricing/calculator/" target="_blank" rel="noopener"&gt;Pricing Calculator | Microsoft Azure&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Sep 2025 01:20:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/new-mbv3-size-standard-m416bs-v3-general-availability/ba-p/4439103</guid>
      <dc:creator>MingJiong_Zhang</dc:creator>
      <dc:date>2025-09-05T01:20:32Z</dc:date>
    </item>
    <item>
      <title>Backup SAP Oracle Databases Using Azure VM Backup Snapshots</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/backup-sap-oracle-databases-using-azure-vm-backup-snapshots/ba-p/4408055</link>
      <description>&lt;P&gt;This blog article provides a comprehensive step-by-step guide for backing up SAP Oracle databases using Azure VM backup snapshots, ensuring data safety and integrity.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Installation of CIFS Utilities:&lt;/STRONG&gt; The process begins with the installation of cifs-utils on Oracle Linux, which is the recommended OS for running Oracle databases in the cloud.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Setting Up Environment Variables:&lt;/STRONG&gt; Users are instructed to define necessary environment variables for resource group and storage account names.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Creating SMB Credentials:&lt;/STRONG&gt; The guide explains how to create a folder for SMB credentials and retrieve the storage account key, emphasizing the need for appropriate permissions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Mounting SMB File Share:&lt;/STRONG&gt; Instructions are provided for checking the accessibility of the storage account and mounting the SMB file share, which will serve as a backup location for archived logs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Preparing Oracle Database for Backup:&lt;/STRONG&gt; Users must place the Oracle database in hot backup mode to ensure a consistent backup while allowing ongoing transactions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Initiating Snapshot Backup:&lt;/STRONG&gt; Once the VM backup is configured, users can initiate a snapshot backup to capture the state of the virtual machine, including the Oracle database.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Restoration Process:&lt;/STRONG&gt; The document outlines the steps for restoring the Oracle database from the backup, including updating IP addresses and starting the database listener.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Final Steps and Verification:&lt;/STRONG&gt; Users are encouraged to verify the configuration and ensure that all necessary backups are completed successfully, including the SMB file share.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 30 Jun 2025 20:35:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/backup-sap-oracle-databases-using-azure-vm-backup-snapshots/ba-p/4408055</guid>
      <dc:creator>Vamshi Polasa</dc:creator>
      <dc:date>2025-06-30T20:35:42Z</dc:date>
    </item>
    <item>
      <title>Moving Linux and Windows from SCSI to NVMe with one easy command</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/moving-linux-and-windows-from-scsi-to-nvme-with-one-easy-command/ba-p/4427954</link>
      <description>&lt;H1&gt;Introduction&lt;/H1&gt;
&lt;P&gt;In the ever-evolving world of cloud computing, maximizing performance and efficiency is crucial for businesses leveraging virtual machines (VMs) on platforms like Microsoft Azure, especially for high I/O workloads like SAP on Azure or database applications. One significant upgrade that can yield substantial performance improvements is converting your Azure VM from a SCSI (Small Computer System Interface) disk setup to NVMe (Non-Volatile Memory Express) using Azure Boost. This blog post will guide you through the process of making this conversion and explore the numerous advantages of NVMe over SCSI.&lt;/P&gt;
&lt;P&gt;In previous posts, you had to prepare the OS yourself, which was a complex process on both Linux and Windows.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now you can move to NVMe with just one simple command, and the script takes care of everything, including the preparation of your operating system.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Advantages of Azure Boost&lt;/H2&gt;
&lt;P&gt;Azure Boost is a powerful enhancement tool for Azure VMs, offering the following advantages:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Accelerated Disk Performance&lt;/STRONG&gt;: Azure Boost optimizes disk I/O operations, significantly increasing the speed and efficiency of your VM's storage.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Seamless Integration&lt;/STRONG&gt;: Easily integrates with existing Azure infrastructure, allowing for a smooth transition and immediate performance benefits.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost-Effective Optimization&lt;/STRONG&gt;: By enhancing the performance of existing VMs, Azure Boost helps reduce the need for more expensive hardware upgrades or additional resources.&lt;/LI&gt;
&lt;/OL&gt;
&lt;img /&gt;
&lt;H2&gt;What is changing for your VM?&lt;/H2&gt;
&lt;P&gt;Changing the host interface from SCSI to NVMe will not change the remote storage (OS disk or data disks), but it does change the way the operating system sees the disks. How the devices are presented varies by VM size, with v6 SKUs now also having up to 4 temporary disks using NVMe.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table style="width: 100%;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&amp;nbsp;&lt;/td&gt;&lt;td&gt;SCSI enabled VM&lt;/td&gt;&lt;td&gt;NVMe enabled VM (v5 and Mv3)&lt;/td&gt;&lt;td&gt;NVMe enabled v6&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;OS disk&lt;/td&gt;&lt;td&gt;/dev/sda&lt;/td&gt;&lt;td&gt;/dev/nvme0n1&lt;/td&gt;&lt;td&gt;/dev/nvme0n1&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Temp Disk&lt;/td&gt;&lt;td&gt;/dev/sdb&lt;/td&gt;&lt;td&gt;/dev/sda&lt;/td&gt;&lt;td&gt;/dev/nvme1n1&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;First Data Disk&lt;/td&gt;&lt;td&gt;/dev/sdc&lt;/td&gt;&lt;td&gt;/dev/nvme0n2&lt;/td&gt;&lt;td&gt;/dev/nvme0n2&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the following sections, we'll provide a step-by-step guide to converting your Azure VM from SCSI to NVMe using Azure Boost, ensuring you can take full advantage of these performance improvements and maintain a competitive edge in the cloud computing landscape.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Preparing your virtual machine (VM) from SCSI to NVMe&lt;/H2&gt;
&lt;P&gt;To migrate from SCSI to NVMe and benefit from higher performance some prerequisites need to be in place:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Your Azure VM generation needs to be V2; you can check this for your VM in, e.g., the Azure portal&lt;BR /&gt;&lt;img&gt;Check VM generation on Azure Portal&lt;/img&gt;&lt;/LI&gt;
&lt;LI&gt;Windows
&lt;OL&gt;
&lt;LI&gt;On Windows, 3rd-party software such as antivirus can influence the behavior after the migration. If you see a bluescreen, convert back to SCSI and try disabling your antivirus/security solution&lt;/LI&gt;
&lt;LI&gt;When you run e.g. a v6 VM you can get up to 4 temp disks; all of them will be RAW and not preformatted with NTFS&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;Linux
&lt;OL&gt;
&lt;LI&gt;Previously you were able to identify the data disks using LUN IDs in /dev/disk/azure/scsi1/lunX. As you migrate to NVMe, those udev rules are no longer valid. You can install the azure-vm-utils package or manually deploy a udev rule available on GitHub&lt;/LI&gt;
&lt;LI&gt;When you run e.g. a v6 VM you can get up to 4 temp disks; all of them will be RAW. You can use e.g. cloud-init to run initialization of those disks every time the operating system starts&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
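&lt;P&gt;For the Linux temp disk preparation mentioned above, a cloud-init cloud-config fragment is one option. The sketch below assumes the first temp disk appears as /dev/nvme1n1 (as on v6 SKUs per the table above); the mount point and file system are our own example choices, and note that cloud-init's disk_setup normally runs once per instance, so strictly per-boot re-initialization may need a bootcmd or a custom service:&lt;/P&gt;

```yaml
#cloud-config
# Sketch: partition, format and mount the first NVMe temp disk.
# Device name assumes a v6 VM (see the device-name table above).
disk_setup:
  /dev/nvme1n1:
    table_type: gpt
    layout: true
    overwrite: true
fs_setup:
  - device: /dev/nvme1n1
    partition: 1
    filesystem: ext4
mounts:
  - [/dev/nvme1n1p1, /mnt/resource, ext4, "defaults,nofail", "0", "0"]
```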
&lt;LI-SPOILER label="IMPORTANT"&gt;
&lt;P&gt;Conversion from VMs with tempdisk (e.g. Standard_D4ds_v5) to Intel or AMD v6 SKUs (e.g. Standard_D4ds_v6) is currently not supported. The only possible migration is through disk snapshots.&lt;/P&gt;
&lt;P&gt;You can convert VMs without tempdisk (e.g. Standard_D4s_v5) to v6 SKUs.&lt;/P&gt;
&lt;/LI-SPOILER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Prepare your environment (required for local PowerShell)&lt;/H2&gt;
&lt;P&gt;When running PowerShell locally, make sure all the requirements are installed and configured.&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-ExecutionPolicy Unrestricted&lt;/LI-CODE&gt;
&lt;P&gt;Install PowerShell modules for Azure&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Install-Module Az -Force&lt;/LI-CODE&gt;
&lt;P&gt;Download the script:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Azure/SAP-on-Azure-Scripts-and-Utilities/refs/heads/main/Azure-NVMe-Utils/Azure-NVMe-Conversion.ps1" -OutFile "Azure-NVMe-Conversion.ps1"
&lt;/LI-CODE&gt;
&lt;P&gt;Log on to Azure and select the correct subscription:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Connect-AzAccount
Select-AzSubscription -Subscription [Your-Subscription-Id]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Migrate your VM&lt;/H2&gt;
&lt;LI-SPOILER label="IMPORTANT"&gt;
&lt;P&gt;You can always revert the migration back to SCSI&lt;/P&gt;
&lt;/LI-SPOILER&gt;
&lt;P&gt;To migrate your VM you need to provide some parameters:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;ResourceGroupName &lt;/STRONG&gt;of the VM you want to convert&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;VMName &lt;/STRONG&gt;of the VM you want to convert&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;NewControllerType &lt;/STRONG&gt;will be SCSI or NVMe&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;VMSize &lt;/STRONG&gt;is the new VM SKU, can also be the same SKU if it supports both Controller Types&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Optional Parameters:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;StartVM &lt;/STRONG&gt;automatically starts the VM after the migration&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;WriteLogFile &lt;/STRONG&gt;stores the output in a local file&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;IgnoreSKUCheck &lt;/STRONG&gt;does not check if the required VM Size is available in the region/zone&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;FixOperatingSystemSettings &lt;/STRONG&gt;automatically prepares your Windows or Linux system using Azure RunCommands&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;IgnoreOSCheck &lt;/STRONG&gt;does not run the OS check; the VM can be shut down, but you need to make sure your VM is prepared&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Sample Command and Output&lt;/H2&gt;
&lt;LI-CODE lang="powershell"&gt;PS /home/philipp&amp;gt; ./NVMe-Conversion.ps1 -ResourceGroupName testrg -VMName testvm -NewControllerType NVMe -VMSize Standard_E4bds_v5 -StartVM -FixOperatingSystemSettings                                          
00:00 - INFO      - Starting script Azure-NVMe-Conversion.ps1
00:00 - INFO      - Script started at 06/27/2025 15:41:39
00:00 - INFO      - Script version: 2025062704
00:00 - INFO      - Script parameters:
00:00 - INFO      -   ResourceGroupName -&amp;gt; testrg
00:00 - INFO      -   VMName -&amp;gt; testvm
00:00 - INFO      -   NewControllerType -&amp;gt; NVMe
00:00 - INFO      -   VMSize -&amp;gt; Standard_E4bds_v5
00:00 - INFO      -   StartVM -&amp;gt; True
00:00 - INFO      -   FixOperatingSystemSettings -&amp;gt; True
00:00 - INFO      - Script Version 2025062704                                                                           
00:00 - INFO      - Module Az.Compute is installed and the version is correct.
00:00 - INFO      - Module Az.Accounts is installed and the version is correct.
00:00 - INFO      - Module Az.Resources is installed and the version is correct.
00:00 - INFO      - Connected to Azure subscription name: AG-GE-CE-PHLEITEN
00:00 - INFO      - Connected to Azure subscription ID: 232b6759-xxxx-yyyy-zzzz-757472230e6c
00:00 - INFO      - VM testvm found in Resource Group testrg
00:01 - INFO      - VM testvm is running
00:01 - INFO      - VM testvm is running Linux
00:01 - INFO      - VM testvm is running SCSI
00:02 - INFO      - Running in Azure Cloud Shell
00:02 - INFO      - Authentication token is a SecureString
00:02 - INFO      - Authentication token received
00:02 - INFO      - Getting available SKU resources
00:02 - INFO      - This might take a while ...
00:06 - INFO      - VM SKU Standard_E4bds_v5 is available in zone 1
00:06 - INFO      - Resource disk support matches between original VM size and new VM size.
00:06 - INFO      - Found VM SKU - Checking for Capabilities
00:06 - INFO      - VM SKU has supported capabilities
00:06 - INFO      - VM supports NVMe
00:06 - INFO      - Pre-Checks completed
00:06 - INFO      - Entering Linux OS section
00:37 - INFO      -    Script output: Enable succeeded: 
00:37 - INFO      -    Script output: [stdout]
00:37 - INFO      -    Script output: [INFO] Operating system detected: sles
00:37 - INFO      -    Script output: [INFO] Checking if NVMe driver is included in initrd/initramfs...
00:37 - INFO      -    Script output: [INFO] NVMe driver found in initrd/initramfs.
00:37 - INFO      -    Script output: [INFO] Checking nvme_core.io_timeout parameter...
00:37 - INFO      -    Script output: [INFO] nvme_core.io_timeout is set to 240.
00:37 - INFO      -    Script output: [INFO] Checking /etc/fstab for deprecated device names...
00:37 - INFO      -    Script output: [INFO] /etc/fstab does not contain deprecated device names.
00:37 - INFO      -    Script output: 
00:37 - INFO      -    Script output: [stderr]
00:37 - INFO      -    Script output: 
00:37 - INFO      - Errors: 0 - Warnings: 0 - Info: 7
00:37 - INFO      - Shutting down VM testvm
01:18 - INFO      - VM testvm stopped
01:18 - INFO      - Checking if VM is stopped and deallocated
01:19 - INFO      - Setting OS Disk capabilities for testvm_OsDisk_1_165411276cbe459097929b981eb9b3e2 to new Disk Controller Type to NVMe
01:19 - INFO      - generated URL for OS disk update:
01:19 - INFO      - https://management.azure.com/subscriptions/232b6759-xxxx-yyyy-zzzz-757472230e6c/resourceGroups/testrg/providers/Microsoft.Compute/disks/testvm_OsDisk_1_165411276cbe459097929b981eb9b3e2?api-version=2023-04-02
01:19 - INFO      - OS Disk updated
01:19 - INFO      - Setting new VM Size from Standard_E4s_v3 to Standard_E4bds_v5 and Controller to NVMe
01:19 - INFO      - Updating VM testvm
01:54 - INFO      - VM testvm updated
01:54 - INFO      - Start after update enabled for VM testvm
01:54 - INFO      - Waiting for 15 seconds before starting the VM
02:09 - INFO      - Starting VM testvm
03:31 - INFO      - VM testvm started
03:31 - INFO      - As the virtual machine got started using the script you can check the operating system now
03:31 - INFO      - If you have any issues after the conversion you can revert the changes by running the script with the old settings
03:31 - IMPORTANT - Here is the command to revert the changes:
03:31 - INFO      -    .\Azure-NVMe-Conversion.ps1 -ResourceGroupName testrg -VMName testvm -NewControllerType SCSI -VMSize Standard_E4s_v3 -StartVM
03:31 - INFO      - Script ended at 06/27/2025 15:45:11
03:31 - INFO      - Exiting
PS /home/philipp&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Reverting back&lt;/H2&gt;
&lt;P&gt;At the end of the script, the output shows a PowerShell command that will revert your VM back to SCSI:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;.\Azure-NVMe-Conversion.ps1 -ResourceGroupName testvg -VMName testvm -NewControllerType SCSI -VMSize Standard_E4s_v3 -StartVM&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Manually preparing Windows&lt;/H2&gt;
&lt;P&gt;To manually prepare Windows you just need to run one command. It sets the NVMe driver to start early during boot.&lt;/P&gt;
&lt;LI-SPOILER label="IMPORTANT"&gt;
&lt;P&gt;Every time you boot, Windows evaluates the required drivers. If you set the NVMe driver to the correct state, reboot and then check again, it will be started later during boot.&lt;/P&gt;
&lt;/LI-SPOILER&gt;&lt;LI-CODE lang="powershell"&gt;sc.exe config stornvme start=boot&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Manually preparing Linux&lt;/H2&gt;
&lt;P&gt;To manually prepare Linux you need to make sure that:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;the NVMe drivers are part of initrd/initramfs&lt;/LI&gt;
&lt;LI&gt;the NVMe I/O timeout is set to 240 seconds (nvme_core.io_timeout=240) in grub&lt;/LI&gt;
&lt;LI&gt;/etc/fstab does not contain references to device names (e.g. /dev/sda) or old udev rule entries (e.g. /dev/disk/azure/scsi1/lun0)&lt;/LI&gt;
&lt;/UL&gt;
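&lt;P&gt;The /etc/fstab check can be scripted. The helper below is our own illustrative sketch (not part of any Azure package); it flags device fields that still use SCSI names or the old scsi1 udev paths, which should be replaced with stable identifiers such as UUID= before the conversion:&lt;/P&gt;

```shell
# Return success (0) if an fstab device field uses a name that is
# deprecated after the NVMe conversion. Illustrative helper only.
is_deprecated_fstab_device() {
  case "$1" in
    /dev/sd*|/dev/disk/azure/scsi1/*) return 0 ;;
    *) return 1 ;;
  esac
}

if is_deprecated_fstab_device "/dev/sdc1"; then echo "deprecated"; fi
if is_deprecated_fstab_device "UUID=0a1b2c3d"; then echo "bad"; else echo "ok"; fi
```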
&lt;P&gt;Please refer to your Linux provider's documentation on how to adjust the required settings.&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jun 2025 18:08:43 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/moving-linux-and-windows-from-scsi-to-nvme-with-one-easy-command/ba-p/4427954</guid>
      <dc:creator>phleiten</dc:creator>
      <dc:date>2025-06-30T18:08:43Z</dc:date>
    </item>
    <item>
      <title>Azure Files NFS Encryption In Transit for SAP on Azure Systems</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-files-nfs-encryption-in-transit-for-sap-on-azure-systems/ba-p/4426918</link>
      <description>&lt;P&gt;Azure Files NFS volumes now support&amp;nbsp;&lt;A href="https://aka.ms/nfs/EiT/Announcement" target="_blank" rel="noopener"&gt;encryption in-transit &lt;/A&gt;&amp;nbsp;via TLS. With this enhancement, Azure Files NFS v4.1 offers the robust security that modern enterprises require, without compromising performance by ensuring all traffic between clients and servers is fully encrypted. Now Azure Files NFS data can be encrypted end-to-end: at rest, in transit, and across the network.&lt;/P&gt;
&lt;P&gt;Using&amp;nbsp;&lt;A href="https://www.stunnel.org/" target="_blank" rel="noopener"&gt;Stunnel&lt;/A&gt;, an open-source TLS wrapper, Azure Files encrypts the TCP stream between the NFS client and Azure Files with strong encryption using AES-GCM, without needing Kerberos. This ensures data confidentiality while eliminating the need for complex setups or external authentication systems like Active Directory.&lt;/P&gt;
&lt;P&gt;The&amp;nbsp;&lt;A href="https://github.com/Azure/AZNFS-mount" target="_blank" rel="noopener"&gt;AZNFS&lt;/A&gt;&amp;nbsp;utility package simplifies encrypted mounts by installing and setting up Stunnel on the client (Azure VMs). The AZNFS mount helper mounts the NFS shares with TLS support. The mount helper initializes dedicated stunnel client process for each storage account’s IP address. The stunnel client process listens on a local port for inbound traffic and then redirects encrypted nfs client traffic to the 2049 port where NFS server is listening on.&lt;/P&gt;
&lt;P&gt;The AZNFS package runs a background job called&amp;nbsp;&lt;EM&gt;aznfswatchdog.&lt;/EM&gt; It ensures that stunnel processes are running for each storage account and cleans up after all shares from the storage account are unmounted. If for some reason a stunnel process is terminated unexpectedly, the watchdog process restarts it.&lt;/P&gt;
&lt;P&gt;For more details, refer to the following document:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?tabs=azure-portal%2CSUSE" target="_blank" rel="noopener"&gt;How to encrypt data in transit for NFS shares&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;Availability in Azure Regions&lt;/H3&gt;
&lt;P&gt;All regions that support Azure Premium Files now support encryption in transit.&lt;/P&gt;
&lt;H3&gt;Supported Linux releases&lt;/H3&gt;
&lt;P&gt;For SAP on Azure environments, Azure Files NFS Encryption in Transit (EiT) is available for the following operating system releases.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SLES for SAP 15 SP4 onwards&lt;/LI&gt;
&lt;LI&gt;RHEL for SAP 8.6 onwards &lt;EM&gt;(EiT is currently not supported for file systems managed by Pacemaker clusters on RHEL.)&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Refer to&amp;nbsp;&lt;A href="https://me.sap.com/notes/1928533" target="_blank" rel="noopener"&gt;SAP Note 1928533&lt;/A&gt; for Operating system supportability for SAP on Azure systems.&lt;/P&gt;
&lt;H3&gt;How to deploy Encryption in Transit (EiT) for Azure Files NFS Shares&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Refer to the SAP on Azure deployment planning guide about &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/planning-guide-storage-azure-files" target="_blank" rel="noopener"&gt;Using Azure Premium Files NFS and SMB for SAP workload&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;As described in the planning guide, for SAP workloads, following are the supported uses of Azure Files NFS shares and EiT can be used for all the scenarios:&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;sapmnt volume for distributed SAP systems&lt;/LI&gt;
&lt;LI&gt;transport directory for SAP landscape&lt;/LI&gt;
&lt;LI&gt;/hana/shared for HANA scale-out. Review carefully the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/hana-vm-operations-storage#considerations-for-the-hana-shared-file-system" target="_blank" rel="noopener"&gt;considerations for sizing&amp;nbsp;&lt;STRONG&gt;/hana/shared&lt;/STRONG&gt;&lt;/A&gt;, as an appropriately sized&amp;nbsp;&lt;STRONG&gt;/hana/shared &lt;/STRONG&gt;volume contributes to the system's stability&lt;/LI&gt;
&lt;LI&gt;file interface between your SAP landscape and other applications&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Deploy the Azure File NFS storage account. Refer to the standard documentation for creating the Azure Files storage account, file share and private endpoint.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/storage-files-quick-create-use-linux" target="_blank" rel="noopener"&gt;Create an NFS Azure file share&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&amp;nbsp; &amp;nbsp; Note : We can enforce EiT for all the file shares in the Azure Storage account by enabling ‘&lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?branch=pr-en-us-300015&amp;amp;tabs=azure-portal%2CSUSE#enforce-encryption-in-transit" target="_blank" rel="noopener"&gt;secure transfer required&lt;/A&gt;’ option.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Deploy the mount helper (AZNFS) package on the Linux VM.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Follow the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?branch=pr-en-us-300015&amp;amp;tabs=azure-portal%2CSUSE#step-1-check-aznfs-mount-helper-package-installation" target="_blank" rel="noopener"&gt;instructions&lt;/A&gt; for your Linux distribution to install the package.&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Create the directories to mount the file shares.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;EM&gt;mkdir -p &amp;lt;full path of the directory&amp;gt;&lt;/EM&gt;&lt;/P&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;Mount the NFS File share.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Refer to &lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?branch=pr-en-us-300015&amp;amp;tabs=azure-portal%2CSUSE#step-2-mount-the-nfs-file-share" target="_blank" rel="noopener"&gt;the section&lt;/A&gt; for mounting the Azure Files NFS EiT file share in Linux VMs.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;To mount the file share permanently by adding the mount commands in ‘/etc/fstab’.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;vi /etc/fstab

sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1  aznfs noresvport,vers=4,minorversion=1,sec=sys,_netdev  0  0

# Mount the file systems

mount -a&lt;/LI-CODE&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp;&amp;nbsp; File systems mentioned above are an example to explain the mount command syntax.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; When adding nfs mount entry to /etc/fstab, the fstype is "nfs". However, to use AZNFS mount helper and EiT, we need to use the fstype as "aznfs" which is not known to the Operating System, so at boot time the server tries to mount these entries before the watchdog is active, and they may fail. Users should always add "_netdev" option to their /etc/fstab entries to make sure shares are mounted on reboot only after the required services (like network) are active.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; We can add “notls” option in the mount command, if we don’t want to use the EiT but just want to use AZNFS mount helper to mount the file system. Also , we cannot mix EiT and no-EiT methods for different file systems using Azure Files NFS in the same Azure VM. Mount commands may fail to mount the file systems if EiT and no-EiT methods are used in the same VM&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; Mount helper supports private-endpoint based connections for Azure Files NFS EiT.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; If SAP VM is &lt;A href="https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/linux/custom-dns-configuration-for-azure-linux-vms?tabs=SLES" target="_blank" rel="noopener"&gt;custom domain joined&lt;/A&gt;, then we can use custom DNS FQDN OR &amp;nbsp;short names for file share in the ‘/etc/fstab’ as its defined in the DNS. To verify the hostname resolution, check using ‘nslookup &amp;lt;hostname&amp;gt;’ and ‘getent host &amp;lt;hostname&amp;gt;’ commands.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;Mount the NFS File share as pacemaker cluster resource for SAP Central Services.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;In high availability setup of SAP Central Services, we may use file system as a resource in pacemaker cluster and it needs to be mounted using pacemaker cluster command. In the pacemaker commands to setup file system as cluster resource, we need to change the mount type to ‘&lt;STRONG&gt;aznfs&lt;/STRONG&gt;’ from ‘&lt;STRONG&gt;nfs&lt;/STRONG&gt;’. Also it’s recommended to use ‘&lt;STRONG&gt;_netdev&lt;/STRONG&gt;’ in the options parameter.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Following are the SAP Central Services setup scenarios in which Azure Files NFS is used as pacemaker resource agent, and we can use Azure Files NFS EiT.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-nfs-azure-files?tabs=lb-portal%2Censa2" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NW on &lt;STRONG&gt;SLES&lt;/STRONG&gt; with NFS on Azure Files&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-nfs-azure-files?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NW on &lt;STRONG&gt;RHEL&lt;/STRONG&gt; with NFS on Azure Files&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For&amp;nbsp;&lt;STRONG&gt;SUSE Linux&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;SUSE 15 SP4 (for SAP) and higher releases recognise the ‘aznfs’ as file system type in the pacemaker resource agent.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;SUSE recommends using &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-nfs-simple-mount?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;simple mount approach&lt;/A&gt; for high availability setup of SAP Central services, in which all file systems are mounted using ‘/etc/fstab’ only.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For &lt;STRONG&gt;RHEL Linux&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;RHEL 8.6 (for SAP) and higher releases will be recognising ‘aznfs’ as file system type in pacemaker resource agent.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;At the time of writing the blog, ‘aznfs’ as file system type is not yet recognised by the FileSystem resource agent(RS) on RHEL, hence this setup can’t be used at this moment.&amp;nbsp;&lt;/P&gt;
&lt;OL start="7"&gt;
&lt;LI&gt;For SAP HANA scale-out with HSR setup&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;We can use Azure Files NFS EiT for SAP HANA scale-out with HSR setup as described in the below docs.&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse?tabs=lb-portal#mount-the-shared-file-systems-azure-files-nfs" target="_blank" rel="noopener"&gt;SAP HANA scale-out with HSR and Pacemaker on &lt;STRONG&gt;SLES&lt;/STRONG&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel?tabs=lb-portal#mount-the-shared-file-systems-azure-files-nfs" target="_blank" rel="noopener"&gt;SAP HANA scale-out with HSR and Pacemaker on &lt;STRONG&gt;RHEL&lt;/STRONG&gt; &lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;We need to mount ‘/hana/shared’ File system with EiT by defining the filesystem type as ‘&lt;STRONG&gt;aznfs&lt;/STRONG&gt;’ in ‘/etc/fstab’. Also it’s recommended to use ‘&lt;STRONG&gt;_netdev&lt;/STRONG&gt;’ in the options parameter.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For&amp;nbsp;&lt;STRONG&gt;SUSE Linux&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;In the &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse?tabs=lb-portal%2Csaphanasr-scaleout#create-sap-hana-cluster-resources" target="_blank" rel="noopener"&gt;Create File system resource&lt;/A&gt; section with SAP HANA high availability &amp;nbsp;“SAPHanaSR-ScaleOut” package, in which we create a dummy file system cluster resource, which will monitor and report failures for ‘/hana/shared’ file system, we can continue to follow the steps as it is in the above document with ‘fstype=nfs4’. ‘/hana/shared’ file system will still be using EiT as defined in ‘/etc/fstab’.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For SAP HANA high availability “SAPHanaSR-angi”, there are no further actions needed to use Azure File NFS EiT.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;For &lt;STRONG&gt;RHEL Linux&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;In the &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel?tabs=lb-portal#create-file-system-resources" target="_blank" rel="noopener"&gt;Create File system resource&lt;/A&gt; section, we can replace the file system type to ‘aznfs’ from ‘nfs’ in the pacemaker resource configuration for ‘/hana/shared’&amp;nbsp; file systems.&lt;/P&gt;
&lt;OL start="8"&gt;
&lt;LI&gt;Validation of in-transit data Encryption for Azure Files NFS.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Refer to &lt;A href="https://learn.microsoft.com/en-us/azure/storage/files/encryption-in-transit-for-nfs-shares?branch=pr-en-us-300015&amp;amp;tabs=azure-portal%2CSUSE#step-3--verify-that-the-in-transit-data-encryption-succeeded" target="_blank" rel="noopener"&gt;Verify that the in-transit data encryption succeeded&lt;/A&gt; section to check and confirm if EiT is successfully working.&lt;/P&gt;
&lt;H3&gt;Summary&lt;/H3&gt;
&lt;P&gt;Go ahead with EiT! Simplified deployment of encryption in transit for Azure Files Premium NFS (locally redundant storage / zone-redundant storage) will strengthen the security footprint of production and non-production SAP on Azure environments.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jun 2025 16:40:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/azure-files-nfs-encryption-in-transit-for-sap-on-azure-systems/ba-p/4426918</guid>
      <dc:creator>AnjanBanerjee</dc:creator>
      <dc:date>2025-06-30T16:40:58Z</dc:date>
    </item>
    <item>
      <title>SAP Web Dispatcher on Linux with High Availability Setup on Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-web-dispatcher-on-linux-with-high-availability-setup-on/ba-p/4413219</link>
      <description>&lt;H2&gt;1. Introduction&lt;/H2&gt;
&lt;P&gt;The SAP Web Dispatcher component is used for load balancing SAP HTTP(s) web traffic among the SAP application servers. It works as a “reverse proxy” and is the entry point for HTTP(s) requests into the SAP environment, which consists of one or more SAP NetWeaver systems.&lt;/P&gt;
&lt;P&gt;This blog provides detailed guidance on setting up high availability for a standalone SAP Web Dispatcher on the Linux operating system on Azure. There are different options to set up high availability for SAP Web Dispatcher.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Active/Passive High Availability Setup&lt;/STRONG&gt; using a Linux pacemaker cluster (SUSE or Red Hat) with a virtual IP/hostname defined in Azure Load Balancer.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Active/Active High Availability Setup&lt;/STRONG&gt; by deploying multiple parallel instances of SAP Web Dispatcher across different Azure Virtual Machines (running either SUSE or Red Hat) and distributing traffic using Azure Load Balancer.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We will walk through the configuration steps for both high availability scenarios in this blog.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;2. Active/Passive HA Setup of SAP Web Dispatcher&lt;/H2&gt;
&lt;H3&gt;2.1. System Design&lt;/H3&gt;
&lt;P&gt;Following is the high level &lt;A href="https://learn.microsoft.com/en-us/azure/architecture/guide/sap/sap-s4hana" target="_blank" rel="noopener"&gt;architecture diagram of HA SAP Production environment on Azure&lt;/A&gt;. SAP Web Dispatcher (WD) standalone HA setup is highlighted in the SAP architecture design.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In this active/passive design, the primary node of the SAP Web Dispatcher receives the users' requests and forwards (and load balances) them to the backend SAP application servers. If the primary node becomes unavailable, the Linux pacemaker cluster fails the SAP Web Dispatcher over to the secondary node. Users connect to the SAP Web Dispatcher using the virtual hostname (FQDN) and virtual IP address as defined in the Azure Load Balancer. The Azure Load Balancer health probe port is activated by the pacemaker cluster on the primary node, so all user connections to the virtual IP/hostname are redirected by the Azure Load Balancer to the active SAP Web Dispatcher.&lt;/P&gt;
&lt;P&gt;Also, SAP Help documentation describes this HA architecture as “&lt;A href="https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/683d6a1797a34730a6e005d1e8de6f22/489a9a6b48c673e8e10000000a42189b.html?locale=en-US" target="_blank" rel="noopener"&gt;High Availability of SAP Web Dispatcher with External HA Software&lt;/A&gt;”.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The following are the advantages of active-passive SAP WD setup.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The Linux pacemaker cluster continuously monitors the active SAP WD node and the services running on it. In an error scenario, the active node is fenced by the pacemaker cluster and the secondary node is made active. This ensures the best user experience round the clock.&lt;/LI&gt;
&lt;LI&gt;Complete automation of error detection and start/stop functionality of SAP WD. It is less challenging to define an application-level SLA when pacemaker is managing the SAP WD. Azure provides a VM-level SLA of 99.99% if VMs are deployed in Availability Zones.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We need the following components to set up HA SAP Web Dispatcher on Linux.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A pair of SAP Certified VMs on Azure with supported Linux Operating System. Cross Availability Zone deployment is recommended for higher VM level SLA.&lt;/LI&gt;
&lt;LI&gt;Azure Fileshare (Premium) for ‘sapmnt’ NFS share which will be available/mounted on both VMs for SAP Web Dispatcher.&lt;/LI&gt;
&lt;LI&gt;Azure Load Balancer for configuring virtual IP and hostname (in DNS) of the SAP Web Dispatcher.&lt;/LI&gt;
&lt;LI&gt;Configure Linux pacemaker cluster.&lt;/LI&gt;
&lt;LI&gt;Installation of SAP Web Dispatcher on both VMs with the same SID and system number. It is recommended to use the latest version of SAP Web Dispatcher.&lt;/LI&gt;
&lt;LI&gt;Configure the pacemaker resource agent for SAP Web Dispatcher application.&lt;/LI&gt;
&lt;/UL&gt;
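&lt;P&gt;For orientation, a minimal standalone Web Dispatcher instance profile could look like the fragment below. All values are placeholders (the SID, ports and backend message server host are our own examples); follow the SAP installation guide and SAP Help for the parameters of your release:&lt;/P&gt;

```
# Sketch of a standalone SAP Web Dispatcher instance profile (placeholders)
SAPSYSTEMNAME = WD1
SAPSYSTEM = 00
# HTTP(S) entry ports for end users
icm/server_port_0 = PROT=HTTP,PORT=8080
icm/server_port_1 = PROT=HTTPS,PORT=8443
# Backend SAP system to load balance (message server host / HTTP port)
wdisp/system_0 = SID=NW1, MSHOST=nw1ascs, MSPORT=8100
```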
&lt;H3&gt;2.2.&amp;nbsp; &amp;nbsp;Deployment Steps&lt;/H3&gt;
&lt;P&gt;This section provides detailed steps for HA active/passive SAP Web Dispatcher deployment for both the supported Linux operating systems (SUSE and Red Hat). Please refer to &lt;A href="https://me.sap.com/notes/1928533" target="_blank" rel="noopener"&gt;SAP Note 1928533&lt;/A&gt; for SAP on Azure certified VMs, SAPS values and supported operating systems versions for SAP environment.&lt;/P&gt;
&lt;P&gt;In the steps below, ‘&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES&lt;/STRONG&gt;&lt;/SPAN&gt;’ is applicable to the SLES operating system and ‘&lt;SPAN class="lia-text-color-8"&gt;&lt;STRONG&gt;For RHEL&lt;/STRONG&gt;&lt;/SPAN&gt;’ is applicable to the RHEL operating system. If no operating system is mentioned for a step, it is applicable to both operating systems.&lt;/P&gt;
&lt;P&gt;Also following items are prefixed with:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;[A]:&lt;/STRONG&gt; Applicable to all nodes.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;[1]:&lt;/STRONG&gt; Applicable to only node 1.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;[2]:&lt;/STRONG&gt; Applicable to only node 2.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Deploy the VMs (of the desired SKU) in the availability zones and choose an SLES for SAP or RHEL for SAP operating system image. The following VM names are used in this blog:&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;Node1: webdisp01&lt;/LI&gt;
&lt;LI&gt;Node2: webdisp02&lt;/LI&gt;
&lt;LI&gt;Virtual Hostname: eitwebdispha&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Follow the standard SAP on Azure documentation for the base Pacemaker setup on the SAP Web Dispatcher VMs. Either an SBD device or the Azure fence agent can be used for fencing in the Pacemaker cluster.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES: &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-pacemaker?tabs=msi" target="_blank" rel="noopener"&gt;Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(186, 55, 42);"&gt;For RHEL:&amp;nbsp;&lt;/STRONG&gt;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-pacemaker?tabs=msi" target="_blank" rel="noopener"&gt;Set up Pacemaker on Red Hat Enterprise Linux in Azure&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;The remaining setup steps are derived from the SAP ASCS/ERS HA setup documents and the SUSE/RHEL blogs on SAP WD setup listed below. It is highly recommended to read the following documents.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES:&lt;/STRONG&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A style="background-color: rgb(255, 255, 255); font-style: normal; font-weight: 400;" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-nfs-azure-files?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.suse.com/c/yes-sap-web-dispatcher-high-availability-on-premise-and-cloud/?_gl=1*1pxwa0d*_gcl_au*NjM1Nzc3ODQ0LjE3NDE1OTI4NjU.*_ga*ODI1MzcxODg2LjE3NDE1OTI4NjQ.*_ga_JEVBS2XFKK*MTc0Mzk5NjAzOS4xMy4xLjE3NDQwMDI4MjEuNTkuMC4w" target="_blank" rel="noopener"&gt;SUSE Blog: SAP Web Dispatcher High Availability on Cloud with SUSE Linux.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL:&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-nfs-azure-files?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;High availability for SAP NetWeaver on VMs on RHEL with NFS on Azure Files&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://access.redhat.com/articles/6962925" target="_blank" rel="noopener"&gt;RHEL Blog: How to manage standalone SAP Web Dispatcher instances using the RHEL HA Add-On - Red Hat Customer Portal&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Deploy the Azure standard load balancer to define the virtual IP of the SAP Web Dispatcher. In this example, the following setup is used in the deployment.&lt;BR /&gt;&lt;BR /&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 134px; border-width: 1px;"&gt;&lt;colgroup&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr style="height: 27px;"&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Frontend IP&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Backend Pool&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Health Probe Port&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Load Balancing Rule&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 107px;"&gt;&lt;td style="height: 107px;"&gt;
&lt;P&gt;10.50.60.45&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Web Dispatcher)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 107px;"&gt;Node 1 &amp;amp; Node 2 VMs&lt;/td&gt;&lt;td style="height: 107px;"&gt;62320 (set probeThreshold=2)&lt;/td&gt;&lt;td style="height: 107px;"&gt;
&lt;P&gt;HA Port: Enable&lt;/P&gt;
&lt;P&gt;Floating IP: Enable&lt;/P&gt;
&lt;P&gt;Idle Timeout: 30 mins&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;BR /&gt;Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the “net.ipv4.tcp_timestamps” OS parameter to '0'. For details, see &lt;A href="https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-custom-probe-overview" target="_blank" rel="noopener"&gt;Load Balancer health probes&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;Run the following command to set this parameter; to make the value permanent, add or update the parameter in /etc/sysctl.conf.&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo sysctl net.ipv4.tcp_timestamps=0&lt;/LI-CODE&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless you perform additional configuration to allow routing to public endpoints. For details on how to achieve outbound connectivity, see&amp;nbsp;&lt;/SPAN&gt;&lt;A style="background-color: rgb(255, 255, 255); font-style: normal; font-weight: 400;" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-standard-load-balancer-outbound-connections" target="_blank" rel="noopener"&gt;Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;Configure NFS for the ‘sapmnt’ and SAP WD instance file systems on Azure Files. Deploy &lt;/SPAN&gt;the Azure Files storage account (ZRS) and create file shares for ‘sapmnt’ and the ‘SAP WD instance (/usr/sap/SID/Wxx)’. Connect it to the VNet of the SAP VMs using a private endpoint.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES: &lt;/STRONG&gt;&lt;/SPAN&gt;Refer to the&amp;nbsp;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-nfs-azure-files?tabs=lb-portal%2Censa1#deploy-azure-files-storage-account-and-nfs-shares" target="_blank" rel="noopener"&gt;Deploy an Azure Files storage account and NFS shares&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; section for detailed steps.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL: &lt;/STRONG&gt;&lt;/SPAN&gt;Refer to the &lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-nfs-azure-files?tabs=lb-portal%2Censa1#deploy-azure-files-storage-account-and-nfs-shares" target="_blank" rel="noopener"&gt;Deploy an Azure Files storage account and NFS shares&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; section for detailed steps.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Mount NFS volumes.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;[A] For SLES:&lt;/STRONG&gt;&lt;/SPAN&gt; The NFS client and other resources come pre-installed.&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;[A] For RHEL:&lt;/STRONG&gt;&lt;/SPAN&gt; Install the NFS Client and other resources.&lt;BR /&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="bash"&gt;sudo yum -y install nfs-utils resource-agents resource-agents-sap&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Mount the NFS file system on both VMs.&amp;nbsp;&lt;/SPAN&gt;Create shared directories.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo mkdir -p /sapmnt/WD1 
sudo mkdir -p /usr/sap/WD1/W00

sudo chattr +i /sapmnt/WD1 
sudo chattr +i /usr/sap/WD1/W00&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A] &lt;/STRONG&gt;Mount the file system that will not be controlled by the Pacemaker cluster.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;echo "sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-sapmnt /sapmnt/WD1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 2" &amp;gt;&amp;gt; /etc/fstab

mount -a&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
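&lt;P&gt;As a quick sanity check (a sketch, using the mount point from this blog), the mount can be verified on both VMs with:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Shows the NFS source and mount options if /sapmnt/WD1 is mounted
findmnt /sapmnt/WD1&lt;/LI-CODE&gt;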
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Prepare for SAP Web Dispatcher HA Installation.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;[A]&lt;SPAN class="lia-text-color-11"&gt; For SLES:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; Install the latest version of the SUSE cluster connector.&lt;BR /&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="bash"&gt;sudo zypper install sap-suse-cluster-connector&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Set up host name resolution (including virtual hostname).&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;We can either use a DNS server or modify /etc/hosts on all nodes.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
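&lt;P&gt;As a sketch, using this blog’s node names and virtual hostname/IP (the node IPs below are placeholders, not values from this blog), the /etc/hosts entries could look like:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Example /etc/hosts entries on all nodes
# Node IPs below are placeholders; 10.50.60.45 is the virtual IP used in this blog
10.50.60.41   webdisp01
10.50.60.42   webdisp02
10.50.60.45   eitwebdispha&lt;/LI-CODE&gt;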
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Configure the swap file. Edit the ‘/etc/waagent.conf’ file and change the following parameters.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;ResourceDisk.Format=y 
ResourceDisk.EnableSwap=y 
ResourceDisk.SwapSizeMB=2000&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Restart the agent to activate the change.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo service waagent restart&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A] &lt;SPAN class="lia-text-color-13"&gt;For RHEL: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN class="lia-text-color-21"&gt;Follow the SAP Note corresponding to your RHEL OS version.&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;SAP Note 2002167 for RHEL 7.x&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;SAP Note 2772999 for RHEL 8.x&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;SAP Note 3108316 for RHEL 9.x&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Create the SAP WD instance file system, virtual IP, and probe port resources for the SAP Web Dispatcher.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;[1] &lt;SPAN class="lia-text-color-11"&gt;For SLES:&lt;/SPAN&gt;&lt;/STRONG&gt;&amp;nbsp;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;# Keep node 2 in standby 
sudo crm node standby webdisp02 

# Configure file system, virtual IP, and probe resource 
sudo crm configure primitive fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-su-usrsap' directory='/usr/sap/WD1/W00' fstype='nfs' options='noresvport,vers=4,minorversion=1,sec=sys' \ 
op start timeout=60s interval=0 \ 
op stop timeout=60s interval=0 \ 
op monitor interval=20s timeout=40s 

sudo crm configure primitive vip_WD1_W00 IPaddr2 \ 
params ip=10.50.60.45 \ 
op monitor interval=10 timeout=20 

sudo crm configure primitive nc_WD1_W00 azure-lb port=62320 \ 
op monitor timeout=20s interval=10 

sudo crm configure group g-WD1_W00 fs_WD1_W00 nc_WD1_W00 vip_WD1_W00&lt;/LI-CODE&gt;
&lt;P&gt;Make sure that all resources in the cluster are started and running on Node 1. Check the status using the command ‘&lt;EM&gt;crm status&lt;/EM&gt;’.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;[1] &lt;SPAN class="lia-text-color-13"&gt;For RHEL:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Keep node 2 in standby 
sudo pcs node standby webdisp02 

# Create file system, virtual IP, probe resource 
sudo pcs resource create fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-rh-usrsap' \ 
directory='/usr/sap/WD1/W00' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \ 
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \ 
--group g-WD1_W00 

sudo pcs resource create vip_WD1_W00 IPaddr2 \ 
ip=10.50.60.45 \ 
--group g-WD1_W00 

sudo pcs resource create nc_WD1_W00 azure-lb port=62320 \ 
--group g-WD1_W00&lt;/LI-CODE&gt;
&lt;P&gt;Make sure that all resources in the cluster are started and running on Node 1. Check the status using the command ‘&lt;EM&gt;pcs status&lt;/EM&gt;’.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[1]&lt;/STRONG&gt; Install SAP Web Dispatcher on the first Node.
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL:&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;Allow access to SWPM. This rule is not permanent. If you reboot the machine, you should run the command again.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo firewall-cmd --zone=public --add-port=4237/tcp&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Run the SWPM.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;./sapinst SAPINST_USE_HOSTNAME=&amp;lt;virtual hostname&amp;gt;&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Enter the virtual hostname and Instance number.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Provide the S/4 HANA message server details for backend connections.&lt;/LI&gt;
&lt;LI&gt;Continue with SAP Web Dispatcher installation.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Check the status of SAP WD.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[1] &lt;/STRONG&gt;Stop the SAP WD and disable the systemd service. This step is only required&amp;nbsp;if the SAP startup framework is managed by systemd, as per &lt;A href="https://me.sap.com/notes/3115048" target="_blank" rel="noopener"&gt;SAP Note&amp;nbsp;3115048&lt;/A&gt;.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;# login as sidadm user 
sapcontrol -nr 00 -function Stop 

# login as root user 
systemctl disable SAPWD1_00.service&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;[1]&lt;/STRONG&gt; Move the file system, virtual IP, and probe port resources for the SAP Web Dispatcher to the second node.&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES:&lt;/STRONG&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="bash"&gt;sudo crm node online webdisp02 
sudo crm node standby webdisp01&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&amp;nbsp;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;sudo pcs node unstandby webdisp02 
sudo pcs node standby webdisp01&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;NOTE: Before proceeding to the next steps, check that resources successfully moved to Node 2.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;[2] &lt;/STRONG&gt;Set up SAP Web Dispatcher on the second node.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;To set up SAP WD on Node 2, copy the following files and directories from Node 1 to Node 2, and perform the other tasks on Node 2 as described below.&lt;/LI&gt;
&lt;LI&gt;Note: Ensure that permissions, owners, and group names on Node 2 are the same as on Node 1 for all copied items. Before copying, save a copy of the existing files on Node 2.&lt;/LI&gt;
&lt;LI&gt;Files to copy&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;# For SLES and RHEL 
/usr/sap/sapservices 
/etc/systemd/system/SAPWD1_00.service 
/etc/polkit-1/rules.d/10-SAPWD1-00.rules 
/etc/passwd 
/etc/shadow 
/etc/group 

# For RHEL 
/etc/gshadow&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Folders to copy&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;# After copying, rename the ‘hostname’ portion in the environment file names. 
/home/wd1adm 
/home/sapadm 

/usr/sap/ccms 
/usr/sap/tmp&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Create the 'SYS' directory in the /usr/sap/WD1 folder
&lt;UL&gt;
&lt;LI&gt;Create all subdirectories and soft links as available in Node 1.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[2] &lt;/STRONG&gt;Install the saphostagent&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;Extract the SAPHOSTAGENT.SAR file&lt;/LI&gt;
&lt;LI&gt;Run the command to install it&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;./saphostexec -install&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Check if SAP hostagent is running successfully&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;/usr/sap/hostctrl/exe/saphostexec -status&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[2]&lt;/STRONG&gt; Start SAP WD on node 2 and check the status&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sapcontrol -nr 00 -function StartService WD1 
sapcontrol -nr 00 -function Start 
sapcontrol -nr 00 -function GetProcessStatus&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[1] &lt;SPAN class="lia-text-color-11"&gt;For SLES:&lt;/SPAN&gt;&lt;/STRONG&gt; Update the instance profile&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;vi /sapmnt/WD1/profile/WD1_W00_wd1webdispha 

# Add the following lines. 
service/halib = $(DIR_EXECUTABLE)/saphascriptco.so 
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A] &lt;/STRONG&gt;Configure SAP users after the installation&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo usermod -aG haclient wd1adm&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;[A]&lt;/STRONG&gt; Configure the keepalive parameter, and add it to /etc/sysctl.conf to make the value permanent.&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo sysctl net.ipv4.tcp_keepalive_time=300&lt;/LI-CODE&gt;
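&lt;P&gt;As a sketch of persisting both OS parameter changes from this blog across reboots (run on both nodes; avoid duplicating entries that already exist in the file):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Append the parameters to /etc/sysctl.conf and reload the settings
printf '%s\n' 'net.ipv4.tcp_timestamps = 0' 'net.ipv4.tcp_keepalive_time = 300' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p&lt;/LI-CODE&gt;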
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Create the SAP Web Dispatcher resource in the cluster.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo crm configure property maintenance-mode="true" 

sudo crm configure primitive rsc_sap_WD1_W00 SAPInstance \ 
op monitor interval=11 timeout=60 on-fail=restart \ 
params InstanceName=WD1_W00_wd1webdispha \ 
START_PROFILE="/usr/sap/WD1/SYS/profile/WD1_W00_wd1webdispha" \ 
AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" 

sudo crm configure modgroup g-WD1_W00 add rsc_sap_WD1_W00 

sudo crm node online webdisp01 

sudo crm configure property maintenance-mode="false"&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sudo pcs property set maintenance-mode=true 

sudo pcs resource create rsc_sap_WD1_W00 SAPInstance \ 
InstanceName=WD1_W00_wd1webdispha START_PROFILE="/sapmnt/WD1/profile/WD1_W00_wd1webdispha" \ 
AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" \ 
op monitor interval=20 on-fail=restart timeout=60 \ 
--group g-WD1_W00

sudo pcs node unstandby webdisp01 

sudo pcs property set maintenance-mode=false&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;[A] &lt;/SPAN&gt;&lt;SPAN class="lia-text-color-13"&gt;For RHEL:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; Add firewall rules for SAP Web Dispatcher and Azure load balancer health probe ports on both nodes.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="bash"&gt;sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp --permanent 
sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Verify that the SAP Web Dispatcher cluster is running successfully.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Check the "Insights" blade of the Azure Load Balancer in the portal. It shows that connections are directed to one of the nodes.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Check that the backend S/4HANA connection is working using the SAP Web Dispatcher Administration link.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Run the sapwebdisp config check&lt;BR /&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;sapwebdisp pf=/sapmnt/WD1/profile/WD1_W00_wd1webdispha -checkconfig&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Test the cluster setup&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;STRONG&gt;For SLES&lt;/STRONG&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse?tabs=lb-portal%2Censa1#test-the-cluster-setup" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NetWeaver on SLES (for ASCS/ERS Cluster) &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;We can run the following test cases (from the above link), which are applicable to the SAP WD component.
&lt;UL&gt;
&lt;LI&gt;Test HAGetFailoverConfig and HACheckFailoverConfig&lt;/LI&gt;
&lt;LI&gt;Manually migrate the SAP Web Dispatcher resource&lt;/LI&gt;
&lt;LI&gt;Test HAFailoverToNode&lt;/LI&gt;
&lt;LI&gt;Simulate node crash&lt;/LI&gt;
&lt;LI&gt;Blocking network communication&lt;/LI&gt;
&lt;LI&gt;Test manual restart of SAP WD instance&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
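&lt;P&gt;For example, the HA interface checks listed above can be run as the &amp;lt;sid&amp;gt;adm user on the node currently hosting the Web Dispatcher (instance number 00 as used in this blog):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Run as wd1adm; reports the failover configuration seen through the HA interface
sapcontrol -nr 00 -function HAGetFailoverConfig
sapcontrol -nr 00 -function HACheckFailoverConfig&lt;/LI-CODE&gt;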
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL&lt;/STRONG&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel?tabs=lb-portal%2Censa1#test-the-cluster-setup" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NetWeaver on RHEL (for ASCS/ERS Cluster) &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;We can run the following test cases (from the above link), which are applicable to the SAP WD component.
&lt;UL&gt;
&lt;LI&gt;Manually migrate the SAP Web Dispatcher resource&lt;/LI&gt;
&lt;LI&gt;Simulate a node crash&lt;/LI&gt;
&lt;LI&gt;Blocking network communication&lt;/LI&gt;
&lt;LI&gt;Kill the SAP WD process&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;3. Active/Active HA Setup of SAP Web Dispatcher&lt;/H2&gt;
&lt;H3&gt;3.1. System Design&lt;/H3&gt;
&lt;P&gt;In this active/active setup of SAP Web Dispatcher (WD), we deploy and run parallel standalone WDs on individual VMs with a share-nothing design and different SIDs. To connect to the SAP Web Dispatcher, users use a single virtual hostname (FQDN)/IP, defined as the front-end IP of the Azure Load Balancer. The virtual IP to hostname/FQDN mapping needs to be performed in AD/DNS. Incoming traffic is distributed to either WD by the Azure internal load balancer. No operating system cluster setup is required in this scenario. This architecture can be deployed on either Linux or Windows operating systems.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In the ILB configuration,&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode?tabs=azure-portal#configure-distribution-mode" target="_blank" rel="noopener"&gt;Session persistence settings&lt;/A&gt; ensure that a user’s successive requests are always routed from the Azure Load Balancer to the&amp;nbsp;&lt;U&gt;same&lt;/U&gt; WD as long as it is active and ready to receive connections.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Also, SAP Help documentation describes this HA architecture as “&lt;A href="https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/683d6a1797a34730a6e005d1e8de6f22/489a9a6b48c673e8e10000000a42189b.html?locale=en-US" target="_blank" rel="noopener"&gt;High availability with several parallel Web Dispatchers&lt;/A&gt;”.&lt;/P&gt;
&lt;P&gt;The following are the advantages of the active-active SAP WD setup.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Simpler design: no need to set up an operating system cluster.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Two WD instances handle requests and distribute the workload.&lt;/LI&gt;
&lt;LI&gt;If one of the nodes fails, the load balancer forwards requests to the other node and stops sending requests to the failed node, so the SAP WD setup remains highly available.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We need the following components to set up an active/active SAP Web Dispatcher on Linux.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A pair of SAP-certified VMs on Azure with a supported Linux operating system. Cross-availability-zone deployment is recommended for a higher VM-level SLA.&lt;/LI&gt;
&lt;LI&gt;An Azure managed disk of the required size on each VM to create file systems for ‘sapmnt’ and ‘/usr/sap’.&lt;/LI&gt;
&lt;LI&gt;An Azure Load Balancer for configuring the virtual IP and hostname (in DNS) of the SAP Web Dispatcher.&lt;/LI&gt;
&lt;LI&gt;Installation of SAP Web Dispatcher on both VMs with different SIDs. It is recommended to use the latest version of SAP Web Dispatcher.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;3.2. Deployment Steps&lt;/H3&gt;
&lt;P&gt;This section provides detailed steps for an HA active/active SAP Web Dispatcher deployment on both supported Linux operating systems (SUSE and Red Hat). Refer to &lt;A href="https://me.sap.com/notes/1928533" target="_blank" rel="noopener"&gt;SAP Note 1928533&lt;/A&gt; for SAP on Azure certified VMs, SAPS values, and supported operating system versions for an SAP environment.&lt;/P&gt;
&lt;H4&gt;3.2.1. For &lt;SPAN class="lia-text-color-11"&gt;SUSE&lt;/SPAN&gt; and &lt;SPAN class="lia-text-color-13"&gt;RHEL &lt;/SPAN&gt;Linux&lt;/H4&gt;
&lt;OL&gt;
&lt;LI&gt;Deploy the VMs (of the desired SKU) in the availability zones and choose a SUSE for SAP or RHEL for SAP operating system image. Add a managed data disk on each VM and create the ‘/usr/sap’ and ‘/sapmnt/&amp;lt;SID&amp;gt;’ file systems on it.&lt;/LI&gt;
&lt;LI&gt;Install the SAP Web Dispatcher using SAP SWPM on both VMs. The two SAP WDs are completely independent of each other and should have separate SIDs.&lt;/LI&gt;
&lt;LI&gt;Perform the &lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://help.sap.com/doc/saphelp_nw73/7.3.16/en-US/48/997375ec0973e9e10000000a42189b/content.htm?no_cache=true" target="_blank" rel="noopener"&gt;basic configuration check&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; for both SAP Web Dispatchers using &lt;/SPAN&gt;&lt;EM style="color: rgb(30, 30, 30);"&gt;“sapwebdisp pf=&amp;lt;profile&amp;gt; -checkconfig”. &lt;/EM&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Also check that the SAP WD Admin URL is working for both WDs.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Deploy the Azure standard load balancer for defining the virtual IP of the SAP Web Dispatcher. As a reference, the following setup is used in deployment.&lt;BR /&gt;&lt;BR /&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 54px; border-width: 1px;"&gt;&lt;colgroup&gt;&lt;col style="width: 25%" /&gt;&lt;col style="width: 25%" /&gt;&lt;col style="width: 25%" /&gt;&lt;col style="width: 25%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr style="height: 27px;"&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Front-end IP&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Backend Pool&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Health Probe Port&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 27px;"&gt;&lt;STRONG&gt;Load Balancing Rule&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 27px;"&gt;&lt;td style="height: 27px;"&gt;
&lt;P&gt;10.50.60.99&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Web Dispatcher)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 27px;"&gt;Node1 &amp;amp; Node2 VM&lt;/td&gt;&lt;td style="height: 27px;"&gt;
&lt;P&gt;Protocol: HTTPS&lt;/P&gt;
&lt;P&gt;Port: 44300 (WD https port)&lt;/P&gt;
&lt;P&gt;Path: /sap/public/icman/ping&lt;/P&gt;
&lt;P&gt;Interval: 5 seconds&lt;/P&gt;
&lt;P&gt;(set probeThreshold=2 using azure CLI)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 27px;"&gt;
&lt;P&gt;Port &amp;amp; Backend Port: 44300&lt;/P&gt;
&lt;P&gt;Floating IP: Disable,&lt;/P&gt;
&lt;P&gt;TCP Reset: Disable,&lt;/P&gt;
&lt;P&gt;Idle Timeout: Max (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
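&lt;P&gt;As a sketch, the HTTPS health probe above could be created with the Azure CLI. The resource group, load balancer, and probe names below are placeholders, and the flag names (notably --probe-threshold) should be verified against your Azure CLI version:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Create the HTTPS health probe for the active/active SAP WD backend pool
az network lb probe create \
  --resource-group rg-sap-webdisp \
  --lb-name lb-sap-webdisp \
  --name webdisp-probe \
  --protocol Https \
  --port 44300 \
  --path /sap/public/icman/ping \
  --interval 5 \
  --probe-threshold 2&lt;/LI-CODE&gt;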
&lt;BR /&gt;The icman/ping check verifies that the SAP Web Dispatcher is successfully connected to the backend SAP S/4HANA or SAP ERP application servers. This check is also part of the &lt;A href="https://help.sap.com/doc/saphelp_nw73/7.3.16/en-US/48/997375ec0973e9e10000000a42189b/content.htm?no_cache=true" target="_blank" rel="noopener"&gt;basic configuration check&lt;/A&gt; of the SAP Web Dispatcher using &lt;EM&gt;“sapwebdisp pf=&amp;lt;profile&amp;gt; -checkconfig”&lt;/EM&gt;.&lt;BR /&gt;With an HTTP(S)-based health probe, the ILB directs connections to an SAP WD only when the connection between that SAP WD and the S/4HANA or ERP application is working.&lt;BR /&gt;If the backend environment is a Java-based SAP system, ‘icman/ping’ is not available and an HTTP(S) path can’t be used in the health probe. In that case, use a TCP-based health probe (protocol value ‘tcp’) with an SAP WD TCP port (such as port 8000) in the health probe configuration.&lt;BR /&gt;In this setup, we used HTTPS port 44300 as the port &amp;amp; backend port value because that is the only port number used by the incoming/source URL. If multiple ports are to be allowed in the incoming URL, enable ‘HA Port’ in the load balancing rule instead of specifying individual ports.&lt;BR /&gt;Note: As per&amp;nbsp;&lt;A href="https://me.sap.com/notes/2941769" target="_blank" rel="noopener"&gt;SAP Note 2941769&lt;/A&gt;, set the SAP Web Dispatcher parameter &lt;EM&gt;wdisp/filter_internal_uris=FALSE&lt;/EM&gt;.&amp;nbsp; Also verify that the icman ping URL works for both SAP Web Dispatchers with their actual hostnames.&lt;BR /&gt;Define the front-end IP (virtual IP) and hostname mapping in DNS or the /etc/hosts file.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Check that the Azure Load Balancer is routing traffic to both WDs. In the ‘Insights’ section of the Azure Load Balancer, connection health to the VMs should be green.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Validate that the SAP Web Dispatcher URL is accessible using the virtual hostname.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Perform high availability tests for the SAP WD.&lt;/LI&gt;
&lt;LI&gt;Stop the first SAP WD and verify that WD connections still work.&lt;/LI&gt;
&lt;LI&gt;Start the first WD again, stop the second WD, and verify that WD connections still work.&lt;/LI&gt;
&lt;LI&gt;Simulate a node crash on each of the WD VMs and verify that WD connections still work.&lt;/LI&gt;
&lt;/OL&gt;
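&lt;P&gt;The availability checks above can be scripted with a simple probe of the ICM ping URL through the virtual hostname. A minimal sketch, assuming a hypothetical virtual hostname &lt;EM&gt;sapwd-vip&lt;/EM&gt;, the HTTPS port 44300 from this setup, and the standard icman ping path (which may differ in your release):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Hypothetical virtual hostname mapped to the load balancer front-end IP
WD_HOST=sapwd-vip

# -k skips certificate validation (useful with self-signed test certificates).
# An HTTP 200 from the ping URL indicates the Web Dispatcher reached the backend.
curl -k -s -o /dev/null -w "%{http_code}\n" "https://${WD_HOST}:44300/sap/public/icman/ping"&lt;/LI-CODE&gt;
&lt;P&gt;Re-running this check after stopping each Web Dispatcher, and after each simulated node crash, confirms that failover is transparent to clients.&lt;/P&gt;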
&lt;H3&gt;3.3. SAP Web Dispatcher (active/active) for Multiple Systems&lt;/H3&gt;
&lt;P&gt;We can use one SAP WD (active/active) pair to connect to multiple backend SAP systems, rather than setting up a separate SAP WD pair for each SAP backend environment.&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Based on the unique URL of the incoming request (a different virtual hostname/FQDN and/or port of the SAP WD), the user request is directed to one of the SAP WDs, which then determines the backend system to redirect to and load balances the requests.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;SAP documentation describes the design and the SAP-specific configuration steps for this scenario.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://help.sap.com/docs/ABAP_PLATFORM_BW4HANA/683d6a1797a34730a6e005d1e8de6f22/b0ebfa88e9164d26bdf1d21a7ef6fc25.html" target="_blank" rel="noopener"&gt;SAP Web Dispatcher for Multiple Systems&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://help.sap.com/docs/ABAP_PLATFORM_BW4HANA/683d6a1797a34730a6e005d1e8de6f22/c5ec466f5544409982c7d3ca29ce1ad3.html" target="_blank" rel="noopener"&gt;One SAP Web Dispatcher, Two Systems: Configuration Example&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In an Azure environment, the SAP Web Dispatcher architecture looks as follows.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;We can deploy this setup by defining an Azure Standard Load Balancer with multiple front-end IPs attached to a single backend pool of SAP WD VMs, together with the associated health probe and load balancing rules.&lt;/P&gt;
&lt;P&gt;When configuring an Azure Load Balancer with multiple frontend IPs pointing to the same backend pool/port, floating IP must be enabled for each load balancing rule. If floating IP is not enabled on the first rule, Azure won’t allow additional rules with different frontend IPs to be configured on the same backend port. Refer to the article &lt;A href="https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-multivip-overview" target="_blank" rel="noopener"&gt;Multiple frontends - Azure Load Balancer&lt;/A&gt;.&lt;/P&gt;
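&lt;P&gt;As an illustration, the second front-end IP, its load balancing rule with floating IP, and a probe with probeThreshold=2 could be created with the Azure CLI. This is a sketch only: the resource group, load balancer, pool, VNet and subnet names are hypothetical, and parameter names can vary between CLI versions:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Hypothetical resource names
RG=sap-rg
LB=sapwd-ilb

# TCP health probe on the Web Dispatcher port, with probe threshold 2
az network lb probe create --resource-group $RG --lb-name $LB \
  --name wd-probe --protocol tcp --port 8000 --interval 5 --probe-threshold 2

# Second front-end IP for the second backend system
az network lb frontend-ip create --resource-group $RG --lb-name $LB \
  --name fe-e60 --vnet-name sap-vnet --subnet sap-subnet \
  --private-ip-address 10.50.60.101

# Load balancing rule with floating IP enabled (required when several
# front-end IPs share the same backend pool and port)
az network lb rule create --resource-group $RG --lb-name $LB \
  --name rule-e60 --protocol tcp --frontend-port 44300 --backend-port 44300 \
  --frontend-ip-name fe-e60 --backend-pool-name wd-pool \
  --probe-name wd-probe --floating-ip true --idle-timeout 30&lt;/LI-CODE&gt;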
&lt;P&gt;With floating IP enabled on multiple load balancing rules, each frontend IP must also be added to the network interface (e.g., eth0) on both SAP Web Dispatcher VMs.&lt;/P&gt;
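&lt;P&gt;On RHEL, for example, both front-end IPs could be added persistently with the “nmcli” command. A sketch, assuming the connection profile is named &lt;EM&gt;eth0&lt;/EM&gt; and using the example addresses from this setup (for SLES, see the workaround linked in the deployment steps below):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Append both secondary (floating) IPs to the existing connection profile
nmcli connection modify eth0 +ipv4.addresses 10.50.60.99/26
nmcli connection modify eth0 +ipv4.addresses 10.50.60.101/26

# Re-activate the profile so the addresses take effect now and persist across reboots
nmcli connection up eth0&lt;/LI-CODE&gt;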
&lt;H4&gt;3.3.1. Deployment Steps&lt;/H4&gt;
&lt;OL&gt;
&lt;LI&gt;Deploy the VMs (of the desired SKU) in the availability zones, choosing a SUSE/RHEL Linux for SAP operating system image. Add a managed data disk to each VM and create the ‘/usr/sap’ and ‘/sapmnt/&amp;lt;SID&amp;gt;’ filesystems on it.&lt;/LI&gt;
&lt;LI&gt;Install the SAP Web Dispatcher using SAP SWPM on both VMs. The two SAP WDs are completely independent of each other and should have separate SIDs.&lt;/LI&gt;
&lt;LI&gt;Deploy an Azure Standard Load Balancer with the configuration below.&lt;BR /&gt;&lt;BR /&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 333px; border-width: 1px;"&gt;&lt;colgroup&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 25.0391%" /&gt;&lt;col style="width: 24.8826%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr style="height: 27px;"&gt;&lt;td style="height: 27px;"&gt;Front-end IP&lt;/td&gt;&lt;td style="height: 27px;"&gt;Backend Pool&lt;/td&gt;&lt;td style="height: 27px;"&gt;Health Probe Port&lt;/td&gt;&lt;td style="height: 27px;"&gt;Load Balancing Rule&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 153px;"&gt;&lt;td style="height: 153px;"&gt;
&lt;P&gt;10.50.60.99&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID &lt;STRONG&gt;E10&lt;/STRONG&gt;)&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2" style="height: 306px;"&gt;Node1 &amp;amp; Node2 VMs&lt;/td&gt;&lt;td rowspan="2" style="height: 306px;"&gt;
&lt;P&gt;Protocol: TCP&lt;/P&gt;
&lt;P&gt;Port: 8000 (WD tcp port)&lt;/P&gt;
&lt;P&gt;Interval: 5 seconds&lt;/P&gt;
&lt;P&gt;(set probeThreshold=2 using azure CLI)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 153px;"&gt;
&lt;P&gt;Protocol: TCP&lt;/P&gt;
&lt;P&gt;Port &amp;amp; Backend Port: 44300&lt;/P&gt;
&lt;P&gt;Floating IP: Enable,&lt;/P&gt;
&lt;P&gt;TCP Reset: Disable,&lt;/P&gt;
&lt;P&gt;Idle Timeout: Max (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 153px;"&gt;&lt;td style="height: 153px;"&gt;
&lt;P&gt;10.50.60.101&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Web Dispatcher for redirection to S/4 or Fiori SID &lt;STRONG&gt;E60&lt;/STRONG&gt;)&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 153px;"&gt;
&lt;P&gt;Protocol: TCP&lt;/P&gt;
&lt;P&gt;Port &amp;amp; Backend Port: 44300&lt;/P&gt;
&lt;P&gt;Floating IP: Enable,&lt;/P&gt;
&lt;P&gt;TCP Reset: Disable,&lt;/P&gt;
&lt;P&gt;Idle Timeout: Max (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
As described above, we define 2 front-end IPs, 2 load-balancing rules, 1 back-end pool and 1 health probe.&lt;BR /&gt;In this setup, we used HTTPS port 44300 as both the port and backend port value, since that is the only port used by the incoming/source URL. If multiple ports must be allowed in the incoming URL, enable ‘HA Port’ in the load balancing rule instead of specifying a single port.&lt;BR /&gt;Define the front-end IP (virtual IP) to hostname mapping in DNS or the /etc/hosts file.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Add both virtual IPs to the network interface of the SAP WD VMs. Make sure the additional IPs are added persistently so they do not disappear after a VM reboot.&lt;BR /&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-11"&gt;For SLES&lt;/SPAN&gt;&lt;/STRONG&gt;, refer to the “alternative workaround” section in &lt;A href="https://www.suse.com/support/kb/doc/?id=000021188" target="_blank" rel="noopener"&gt;Automatic Addition of Secondary IP Addresses in Azure&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;For RHEL&lt;/STRONG&gt;&lt;/SPAN&gt;, refer to the solution using the “nmcli” command in &lt;A href="https://learn.redhat.com/t5/Platform-Linux/How-to-add-multiple-IP-range-in-RHEL9/m-p/38413#M2210" target="_blank" rel="noopener"&gt;How to add multiple IP range in RHEL9&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Output of "ip addr show" on SAP WD VM1:&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;&amp;gt;&amp;gt;ip addr show
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 60:45:bd:73:bd:14 brd ff:ff:ff:ff:ff:ff
    inet 10.50.60.87/26 brd 10.50.60.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6245:bdff:fe73:bd14/64 scope link
       valid_lft forever preferred_lft forever&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Output of "ip addr show" on SAP WD VM2:&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;&amp;gt;&amp;gt; ip addr show
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 60:45:bd:73:b1:92 brd ff:ff:ff:ff:ff:ff
    inet 10.50.60.93/26 brd 10.50.60.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6245:bdff:fe73:b192/64 scope link
       valid_lft forever preferred_lft forever&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Update the Instance profile of SAP WDs.&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;#-----------------------------------------------------------------------
# Back-end system configuration
#-----------------------------------------------------------------------
wdisp/system_0 = SID=E10, MSHOST=e10ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.99:*
wdisp/system_1 = SID=E60, MSHOST=e60ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.101:*&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;Stop and Start the SAP WD on VM1 and VM2.&lt;/LI&gt;
&lt;LI&gt;Note: With the above SRCSRV parameter values, only incoming requests addressed to “.99” (or its hostname) are forwarded to E10, and only requests addressed to “.101” (or its hostname) are forwarded to E60. If requests using an SAP WD’s actual IP or hostname should also reach the SAP backend systems, add those IPs or hostnames (separated by semicolons) to the SRCSRV parameter value.&lt;/LI&gt;
&lt;/UL&gt;
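&lt;P&gt;For example, to also accept requests addressed to the Web Dispatchers’ own hostnames (hypothetical hostnames &lt;EM&gt;sapwd1&lt;/EM&gt; and &lt;EM&gt;sapwd2&lt;/EM&gt;), the SRCSRV values could be extended with additional semicolon-separated entries. A sketch only; adjust hostnames and ports to your landscape:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Additional accepted source hosts, separated by semicolons
wdisp/system_0 = SID=E10, MSHOST=e10ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.99:*;sapwd1:*;sapwd2:*
wdisp/system_1 = SID=E60, MSHOST=e60ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.101:*;sapwd1:*;sapwd2:*&lt;/LI-CODE&gt;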
&lt;/LI&gt;
&lt;LI&gt;Perform the basic configuration check for both SAP Web Dispatchers using &lt;EM&gt;“sapwebdisp pf=&amp;lt;profile&amp;gt; -checkconfig”&lt;/EM&gt;. Also check that the SAP WD Admin URL works for both WDs.&lt;/LI&gt;
&lt;LI&gt;In the Azure Portal, the ‘Insights’ section of the Azure Load Balancer shows that the connection status to the SAP WD VMs is healthy.&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Thu, 05 Jun 2025 03:54:07 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-web-dispatcher-on-linux-with-high-availability-setup-on/ba-p/4413219</guid>
      <dc:creator>AnjanBanerjee</dc:creator>
      <dc:date>2025-06-05T03:54:07Z</dc:date>
    </item>
    <item>
      <title>SAP on Azure Product Announcements Summary – SAP Sapphire 2025</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-sapphire-2025/ba-p/4415281</link>
      <description>&lt;H3&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Today at Sapphire, we made an array of exciting &lt;A href="https://aka.ms/sapphire25blog" target="_blank" rel="noopener"&gt;announcements&lt;/A&gt; that strengthen the Microsoft-SAP partnership. I'd like to share additional details that complement these announcements as well as give updates on further product innovation. With over three decades of close collaboration and co-innovation with SAP, we continue to deliver RISE with SAP on Azure and integrations with SAP S/4HANA Public Cloud, allowing customers to innovate with services from both SAP BTP and Microsoft. Our new integrations enhance security through multi-layer cloud protection for SAP and non-SAP workloads, while our AI and Copilot platform provides unified analytics to improve decision-making for customers.&lt;/P&gt;
&lt;P&gt;Samsung C&amp;amp;T's Engineering &amp;amp; Construction Group is a leader in both the domestic and international construction industries. It recently embarked on an ERP cloud transformation to enhance its existing ERP system, which is optimized for the local environment, and to support &lt;A href="https://www.microsoft.com/en/customers/story/23265-samsung-c-and-t-power-bi" target="_blank" rel="noopener"&gt;global business expansion by transitioning to RISE with SAP on Azure&lt;/A&gt;.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM class="lia-align-left"&gt;“Samsung C&amp;amp;T’s successful transition to RISE with SAP on Azure serves as a best practice for other Samsung Group affiliates considering cloud-based ERP adoption. It also demonstrates that even highly localized operations can be integrated into a cloud-based environment that supports global standards.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM class="lia-align-left"&gt;Aidan Nam, Former Vice President, Corporate System Team, Samsung C&amp;amp;T&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;SAP on Azure also offers AI, Data, and Security solutions that enhance customers' investments and help unlock valuable information stored within ERP systems. When Danfoss, a global leader in energy-efficient solutions, began searching for new security tools for its business-critical SAP infrastructure, it quickly &lt;A href="https://www.microsoft.com/en/customers/story/22786-danfoss-microsoft-sentinel" target="_blank" rel="noopener"&gt;leveraged the Microsoft Sentinel solution for SAP applications&lt;/A&gt; to find potential malicious activity and deploy multilayered protection around its expanding core infrastructure, thereby achieving scalable security visibility.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;“With Microsoft Sentinel and the Microsoft Sentinel solution for SAP applications, we’ve centralized our security logs and gained a single pane of glass with which we can monitor our SAP systems,”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Kevin Cai, IT Specialist in the Security Operations Center at Danfoss&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;We are pleased to announce additional SAP on Azure product updates and details to further help customers innovate on the most trusted cloud for SAP.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Simplified onboarding of SAP BTP estate with the &lt;STRONG&gt;new agentless data connector&lt;/STRONG&gt; for Microsoft Sentinel Solution for SAP&lt;/LI&gt;
&lt;LI&gt;Microsoft Defender for Endpoint for SAP applications is now &lt;STRONG&gt;fully SAP HANA aware &lt;/STRONG&gt;offering unparalleled &lt;STRONG&gt;protection for SAP S/4HANA&lt;/STRONG&gt; environments.&lt;/LI&gt;
&lt;LI&gt;Public Preview of &lt;STRONG&gt;SAP OData as a knowledge source&lt;/STRONG&gt; making it easy to add content from SAP systems to Copilot Studio.&lt;/LI&gt;
&lt;LI&gt;The new storage- and memory-optimized &lt;STRONG&gt;Medium Memory Mbv3 VM Series&lt;/STRONG&gt; (Mbsv3 and Mbdsv3) is now&lt;STRONG&gt; SAP certified&lt;/STRONG&gt;, delivering compute capabilities with IOPS performance of up to 650K.&lt;/LI&gt;
&lt;LI&gt;The&amp;nbsp;&lt;STRONG&gt;Mv3 Very High Memory series&lt;/STRONG&gt; now features an expanded range of SAP-certified VM sizes, spanning from 24TB to 32TB of memory and scaling up to 1,792 vCPUs.&lt;/LI&gt;
&lt;LI&gt;General Availability of &lt;STRONG&gt;SAP ASE (Sybase) database backup &lt;/STRONG&gt;support on Azure Backup.&lt;/LI&gt;
&lt;LI&gt;SAP Deployment Automation Framework now supports validation of SAP deployments on Azure with public preview of&amp;nbsp;&lt;STRONG&gt;SAP Testing Automation Framework (STAF),&lt;/STRONG&gt; automating high availability testing process to ensure SAP systems reliability and availability.&lt;/LI&gt;
&lt;LI&gt;The&amp;nbsp;&lt;STRONG&gt;Inventory Checks&lt;/STRONG&gt; for SAP Workbook in Azure Center for SAP Tools now comes with &lt;STRONG&gt;New Dashboards&lt;/STRONG&gt; for Enhanced Visibility.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Let's dive into the summary of product updates and services.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Extend and Innovate &lt;/STRONG&gt;&lt;/H3&gt;
&lt;H5&gt;&lt;STRONG&gt;Microsoft Sentinel Solution for SAP&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Business applications pose a unique security challenge with highly sensitive information that can make them prime targets for attacks. Attackers can compromise newly discovered unprotected SAP systems within three hours. Microsoft offers best in class security solutions support for SAP business applications with Microsoft Sentinel.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The new&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/microsoft-sentinel-for-sap-new-security-content-goes-beyond-agentless-%F0%9F%9A%80/4407911" target="_blank" rel="noopener"&gt;agentless&lt;/A&gt; data connector is our first party solution that re-uses customers’ SAP BTP estate for drastically simplified onboarding. In addition, new strategic third-party solutions have been added to the Microsoft Sentinel content hub by SAP SE and other ISVs making Sentinel the most effective SIEM for SAP workloads:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://community.sap.com/t5/enterprise-resource-planning-blogs-by-sap/sap-logserv-integration-with-microsoft-sentinel-for-sap-rise-customers-is/ba-p/14085387" target="_blank" rel="noopener"&gt;SAP LogServ&lt;/A&gt;: RISE, addon for SAP ECS internal logs -infra, database, etc. (Generally Available)&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://community.sap.com/t5/enterprise-resource-planning-blogs-by-sap/sap-enterprise-threat-detection-cloud-edition-joins-forces-with-microsoft/ba-p/13942075" target="_blank" rel="noopener"&gt;SAP Enterprise Threat Detection&lt;/A&gt; (Preview)&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azuremarketplace.microsoft.com/de-de/marketplace/apps/securitybridge1647511278080.securitybridge-sentinel-app-1" target="_blank" rel="noopener"&gt;SecurityBridge&lt;/A&gt; (Generally Available)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H5&gt;&lt;STRONG&gt;Microsoft Defender for Endpoint for SAP applications&amp;nbsp; &lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We are thrilled to announce a major milestone made possible through the deep collaboration between SAP and Microsoft: Microsoft Defender for Endpoint (MDE) is now the first NextGen antivirus solution that is SAP HANA aware. This joint innovation allows organizations like &lt;A href="https://www.microsoft.com/en/customers/story/18744-cofco-intl-microsoft-defender" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;COFCO International&lt;/STRONG&gt;&lt;/A&gt; to protect their SAP landscapes seamlessly and securely, without disruption.&lt;/P&gt;
&lt;P&gt;This groundbreaking capability sets MDE apart in the cybersecurity landscape, offering unparalleled protection for SAP S/4HANA environments — all without interfering with mission-critical operations.&lt;BR /&gt;Thanks to close engineering collaboration, MDE has been carefully trained to recognize SAP HANA binaries and data files. Specialized detection training ensures MDE accurately identifies these critical components and treats them as known, trusted entities — combining world-class cybersecurity with SAP-native awareness.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;API Management&amp;nbsp;&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;SAP Principal Propagation (for simplicity often also referred to as SSO) is the gold standard for app integration – especially when it comes to 3rd party apps such as Microsoft Power Platform. We proudly announce that SSO is now password-less with Azure. Microsoft Entra ID Managed Identity works seamlessly with SAP workloads such as RISE, SuccessFactors and more. Cut your maintenance efforts for SAP SSO in half and become more secure in doing so.&lt;/P&gt;
&lt;P&gt;Find more details in &lt;A href="https://community.sap.com/t5/technology-blogs-by-members/sap-principal-propagation-without-secrets-how-managed-identity-in-apim/ba-p/14091769" target="_blank" rel="noopener"&gt;this blog&lt;/A&gt;.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Teams&lt;/STRONG&gt;&lt;STRONG&gt; Integration&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;In addition to the availability of the &lt;U&gt;SAP Joule agent in Teams and Copilot&lt;/U&gt;, the “classic” integration of Teams with products like SAP S/4HANA Public Cloud is available as well. What started as “Share links to the business context (apps) in chats” has now evolved to Adaptive Card-based Loop components, Chat, Voice and Video calls integrations in contact cards and To Dos in Teams.&lt;/P&gt;
&lt;P&gt;Users of SAP S/4HANA Public Cloud can stay in their flow of work and access their business-critical data from within SAP S/4HANA Public Cloud or connected Teams applications.&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;&lt;STRONG&gt;Copilot Studio – SAP OData Support in Knowledge Sources&amp;nbsp;&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Knowledge Sources in Copilot Studio enhance generative answers by using data from Power Platform, Dynamics 365, websites, and external systems. This enables agents to offer relevant information and insights to customers.&lt;/P&gt;
&lt;P&gt;Today, we announce the Public Preview of SAP OData as a new knowledge source. Customers and partners can now add content from SAP systems to Copilot Studio. Users can query the latest status of Sales Orders in SAP S/4HANA, view pending Invoices from ECC, or query information about employees from SAP SuccessFactors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;All you need to do is connect to the relevant SAP OData services as a knowledge source in Copilot Studio. Copilot Studio will not duplicate the data but analyze the data structure and create the relevant queries on demand whenever a user asks a related question. The user context is always kept ensuring roles and permissions in the SAP system are taken into account.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Head over to&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/knowledge-graph-connections#supported-enterprise-data-sources-using-microsoft-graph-connectors-preview" target="_blank" rel="noopener"&gt;product documentation&lt;/A&gt; to read more and get started. &amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;New SAP Certified Compute&lt;/STRONG&gt;&lt;STRONG&gt; and Storage &lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Thousands of organizations today trust the Azure M-series virtual machines to run some of their largest mission-critical SAP workloads, including SAP HANA.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Very High Memory Mv3 VM Series&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We are excited to unveil updates to our Mv3 Very High Memory (VHM) series with the addition of a 24TB VM, a testament to our ongoing commitment to innovation. Building on our past successes, this series integrates customer insights and industry advancements to deliver unmatched performance and efficiency. It features advanced capabilities for diverse workloads, powered by the 4th generation Intel® Xeon® Platinum 8490H processors, which offer faster processing speeds and better price-performance. You can learn more about the new Mv3 VHM series at this&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/memory-optimized/mdsv3-very-high-memory-series" target="_blank" rel="noopener"&gt;link&lt;/A&gt;. Below is a summary of the recently released Mv3 VHM SKUs.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;(New) Standard_M896ixds_24_v3:&lt;/STRONG&gt; Designed for S/4HANA workloads, with 896 cores and SMT disabled for optimal SAP performance. It is SAP certified for OLTP (S/4HANA) Scale-Up/4-node Scale-Out, and OLAP (BW4H) Scale-Up operations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Standard_M896ixds_32_v3:&lt;/STRONG&gt; Designed for S/4HANA workloads, with 896 cores and SMT disabled for optimal SAP performance. It is SAP certified for OLTP (S/4HANA) Scale-Up/4-node Scale-Out, and OLAP (BW4H) Scale-Up operations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Standard_M1792ixds_32_v3: &lt;/STRONG&gt;Designed for S/4HANA workloads, with 1792 cores. It is SAP certified for OLTP (S/4HANA) Scale-Up/2-node Scale-Out, and OLAP (BW4H) Scale-Up operations.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The new VM Size provides robust memory and CPU power, ensuring exceptional handling of large-scale in-memory databases. With 200 Gbps bandwidth and adaptable storage options such as Premium Disk and Azure NetApp Files (ANF), these VMs deliver speed and flexibility for SAP HANA configurations.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Medium Memory Mbv3 VM Series&amp;nbsp; &lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;The new Mbv3 series (Mbsv3 and Mbdsv3), released in September 2024 and featuring both storage-optimized and memory-optimized sizes, is SAP-certified compute as of March 2025. The new Mbv3 VMs are based on the 4th generation Intel® Xeon® Scalable processors, scale to workloads of up to 4TB, and use an NVMe interface for higher remote disk storage performance. They offer up to 650,000 IOPS, a 5x improvement over the previous M-series families, and up to 10 GBps of remote disk storage bandwidth, a 2.5x improvement in remote storage bandwidth over the previous M-series families.&lt;/P&gt;
&lt;P&gt;Details of the SAP-certified compute Mbv3 VMs are available at this&amp;nbsp;&lt;A href="https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;iaas;ve:24&amp;amp;sort=Latest%20Certification&amp;amp;sortDesc=true&amp;amp;id=s:3067" target="_blank" rel="noopener"&gt;link&lt;/A&gt;.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;SAP on Azure Software Products and Services&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;H5&gt;&lt;STRONG&gt;Azure Backup for SAP&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We are pleased to announce the general availability of backup support for SAP ASE databases running on Azure virtual machines using&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/backup/backup-overview" target="_blank" rel="noopener"&gt;Azure Backup&lt;/A&gt;.&amp;nbsp;SAP ASE databases are mission-critical workloads that require a low recovery point objective (RPO) and a fast recovery time objective (RTO). This backup service offers zero-infrastructure backup and restore of SAP ASE databases with Azure Backup enterprise management capabilities.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Key benefits of SAP ASE database backup&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;15-minute RPO &lt;/STRONG&gt;with point-in-time recovery capability.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Striping to increase the backup throughput&lt;/STRONG&gt; between ASE Virtual Machine (VM) and Recovery services vault&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Support for cost-effective backup policies&lt;/STRONG&gt; and&lt;STRONG&gt; &lt;/STRONG&gt;ASE Native compression&lt;STRONG&gt; &lt;/STRONG&gt;to lower backup storage costs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multiple databases restore options&lt;/STRONG&gt; including Alternate Location Restore (System refresh), Original Location Restore and Restore as Files.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Recovery Services Vault&lt;/STRONG&gt; that provides security capabilities like Immutability, Soft Delete and Multiuser Authentication.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;&lt;STRONG&gt;SAP Testing Automation Framework (STAF)&amp;nbsp;&lt;/STRONG&gt;&amp;nbsp;&lt;/H5&gt;
&lt;P&gt;While deployment automation frameworks like SAP Deployment Automation Framework (SDAF) have streamlined system implementation, the critical testing phase has largely remained a manual bottleneck – until now. We are introducing the SAP Testing Automation Framework (STAF), a new framework (currently in public preview) that automates high-availability (HA) testing for SAP deployments on Azure. STAF currently focuses on testing HA configurations for SAP HANA and SAP Central Services. Importantly, STAF is a cross-distribution solution supporting both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL), reflecting our commitment to serve the diverse SAP on Azure customer base.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;STAF uses a modular architecture, with Ansible for orchestration and custom modules for validation. It ensures business continuity by validating configurations and recovery mechanisms before systems go live, reducing risks, boosting efficiency, and ensuring compliance with standards.&lt;/P&gt;
&lt;P&gt;You can start leveraging its capabilities today by visiting the project on GitHub at &lt;A href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;https://github.com/azure/sap-automation-qa&lt;/A&gt;.&amp;nbsp;To know more about the framework please visit our blog: &lt;A href="https://techcommunity.microsoft.com/blog/sapapplications/empowering-sap-on-azure-with-the-sap-testing-automation-framework-staf/4411976" target="_blank" rel="noopener"&gt;Introducing SAP Testing Automation Framework (STAF)&lt;/A&gt;&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Azure Center for SAP solutions Tools and Frameworks&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We are pleased to introduce three new dashboards for &lt;STRONG&gt;Azure Inventory Checks for SAP&lt;/STRONG&gt;, enhancing visibility into Azure infrastructure and security. These dashboards offer a more structured, visual approach to monitoring health and compliance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here are the new dashboards at a glance:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Summary Dashboard&lt;/STRONG&gt;: Offers a snapshot of your Azure landscape with results from 21 key infrastructure checks critical for SAP workloads. It highlights your environment’s readiness and identifies areas needing attention.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Extended Report Dashboard&lt;/STRONG&gt;: This view presents the Inventory Checks for SAP in a user-friendly dashboard layout, with enhanced navigation and filtering.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;AzSecurity Dashboard&lt;/STRONG&gt;: This dashboard presents 10 key Azure security checks to provide insights into configurations and identify vulnerabilities, ensuring compliance and safety.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;These dashboards transform raw data into actionable insights, allowing customers to quickly assess SAP infrastructure on Azure, identify misconfigurations, track improvements, and prepare confidently for audits and reviews.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;SAP + Microsoft Co-Innovations&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Microsoft and SAP are continually innovating to facilitate business transformation for our customers. This year, we are strengthening our partnership in several areas including Business Suite, AI, Data, Cloud ERP, Security, SAP BTP, among others. Please ensure that you &lt;A href="https://aka.ms/sapphire25blog" target="_blank" rel="noopener"&gt;check out our blog&lt;/A&gt; to learn more about the significant announcements we are making this year at SAP Sapphire.&lt;/P&gt;</description>
      <pubDate>Tue, 20 May 2025 14:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-product-announcements-summary-sap-sapphire-2025/ba-p/4415281</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-05-20T14:00:00Z</dc:date>
    </item>
    <item>
      <title>Introducing the SAP Testing Automation Framework: Elevating SAP System Testing on Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/introducing-the-sap-testing-automation-framework-elevating-sap/ba-p/4411976</link>
      <description>&lt;P&gt;In today’s fast-paced digital landscape, ensuring that enterprise systems perform flawlessly is non-negotiable. As businesses increasingly rely on SAP systems to run their critical operations, testing becomes an essential pillar of operational excellence. Traditionally, SAP system testing has been manual, time-consuming, and prone to gaps. Addressing these critical aspects, Microsoft has introduced the SAP Testing Automation Framework (STAF), an open-source orchestration tool developed to validate SAP deployments on Microsoft Azure. It enables you to assess system configurations against SAP on Azure best practices and to automate various testing scenarios, with an initial focus on high availability (HA) testing.&lt;/P&gt;
&lt;H1&gt;What is the SAP Testing Automation Framework?&lt;/H1&gt;
&lt;P&gt;The SAP Testing Automation Framework is an open-source orchestration tool engineered to validate SAP deployments on the Microsoft Azure platform. Its core purpose is to help customers ensure their SAP systems run smoothly by proactively identifying potential issues. It achieves this by simulating system failures, verifying that configurations adhere to best practices, and automating the entire testing process to save time and improve accuracy.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The framework is built on a modular, configuration‑as‑code model using standard tools. The tests are defined in version‑controlled Ansible playbooks, and custom Python modules handle in‑depth checks of both your SAP systems and Azure resources.&lt;/P&gt;
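&lt;P&gt;As a rough illustration of the configuration-as-code idea, each check boils down to comparing an observed system parameter against a best-practice expectation and recording a pass/fail result. The following minimal Python sketch shows that pattern; the parameter names are hypothetical, and the framework’s real checks live in its Ansible playbooks and Python modules.&lt;/P&gt;

```python
# Minimal sketch of a configuration-as-code check: compare observed
# parameters against best-practice expectations and record pass/fail.
# The parameter names and expected values below are hypothetical.
EXPECTED = {
    "stonith-enabled": "true",
    "concurrent-fencing": "true",
    "priority-fencing-delay": "30",
}

def validate(observed: dict) -> list:
    """Return one result record per expected parameter."""
    results = []
    for key, expected in EXPECTED.items():
        actual = observed.get(key)  # None if the parameter is missing
        results.append({
            "check": key,
            "expected": expected,
            "actual": actual,
            "status": "PASS" if actual == expected else "FAIL",
        })
    return results
```

&lt;P&gt;A real run feeds parameters gathered from the cluster nodes into such checks and rolls the records up into the report described below.&lt;/P&gt;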
&lt;P&gt;&lt;STRONG&gt;Key Features:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Configuration Validation: It checks whether the configurations of SAP HANA scale-up or SAP Central Services align with established SAP on Azure best practices and guidelines.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;High Availability Functional Testing: It simulates multiple test cases to ensure that the failover mechanisms are effective. With SAP HANA databases and SAP Central Services as prime examples, this testing validates that when a component fails, the system can gracefully recover without disruption. This helps identify potential issues during new system deployments.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Detailed Reporting: The framework collects detailed telemetry from SAP components and test execution where it captures event sequences, detection timings, failover durations, and system responses. It compiles this data into a comprehensive HTML report with clear pass/fail outcomes and timestamps. Optionally, you can stream logs to Azure Log Analytics or Data Explorer.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Extensible and Pipeline-ready: All framework operations, Ansible playbooks, and custom Python modules are defined as code, making them ideal for integration with CI/CD pipelines. You can invoke STAF immediately after your deployment and installation step via the &lt;A href="https://learn.microsoft.com/en-us/azure/sap/automation/deployment-framework" target="_blank" rel="noopener"&gt;SAP Deployment Automation Framework&lt;/A&gt;, running comprehensive HA tests before promoting changes.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;SAP System High Availability Functional Testing&lt;/H1&gt;
&lt;P&gt;The initial and most prominent capability of the SAP Testing Automation Framework is its comprehensive High Availability (HA) functional testing for critical SAP components hosted on Microsoft Azure. The framework targets scenarios involving SAP HANA scale-up database and SAP Central Services (ASCS/SCS) deployed in a two-node cluster on SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL), providing a cross-distribution solution for our diverse SAP on Azure customer base. For supported configurations, see the&amp;nbsp;&lt;A href="https://github.com/Azure/sap-automation-qa/blob/main/docs/HIGH_AVAILABILITY.md#supported-configurations" target="_blank" rel="noopener"&gt;support matrix&lt;/A&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The SAP Testing Automation Framework employs a systematic approach to validate the robustness of the SAP system’s HA setup. It verifies the configuration and captures the entire sequence of events in an HA scenario: from initial failure detection, through isolation of the faulty component (including fencing), to resource migration, service recovery on the standby node, and, implicitly, the consistency of data upon successful recovery. Upon completion of the test run, STAF compiles the results into a clear, HTML-based report that details configuration compliance checks and functional test outcomes, complete with timestamps and pass/fail statuses. The report also includes logs from /var/log/messages to provide context.&lt;/P&gt;
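&lt;P&gt;The timing figures in such a report can be thought of as deltas over the ordered event timeline. The following minimal Python sketch computes detection and failover durations from event timestamps; the event names and values are illustrative, not the framework’s actual schema.&lt;/P&gt;

```python
# Sketch: derive detection and failover durations from an ordered event
# timeline, as an HA test report does from cluster logs. Event names
# and timestamps below are illustrative, not STAF's real schema.
from datetime import datetime

def durations(events: dict) -> dict:
    """events maps event name -> ISO-8601 timestamp string."""
    t = {k: datetime.fromisoformat(v) for k, v in events.items()}
    return {
        # time from injected failure until the cluster noticed it
        "detection_seconds":
            (t["failure_detected"] - t["failure_injected"]).total_seconds(),
        # time from detection until the resource ran on the standby node
        "failover_seconds":
            (t["resource_started_on_standby"] - t["failure_detected"]).total_seconds(),
    }

timeline = {
    "failure_injected": "2025-05-19T10:00:00",
    "failure_detected": "2025-05-19T10:00:07",
    "resource_started_on_standby": "2025-05-19T10:01:02",
}
print(durations(timeline))
```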
&lt;img /&gt;
&lt;P&gt;&lt;EM&gt;HTML Report after completion of the run&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;This comprehensive validation of the HA process flow is fundamental to building confidence in the resilience of the SAP system on Azure.&lt;/P&gt;
&lt;H1&gt;Getting Started with the SAP Testing Automation Framework&lt;/H1&gt;
&lt;P&gt;The SAP Testing Automation Framework is available as an open-source project on GitHub for the community to use and contribute. You can find the code and documentation in the official repository: &lt;A href="https://github.com/Azure/sap-automation-qa" target="_blank" rel="noopener"&gt;Azure/sap-automation-qa: This is the repository supporting the quality assurance for SAP systems running on Azure.&lt;/A&gt;. The project is currently in public preview, so feedback and contributions are welcome to help improve its capabilities.&lt;/P&gt;
&lt;P&gt;To start using the framework, you have a couple of options depending on your environment:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SAP system manually deployed (not using SDAF):&lt;/STRONG&gt; If you want to validate the manually configured high availability of an SAP system, you can run the framework in a &lt;A href="https://github.com/Azure/sap-automation-qa/blob/main/docs/HIGH_AVAILABILITY.md" target="_blank" rel="noopener"&gt;standalone mode&lt;/A&gt;. This involves deploying the management server (for example, an Ubuntu VM that will orchestrate the tests), configuring it with details of your SAP landscape (cluster nodes, IPs, etc.), and then executing the provided playbooks or scripts to run the HA tests. The repository provides guidance on how to configure the necessary variables and run the test scenarios for a Pacemaker cluster environment.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Integration with Deployment Pipelines:&lt;/STRONG&gt; For those who already use automated deployment tools like the &lt;A href="https://github.com/Azure/sap-automation-qa/blob/main/docs/SDAF_INTEGRATION.md" target="_blank" rel="noopener"&gt;SAP Deployment Automation Framework (SDAF)&lt;/A&gt; for Azure, the testing framework can integrate directly into those pipelines. The framework is designed as a natural extension to SDAF, so it can leverage the same configuration context and Azure resources defined during deployment. This allows you to embed HA testing into your continuous delivery process, every time you deploy or update an SAP environment, the pipeline can automatically run the HA tests and surface any issues before you even hand the system over to end-users or application teams.&lt;/LI&gt;
&lt;/UL&gt;
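&lt;P&gt;In standalone mode, the setup step essentially amounts to describing your landscape in configuration files that the playbooks consume. The following hypothetical Python sketch emits an Ansible-style INI inventory for a two-node cluster; the real file layout, group names, and variable names are defined in the repository’s documentation, so treat everything below as illustrative.&lt;/P&gt;

```python
# Hypothetical sketch: emit an Ansible-style INI inventory describing a
# two-node cluster. Group and variable names here are illustrative; the
# actual standalone-mode configuration is documented in the
# Azure/sap-automation-qa repository.
def build_inventory(nodes: dict) -> str:
    """nodes maps hostname -> IP address."""
    lines = ["[hana]"]  # hypothetical group name for the DB cluster nodes
    for host, ip in nodes.items():
        lines.append(f"{host} ansible_host={ip}")
    return "\n".join(lines)

print(build_inventory({"hanadb01": "10.0.0.5", "hanadb02": "10.0.0.6"}))
```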
&lt;H1&gt;Call to Action &amp;amp; Community Engagement&lt;/H1&gt;
&lt;P&gt;The &lt;A href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;SAP Testing Automation Framework&lt;/A&gt;, now in public preview, is a significant step forward in reducing misconfigurations and manual effort in highly available SAP deployments on Azure. We encourage you to explore the framework, share your feedback, or contribute. During this public preview, we recommend using the framework for new greenfield production high availability deployments that are not yet live, or in non-production environments.&lt;/P&gt;
&lt;H1&gt;Appendix and References&amp;nbsp;&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse-pacemaker?tabs=msi" target="_blank" rel="noopener"&gt;Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-pacemaker?tabs=msi" target="_blank" rel="noopener"&gt;Set up Pacemaker on RHEL in Azure | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;Azure Virtual Machines HA for SAP NW on RHEL | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-suse?tabs=lb-portal%2Censa1" target="_blank" rel="noopener"&gt;Azure VMs high availability for SAP NetWeaver on SLES | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/azure/sap-automation-qa" target="_blank" rel="noopener"&gt;Azure/sap-automation-qa: This is the repository supporting the quality assurance for SAP systems running on Azure.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/sap-automation" target="_blank" rel="noopener"&gt;Azure/sap-automation: This is the repository supporting the SAP deployment automation framework on Azure&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 19 May 2025 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/introducing-the-sap-testing-automation-framework-elevating-sap/ba-p/4411976</guid>
      <dc:creator>hdamecharla</dc:creator>
      <dc:date>2025-05-19T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Join Microsoft at SAP Sapphire 2025</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/join-microsoft-at-sap-sapphire-2025/ba-p/4412561</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I’m thrilled to be back at SAP Sapphire this year alongside my colleagues from Microsoft! Sapphire is an event I always look forward to as it provides a great opportunity to celebrate the successes of our customers and partners as well as share big announcements and product updates. Whether you’re joining us for pre-day events, engaging in sessions during Sapphire, or enjoying the networking opportunities, there’s something for everyone. Read on to learn more about what’s in store:&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Previewing exciting innovation for SAP on the Microsoft Cloud&lt;/STRONG&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Data &amp;amp; AI: &lt;/STRONG&gt;At Sapphire, we will be sharing key updates on SAP Business Data Cloud (BDC) on Azure and how you can use Azure Databricks with SAP BDC. We will also be sharing the progress on the joint integration between Microsoft Copilot and SAP Joule, helping accelerate business outcomes and increase end-user productivity.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SAP BTP on Azure:&lt;/STRONG&gt;&lt;STRONG&gt; &lt;/STRONG&gt;Together with SAP, we are ensuring our customers can use the latest SAP Business Technology Platform (BTP) services on Microsoft Azure in their preferred regions. We are excited to share the &lt;STRONG&gt;launch of two new datacenter regions for SAP BTP on Azure – Canada (Toronto) and China (Hebei). &lt;/STRONG&gt;With this announcement, SAP BTP is now available in 10 Azure datacenter regions, including Brazil, launched late last year. Thanks to incredible demand from our joint customers, SAP has also added several additional BTP services on Azure. New services include SAP Build Apps, SAP Build Code, SAP AI Core and Joule. You can view all the existing BTP services and regions on Azure on the&amp;nbsp;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://discovery-center.cloud.sap/serviceCatalog?provider=azure" target="_blank" rel="noopener"&gt;SAP Discovery Center&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;. &lt;/SPAN&gt;To stay up-to-date on future plans for service and region roll-out, visit the &lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://roadmaps.sap.com/board?range=CURRENT-LAST&amp;amp;PRODUCT=73555000100800002141#Q2%202025" target="_blank" rel="noopener"&gt;SAP roadmap explorer&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;RISE with SAP Customer Spotlight –&lt;/STRONG&gt;&lt;STRONG&gt; Nestlé&lt;/STRONG&gt;: This multinational organization with over 2,000 brands in 188 countries has operations that are as large as they are complex. &lt;A href="https://aka.ms/Nestle-CustomerStory" target="_blank" rel="noopener"&gt;Nestlé executed one of the largest RISE with SAP migrations in the world on Azure&lt;/A&gt;, enabling them to build a future-ready enterprise leveraging AI-driven solutions. Their need for a platform that could deliver innovation and reliability at scale along with a robust infrastructure made Azure the clear choice. To hear more about their transformation story, make sure to attend the session at Sapphire: &lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1740696227505001qZXo" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Nestlé’s journey from SAP on-premises to RISE with SAP on Microsoft Azure&lt;/STRONG&gt;&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;STRONG&gt;Join us at SAP Sapphire 2025&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H4&gt;&lt;STRONG&gt;Sessions you don’t want to miss &lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;We’re bringing a dynamic lineup of 10 in-person sessions across both Orlando and Madrid, featuring insights from Microsoft and SAP experts. Don’t miss the chance to dive into the latest on RISE with SAP, SAP Business Suite, SAP BTP, and Data and AI on the Microsoft Cloud—plus hear real-world stories from customers who are already driving results through the Microsoft and SAP partnership. Register now using the links below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 990.8px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 30.8px;"&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Session&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Number&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Date and Time&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 106.8px;"&gt;&lt;td style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1740696227505001qZXo" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Nestlé’s journey from SAP on-premises to RISE with SAP on Microsoft Azure&lt;/STRONG&gt;&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;PAR1165&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;Wednesday, May 21&lt;/P&gt;
&lt;P&gt;2:30pm-2:50pm EDT&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 106.8px;"&gt;&lt;td style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1742450677193001vppF" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Microsoft Federal’s cloud landscape transformation with SAP NS2&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;SER2695&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;Tuesday, May 20&lt;/P&gt;
&lt;P&gt;4:30pm-4:50pm EDT&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 106.8px;"&gt;&lt;td style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1740696228308001qtbo" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Unlock innovation for SAP ERP with AI, SAP BTP, and more on Microsoft Azure&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;PAR1166&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;Wednesday, May 21&lt;/P&gt;
&lt;P&gt;2:00pm-2:20pm EDT&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 96.8px;"&gt;&lt;td style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1742246241963001scz1" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Accelerating procurement transformation with SAP Ariba solutions&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;SPM2624&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;Wednesday, May 21&lt;/P&gt;
&lt;P&gt;2:00pm-2:20pm EDT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 96.8px;"&gt;&lt;td style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1742228070485001rgfC" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Joule and Microsoft 365 Copilot: AI-enabled productivity in action&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;BAI2594&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 96.8px; padding: 2px;"&gt;
&lt;P&gt;Tuesday, May 20&lt;/P&gt;
&lt;P&gt;11:30am-11:50am EDT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 106.8px;"&gt;&lt;td style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/sm25/catalog-inperson/page/catalog/session/1740696746850001DVWn" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;(Madrid) - Modernizing the SAP Software Landscape at ANDRITZ&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;PAR1307&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 106.8px; padding: 2px;"&gt;
&lt;P&gt;Wednesday, May 28&lt;/P&gt;
&lt;P&gt;11:30am - 11:50am CEST&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 30.8px;"&gt;&lt;td colspan="3" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;ASUG Pre-Day Sessions&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 142.8px;"&gt;&lt;td style="height: 142.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1736768115693001NSF3" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Harnessing SAP's AI Innovations: Joule, Generative AI, and Business AI&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 142.8px; padding: 2px;"&gt;
&lt;P&gt;ASUG104&lt;/P&gt;
&lt;P&gt;Location:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;S320GH&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 142.8px; padding: 2px;"&gt;
&lt;P&gt;Monday, May 19&lt;/P&gt;
&lt;P&gt;1:00pm-5:00pm EDT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 30.8px;"&gt;&lt;td style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;STRONG&gt;ASUG Power Peer Group&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 30.8px; padding: 2px;"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 134.8px;"&gt;&lt;td style="height: 134.8px; padding: 2px;"&gt;
&lt;P&gt;&lt;A href="https://www.sap.com/events/sapphire/flow/sap/so25/catalog-inperson/page/catalog/session/1743428714791001xLJL" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Unlock the value of SAP BTP: Lessons learned from ASUG members&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 134.8px; padding: 2px;"&gt;
&lt;P&gt;BTP3093&lt;/P&gt;
&lt;P&gt;Location:&lt;/P&gt;
&lt;P&gt;ASUG Booth Theater&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 134.8px; padding: 2px;"&gt;
&lt;P&gt;Tuesday, May 20&lt;/P&gt;
&lt;P&gt;2:00pm-2:40pm EDT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H4&gt;&lt;STRONG&gt;Celebration night!&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;We are excited to be the&amp;nbsp;&lt;STRONG&gt;exclusive sponsor of the celebration night concert&lt;/STRONG&gt;, which is always a highlight at Sapphire. The evening will feature two special performances by the Zac Brown Band at the American Garden Theatre at Epcot®, scheduled for 8:15 PM and 9:45 PM. &amp;nbsp;Come celebrate with us!&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Come find us at our booth!&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Microsoft and SAP are at the forefront of AI transformation and are excited to showcase the interoperability of our AI agents at Sapphire. The video below shows a sneak peek of what’s possible, but if you’d like to learn more, come talk to our subject matter experts from Microsoft on-site to help address any questions and foster connections.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Find us at Booth #409 in Orlando and Booth #9.333 in Madrid&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Networking Events&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Beyond the sessions and booth experiences, our partners are hosting special social and networking events you can join:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://meetpwc.cventevents.com/event/PwC-at-SAP-Sapphire-2025/Home1?RefId=Personal%20Invites&amp;amp;rt=2s4G8WpRGEGtwthWCjzUGw" target="_blank" rel="noopener"&gt;Home - PwC at SAP Sapphire 2025&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://info.lemongrasscloud.com/rsvp-lemongrass-blue-martini-sapphire" target="_blank" rel="noopener"&gt;RSVP: Lemongrass Invites You for Cocktails and Apps!&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Finfo.syntax.com%2Fsap%2Fevent%2Fsapphire%2Fhowl-at-the-moon%2Fmay-2025%2Fregistration&amp;amp;data=05%7C02%7CSanjay.Satheesh%40microsoft.com%7Ce3afe2d1ff2542b32f3f08dd83acd7f2%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638811500622217955%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=AeWvHY8f4%2BcvoKvzAsdEw8OlKG2qVpjo%2B2YczHmzVL4%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Syntax Annual Sapphire Party&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.ibm.com/events/reg/flow/ibm/3ypan0mb/createaccount/page/contactInfo" target="_blank" rel="noopener"&gt;IBM Sapphire Client Appreciation reception&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We are looking forward to a great Sapphire and I hope to see you there!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 13 May 2025 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/join-microsoft-at-sap-sapphire-2025/ba-p/4412561</guid>
      <dc:creator>Hiren_Shah_Azure</dc:creator>
      <dc:date>2025-05-13T15:00:00Z</dc:date>
    </item>
    <item>
      <title>SAP Content Server 7.54 on Windows with FileServer as Document Store</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-content-server-7-54-on-windows-with-fileserver-as-document/ba-p/4387971</link>
      <description>&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;1.&amp;nbsp;&amp;nbsp;&amp;nbsp; Introduction&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;This blog provides the deployment procedure for setting up High Availability (HA) and Disaster Recovery (DR) for SAP Content Server 7.54 on Windows Server. The SAP Content Server offers document storage either in an SAP MaxDB database or in the file system. When opting to store documents in the file system, the installation of SAP MaxDB is not required. In this guide, we will use FileServer as the document store and configure it using Azure Shared Disk.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;2.&amp;nbsp;&amp;nbsp;&amp;nbsp; System Design&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Following is the overall high-level design of the SAP Content Server environment with HA &amp;amp; DR which is described in this blog.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The above design is based on Availability Zones and provides a VM-level availability SLA of 99.99% for the HA pair in the primary Azure region. For disaster recovery, we can choose the desired secondary Azure region and set up Azure Site Recovery to continuously replicate the primary region’s HA setup of SAP Content Server, including the shared disk used as the file server.&lt;/P&gt;
&lt;P&gt;Following are the details for each of the components of the setup:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2.1&amp;nbsp; &lt;/STRONG&gt;&lt;STRONG&gt;SAP Content Server&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;SAP Content Server 7.54 is the latest version and is used for this setup. It is recommended to always use the latest released version and patch level of SAP Content Server for new installations. Refer to SAP Note &lt;A href="https://me.sap.com/notes/719971" target="_blank" rel="noopener"&gt;719971&lt;/A&gt; for the latest information on the SAP Content Server release strategy and product versions.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2.1.1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;HA Setup&lt;/STRONG&gt; – This is achieved by installing SAP Content Server in Windows WSFC Clustered environment between two VMs across Availability Zones. Virtual IP for the SAP Content Server Instance is maintained as frontend IP in Azure Load balancer. SAP Content Server will have one premium data disk for local ‘usr_sap’ files and one premium shared disk for ‘sapmnt’ and ‘global’ shares. To configure quorum for WSFC failover cluster, Cloud Witness should be created in ZRS Storage account. In the Windows firewall of both VMs, open the load balancer probe ports and SAP content server ports. Refer to&amp;nbsp;&lt;A href="https://help.sap.com/docs/Security/575a9f0e56f34c6e8138439eefc32b16/616a3c0b1cc748238de9c0341b15c63c.html" target="_blank" rel="noopener"&gt;TCP/IP ports of All SAP Products&lt;/A&gt;.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2.1.2&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;DR Setup&lt;/STRONG&gt; – Azure Site Recovery is used to continuously replicate the VMs, all the configurations and the data in it, to another region. &amp;nbsp;To replicate VMs using ASR for DR,&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix#region-support" target="_blank" rel="noopener"&gt;review supported regions&lt;/A&gt;. Its recommended to build the azure landing zone in the DR region including Vnets, subnets, IP ranges before configuration of ASR replication. Azure load balancer and cloud witness must be pre-deployed for DR failover.&lt;/P&gt;
&lt;P&gt;At the time this blog was written, ASR for VMs with Azure shared disks on Windows Server was in public preview. Please refer to &lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/tutorial-shared-disk" target="_blank" rel="noopener"&gt;this page&lt;/A&gt; for updates on general availability. Using Azure Site Recovery for Azure shared disks, you can replicate and recover your WSFC clusters as a single unit throughout the disaster recovery lifecycle, while creating cluster-consistent recovery points that are consistent across all the disks (including the shared disk) of the cluster. For more information, see&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/tutorial-shared-disk" target="_blank" rel="noopener"&gt;Shared disks in Azure Site Recovery&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;ASR creates recovery points (both application-consistent and crash-consistent) that are consistent across all the virtual machines and disks within the cluster. However, during public preview, only crash-consistent and latest-processed recovery points are supported. For more details on common concerns, see&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/tutorial-shared-disk#commonly-asked-questions" target="_blank" rel="noopener"&gt;FAQs - Shared disks in Azure Site Recovery&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Also, the default settings for Microsoft Defender, including real-time scanning, should work well with Content Server. If high CPU utilization due to Defender is observed, open a support case with Microsoft via the Defender portal.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2.2&amp;nbsp; &lt;/STRONG&gt;&lt;STRONG&gt;Fileserver for Document Store&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This setup uses&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-shared" target="_blank" rel="noopener"&gt;Azure shared disk&lt;/A&gt; – Premium SSD ZRS – as the file server to store documents. Zone-redundant storage (ZRS) replicates the managed disk across all three availability zones. It is advised to choose the right disk size up front, as resizing an Azure shared disk requires shutting down the associated VMs. Also refer to the limitations of &lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-shared#limitations" target="_blank" rel="noopener"&gt;premium Azure shared disks&lt;/A&gt;.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;2.2.1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/STRONG&gt;&lt;STRONG&gt;HA Setup&lt;/STRONG&gt; – We will be creating File server role in WSFC Cluster for this shared disk so that its highly available as Windows Fileserver. Fileserver name will be mapped to Load balancer virtual IP, to make it available on either of the VMs. File Server ports do not need to be exposed outside the SAP Content Server cluster nodes. SAP Content Server must be able to access highly available FileServer using port 445.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;2.2.2&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/STRONG&gt;&lt;STRONG&gt;DR Setup&lt;/STRONG&gt; – ASR for Azure shared disk will be used for continuously replicating the documents to the DR region and available as highly available fileserver during DR failover. ASR will also perform the reverse copy from DR region to primary region and then failback the environment once primary region is available. Azure loadbalancer in the DR region is to be used for configuring the virtual IP.&lt;/P&gt;
&lt;H2&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc53687084"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;3.&amp;nbsp;&amp;nbsp;&amp;nbsp; Overview of Deployment Steps&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;The following high-level steps are described in this blog.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Preparations&lt;/LI&gt;
&lt;LI&gt;WSFC Cluster setup and configuration&lt;/LI&gt;
&lt;LI&gt;SAP Content Server Setup&lt;/LI&gt;
&lt;LI&gt;WFSC Fileserver setup&lt;/LI&gt;
&lt;LI&gt;Connect SAP Content Server with S/4 HANA or Business Suite 7&lt;/LI&gt;
&lt;LI&gt;High Availability Test&lt;/LI&gt;
&lt;LI&gt;Disaster Recovery Setup with ASR&lt;/LI&gt;
&lt;LI&gt;DR failover and failback&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc53687085"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;4.&amp;nbsp;&amp;nbsp;&amp;nbsp; Preparations&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Read the required Installation Guide, SAP Notes, SAP on Azure docs and download the installation media.&lt;/LI&gt;
&lt;LI&gt;Deploy the VMs (of the desired SKU) across Availability Zones and choose Windows Server 2022 as the operating system.&lt;/LI&gt;
&lt;LI&gt;Add a data disk of the required size to each of the VMs.&lt;/LI&gt;
&lt;LI&gt;Create 2 Azure shared disks and attach them to both VMs:&lt;/LI&gt;
&lt;LI&gt;Disk 1 for SAP Content Server&lt;/LI&gt;
&lt;LI&gt;Disk 2 for the file server used as document store&lt;/LI&gt;
&lt;LI&gt;Join the SAP Content Server VMs to the domain.&lt;/LI&gt;
&lt;LI&gt;Define the page file on the temporary disk (D: drive).&lt;/LI&gt;
&lt;LI&gt;Check that necessary Ports (including ILB Probe ports) are open in Windows firewall.&lt;/LI&gt;
&lt;LI&gt;Disable the Continuous Availability Feature in Windows using the instructions in this &lt;A href="https://support.microsoft.com/en-us/help/2820470/delayed-error-message-when-you-try-to-access-a-shared-folder-that-no-l" target="_blank" rel="noopener"&gt;link&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Create an Azure Storage account ZRS for cloud witness.&lt;/LI&gt;
&lt;/UL&gt;
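&lt;P&gt;As a sketch of the shared-disk preparation step, the two Azure shared disks can be created with Az PowerShell by setting MaxSharesCount to 2. The resource group, disk names, region, and sizes below are illustrative assumptions, not values from this deployment:&lt;/P&gt;

```powershell
# Sketch: create a Premium SSD ZRS shared disk and attach it to both cluster VMs.
# Resource group, names, region, and size are illustrative assumptions.
$diskConfig = New-AzDiskConfig -Location 'westeurope' -DiskSizeGB 512 `
    -SkuName 'Premium_ZRS' -CreateOption 'Empty' -MaxSharesCount 2
New-AzDisk -ResourceGroupName 'rg-sapcs' -DiskName 'cs-shared-disk1' -Disk $diskConfig

# Attach the shared disk to both VMs at the same LUN.
foreach ($vmName in 'cs-vm1', 'cs-vm2') {
    $vm   = Get-AzVM -ResourceGroupName 'rg-sapcs' -Name $vmName
    $disk = Get-AzDisk -ResourceGroupName 'rg-sapcs' -DiskName 'cs-shared-disk1'
    $vm   = Add-AzVMDataDisk -VM $vm -Name $disk.Name -ManagedDiskId $disk.Id `
                -Lun 0 -CreateOption 'Attach'
    Update-AzVM -ResourceGroupName 'rg-sapcs' -VM $vm
}
```

Repeat the same pattern for the second shared disk (the file server document store), using a different disk name and LUN.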
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;5.&amp;nbsp;&amp;nbsp;&amp;nbsp; WSFC Cluster Setup and Configuration&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Create a load balancer for the virtual IPs of the SAP Content Server instance and the file server. Create A-records in DNS for the virtual IPs and virtual hostnames.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Front-end IP&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Backend Pool&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Health probe port&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Load balancing rule&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;10.90.95.15&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Content Server)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;VM1 &amp;amp; VM2 hosts&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;62400&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Enable HA Port,&lt;/P&gt;
&lt;P&gt;Enable Floating IP,&lt;/P&gt;
&lt;P&gt;Idle Timeout (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;10.90.95.16&lt;/P&gt;
&lt;P&gt;(Virtual IP of Fileserver)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;VM1 &amp;amp; VM2 hosts&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;62450&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Enable HA Port,&lt;/P&gt;
&lt;P&gt;Enable Floating IP,&lt;/P&gt;
&lt;P&gt;Idle Timeout (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Refer to the section &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-high-availability-infrastructure-wsfc-shared-disk#create-azure-internal-load-balancer" target="_blank" rel="noopener"&gt;Create Azure Internal Load Balancer&lt;/A&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add the KeepAlive registry entries&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Refer to the section &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-high-availability-infrastructure-wsfc-shared-disk#add-registry-entries-on-both-cluster-nodes-of-the-ascsscs-instance" target="_blank" rel="noopener"&gt;Add registry entries on both cluster nodes&lt;/A&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Deploy WSFC Cluster and add cloud witness&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Refer to the section &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-high-availability-infrastructure-wsfc-shared-disk#install-and-configure-windows-failover-cluster" target="_blank" rel="noopener"&gt;Install and Configure Windows Failover Cluster &lt;/A&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Specify the Content Server SID and the VM names in the PowerShell script that installs and configures the cluster.&lt;/LI&gt;
&lt;LI&gt;Specify the Azure storage account and access key to configure the cluster cloud quorum.&lt;/LI&gt;
&lt;LI&gt;Alternatively, we can deploy the WSFC cluster and add the cloud witness from Failover Cluster Manager.&lt;/LI&gt;
&lt;LI&gt;Update cluster parameters&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Adjust the following WSFC Cluster parameters&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;UL&gt;
&lt;LI&gt;SameSubNetDelay = 2000&lt;/LI&gt;
&lt;LI&gt;SameSubNetThreshold = 15&lt;/LI&gt;
&lt;LI&gt;RouteHistoryLength = 30&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Refer to section&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-high-availability-infrastructure-wsfc-shared-disk#tuning-the-windows-failover-cluster-thresholds" target="_blank" rel="noopener"&gt;Tune the Windows Failover Cluster thresholds&lt;/A&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Verify that Azure Shared disk is added to the cluster and status is online.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;If the Azure Shared disk are not added to the WSFC Cluster then we can add them either by ‘Add Disk’ option in Failover Cluster Manager or by using PowerShell commands as described in&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-high-availability-infrastructure-wsfc-shared-disk#format-the-shared-disk-with-powershell" target="_blank" rel="noopener"&gt;Format the shared disk with Powershell&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;6.&amp;nbsp;&amp;nbsp;&amp;nbsp; SAP Content Server Setup&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Run SWPM on 1&lt;SUP&gt;st&lt;/SUP&gt; VM&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Start the installation of SAP Content Server on First Cluster Node.&lt;/P&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Choose the local disk’s drive letter&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Enter the virtual hostname of the content server as defined in DNS with the virtual IP. In addition to entering the SID for SAP Content Server, select the Azure shared disk for SAP Content Server, which is already a WSFC cluster disk.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Continue with other SWPM input parameters to install SAP Content Server on First Cluster Node.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Choose SAP Content Server to be installed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Provide the instance number of SAP Content Server&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Specify the SAP Content Server basic parameter values&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;And complete the installation on First Node.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add probe port to the cluster parameter&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Probe port defined in the Azure Load balancer for SAP Content Server need to be updated for SAP Content server SID Role in the WSFC Cluster configuration. This will enable virtual IP connection to the SAP Content Server. Follow the instructions in section “Add a probe port” in the link &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/sap-high-availability-installation-wsfc-shared-disk#10822f4f-32e7-4871-b63a-9b86c76ce761" target="_blank" rel="noopener"&gt;Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an SAP ASCS/SCS instance in Azure&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Validate the probe port in the cluster configuration (as shown in the below example screenshot)&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Stop and Start the SAP Content Server Role in WSFC cluster.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If not done earlier, add a Windows firewall rule to allow inbound connections to the probe port on both VMs.&lt;/LI&gt;
&lt;LI&gt;Run the SWPM setup for SAP Content Server on the second node.&lt;/LI&gt;
&lt;/UL&gt;
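&lt;P&gt;The inbound firewall rule for the probe ports can be added on both VMs with PowerShell. The rule name is illustrative, and the ports must match the health probe ports defined in your load balancer:&lt;/P&gt;

```powershell
# Allow inbound TCP traffic on the ILB health probe ports on both cluster nodes.
# Rule name is illustrative; ports must match your load balancer configuration.
New-NetFirewallRule -DisplayName 'Allow ILB probe ports (SAP CS / FileServer)' `
    -Direction Inbound -Protocol TCP -LocalPort 62400, 62450 -Action Allow
```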
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Choose the local disk’s drive letter&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;And continue with SWPM installation steps.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;7.&amp;nbsp;&amp;nbsp;&amp;nbsp; WSFC File Server Setup&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Create the File Server in WSFC Cluster&lt;/LI&gt;
&lt;LI&gt;In the ‘Actions’ section in ‘Roles’ tab, click on ‘Configure Role’.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Choose ‘File Server’ as Role and click ‘Next’.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;In next screen for File Server Type, choose ‘File Server for general Use’ and click ‘Next’.&lt;/LI&gt;
&lt;LI&gt;Enter the name of the fileserver as defined in the DNS (along with Virtual IP). Click Next.&lt;/LI&gt;
&lt;LI&gt;In the ‘Select Storage’ screen, choose the cluster disk (Azure shared disk) for the file server. Click ‘Next’ and continue with the setup.&lt;/LI&gt;
&lt;LI&gt;Update the cluster settings in the File Server Role.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Change the Name to ‘&amp;lt;File Server Role Name&amp;gt; IP’. Choose the Network from drop-down. Update the static IP address field with virtual IP defined for File Server.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Stop and start the Role.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Update the probe port parameter in the Cluster settings. Follow the probe port changing steps in the ‘Add a probe port’ section in the ‘SAP Content Server Setup’ in this blog.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Make sure that in this step we change the probe port for FileServer Role/IP in the cluster and not for SAP Content Server Role/IP.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Stop and Start the File Server Role.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Validate that File Server role is online.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;8.&amp;nbsp;&amp;nbsp;&amp;nbsp; Connect the SAP Content Server with S/4 HANA System&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Update the connection details in content repository.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;In the S/4 HANA or Business Suite 7 system, run the tr. Code OAC0. Select the content repository name for SAP Content Server.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Update the HTTP Server, port numbers.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Ensure content server related inbound ports are open in windows firewall.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;For HTTPS setup, export the PSE of content server from ‘SAP Content Server Administration’ portal and import it in S/4 HANA system under certificate list in ‘System PSE’ and ‘SSL Server Standard’ section.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Update the repository settings&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Click on ‘CS Admin’ as in the previous screenshot. Add/update the parameters (as below) to specify the document storage location as File Server.&lt;/P&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Check that connection tests are successful.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://me.sap.com/notes/2457912" target="_blank" rel="noopener"&gt;Refer to SAP Note 2457912 – How to create Content Server Repository in OAC0&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;9.&amp;nbsp;&amp;nbsp;&amp;nbsp; High Availability Tests&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;We can perform the following WSFC failover tests for SAP Content Server and the file server used for document storage.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;For all of the tests below, we should validate that&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;UL&gt;
&lt;LI&gt;Cluster Roles are online after failover,&lt;/LI&gt;
&lt;LI&gt;SAP Content Server is reachable via the URL&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;EM&gt;https://&amp;lt;virtual hostname&amp;gt;:1091/sapcs?serverInfo&lt;/EM&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;UL&gt;
&lt;LI&gt;In S/4HANA transaction code OAC0, the connection test succeeds after failover.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Test Scenarios:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SAP Content Server and the File Server can successfully fail over from node 1 to node 2 and vice versa. We can use Failover Cluster Manager to perform the failover (move the role to the other node) and check the status afterwards.&lt;/LI&gt;
&lt;LI&gt;Restart the Windows guest operating system of the node on which SAP Content Server is active. This initiates an automatic failover of the SAP Content Server cluster group from the active node to the second node. Repeat the same action for the node on which the File Server role is active.&lt;/LI&gt;
&lt;LI&gt;Restart the VM on which SAP Content Server is active, from the Azure portal and using Azure PowerShell. This initiates an automatic failover of the SAP Content Server cluster group from the active node to the second node. Repeat the same action for the node on which the File Server role is active.&lt;/LI&gt;
&lt;/UL&gt;
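&lt;P&gt;The planned and unplanned failover tests above can also be driven from PowerShell. The role, node, VM, and resource group names below are illustrative assumptions:&lt;/P&gt;

```powershell
# Planned failover test: move the SAP Content Server and File Server roles
# to the other node (role/node names are illustrative assumptions).
Move-ClusterGroup -Name 'SAP C9' -Node 'cs-vm2'
Move-ClusterGroup -Name 'cs9fileserver' -Node 'cs-vm2'
Get-ClusterGroup | Select-Object Name, OwnerNode, State

# Unplanned failover test: restart the active VM via Azure PowerShell
# and verify that the cluster roles come online on the surviving node.
Restart-AzVM -ResourceGroupName 'rg-sapcs' -Name 'cs-vm1'
```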
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;10.&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Disaster Recovery Setup with ASR&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;To configure ASR for VMs running SAP Content Server with an Azure shared disk, follow the steps below:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Set up Resource Group, Virtual Network, Subnet and Recovery Service Vault in the secondary site that you would use in setting up your DR. To learn more about networking, see&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-about-networking" target="_blank" rel="noopener"&gt;prepare networking for Azure VM disaster recovery&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Before enabling ASR on SAP Content Server VMs, it is essential that WSFC is configured, and Azure shared disks are managed by the cluster.&lt;/LI&gt;
&lt;LI&gt;Configure ASR for SAP Content Server with Azure shared disk by following the steps in the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/tutorial-shared-disk" target="_blank" rel="noopener"&gt;Shared disks in Azure Site Recovery&lt;/A&gt;&amp;nbsp;document. Follow&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-how-to-enable-replication" target="_blank" rel="noopener"&gt;Configure replication for Azure VMs in Azure Site Recovery&lt;/A&gt;&amp;nbsp;to configure ASR for SAP application servers.&lt;/LI&gt;
&lt;LI&gt;When you use ASR to set up DR for VMs, the VM’s OS disk, data disks, and Azure shared disk (for SAP Content Server VMs) are copied to the DR site.&lt;BR /&gt;&lt;EM&gt;NOTE: With an Azure shared disk, the SAP Content Server VMs are grouped together in ASR. The VMs in the group replicate together to produce a consistent recovery snapshot, and in the event of a failover, they fail over as a group.&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;After the VMs are replicated, the status of the protected cluster and SAP Content Server VMs changes to “Protected” and the replication health shows “Healthy”.&lt;/LI&gt;
&lt;LI&gt;Deploy an Azure load balancer without backend pools, since the SAP Content Server VMs are not yet present in the DR region. Keep the health probe ports the same as the primary site health probe ports.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Front-end IP&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Backend Pool&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Health probe port&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Load balancing rule&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;10.72.76.15&lt;/P&gt;
&lt;P&gt;(Virtual IP of SAP Content Server)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;VM1 &amp;amp; VM2 hosts&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;62400&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Enable HA Port,&lt;/P&gt;
&lt;P&gt;Enable Floating IP,&lt;/P&gt;
&lt;P&gt;Idle Timeout (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;10.72.76.16&lt;/P&gt;
&lt;P&gt;(Virtual IP of Fileserver)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;VM1 &amp;amp; VM2 hosts&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;62450&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Enable HA Port,&lt;/P&gt;
&lt;P&gt;Enable Floating IP,&lt;/P&gt;
&lt;P&gt;Idle Timeout (30 Minutes)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Deploy Cloud Witness Storage account of ZRS type in the DR region.&lt;/LI&gt;
&lt;LI&gt;When VMs without public IP addresses are placed in the backend pool of an internal standard load balancer, there is no outbound connectivity from these VMs unless additional configuration is performed to allow routing to a public endpoint. For details on how to achieve outbound connectivity, see &lt;A href="https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-standard-load-balancer-outbound-connections" target="_blank" rel="noopener"&gt;public endpoint connectivity for Azure VMs &amp;amp; Standard ILB in SAP HA scenarios&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM&gt;Tip: Depending on your DR strategy, you can deploy the load balancer and cloud witness either while preparing your DR site (for example, when setting up ASR) or as part of the DR failover process.&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;11.&amp;nbsp; &amp;nbsp; &amp;nbsp;Disaster Recovery Failover and Failback&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;We will fail over the SAP Content Server and File Server to the DR region, start the cluster roles for SAP Content Server and the File Server, and verify their configuration and connections. Then we will ‘commit’ the failover in ASR.&lt;/P&gt;
&lt;P&gt;Afterwards, we will re-protect the VMs to start the ASR replication process from the DR region to the primary region, fail over (fail back) the VMs from the DR region to the primary region, start the cluster roles for SAP Content Server and the File Server, and verify their configuration and connections.&lt;/P&gt;
&lt;P&gt;Finally, with ASR, we will re-protect the VMs in primary region to continue with the ASR replication process from primary to DR region.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;11.1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Failover to DR Region&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Before triggering the failover of SAP Content Server, ensure the domain controller can be accessed from the DR region.&lt;/LI&gt;
&lt;LI&gt;Initiate the failover for the recovery group of the SAP Content Server.&lt;/LI&gt;
&lt;LI&gt;If the VMs were online on the primary site, ASR shows the same consistent recovery point for both VMs. Select the recovery point for the cluster/recovery group and continue with the failover. Refer to the documentation for &lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/tutorial-shared-disk#recovery-point-is-consistent-across-all-the-virtual-machines" target="_blank" rel="noopener"&gt;Recovery point is consistent across all the virtual machines&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;If one of the VMs in the recovery group was unavailable for some time, select an individual recovery point for the VMs that are not part of the cluster recovery point. Refer to the documentation for &lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/tutorial-shared-disk#recovery-point-is-consistent-only-for-a-few-virtual-machines" target="_blank" rel="noopener"&gt;Recovery point is consistent only for a few virtual machines&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;After the failover, the status of the replicated items is ‘Failover Completed’. The VMs are online in the DR region, and the primary site VMs are shut down (if the shutdown checkbox was selected during failover).&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Update the backend pool of DR region’s Azure Load Balancer with the VM details.&lt;/LI&gt;
&lt;LI&gt;Update the DNS entries for SAP Content Server VMs and virtual hostname for SAP SID and File Server, with the new IP address in the DR region.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual Hostname&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual &lt;/STRONG&gt;&lt;STRONG&gt;IP Address in Primary region&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual &lt;/STRONG&gt;&lt;STRONG&gt;IP Address in DR region&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;cs9ha&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;for SAP content server&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;10.90.95.15&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;10.72.76.15&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;cs9fileserver&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;for FileServer&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;10.90.95.16&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;10.72.76.16&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Change the cluster quorum to the cloud witness storage account created in the DR region. Use the Failover Cluster Manager tool to update the cloud witness details.&lt;/LI&gt;
&lt;LI&gt;Update the virtual network and virtual IPs of the SAP Content Server SID and File Server roles in Failover Cluster Manager.&lt;/LI&gt;
&lt;/UL&gt;
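&lt;P&gt;The quorum change to the DR cloud witness can also be done with PowerShell. The storage account name is a placeholder, and the access key should come from your own secure store:&lt;/P&gt;

```powershell
# Point the cluster quorum at the cloud witness storage account in the DR region.
# Account name is a placeholder; retrieve the access key from a secure location.
Set-ClusterQuorum -CloudWitness -AccountName 'drwitnessstorage' `
    -AccessKey 'your-storage-account-access-key'
# Verify the quorum configuration.
Get-ClusterQuorum
```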
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;Restart the Cluster Roles.&lt;/LI&gt;
&lt;LI&gt;Update the SAP Content Server virtual IP in the DNS used by the S/4 HANA or ECC system.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual Hostname&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual IP Address in Primary region&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual IP Address in DR region&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;cs9ha&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;for SAP content server&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;10.90.95.15&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;10.72.76.15&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Perform the WSFC cluster failover tests and the connection tests in transaction code OAC0. Refer to &lt;A href="https://me.sap.com/notes/2457912" target="_blank" rel="noopener"&gt;SAP Note 2457912&lt;/A&gt; for testing connections to SAP Content Server in transaction code OAC0.&lt;/LI&gt;
&lt;LI&gt;Once all checks are completed successfully, ‘commit’ the failover in the replicated items in the Recovery Services vault.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;11.2&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Failback to Primary Region&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once the former primary region is available, we can ‘re-protect’ the recovery group in the Recovery Services vault. This step starts replication of all components of the ASR recovery group from the DR region to the former primary region.&lt;/LI&gt;
&lt;LI&gt;Once the status of the replicated items is ‘Protected’, we can plan the failback of the SAP Content Server. Follow the same steps as described in the previous section to ‘fail over’ to the former primary region, such as checking the DNS settings and updating the cluster settings.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Once the failover to former primary region is successfully completed, ‘re-protect’ the replicated items in the recovery services vault to restart the replication from primary to DR region.&lt;/P&gt;</description>
      <pubDate>Wed, 25 Mar 2026 20:59:26 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-content-server-7-54-on-windows-with-fileserver-as-document/ba-p/4387971</guid>
      <dc:creator>AnjanBanerjee</dc:creator>
      <dc:date>2026-03-25T20:59:26Z</dc:date>
    </item>
    <item>
      <title>Simplifying Vertex Tax Software Deployment with Oracle Database@Azure</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/simplifying-vertex-tax-software-deployment-with-oracle-database/ba-p/4393163</link>
      <description>&lt;H5&gt;&lt;STRONG&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Co-Authors&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;&lt;A href="https://linkedin.com/in/scott-gamble-5916005" target="_blank" rel="noopener"&gt;&lt;STRONG class="lia-align-center"&gt;Scott Gamble&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;, &lt;/STRONG&gt;&lt;A href="https://linkedin.com/in/frank-kassanits-b3765137" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Frank Kassanits&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;, &lt;/STRONG&gt;&lt;A href="https://linkedin.com/in/suresh-devarajan-b65a661" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Suresh Devarajan&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;, &lt;/STRONG&gt;&lt;A href="https://linkedin.com/in/momin-qureshi-13a0aa6" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Momin Qureshi&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;, &lt;/STRONG&gt;&lt;A href="https://linkedin.com/in/venkataramanr" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Venkat Ramakrishnan&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;,&lt;/STRONG&gt;&lt;STRONG&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;/STRONG&gt; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;A href="https://linkedin.com/in/rsponholtz" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Ross Sponholtz&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt; &lt;SPAN class="lia-text-color-15"&gt;&amp;amp;&lt;/SPAN&gt; &lt;/STRONG&gt;&lt;A href="https://linkedin.com/in/ganbu" target="_blank" rel="noopener"&gt;&lt;STRONG class="lia-align-center"&gt;Anbu Govindasamy&lt;/STRONG&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;In this blog, we will explore how to leverage &lt;A href="https://learn.microsoft.com/en-us/azure/oracle/oracle-db/database-overview" target="_blank" rel="noopener"&gt;Oracle Database@Azure&lt;/A&gt; for Vertex Tax Software, which is part of the SAP solution. Oracle Database@Azure is an Oracle database service that runs Oracle databases on Oracle Cloud Infrastructure (OCI) within Microsoft datacenters, ensuring the fastest possible access to Azure resources and applications.&lt;/P&gt;
&lt;P&gt;An Oracle database can be deployed on an Azure VM as an IaaS solution or as an Oracle Database@Azure service, just like other Azure services. Each approach has its own merits. In this blog, we discuss the Oracle Database@Azure solution and how it simplifies and accelerates the deployment and maintenance of Oracle databases on Azure. While Oracle Database@Azure is not currently a supported solution for running SAP systems&lt;STRONG&gt;, &lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/oracle/oracle-db/database-overview" target="_blank" rel="noopener"&gt;Oracle Database@Azure&lt;/A&gt;&lt;STRONG&gt; can be leveraged with non-SAP products&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Oracle Database@Azure - Architecture of the implemented Vertex Tax solution&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Below is the implementation architecture of the Vertex Tax solution on Oracle Database@Azure – Oracle Exadata Database Service, followed by networking and architectural considerations for designing a similar architecture for your needs.&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;&lt;STRONG&gt;Oracle Database@Azure - Networking Consideration&lt;/STRONG&gt;&lt;STRONG&gt;s&lt;/STRONG&gt;&lt;/H5&gt;
&lt;UL&gt;
&lt;LI&gt;When you design your network topology for the Oracle Exadata Database service on Oracle Database@Azure, please refer to the Microsoft Cloud Adoption Framework – Landing Zone Accelerator.&lt;/LI&gt;
&lt;LI&gt;It is critical to understand the &lt;A href="https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/oracle-iaas/core-network-design" target="_blank" rel="noopener"&gt;core networking&lt;/A&gt; design considerations and recommendations applicable for your environment along with the &lt;A href="https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/oracle-iaas/application-connectivity-design" target="_blank" rel="noopener"&gt;best practices&lt;/A&gt; for configuring and managing the Azure network as required for connecting your applications to the database.&lt;/LI&gt;
&lt;LI&gt;In the above architecture, each application virtual network is directly peered to the database virtual network. Application and database virtual networks connect to a hub network for shared services like firewalls or Domain Name System (DNS).&lt;/LI&gt;
&lt;LI&gt;This design helps ensure low-latency direct communication while enabling centralized traffic inspection and management. A reference diagram is provided below.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;H5&gt;&lt;STRONG&gt;Oracle Database@Azure - Architectural Considerations&lt;/STRONG&gt;&lt;/H5&gt;
&lt;UL&gt;
&lt;LI&gt;Deployment Approach: The Vertex Oracle Database is deployed on Oracle Database@Azure Exadata database service, while the Vertex Application Servers are hosted on Azure as an Infrastructure as a Service (IaaS) solution.&lt;/LI&gt;
&lt;LI&gt;Landing Zone: The Azure Landing Zone architecture implements a hub-and-spoke network pattern. Both SAP and Vertex share a single landing zone and subscription, while Oracle Database@Azure is deployed to its own separate landing zone and subscription.&lt;/LI&gt;
&lt;LI&gt;Network: The database VNET is peered with the application VNET hosting Vertex, allowing communication between the Oracle database and Vertex application. The database VNET is also peered with Hub VNET for migrating data from on-prem and setting up DR with DR region.&lt;/LI&gt;
&lt;LI&gt;Migration: Oracle Data Guard is used for the one-time database migration, ensuring minimal downtime and a seamless transition.&lt;/LI&gt;
&lt;LI&gt;HA/DR: High availability is set up across availability zones by leveraging Oracle Data Guard (ODG) with Fast-Start Failover (FSFO). ODG is also part of the DR solution.&lt;/LI&gt;
&lt;LI&gt;Existing Oracle Database software customers can use the Bring Your Own License (BYOL) option or Unlimited License Agreements (ULAs).&lt;/LI&gt;
&lt;/UL&gt;
&lt;H5&gt;&lt;STRONG&gt;Oracle Database@Azure Benefits &lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Given that you can now deploy and use Oracle database services running on OCI within the native Azure portal and APIs, you benefit from an OCI-in-Azure-like experience. Here are some of the key benefits of Oracle Database@Azure.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Optimized performance and sub-millisecond latency between app and data by running Oracle databases on Oracle Cloud Infrastructure (OCI) hosted in Azure datacenters.&lt;/LI&gt;
&lt;LI&gt;SLA, feature, and price parity with Oracle Cloud Infrastructure, along with a collaborative support model with Oracle.&lt;/LI&gt;
&lt;LI&gt;Native integration with Azure services like Microsoft Entra ID for identity and access management, Azure Kubernetes Service (AKS), and Azure AI/ML services. Azure virtual networks provide secure connectivity.&lt;/LI&gt;
&lt;LI&gt;Seamless integration with existing applications and infrastructure, maintaining operational consistency.&lt;/LI&gt;
&lt;LI&gt;Exadata provides extensive automation for installation, upgrades, and patching, reducing manual efforts.&lt;/LI&gt;
&lt;LI&gt;The Exadata database service allows for co-management and scale-out of infrastructure with ultra-low lead times while the customer continues to benefit from database-optimized compute, networking, storage associated with Exadata.&lt;/LI&gt;
&lt;LI&gt;On-premises operational procedures, automation, and best practices can be ported to Azure without losing any knowledge base, ensuring quicker adoption.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H5&gt;&lt;STRONG&gt;QuickStart - Oracle Database@Azure – Exadata service&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;To begin your Oracle Database@Azure journey, reach out to the Oracle sales team or your Oracle sales representative. They will assist you in creating a private offer for the Oracle Exadata database service on the Azure Marketplace. Once the offer is ready, you can accept it and complete the purchase directly through the Azure Marketplace in the Azure Portal.&lt;/P&gt;
&lt;P&gt;After completing the purchase, you can provision the Oracle Exadata Database service for Oracle Database@Azure using the Azure portal, Azure APIs, SDKs, and Terraform.&lt;/P&gt;
&lt;P&gt;For more detailed information on additional Azure region availability, design considerations, and onboarding, please refer to &lt;A href="https://docs.oracle.com/en-us/iaas/Content/database-at-azure/getting-started.htm" target="_blank" rel="noopener"&gt;Getting Started&lt;/A&gt;.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;The deployment of the Exadata Database service on Oracle Database@Azure for Vertex Tax Software simplifies Oracle database deployment, management, and operation. In our experience, on-premises operational procedures, automation, and best practices can be ported to Azure without losing any knowledge base, ensuring quicker adoption.&lt;/P&gt;
      <pubDate>Mon, 17 Mar 2025 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/simplifying-vertex-tax-software-deployment-with-oracle-database/ba-p/4393163</guid>
      <dc:creator>anbugovi</dc:creator>
      <dc:date>2025-03-17T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Use Premium SSD v2 with SAP</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/use-premium-ssd-v2-with-sap/ba-p/4390144</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-clear-both"&gt;Premium SSD v2 is available to Azure customers with a broad set of features and benefits. In this article we will highlight the options available to best use Premium SSD v2 disks for SAP workloads.&lt;/P&gt;
&lt;img /&gt;
&lt;H1 id="mcetoc_1ilm53e6b_1"&gt;&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="mcetoc_1ilm53e6b_3"&gt;Premium SSD v2 basics&lt;/H1&gt;
&lt;P&gt;Different storage options are available depending on the requirements. &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssd-v2" target="_blank" rel="noopener"&gt;Premium SSD v2&lt;/A&gt; differs from the long-established Premium SSD by providing you with more options and modified pricing for each of these options. The main differences over Premium SSD are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Up to 1200 MB/s throughput, low latency, up to 80k IOPS and 64 TiB in size&lt;/LI&gt;
&lt;LI&gt;Any disk size from 1 GiB to 64 TiB in 1 GiB increments with per GiB pricing&lt;/LI&gt;
&lt;LI&gt;Throughput and IOPS configurable and adjustable online&lt;/LI&gt;
&lt;LI&gt;125 MB/s and 3000 IOPS provided as baseline&lt;/LI&gt;
&lt;LI&gt;No&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disk-bursting" target="_blank" rel="noopener"&gt;disk bursting&lt;/A&gt;, no &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-performance" target="_blank" rel="noopener"&gt;host caching&lt;/A&gt; and no &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/how-to-enable-write-accelerator" target="_blank" rel="noopener"&gt;Write Accelerator&lt;/A&gt; with Premium SSD v2&lt;/LI&gt;
&lt;LI&gt;Cost based on selected size + IOPS + throughput&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;While with Premium SSD you have set disk sizes or SKUs, which combine billed capacity with set throughput and IOPS limits, with Premium SSD v2 you can freely set each of the three parameters independently. You want a 20 GiB disk with 1000 MB/s throughput and 20k IOPS? No problem. You want a 5 TiB disk with 200 MB/s and 3k IOPS? Just set the values and Azure will bill you accordingly. Compared with Premium SSD, Premium SSD v2 is cheaper when configured with identical size and throughput.&lt;/P&gt;
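&lt;P&gt;As an illustration of the billing model described above, the following minimal sketch computes the three independently billed components for one disk. The baseline values (3000 IOPS, 125 MB/s) are from the Azure documentation; actual per-unit prices vary by region and are deliberately not modeled here.&lt;/P&gt;

```python
# Premium SSD v2 bills capacity, IOPS and throughput independently:
# capacity per GiB, plus IOPS above the 3000 baseline and throughput
# above the 125 MB/s baseline as separate line items.

BASELINE_IOPS = 3000
BASELINE_MBPS = 125

def billable_components(size_gib, iops, mbps):
    """Return (billed GiB, billed extra IOPS, billed extra MB/s)."""
    extra_iops = max(0, iops - BASELINE_IOPS)
    extra_mbps = max(0, mbps - BASELINE_MBPS)
    return size_gib, extra_iops, extra_mbps

# The 20 GiB / 1000 MB/s / 20k IOPS example from the text:
print(billable_components(20, 20000, 1000))   # (20, 17000, 875)
```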
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This means for your SAP workloads, you can provide both faster overall performance and tune the storage parameters to match your requirements, while adding flexibility. See the Azure documentation for&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssd-v2" target="_blank" rel="noopener"&gt;details on Premium SSD v2&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One important aspect to consider is that, at the current time, Premium SSD v2 can only be attached to zonal VMs, that is, VMs with a set zonal deployment. If your SAP landscape is deployed with regional VMs – with or without availability sets – this is not supported at the time of publishing this blog post. See the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssd-v2" target="_blank" rel="noopener"&gt;documentation details&lt;/A&gt; for updates.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="mcetoc_1ilm53e6b_4"&gt;SAP use cases with Premium SSD v2&lt;/H1&gt;
&lt;P&gt;Where can you use Premium SSD v2 with your SAP workloads? In short, for all situations. Premium SSD v2 is certified for use with SAP workloads – &lt;A class="lia-external-url" href="https://me.sap.com/notes/2015553" target="_blank" rel="noopener"&gt;SAP note 2015553&lt;/A&gt; – and also noted in individual SAP &lt;A class="lia-external-url" href="https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;iaas;ve:24&amp;amp;sort=Latest%20Certification&amp;amp;sortDesc=true" target="_blank" rel="noopener"&gt;certification for SAP HANA&lt;/A&gt;. It can be used for your application layer such as (A)SCS and application servers, your database servers and any infrastructure servers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="mcetoc_1ilm53e6b_5"&gt;Premium SSD v2 for SAP applications and infrastructure servers&lt;/H2&gt;
&lt;P&gt;For the SAP application layer and any generic infrastructure servers (such as DNS, saprouter, etc.), you can utilize Premium SSD v2 for all data disks. One expected benefit is that you can configure disk sizes in 1 GiB increments, no longer doubling the size with each SKU step. If you need an 80 GiB disk containing the /usr/sap physical volume, you set and pay for 80 GiB. Set IOPS and throughput as needed, if more than the baseline is required.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="mcetoc_1ilm53e6b_6"&gt;Premium SSD v2 for SAP HANA&lt;/H2&gt;
&lt;P&gt;For &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/workloads/hana-vm-premium-ssd-v2" target="_blank" rel="noopener"&gt;SAP HANA, Azure documentation&lt;/A&gt; provides a recommended starting configuration with Premium SSD v2. As this is a starting configuration, you might need to modify the parameters for size, throughput and IOPS to fit your needs.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As there is no host caching available with Premium SSD v2, the host (VM) caching setting for each disk must be set to None. This also means that Write Accelerator caching for the HANA log disk(s) is neither available nor needed to meet SAP’s HANA I/O requirements.&lt;/P&gt;
&lt;P&gt;No host caching means no Write Accelerator, which in turn opens a very easy way to use&amp;nbsp;&lt;A class="lia-external-url" href="https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;iaas;ve:24;range%23c:memorySize%23v:ms17%23v:ms6&amp;amp;sort=Latest%20Certification&amp;amp;sortDesc=true" target="_blank" rel="noopener"&gt;E-series VMs for SAP HANA for smaller memory sizes&lt;/A&gt; with only this disk type. Premium SSD v2 delivers low I/O latency, fulfilling the SAP HANA requirements.&lt;/P&gt;
&lt;H2 id="mcetoc_1ilm53e6b_7"&gt;Premium SSD v2 for non-HANA databases&lt;/H2&gt;
&lt;P&gt;Premium SSD v2 is fully supported for every SAP-supported database vendor, and &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/workloads/dbms-guide-general" target="_blank" rel="noopener"&gt;documentation is available&lt;/A&gt; with a recommended starting configuration and database specifics, if any apply.&lt;/P&gt;
&lt;P&gt;You can right-size every filesystem of an SAP database to the exact GiB required, without any fixed disk sizes. The same applies to IOPS and throughput requirements. Databases often have many different filesystems with different growth and performance requirements, such as archive logs, temporary tablespace locations, undo data, etc., depending on the database deployed.&lt;/P&gt;
&lt;P&gt;Non-HANA databases with Premium SSD disks can greatly benefit in specific situations from available &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-performance" target="_blank" rel="noopener"&gt;host caching&lt;/A&gt; type readOnly. Host caching can accelerate many I/O requests on these databases. Host caching is not available for Premium SSD v2, which can require you to adjust the needed disk IOPS or throughput sizing for some disks. There is no one-size-fits-all solution. Analyze the workload, see the available Azure documentation for your database vendor and tune managed disk parameters.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure provides numerous performance metrics on the used infrastructure, giving you a very transparent view into any existing bottlenecks. For a database centric view, &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-portal/azure-portal-dashboards" target="_blank" rel="noopener"&gt;create an Azure Monitor dashboard&lt;/A&gt; or &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-monitor/visualize/workbooks-create-workbook" target="_blank" rel="noopener"&gt;workbook&lt;/A&gt; to view the VM and individual disk data.&lt;/P&gt;
&lt;P&gt;As an example, below is a workbook of a DB VM, with metrics for the VM and disk.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;For storage performance evaluation, look at the following&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-metrics" target="_blank" rel="noopener"&gt;metrics&lt;/A&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;VM
&lt;UL&gt;
&lt;LI&gt;IOPS and bandwidth consumed percentage for the VM overall&lt;BR /&gt;cached | uncached | disk overall&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Disk
&lt;UL&gt;
&lt;LI&gt;Disk read/write IOPS and throughput&lt;BR /&gt;for multi-disk stripe sets, a single disk indicates the usage level of all members&lt;BR /&gt;several Premium SSD disk metrics include IOPS/bytes from the cache&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If you already run SAP in Azure, it is likely using Premium SSD, and you can observe the caching usage. You can also &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/copilot/troubleshoot-disk-performance" target="_blank" rel="noopener"&gt;use Copilot to troubleshoot your disk performance&lt;/A&gt; and display disk bottlenecks. As with any AI-generated advice, verify that the findings are appropriate.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="mcetoc_1ilm53e6b_8"&gt;Premium SSD v2 single disk vs striping&lt;/H1&gt;
&lt;P&gt;For SAP workloads using Premium SSD, many customers deploy multiple disks and stripe over them using LVM or Windows Storage Spaces to reach the necessary performance for database VMs. Such a stripe set combines the throughput and IOPS of the individual member disks. With Premium SSD, unless you depend on bursting, the disk SKU sets the throughput and IOPS limits, which can be insufficient for some filesystems.&lt;/P&gt;
&lt;P&gt;With Premium SSD v2, you have much higher limits for a single disk – 1200 MB/s throughput and up to 80k IOPS – and can specify the values independently of the disk size. The choice between using a single disk and splitting the aggregate performance requirements between multiple disks in a stripe set is yours to make.&lt;/P&gt;
&lt;P&gt;With striping, you have to correctly set the logical volume manager’s volume group settings for stripe size and count, and keep all disks equal in size and performance. This approach does have the following advantages:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;VM storage limits are increasing with every generation&lt;/LI&gt;
&lt;LI&gt;Premium SSD v2 gives each disk a baseline of 125 MB/s and 3000 IOPS&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an example, consider the situation of wanting /hana/data filesystem to provide 2 TiB size, 1000 MB/s and 20000 IOPS.&lt;/P&gt;
&lt;P&gt;As option 1, you can deploy a single disk with required parameters.&lt;/P&gt;
&lt;P&gt;As option 2, using striping you:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Deploy 4 disks, set up a volume group&lt;/LI&gt;
&lt;LI&gt;Each disk 512 GiB (4 × 512 GiB = 2 TiB)&lt;/LI&gt;
&lt;LI&gt;As baseline over the 4 disks, you have 12k IOPS and 500 MB/s in aggregate&lt;/LI&gt;
&lt;LI&gt;Add to each of the 4 disks an additional +2k IOPS, for a total of 5k per disk, and +125 MB/s, for a total of 250 MB/s per disk. This gives you the required aggregate performance and size.&lt;/LI&gt;
&lt;/UL&gt;
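&lt;P&gt;The striping arithmetic above can be sketched as a small calculation: split the aggregate target across the stripe members, but never set a disk below the Premium SSD v2 per-disk baseline. This is a simplified sketch of the sizing logic only; it does not create any Azure resources.&lt;/P&gt;

```python
import math

BASELINE_IOPS = 3000   # per-disk Premium SSD v2 baseline
BASELINE_MBPS = 125

def per_disk_settings(total_gib, total_iops, total_mbps, disks):
    """Split an aggregate size/IOPS/throughput target across a stripe set."""
    size = math.ceil(total_gib / disks)
    iops = max(BASELINE_IOPS, math.ceil(total_iops / disks))
    mbps = max(BASELINE_MBPS, math.ceil(total_mbps / disks))
    return size, iops, mbps

# Example from the text: 2 TiB (2048 GiB), 20000 IOPS, 1000 MB/s on 4 disks.
print(per_disk_settings(2048, 20000, 1000, 4))   # (512, 5000, 250)
```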
&lt;P&gt;The choice to-stripe-or-not is yours to make, as per your workload needs. Consider overall support for Premium SSD v2, how striping affects services like backup or disaster recovery, and whether using striped disks breaks such support.&lt;/P&gt;
&lt;P&gt;If striping, keep the number of disks in a volume group to a sensible number. A common choice is four disks for data and two for log areas. Such disk counts should be sufficient even for large databases. For very large databases (&amp;gt;20 TB), do not use more than 8 disks for database data and 4 disks for log areas or other disks, unless recommended by Microsoft. These are the recommended maximum disk counts for any stripe set in Azure for SAP and should only be exceeded in exceptional circumstances.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="mcetoc_1ilm53e6b_9"&gt;Start using Premium SSD v2 for SAP&lt;/H1&gt;
&lt;P&gt;You want to use Premium SSD v2. Great. How do you best do it for SAP workloads?&lt;/P&gt;
&lt;P&gt;Decide first on your migration path from the three options available:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;With the first option (01) of building new VMs with Premium SSD v2, you are either new to Azure or want to build a new SAP environment for a clean start. Maybe you use your own automation pipeline to deploy Azure resources, such as &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/automation/deployment-framework" target="_blank" rel="noopener"&gt;SDAF&lt;/A&gt;. Once the new VMs are created, copy the SAP workloads through application means – backup/restore, system copy, etc.&lt;/P&gt;
&lt;P&gt;This approach allows you to right-size your disk capacity and performance parameters from the start and set everything according to best practices, starting fresh with Premium SSD v2.&lt;/P&gt;
&lt;P&gt;---------------------------------------------------------------------------------------------------------&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the second option (02), you are already using Premium SSD or another storage type in Azure and want to switch to Premium SSD v2, but you also want to make some architectural changes. Maybe you want to resize disks smaller than they currently are. While you can increase managed disks online (see &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/sapapplications/using-azure-online-disk-expansion-for-sap/3253754" target="_blank" rel="noopener" data-lia-auto-title="blog post for details" data-lia-auto-title-active="0"&gt;blog post for details&lt;/A&gt;), you can’t reduce their size. Maybe you want a different storage setup.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;This approach uses the above steps and gives you the following benefits:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;OS and software configuration needs no changes&lt;/LI&gt;
&lt;LI&gt;Fix or update the previous storage architecture&lt;/LI&gt;
&lt;LI&gt;Right-size disks and disk count, often reducing storage size&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;Copying the data from old to new disks is a downtime activity for databases. Consider temporarily upsizing the Azure VM if you are limited by the total VM storage throughput. Once the copy is complete, adapt your disk mounts, ensure all data was copied, then unmount and eventually delete the source disks.&lt;BR /&gt;---------------------------------------------------------------------------------------------------------&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The third option (03) is to change the existing disks into Premium SSD v2. This option has its own benefits and drawbacks. While it is a downtime activity – you can’t convert a disk while it is attached to a running VM – you get to keep the same resource name and OS setup. At the same time, since you convert existing disks, you can’t reduce the disk size, nor can you reduce the disk count for multi-disk volume groups.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Whichever means you use for the disk change – portal, API, or CLI – you need to be aware of the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-convert-types?tabs=azure-powershell#convert-premium-ssd-v2-disks" target="_blank" rel="noopener"&gt;additional limitations for Premium SSD v2&lt;/A&gt;. The key points, current at the time of publishing, are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Limit of 50 active conversions at the same time per subscription and region&lt;/LI&gt;
&lt;LI&gt;Host (VM) caching needs to be disabled for the disk&lt;/LI&gt;
&lt;LI&gt;The conversion process is a background activity. You will have degraded performance on the specific disk for the duration of this background copy. The CompletionPercent property is available to check the status of each disk.&lt;/LI&gt;
&lt;/UL&gt;
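&lt;P&gt;A conversion can be monitored by polling the disk’s CompletionPercent property until the background copy finishes. The sketch below shows the polling pattern only; in a real script the &lt;EM&gt;get_disk&lt;/EM&gt; callable would wrap an Azure SDK or CLI lookup of the disk resource – that wrapper is an assumption here and is stubbed out so the flow is self-contained.&lt;/P&gt;

```python
import time

def wait_for_conversion(get_disk, poll_seconds=0.0, max_polls=100):
    """Poll CompletionPercent until the background copy finishes."""
    for _ in range(max_polls):
        pct = get_disk()          # CompletionPercent of the disk (a number,
        if pct is None or pct >= 100:   # or None once no conversion is active)
            return True           # conversion finished
        time.sleep(poll_seconds)
    return False                  # gave up before completion

# Stub standing in for a real Azure lookup: advances 25% per poll.
progress = iter([0, 25, 50, 75, 100])
done = wait_for_conversion(lambda: next(progress))
print(done)   # True
```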
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A combination of options (02) and (03) is the creation of new Premium SSD v2 disks from snapshots. A snapshot of another disk is taken as the source, be it a Standard HDD/SSD, a Premium SSD, or another Premium SSD v2 disk. Then a new Premium SSD v2 disk is created with the snapshot as the source, instead of creating an empty disk. This method duplicates a disk of the same size without you needing to copy data within the OS. You can’t reduce the disk size or disk count. As a new disk is created, you have to make OS changes to unmount the old and mount the new disks. However, no data movement inside the VM/OS is necessary, as the new disks contain all data from the source snapshot.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The same &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-convert-types?tabs=azure-powershell#migrate-to-premium-ssd-v2-or-ultra-disk-using-snapshots" target="_blank" rel="noopener"&gt;limitations&lt;/A&gt; apply as noted for disk conversion in option (03).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Logical sector sizes&lt;/H2&gt;
&lt;P&gt;Premium SSD v2 and Ultra Disks introduced the option to specify the logical sector size. The default value is 4096 bytes (4k); both disk types support a 512e sector size as well. This is important for some older database versions, as they might require the use of the 512e sector size. Examples are Oracle Database versions 12.1 and lower and Db2 LUW versions lower than 11.5. See the respective DB documentation for details.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A converted existing disk, whether from a snapshot or through disk type change, will result in a Premium SSD v2 disk with 512e sector size. This has, in our tests and experience, no measurable performance impact. For new Premium SSD v2 disks the recommendation is to use 4k logical sector size, where possible.&lt;/P&gt;
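&lt;P&gt;Inside a Linux VM, the resulting logical sector size can be confirmed through the standard block-layer sysfs interface, for example /sys/block/sdc/queue/logical_block_size (the device name is just an example). A minimal sketch of reading and classifying that value:&lt;/P&gt;

```python
def classify_logical_block_size(raw):
    """Parse a sysfs logical_block_size value and classify it."""
    size = int(raw.strip())
    if size == 4096:
        label = "4k"
    elif size == 512:
        label = "512e"   # 512-byte logical sectors, as converted disks report
    else:
        label = "other"
    return size, label

# Reading the value on a VM (path shown for the example device 'sdc'):
# with open("/sys/block/sdc/queue/logical_block_size") as f:
#     print(classify_logical_block_size(f.read()))
```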
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Recommended migration to Premium SSD v2&lt;/H2&gt;
&lt;P&gt;Carefully consider which migration option fits best to your requirements and specifications. Customers who want to renew their SAP landscapes in Azure – for example, moving to &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/generation-2" target="_blank" rel="noopener"&gt;VM Gen2&lt;/A&gt; deployment, performing OS major version updates, or using &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/azure-vms-no-temp-disk" target="_blank" rel="noopener"&gt;VMs without temporary storage&lt;/A&gt; – will likely deploy new VMs with Premium SSD v2 and migrate the application data through backup/restore or DB replication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We expect many customers will want to adjust disk sizes or align disk striping with updated documentation recommendations. In that case, option (02) – creating new disks and copying data to them – gives you the best choice, especially for database VMs with large disks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For smaller VMs – application VMs and infrastructure servers with often just one or two managed disks – size changes are typically not necessary, while thousands of small files would otherwise have to be moved. Here the disk conversion options are likely the most suitable way to move to Premium SSD v2.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select the combination of migration options that works best for your business.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Deploying new or converting to Premium SSD v2 disks will consume Azure quota for this resource type. By default, up to 100 TiB per region and per subscription is available. A second quota type for the number of Premium SSD v2 disks exists. The technical names for the relevant quotas are PremiumV2DiskCount and PremiumV2DiskSizeInGB. Check both quotas on your Azure subscription(s) and create a support request to raise them, if necessary.&lt;/P&gt;
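&lt;P&gt;Before a larger rollout, a simple pre-flight check against the two quotas can help. The sketch below only models the comparison; the used and limit values would come from your subscription (for example via the portal or CLI), and the default 100 TiB size quota equals 102400 GiB. The count quota of 1000 disks in the example is an assumed placeholder.&lt;/P&gt;

```python
def fits_quota(planned_disks, planned_gib,
               used_disks, used_gib,
               quota_disks, quota_gib):
    """Check a planned deployment against the PremiumV2DiskCount and
    PremiumV2DiskSizeInGB quotas; returns (count fits, size fits)."""
    count_ok = quota_disks - used_disks - planned_disks >= 0
    size_ok = quota_gib - used_gib - planned_gib >= 0
    return count_ok, size_ok

# Example: 20 new disks totalling 40 TiB against the default 100 TiB
# (102400 GiB) size quota and an assumed count quota of 1000 disks.
print(fits_quota(20, 40960, 0, 0, 1000, 102400))   # (True, True)
```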
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Summary&lt;/H1&gt;
&lt;P&gt;Premium SSD v2 has been available in Azure for some time and is fully supported for productive SAP systems. It opens new options with its increased performance and fine-grained, decoupled sizing of disk size, throughput, and IOPS. Use of &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/memory-optimized/e-family?tabs=epsv6%2Ceasv6%2Cev5%2Cedv5%2Ceasv5%2Cepsv5" target="_blank" rel="noopener"&gt;E-series&lt;/A&gt; for SAP HANA (observe SAP-certified VM sizes) is simplified.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Independent settings for size | IOPS | MBps on each disk.&lt;/LI&gt;
&lt;LI&gt;While OS disks are currently not supported, all managed data disks in an SAP system can use Premium SSD v2.&lt;/LI&gt;
&lt;LI&gt;Determine which deployment / migration scenario best applies to your use case.&lt;/LI&gt;
&lt;LI&gt;Clarify if &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-deploy-premium-v2" target="_blank" rel="noopener"&gt;existing limitations&lt;/A&gt; might block your deployment scenario.&lt;/LI&gt;
&lt;LI&gt;Host caching for disk access is not available with Premium SSD v2. This includes Write Accelerator, which is not necessary for SAP HANA with Premium SSD v2.&lt;/LI&gt;
&lt;LI&gt;Some workloads might benefit from read cache today and therefore require additional IOPS/throughput with Premium SSD v2. Monitor usage and limits for storage operations on both the individual disk and VM level before migrating.&lt;/LI&gt;
&lt;LI&gt;In general, do not combine different storage types on a single VM. Such combinations – for example, a few disks using Premium SSD and others using Premium SSD v2 or Ultra Disk – should be limited to exceptional workloads only.&lt;/LI&gt;
&lt;LI&gt;Logical sector size 4k | 512e available – migrated disks will show 512e but expect no application performance impact. Old database versions might require 512e.&lt;/LI&gt;
&lt;LI&gt;Converting existing disks to Premium SSD v2 uses a background copy process, for both disk type change and creating a new disk from a snapshot. Check CompletionPercent on each disk; expect reduced performance while the copy is active.&lt;/LI&gt;
&lt;LI&gt;Check Azure quotas for Premium SSD v2 (#disks, overall size) and supported regions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssd-v2" target="_blank" rel="noopener"&gt;Premium SSD v2 information and limitations&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-incremental-snapshots?tabs=azure-powershell#incremental-snapshots-of-premium-ssd-v2-and-ultra-disks" target="_blank" rel="noopener"&gt;Premium SSD v2 snapshot and limitations&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-machines/premium-storage-performance#disk-caching" target="_blank" rel="noopener"&gt;Disk caching on Azure&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/workloads/hana-vm-premium-ssd-v2" target="_blank" rel="noopener"&gt;SAP HANA and Premium SSD v2&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/workloads/dbms-guide-sqlserver" target="_blank" rel="noopener"&gt;SQL Server on Azure VMs&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/workloads/dbms-guide-ibm" target="_blank" rel="noopener"&gt;Db2 on Azure VMs&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/workloads/dbms-guide-oracle" target="_blank" rel="noopener"&gt;Oracle on Azure VMs&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sap/workloads/dbms-guide-sapase" target="_blank" rel="noopener"&gt;SAP ASE on Azure VMs&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 07 Mar 2025 17:37:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/use-premium-ssd-v2-with-sap/ba-p/4390144</guid>
      <dc:creator>RobertBiro</dc:creator>
      <dc:date>2025-03-07T17:37:29Z</dc:date>
    </item>
    <item>
      <title>Using Azure Chaos Studio to Fortify SAP Systems Testing and Resiliency</title>
      <link>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-chaos-studio-to-fortify-sap-systems-testing-and/ba-p/4387503</link>
      <description>&lt;P&gt;&lt;STRONG class="lia-align-center"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Co-Authors&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;A class="lia-external-url" href="https://linkedin.com/in/ganbu" target="_blank" rel="noopener"&gt;Anbu Govindasamy&lt;/A&gt;, &lt;A class="lia-external-url" href="https://linkedin.com/in/jaywitt" target="_blank" rel="noopener"&gt;Jay Witt&lt;/A&gt; &lt;A class="lia-external-url" href="https://linkedin.com/in/rosen-yanev-7569a3b5" target="_blank" rel="noopener"&gt;Rosen Yanev &lt;/A&gt;and &lt;A class="lia-external-url" href="https://linkedin.com/in/rsponholtz" target="_blank" rel="noopener"&gt;Ross Sponholtz&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/chaos-studio/" target="_blank" rel="noopener"&gt;Azure Chaos Studio&lt;/A&gt; is an Azure service that lets you inject faults into your systems to see how they respond to disruptions. In this blog, we discuss how to apply Azure Chaos Studio to SAP use cases by introducing &lt;STRONG&gt;additional resource pressure or failure scenarios&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;SAP Testing Requirement and Challenges&lt;/STRONG&gt;&lt;/H5&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Qualifying Azure Environment:&amp;nbsp;&lt;/STRONG&gt;Customers often need to finalize the sizes of Azure VMs, storage, and network for specific SAP systems. Simulating production &lt;STRONG&gt;peak&lt;/STRONG&gt; workloads to test and finalize the Azure environment and SKU is an important step in this process.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Application Test Plans:&amp;nbsp;&lt;/STRONG&gt;Developing and executing end-to-end application testing for one-time migrations, additional country onboarding, or business transformation projects can be complex. While some customers have established test plans, others are still looking for effective ways to build them.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Time-Consuming Process: &lt;/STRONG&gt;Testing is time-intensive and requires significant effort from various SAP teams. It involves preparing test cases, developing test data, and conducting the tests. Repeated testing for fine-tuning further increases the required man-hours.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H5&gt;&lt;STRONG&gt;SAP Traditional Testing Approaches&lt;/STRONG&gt;&lt;/H5&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Custom Test Scripts and Partner Solutions:&lt;/STRONG&gt; Customers often rely on custom-written test scripts or partner solutions to generate volume and perform stress tests. This helps in developing solutions and mitigation plans.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Oracle and HANA Testing&lt;/STRONG&gt;: For Oracle, Oracle Real Application Testing (RAT) can be leveraged, and for HANA, HANA capture and replay can be used. However, both approaches require substantial investment in preparing the environment.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;SAP customers are constantly looking for solutions to address peak workload situations and failure scenarios.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Introducing Azure Chaos Studio for SAP Use Case&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Azure Chaos Studio offers a comprehensive &lt;A href="https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-fault-library" target="_blank" rel="noopener"&gt;fault and action library&lt;/A&gt;, and we have selected a specific set of faults and actions to test with SAP, recognizing that Azure Chaos Studio has broader potential. We have divided our test scenarios into two main categories: stress testing, and failure testing with HA/DR use cases. These tests aim to measure, understand, and enhance application resilience. Azure Chaos Studio complements SAP testing, but it is not meant to replace end-to-end SAP business process testing.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Stress Testing: &lt;/STRONG&gt;In the following scenarios, we introduce &lt;STRONG&gt;additional&lt;/STRONG&gt; pressure on CPU, network, and memory while &lt;STRONG&gt;performing SAP end-to-end load testing&lt;/STRONG&gt;. We repeat the test with different pressure levels to surface and address configuration- and resilience-related findings.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;CPU Pressure&lt;/LI&gt;
&lt;LI&gt;Network Packet Loss&lt;/LI&gt;
&lt;LI&gt;Physical Memory Pressure&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;CPU Pressure Example: The screenshot below shows the Azure Chaos Studio configuration for applying 50% CPU pressure:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The screenshot below shows the CPU spike after the CPU pressure activity is triggered:&lt;/P&gt;
&lt;img /&gt;
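&lt;P&gt;For readers who script experiments rather than configure them in the portal, the same fault can be expressed as an action in the experiment's JSON definition. The sketch below is illustrative only: the fault URN and the pressureLevel parameter follow the public fault library and should be verified against the current documentation, and the selector ID is a placeholder for your own target selector.&lt;/P&gt;

```json
{
  "type": "continuous",
  "name": "urn:csci:microsoft:agent:cpuPressure/1.0",
  "duration": "PT10M",
  "parameters": [
    { "key": "pressureLevel", "value": "50" }
  ],
  "selectorId": "SapAppServerSelector"
}
```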
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class=""&gt;&lt;STRONG&gt;Failure Testing: &lt;/STRONG&gt;For testing failure scenarios, we used the following faults with SAP process High Availability solution to validate the proper function of the cluster and failovers:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Kill Process&lt;/LI&gt;
&lt;LI&gt;Network Disconnect&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;"Kill Process" Example: In below screenshot, showing Azure Chaos Studio Configuration for killing HANA Index process:&lt;/P&gt;
&lt;img /&gt;
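&lt;P&gt;As a rough JSON sketch of the same fault (the fault URN and the processName parameter follow the public fault library and should be verified against the current documentation; the HANA process name, duration, and selector ID here are illustrative placeholders):&lt;/P&gt;

```json
{
  "type": "continuous",
  "name": "urn:csci:microsoft:agent:killProcess/1.0",
  "duration": "PT5M",
  "parameters": [
    { "key": "processName", "value": "hdbindexserver" }
  ],
  "selectorId": "HanaPrimarySelector"
}
```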
&lt;H5&gt;&lt;STRONG&gt;Next Steps&lt;/STRONG&gt;&lt;/H5&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;QuickStart Guide: &lt;/STRONG&gt;Use the quickstart guide to get started with Azure Chaos Studio: &lt;A href="https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-quickstart-azure-portal" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-quickstart-azure-portal&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Scenario Building: &lt;/STRONG&gt;With Azure Chaos Studio’s steps and branches, you can build complex, repeatable scenarios that can then be applied across whole landscapes. Each step contains one or more branches of fault actions: branches within a step run in parallel, while steps run serially. Creating scenarios doesn’t require deep technical knowledge, and the intuitive interface makes it easy to adjust specific settings.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Region Availability: &lt;/STRONG&gt;Azure Chaos Studio region availability is documented here: &lt;A href="https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-region-availability" target="_blank" rel="noopener"&gt;Regional availability of Azure Chaos Studio | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
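&lt;P&gt;To make the steps-and-branches model concrete, the skeleton below sketches an experiment with a single step whose two branches apply CPU and memory pressure in parallel; any further steps added to the array would run serially after this one. The fault URNs and parameters follow the public fault library and should be verified against the current documentation; all names and selector IDs are placeholders.&lt;/P&gt;

```json
{
  "steps": [
    {
      "name": "Step 1 - apply resource pressure",
      "branches": [
        {
          "name": "CPU pressure branch",
          "actions": [
            {
              "type": "continuous",
              "name": "urn:csci:microsoft:agent:cpuPressure/1.0",
              "duration": "PT10M",
              "parameters": [ { "key": "pressureLevel", "value": "50" } ],
              "selectorId": "SapAppServerSelector"
            }
          ]
        },
        {
          "name": "Memory pressure branch",
          "actions": [
            {
              "type": "continuous",
              "name": "urn:csci:microsoft:agent:physicalMemoryPressure/1.0",
              "duration": "PT10M",
              "parameters": [ { "key": "pressureLevel", "value": "50" } ],
              "selectorId": "SapAppServerSelector"
            }
          ]
        }
      ]
    }
  ]
}
```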
&lt;H5&gt;&lt;STRONG&gt;Benefits of Azure Chaos Studio&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;In our testing, Azure Chaos Studio enabled us to explore testing and resilience use cases that were not achievable with SAP application test suites alone. By introducing additional resource pressure, we were able to push the boundaries more effectively and uncover failure scenarios that were previously undetectable.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Summary&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Azure Chaos Studio can be used to introduce&amp;nbsp;&lt;STRONG&gt;additional resource pressure or simulate failure scenarios.&lt;/STRONG&gt; We recommend that customers enhance their existing SAP test cases with Azure Chaos Studio techniques to add scenarios that are currently not possible, thereby improving resilience and failure handling.&lt;/P&gt;
&lt;P&gt;Useful Links:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-overview" target="_blank" rel="noopener"&gt;What is Azure Chaos Studio? | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 05 Mar 2025 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-chaos-studio-to-fortify-sap-systems-testing-and/ba-p/4387503</guid>
      <dc:creator>anbugovi</dc:creator>
      <dc:date>2025-03-05T16:00:00Z</dc:date>
    </item>
  </channel>
</rss>

