SAP on Azure
SAP Business Data Cloud Now Available on Microsoft Azure
We're thrilled to announce that SAP Business Data Cloud (SAP BDC), including SAP Databricks, is now available on Microsoft Azure, marking a major milestone in our strategic partnership with SAP and Databricks and our commitment to empowering customers with cutting-edge Data & AI capabilities.

SAP BDC is a fully managed SaaS solution designed to unify, govern, and activate SAP and third-party data for advanced analytics and AI-driven decision-making. Customers can now deploy SAP BDC on Azure in East US, West US, and West Europe, with additional regions coming soon, and unlock transformative insights from their enterprise data with the scale, security, and performance of Microsoft's trusted cloud platform.

Why SAP BDC on Azure Is a Game-Changer for Data & AI

Deploying SAP BDC on Azure enables organizations to accelerate their Data & AI initiatives by modernizing their SAP Business Warehouse systems and adopting a modern data architecture that includes SAP HANA Cloud, data lake files, and connectivity to Microsoft technology. Whether it's building AI-powered intelligent applications, enabling semantically rich data products, or driving predictive analytics, SAP BDC on Azure provides the foundation for scalable, secure, and context-rich decision-making.

Running SAP BDC workloads on Microsoft Azure unlocks the full potential of enterprise data by integrating SAP systems with non-SAP data using Microsoft's powerful Data & AI services, enabling customers to build intelligent applications grounded in critical business context.

Why Azure Is an Ideal Platform for Running SAP BDC

Microsoft Azure stands out as a leading cloud platform for hosting SAP solutions, including SAP BDC. Azure's global infrastructure, high-performance networking, and powerful Data & AI capabilities make it an ideal foundation for large-scale SAP workloads. When organizations face complex data environments and need seamless interoperability across tools, Azure's resilient backbone and enterprise-grade services provide the scalability and reliability essential for building a robust SAP data architecture.

Under the Hood: SAP Databricks in SAP BDC Is Powered by Azure Databricks

A key differentiator of SAP BDC on Azure is that SAP Databricks, a core component of SAP BDC, runs on Azure Databricks, Microsoft's fully managed first-party Databricks service. This makes Microsoft Azure the optimal cloud for running Databricks workloads. It uniquely offers:

- Native integration with Microsoft Entra ID for seamless access control.
- Optimized performance with Power BI, delivering unmatched analytics speed.
- Enterprise-grade security and compliance, inherent to Azure's first-party services.
- Joint engineering and unified support from Microsoft and Databricks.
- Zero-copy data sharing between SAP BDC and Azure Databricks, enabling frictionless collaboration across platforms.

This deep integration ensures that customers benefit from the full power of Azure's AI, analytics, and governance capabilities while running SAP workloads.

Expanding Global Reach: What's Next

While SAP BDC is now live in three Azure regions (East US, West US, and West Europe), we're just getting started. Over the next few months, availability will expand to additional Azure regions such as Brazil and Canada.
For the remaining regions, a continuously updated roadmap can be found on the SAP Roadmap Explorer website.

Final Thoughts

This launch reinforces Microsoft Azure's longstanding relationship with SAP, backed by more than 30 years of trusted partnership and co-innovation. With SAP BDC now available on Azure, customers can confidently modernize their data estate, unlock AI-driven insights, and drive business transformation at scale. Stay tuned as we continue to expand availability and bring even more Data & AI innovations to our joint customers over the next few months.

Building a Custom Continuous Export Pipeline for Azure Application Insights
1. Introduction

MonitorLiftApp is a modular Python application designed to export telemetry data from Azure Application Insights to custom sinks such as Azure Data Lake Storage Gen2 (ADLS). This solution is ideal when Event Hub integration is not feasible, providing a flexible, maintainable, and scalable alternative for organizations that need to move telemetry data for analytics, compliance, or integration scenarios.

2. Problem Statement

Exporting telemetry from Azure Application Insights is commonly achieved via Event Hub streaming. However, there are scenarios where Event Hub integration is not possible due to cost, complexity, or architectural constraints. In such cases, organizations need a reliable, incremental, and customizable way to extract telemetry data and move it to a destination of their choice, such as ADLS, SQL, or REST APIs.

3. Investigation

Why not Event Hub?
- Event Hub integration may not be available in all environments.
- Some organizations have security or cost concerns.
- Custom sinks (like ADLS) may be required for downstream analytics or compliance.

Alternative approach:
- Use the Azure Monitor Query SDK to periodically export data.
- Build a modular, pluggable Python app for maintainability and extensibility.

4. Solution

4.1 Architecture Overview

MonitorLiftApp is structured into four main components:
- Configuration: Centralizes all settings and credentials.
- Main Application: Orchestrates the export process.
- Query Execution: Runs KQL queries and serializes results.
- Sink Abstraction: Allows easy swapping of data targets (e.g., ADLS, SQL).

4.2 Configuration (app_config.py)

All configuration is centralized in app_config.py, making it easy to adapt the app to different environments.

CONFIG = {
    "APPINSIGHTS_APP_ID": "<your-app-id>",
    "APPINSIGHTS_WORKSPACE_ID": "<your-workspace-id>",
    "STORAGE_ACCOUNT_URL": "<your-adls-url>",
    "CONTAINER_NAME": "<your-container>",
    # KQL query templates; each value is appended to app('<app-id>'). at query time
    "Dependencies_KQL": "dependencies | limit 10000",
    "Exceptions_KQL": "exceptions | limit 10000",
    "Pages_KQL": "pageViews | limit 10000",
    "Requests_KQL": "requests | limit 10000",
    "Traces_KQL": "traces | limit 10000",
    "START_STOP_MINUTES": 5,
    "TIMER_MINUTES": 5,
    "CLIENT_ID": "<your-client-id>",
    "CLIENT_SECRET": "<your-client-secret>",
    "TENANT_ID": "<your-tenant-id>"
}

Explanation: This configuration file contains all the parameters needed for connecting to Azure resources, defining KQL queries, and scheduling the export job. Centralizing these settings keeps the app easy to maintain and adapt.
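Storing credentials in a checked-in configuration file is easy to get wrong. As an optional variation (an assumption on my part, not part of the original app), environment variables could be allowed to override the secret entries at the end of app_config.py:

# Optional hardening sketch (not in the original app): let environment variables
# override the secret values so they don't have to live in source control.
import os

for secret_key in ("CLIENT_ID", "CLIENT_SECRET", "TENANT_ID"):
    CONFIG[secret_key] = os.environ.get(f"MONITORLIFT_{secret_key}", CONFIG[secret_key])

The MONITORLIFT_* variable names are hypothetical; any naming convention works as long as it matches your deployment environment.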
4.3 Main Application (main.py)

The main application is the entry point and can be run as a Python console app. It loads the configuration, sets up credentials, and runs the export job on a schedule.

from app_config import CONFIG
from azure.identity import ClientSecretCredential
from monitorlift.query_runner import run_all_queries
from monitorlift.target_repository import ADLSTargetRepository


def build_env():
    env = {}
    # Copy the connection settings and scheduling values into the runtime environment.
    # (START_STOP_MINUTES and TIMER_MINUTES are included so env.get() picks up the
    # configured values instead of always falling back to the defaults.)
    keys = [
        "APPINSIGHTS_WORKSPACE_ID", "APPINSIGHTS_APP_ID",
        "STORAGE_ACCOUNT_URL", "CONTAINER_NAME",
        "START_STOP_MINUTES", "TIMER_MINUTES"
    ]
    for k in keys:
        env[k] = CONFIG[k]
    # Copy every KQL query template as well.
    for k, v in CONFIG.items():
        if k.endswith("KQL"):
            env[k] = v
    return env


class MonitorLiftApp:
    def __init__(self, client_id, client_secret, tenant_id):
        self.env = build_env()
        self.credential = ClientSecretCredential(tenant_id, client_id, client_secret)
        self.target_repo = ADLSTargetRepository(
            account_url=self.env["STORAGE_ACCOUNT_URL"],
            container_name=self.env["CONTAINER_NAME"],
            cred=self.credential
        )

    def run(self):
        run_all_queries(self.target_repo, self.credential, self.env)


if __name__ == "__main__":
    import time

    client_id = CONFIG["CLIENT_ID"]
    client_secret = CONFIG["CLIENT_SECRET"]
    tenant_id = CONFIG["TENANT_ID"]
    app = MonitorLiftApp(client_id, client_secret, tenant_id)
    timer_interval = app.env.get("TIMER_MINUTES", 5)
    print(f"Starting continuous export job. Interval: {timer_interval} minutes.")
    while True:
        print("\n[INFO] Running export job at", time.strftime('%Y-%m-%d %H:%M:%S'))
        try:
            app.run()
            print("[INFO] Export complete.")
        except Exception as e:
            print(f"[ERROR] Export failed: {e}")
        print(f"[INFO] Sleeping for {timer_interval} minutes...")
        time.sleep(timer_interval * 60)

Explanation: The app can be run from any machine with Python and the required libraries installed, locally or in the cloud (VM, container, etc.). No compilation is needed; just run it as a Python script. Optionally, you can package it as an executable using tools like PyInstaller. The main loop schedules the export job at regular intervals.

4.4 Query Execution (query_runner.py)

This module orchestrates the KQL queries, runs them in parallel, and serializes the results.

import datetime
import json
from concurrent.futures import ThreadPoolExecutor, as_completed

from azure.monitor.query import LogsQueryClient


def run_query_for_kql_var(kql_var, target_repo, credential, env):
    query_name = kql_var[:-4]  # strip the "_KQL" suffix
    print(f"[START] run_query_for_kql_var: {kql_var}")
    query_template = env[kql_var]
    app_id = env["APPINSIGHTS_APP_ID"]
    workspace_id = env["APPINSIGHTS_WORKSPACE_ID"]
    try:
        latest_ts = target_repo.get_latest_timestamp(query_name)
        print(f"Latest timestamp for {query_name}: {latest_ts}")
    except Exception as e:
        print(f"Error getting latest timestamp for {query_name}: {e}")
        return
    # Export an incremental window starting from the last saved timestamp.
    start = latest_ts
    time_window = env.get("START_STOP_MINUTES", 5)
    end = start + datetime.timedelta(minutes=time_window)
    query = f"app('{app_id}')." + query_template
    logs_client = LogsQueryClient(credential)
    try:
        response = logs_client.query_workspace(workspace_id, query, timespan=(start, end))
        if response.tables and len(response.tables[0].rows) > 0:
            print(f"Query for {query_name} returned {len(response.tables[0].rows)} rows.")
            table = response.tables[0]
            rows = [
                [v.isoformat() if isinstance(v, datetime.datetime) else v for v in row]
                for row in table.rows
            ]
            result_json = json.dumps({"columns": table.columns, "rows": rows})
            target_repo.save_results(query_name, result_json, start, end)
            print(f"Saved results for {query_name}")
    except Exception as e:
        print(f"Error running query or saving results for {query_name}: {e}")


def run_all_queries(target_repo, credential, env):
    print("[INFO] run_all_queries triggered.")
    kql_vars = [k for k in env if k.endswith('KQL') and not k.startswith('APPSETTING_')]
    print(f"Number of KQL queries to run: {len(kql_vars)}. KQL vars: {kql_vars}")
    with ThreadPoolExecutor(max_workers=len(kql_vars)) as executor:
        futures = {
            executor.submit(run_query_for_kql_var, kql_var, target_repo, credential, env): kql_var
            for kql_var in kql_vars
        }
        for future in as_completed(futures):
            kql_var = futures[future]
            try:
                future.result()
            except Exception as exc:
                print(f"[ERROR] Exception in query {kql_var}: {exc}")

Explanation: Queries are executed in parallel for efficiency. Results are serialized and saved to the configured sink. Incremental export is achieved by tracking the latest timestamp for each query.

4.5 Sink Abstraction (target_repository.py)

This module abstracts the sink implementation, allowing you to swap out ADLS for SQL, a REST API, or other targets.

from abc import ABC, abstractmethod
import datetime

from azure.storage.blob import BlobServiceClient


class TargetRepository(ABC):
    @abstractmethod
    def get_latest_timestamp(self, query_name):
        pass

    @abstractmethod
    def save_results(self, query_name, data, start, end):
        pass


class ADLSTargetRepository(TargetRepository):
    def __init__(self, account_url, container_name, cred):
        self.account_url = account_url
        self.container_name = container_name
        self.credential = cred
        self.blob_service_client = BlobServiceClient(account_url=account_url, credential=cred)

    def get_latest_timestamp(self, query_name, fallback_hours=3):
        blob_client = self.blob_service_client.get_blob_client(self.container_name, f"{query_name}/latest_timestamp.txt")
        try:
            timestamp_str = blob_client.download_blob().readall().decode()
            return datetime.datetime.fromisoformat(timestamp_str)
        except Exception as e:
            if hasattr(e, 'error_code') and e.error_code == 'BlobNotFound':
                print(f"[INFO] No timestamp blob for {query_name}, starting from {fallback_hours} hours ago.")
            else:
                print(f"[WARNING] Could not get latest timestamp for {query_name}: {type(e).__name__}: {e}")
            return datetime.datetime.utcnow() - datetime.timedelta(hours=fallback_hours)

    def save_results(self, query_name, data, start, end):
        filename = f"{query_name}/{start:%Y%m%d%H%M}_{end:%Y%m%d%H%M}.json"
        blob_client = self.blob_service_client.get_blob_client(self.container_name, filename)
        try:
            blob_client.upload_blob(data, overwrite=True)
            print(f"[SUCCESS] Saved results to blob for {query_name} from {start} to {end}")
        except Exception as e:
            print(f"[ERROR] Failed to save results to blob for {query_name}: {type(e).__name__}: {e}")
        # Advance the watermark so the next run exports only newer telemetry.
        ts_blob_client = self.blob_service_client.get_blob_client(self.container_name, f"{query_name}/latest_timestamp.txt")
        try:
            ts_blob_client.upload_blob(end.isoformat(), overwrite=True)
        except Exception as e:
            print(f"[ERROR] Failed to update latest timestamp for {query_name}: {type(e).__name__}: {e}")

Explanation: The sink abstraction allows you to easily switch between different storage backends. The ADLS implementation saves both the results and the latest timestamp for incremental exports.
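Because a new sink only has to implement the two abstract methods, swapping targets is straightforward. As a hedged sketch (not part of the original app), a local-filesystem sink that mirrors the ADLS behaviour could look like this:

import datetime
import os

from monitorlift.target_repository import TargetRepository


class LocalFileTargetRepository(TargetRepository):
    """Hypothetical sink that writes results and the watermark to a local folder."""

    def __init__(self, base_dir, fallback_hours=3):
        self.base_dir = base_dir
        self.fallback_hours = fallback_hours

    def get_latest_timestamp(self, query_name):
        ts_path = os.path.join(self.base_dir, query_name, "latest_timestamp.txt")
        try:
            with open(ts_path) as f:
                return datetime.datetime.fromisoformat(f.read().strip())
        except FileNotFoundError:
            # Fall back to a recent window on the first run, like the ADLS sink.
            return datetime.datetime.utcnow() - datetime.timedelta(hours=self.fallback_hours)

    def save_results(self, query_name, data, start, end):
        folder = os.path.join(self.base_dir, query_name)
        os.makedirs(folder, exist_ok=True)
        filename = f"{start:%Y%m%d%H%M}_{end:%Y%m%d%H%M}.json"
        with open(os.path.join(folder, filename), "w") as f:
            f.write(data)
        # Advance the watermark so the next run exports only newer telemetry.
        with open(os.path.join(folder, "latest_timestamp.txt"), "w") as f:
            f.write(end.isoformat())

To use it, pass an instance of this class to run_all_queries in place of ADLSTargetRepository; the rest of the app does not need to change.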
5. End-to-End Setup Guide

Prerequisites:
- Python 3.8+
- Azure SDKs: azure-identity, azure-monitor-query, azure-storage-blob
- Access to Azure Application Insights and ADLS
- Service principal credentials with appropriate permissions

Steps:
1. Clone or download the repository and place all code files in a working directory.
2. Configure app_config.py with your Azure resource IDs, credentials, and KQL queries.
3. Install the dependencies: pip install azure-identity azure-monitor-query azure-storage-blob
4. Run the application locally with python monitorliftapp/main.py, or deploy it to a VM, Azure Container Instance, or as a scheduled job.
5. (Optional) Package the app as a standalone executable with PyInstaller or a similar tool.
6. Verify data movement by checking your ADLS container for the exported JSON files.

6. Screenshots

(The original post includes screenshots of the Application Insights logs and the exported data in ADLS.)

7. Final Thoughts

MonitorLiftApp provides a robust, modular, and extensible solution for exporting telemetry from Azure Monitor to custom sinks. Its design supports parallel query execution, pluggable storage backends, and incremental data movement, making it suitable for a wide range of enterprise scenarios.
Gen1 to Gen2 Azure VM Upgrade in Rolling Fashion

Introduction:

Azure offers Trusted Launch, a seamless solution designed to significantly enhance the security of Generation 2 virtual machines (VMs) and provide robust protection against advanced and persistent attack techniques. Trusted Launch is composed of several coordinated infrastructure technologies, each of which can be enabled independently. Together they create multiple layers of defense, keeping virtual machines secure against sophisticated threats. With Trusted Launch, we can confidently improve our security posture and safeguard our VMs from potential vulnerabilities.

Upgrading Azure VMs from Generation 1 (Gen1) to Generation 2 (Gen2) involves several steps to ensure a smooth transition without data loss or disruption.

Rolling fashion upgrade process:

First and foremost, it is crucial to have a complete backup of the virtual machines before starting the upgrade. This protects valuable data in case of any unforeseen issues during the process and ensures the data is safe and recoverable.

It is equally important to perform any new process or implementation in pre-production systems first, so that potential issues are identified and resolved before moving to the production environment and the integrity and stability of the systems is maintained.

Run the pre-validation steps before you bring down the VM (a consolidated script combining these checks appears at the end of this section):

SSH into the VM: connect to the Gen1 Linux VM.

Identify the boot device (with sudo):
bootDevice=$(echo "/dev/$(lsblk -no pkname $(df /boot | awk 'NR==2 {print $1}'))")

Check the partition type (must return 'gpt'):
sudo blkid $bootDevice -o value -s PTTYPE

Validate the EFI system partition (e.g., /dev/sda2 or /dev/sda3):
sudo fdisk -l $bootDevice | grep EFI | awk '{print $1}'

Check the EFI mount point (/boot/efi must be in /etc/fstab):
sudo grep -qs '/boot/efi' /etc/fstab && echo '/boot/efi present in /etc/fstab' || echo '/boot/efi missing from /etc/fstab'

Once the complete backup is in place and the pre-validation steps are completed, the SAP Basis team proceeds with stopping the application. Once the application has been taken down, the Unix team shuts down the operating system on the ERS servers.

The Azure team then follows the steps below and performs the generation upgrade on the selected, approved servers:

Start the VM:
Start-AzVM -ResourceGroupName myResourceGroup -Name myVm
(or start it from the Azure portal)

Log in to the Azure portal and check that the VM generation has successfully changed to V2.

Then:
- Unix team to validate the OS on the approved servers.
- SAP Basis team to generate a new license key based on the new hardware, apply it, and start the application.
- Unix team to perform failover of the ASCS cluster.
- SAP Basis team to stop the application server.
- Unix team to shut down the OS on ERS for the selected VMs and validate the OS.
- SAP Basis team to apply the new hardware key and start the application.
- Unix team to perform failover of the ASCS cluster.
- Azure team to work on capacity analysis to find the path forward for hosting Mv2 VMs in the same PPG group.
- Once successfully completed, test rollback on at least one application server for rollback planning.
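The individual pre-validation checks listed above can be combined into one small script and run on each Gen1 VM before the maintenance window. This is only a consolidation of the commands already shown; run it with sudo on the VM:

#!/bin/bash
# Consolidated Gen1 pre-validation checks (same commands as above).
set -euo pipefail

bootDevice="/dev/$(lsblk -no pkname "$(df /boot | awk 'NR==2 {print $1}')")"
echo "Boot device: $bootDevice"

echo -n "Partition table type (must be gpt): "
sudo blkid "$bootDevice" -o value -s PTTYPE

echo -n "EFI system partition: "
sudo fdisk -l "$bootDevice" | grep EFI | awk '{print $1}'

sudo grep -qs '/boot/efi' /etc/fstab \
  && echo '/boot/efi present in /etc/fstab' \
  || echo '/boot/efi missing from /etc/fstab'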
Here are other methods to achieve this:

Method 1: Trusted Launch Direct Upgrade

Prerequisites check: Ensure your subscription is onboarded to the preview feature Gen1ToTLMigrationPreview under the Microsoft.Compute namespace. The VM should be configured with a Trusted Launch supported size family and OS version, and a successful backup should be in place.

Update the guest OS volume: Update the guest OS volume to a GPT disk layout with an EFI system partition. Use the PowerShell-based orchestration script for MBR2GPT validation and conversion.

Enable Trusted Launch: Deallocate the VM using Stop-AzVM, then enable Trusted Launch by setting -SecurityType to TrustedLaunch with the Update-AzVM command.

Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm
Update-AzVM -ResourceGroupName myResourceGroup -VMName myVm -SecurityType TrustedLaunch -EnableSecureBoot $true -EnableVtpm $true

Validate and start the VM: Validate the security profile in the updated VM configuration, start the VM, and verify that you can sign in using RDP or SSH.

Method 2: Using Azure Backup

Verify backup data: Ensure you have valid and up-to-date backups of your Gen1 VMs, including both OS disks and data disks, and verify that the backups completed successfully and can be restored.

Create Gen2 VMs: Create new Gen2 VMs with the desired specifications and configuration. There is no need to start them initially; just have them created and ready for when they are needed.

Restore VM backups: In the Azure portal, go to the Azure Backup service. Select "Recovery Services vaults" from the left-hand menu and select the existing backup vault that contains the backups of the Gen1 VMs. Inside the Recovery Services vault, go to the "Backup Items" section and select the VM you want to restore. Initiate a restore operation for the VM. During the restore process, choose the target resource group and the target VM (the newly created Gen2 VM).

Restore the OS disk: Choose to restore the OS disk of the Gen1 VM to the newly created Gen2 VM. Azure Backup restores the OS disk to the new VM, effectively migrating it to Generation 2.

Restore the data disks: Once the OS disk is restored and the Gen2 VM is operational, restore the data disks. Repeat the restore process for each data disk, attaching them to the Gen2 VM as needed.

Verify and test: Verify that the Gen2 VM is functioning correctly and all data is intact (see the guest-level check sketched at the end of this section). Test thoroughly to ensure all applications and services are running as expected.

Decommission Gen1 VMs (optional): Once the migration is successful and you have verified that the Gen2 VMs are working correctly, decommission the original Gen1 VMs.

Important notes:
- Before proceeding with any production migration, thoroughly test this process in a non-production environment to ensure its success and identify any potential issues.
- Make sure you have a backup of critical data and configurations before attempting any migration.
- While this approach focuses on using Azure Backup for restoring the VMs, other migration strategies are available that may better suit your specific scenario. Evaluate them based on your requirements and constraints.
- Migrating VMs between generations involves changes in the underlying virtual hardware, so thorough testing and planning are essential to ensure a smooth transition without data loss or disruptions.
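As a hedged post-upgrade spot check (an assumption, not part of the original procedure), you can confirm from inside the Linux guest that the VM now boots via UEFI and that the boot disk still carries a GPT layout; the /sys/firmware/efi directory only exists when the kernel was started in UEFI mode:

#!/bin/bash
# Post-upgrade verification sketch: UEFI boot and GPT layout.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot detected (Generation 2 / Trusted Launch boot)."
else
    echo "Legacy BIOS boot detected (still Generation 1)."
fi

bootDevice="/dev/$(lsblk -no pkname "$(df /boot | awk 'NR==2 {print $1}')")"
echo -n "Partition table type: "
sudo blkid "$bootDevice" -o value -s PTTYPE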
Why is a Generation 2 upgrade without Trusted Launch not supported?

Trusted Launch provides foundational compute security for VMs at no additional cost, which means the security posture can be enhanced without incurring extra expenses. Moreover, Trusted Launch VMs are largely on par with Generation 2 VMs in terms of features and performance, so upgrading to Generation 2 without enabling Trusted Launch does not provide any added benefit.

Unsupported Gen1 VM configurations:

A Gen1 to Trusted Launch VM upgrade is NOT supported if the Gen1 VM is configured with any of the following options:

- Operating system: Windows Server 2016, Azure Linux, Debian, and any other operating system not listed under the Trusted Launch supported operating system (OS) versions. Ref link: Trusted Launch for Azure VMs - Azure Virtual Machines | Microsoft Learn
- VM size: a Gen1 VM configured with a VM size not listed under the Trusted Launch supported size families. Ref link: Trusted Launch for Azure VMs - Azure Virtual Machines | Microsoft Learn
- Azure Backup: a Gen1 VM configured with Azure Backup using the Standard policy. As a workaround, migrate the Gen1 VM backups from the Standard to the Enhanced policy. Ref link: Move VM backup - standard to enhanced policy in Azure Backup - Azure Backup | Microsoft Learn

Conclusion:

We will enhance Azure virtual machines by transitioning from Gen1 to Gen2. By implementing these approaches, we can seamlessly unlock improved security and performance. This transition will not only bolster security measures but also significantly enhance overall performance, ensuring operations run more smoothly and efficiently. Make this upgrade to ensure virtual machines are more robust and capable of handling future demands.

Ref links:
- Upgrade Gen1 VMs to Trusted launch - Azure Virtual Machines | Microsoft Learn
- GitHub - Azure/Gen1-Trustedlaunch: aka.ms/Gen1ToTLUpgrade
- Enable Trusted launch on existing Gen2 VMs - Azure Virtual Machines | Microsoft Learn
Deep dive into Pacemaker cluster for Azure SAP systems optimization

Introduction:

Pacemaker clusters on Azure provide a centralized way to monitor and maintain high availability for SAP systems. With a properly configured cluster, you can safeguard the health of your systems through automated alerts and comprehensive management tooling. Organizations gain efficiency and peace of mind knowing their clusters are managed optimally: the cluster framework simplifies ongoing management, making it easier to keep track of nodes and resources and to respond promptly to any issues that arise.

Current customer challenges:

- Configuration: Common misconfigurations occur when customers don't follow up-to-date HA setup guidance from learn.microsoft.com, leading to failover issues.
- Testing: Manual testing leads to untested failover scenarios and configuration drift, and limited expertise in HA tools complicates troubleshooting.

Key use cases for SAP HA testing automation:

To maintain the highest standards in testing and validation, three practices stand out.

First, automate the validation process on new OS versions. This helps ensure that the Pacemaker cluster configuration remains up to date and functions smoothly with the latest OS releases, so compatibility issues can be addressed promptly.

Next, implement loop tests that run on a regular cadence. These tests catch regressions early and ensure that customer systems remain robust and reliable over time; continuous monitoring is essential to maintain optimal performance.

Furthermore, validate high availability (HA) configurations against the documented SAP on Azure best practices. This ensures effective failover and quick recovery, minimizes downtime, and maximizes system uptime.

SAP Testing Automation Framework (Public Preview):

The recommended approach for validating Pacemaker configurations in SAP HANA clusters is the SAP Testing Automation Framework. The framework includes a comprehensive set of automated test cases designed to validate cluster behavior under various scenarios such as primary node crashes, manual resource migrations, and service failures. Additionally, it rigorously checks OS versions, Azure roles for fencing, SAP parameters, and Pacemaker/Corosync configurations to ensure everything is set up correctly. Low-level administrative commands are used to validate the captured values against best practices, with a particular focus on constraints and meta-attributes. This thorough validation process ensures that clusters are reliable, resilient, and adhere to industry standards.
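For reference, the kinds of low-level checks the framework automates can also be run manually on a cluster node. A short sketch (tool availability depends on the distribution: crmsh on SLES, pcs on RHEL):

# One-shot cluster status, including inactive resources (SLES and RHEL)
sudo crm_mon -r -1

# SLES (crmsh): dump the full configuration, including constraints and meta-attributes
sudo crm configure show

# RHEL (pcs): detailed status, resource definitions, and constraints
sudo pcs status --full
sudo pcs resource config
sudo pcs constraint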
SAP System High Availability on Azure:
- SAP HANA scale-up
- SAP Central Services

Support Matrix:

Linux distributions and supported releases:
- SUSE Linux Enterprise Server (SLES): 15 SP4, 15 SP5, 15 SP6
- Red Hat Enterprise Linux (RHEL): 8.8, 8.10, 9.2, 9.4

High availability configuration patterns (component and type / cluster fencing / storage):
- SAP Central Services (ENSA1 or ENSA2) with Azure fencing agent, on Azure Files or ANF
- SAP Central Services (ENSA1 or ENSA2) with iSCSI (SBD device), on Azure Files or ANF
- SAP HANA scale-up with Azure fencing agent, on Azure managed disk or ANF
- SAP HANA scale-up with iSCSI (SBD device), on Azure managed disk or ANF

High availability test scenarios:

Configuration checks
- Database tier (HANA): HA resource parameter validation, Azure Load Balancer configuration
- Central Services: HA resource parameter validation, SAPControl, Azure Load Balancer configuration

Failover tests
- Database tier (HANA): HANA resource migration, primary node crash
- Central Services: ASCS resource migration, ASCS node crash

Process and services
- Database tier (HANA): index server crash, node kill, kill SBD service
- Central Services: message server, enqueue server, enqueue replication server, SAPStartSRV process

Network tests
- Database tier (HANA): block network
- Central Services: block network

Infrastructure
- Database tier (HANA): virtual machine crash, freeze file system (storage)
- Central Services: manual restart, HA failover to node

Reference links:

SLES:
- Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure | Microsoft Learn
- Troubleshoot startup issues in a SUSE Pacemaker cluster - Virtual Machines | Microsoft Learn

RHEL:
- Set up Pacemaker on RHEL in Azure | Microsoft Learn
- Troubleshoot Azure fence agent issues in an RHEL Pacemaker cluster - Virtual Machines | Microsoft Learn

STAF:
- GitHub - Azure/sap-automation-qa: the repository supporting quality assurance for SAP systems running on Azure.

Conclusion:

This innovative tool is designed to significantly streamline and enhance high availability deployments of SAP systems on Azure by reducing potential misconfigurations and minimizing manual effort. Please note that because the framework performs multiple failovers sequentially to validate cluster behavior, it is not recommended to run it directly on production systems. It is intended for new high availability deployments that are not yet live, or for non-business-critical systems.
Announcing Public Preview for Business Process Solutions

In today's AI-powered enterprises, success hinges on access to reliable, unified business information. Whether you are deploying AI-augmented workflows or fully autonomous agentic solutions, one thing is clear: trusted, consistent data is the fuel that drives intelligent outcomes. Yet in many organizations, data remains fragmented across best-of-breed applications, creating blind spots in cross-functional processes and throwing roadblocks in the path of automation. Microsoft is dedicated to tackling these challenges by delivering a unified data foundation that accelerates AI adoption, simplifies automation, and reduces risk, empowering businesses to unlock the full potential of unified data analytics and agentic intelligence.

Our new solution offers cross-functional insights across previously siloed environments and includes:

- Prebuilt data models for enterprise business applications in Microsoft Fabric
- Source system data mappings and transformations
- Prebuilt dashboards and reports in Power BI
- Prebuilt AI agents in Copilot Studio (coming soon)
- Integrated security and compliance

By unifying Microsoft's Fabric and AI solutions, we can rapidly accelerate transformation and de-risk AI rollout through repeatable, reliable, prebuilt solutions.

Functional Scope

Our new solution currently supports a set of business applications and functional areas, enabling organizations to break down silos and drive actionable insights across their core processes. The platform covers key domains such as:

- Finance: Delivers a comprehensive view of financial performance, integrating data from general ledger, accounts receivable, and accounts payable systems. This enables finance teams to analyze trends, monitor compliance, and optimize cash flow management, all from within Power BI. The associated Copilot agent not only provides access to this data via natural language but will also enable financial postings.
- Sales: Provides a complete perspective on customers' opportunity-to-cash journeys, from initial opportunity through invoicing and payment, via Power BI reports and dashboards. The associated Copilot agent can help improve revenue forecasting by connecting structured ERP and CRM data with unstructured data from Microsoft 365, while also tracking sales pipeline health and identifying bottlenecks.
- Procurement: Supports strategic procurement and supplier management, consolidating purchase orders, goods receipts, and vendor invoicing data into a complete spend dashboard. This empowers procurement teams to optimize sourcing strategies, manage supplier risk, and control spend.
- Manufacturing (coming soon): Will extend coverage to manufacturing and production processes, enabling organizations to optimize resource allocation and monitor production efficiency.

Each item within Business Process Solutions is delivered as a complete, business-ready offering. These models are thoughtfully designed to ensure that organizations can move seamlessly from raw data to actionable execution. Key features include:
- Facts and dimensions: Each model is structured to capture both transactional details (facts) and contextual information (dimensions), supporting granular analysis and robust reporting across business processes.
- Transformations: Built-in transformations automatically prepare data for reporting and analytics, making it compatible with Microsoft Fabric. For example, when a business user needs to compare sales results from Europe, Asia, and North America, the solution's transformations handle currency conversion behind the scenes. This ensures that results are consistent across regions, making analysis straightforward and reliable, without the need for manual intervention or complex configuration.
- Insight to action: Customers will be able to leverage prebuilt Copilot agents within Business Process Solutions to turn insight into action. These agents are deeply integrated not only with Microsoft Fabric and Microsoft Teams but also with connected source applications, enabling users to take direct, contextual actions across systems based on real-time insights. By connecting unstructured data sources such as emails, chats, and documents from Microsoft 365 apps, the agents can provide a holistic, contextualized view to support smarter decisions. With embedded triggers and intelligent agents, automated responses can be initiated based on new insights, streamlining decision-making and enabling proactive, data-driven operations. Ultimately, this empowers teams not just to understand what is happening at a holistic level, but also to take faster, smarter actions with greater confidence.
- Authorizations: Data models are tailored to respect organizational security and access policies, ensuring that sensitive information is protected and accessible only to authorized users. The same user-credential principles apply to the Copilot agents when interacting with or updating the source system in the user context.

Behind the scenes, the solution automatically provisions the required objects and infrastructure to build the data warehouse, removing the usual complexity of bringing data together. It guarantees consistency and reliability, so organizations can focus on extracting value from their data rather than managing technical details. This reliable data foundation serves as one of the key informants of agentic business processes.

Accelerated Insights with Prebuilt Analytics

Building on these robust data models, Business Process Solutions offers a suite of prebuilt Power BI reports tailored to common business processes. These reports provide immediate access to key metrics and trends, such as financial performance, sales effectiveness, and procurement efficiency. Designed for rapid deployment, they allow organizations to:

- Start analyzing data from day one, without lengthy setup or customization.
- Adapt existing reports to the organization's exact business needs.
- Demonstrate best practices for leveraging data models in analytics and decision-making.

This approach accelerates time-to-value and empowers users to explore new analytical scenarios and drive continuous improvement.

Extensibility and Customization

Every organization is unique, and our new solution is designed to support this, allowing you to adapt analytics and data models to fit your specific processes and requirements. You can customize scope items, bring in your own tables and views, integrate new data sources as your business evolves, and combine data across Microsoft Fabric for deeper insights. Similarly, the associated agents will be customizable from Copilot Studio to adapt to your specific enterprise application configuration. This flexibility ensures that, no matter how your organization operates, Business Process Solutions helps you unlock the full value of your data.

Data Integration

Business Process Solutions uses the same connectivity options as Microsoft Fabric and Copilot Studio but goes further by embedding best practices that make integration simpler and more effective.
We recognize that no single pattern can address the diverse needs of all business applications. We also understand that many businesses have already invested in data extraction tools, which is why our solution supports a wide range of options, from native connectivity to third-party tools that bring specialized capabilities to the table. With Business Process Solutions, data can be worked with in a reliable and high-performing way, whether dealing with massive volumes or complex data structures.

Getting Started

If your organization is ready to unlock the value of unified analytics, getting started is simple. Just send us a request using the form at https://aka.ms/JoinBusAnalyticsPreview. Our team will guide you through the next steps and help you begin your journey.
Backup SAP Oracle Databases Using Azure VM Backup Snapshots

This blog article provides a comprehensive step-by-step guide for backing up SAP Oracle databases using Azure VM backup snapshots, ensuring data safety and integrity. (An example command sequence is sketched after this list.)

- Installation of CIFS utilities: The process begins with the installation of cifs-utils on Oracle Linux, which is the recommended OS for running Oracle databases in the cloud.
- Setting up environment variables: Define the necessary environment variables for the resource group and storage account names.
- Creating SMB credentials: Create a folder for the SMB credentials and retrieve the storage account key, ensuring the appropriate permissions are in place.
- Mounting the SMB file share: Check the accessibility of the storage account and mount the SMB file share, which serves as a backup location for archived logs.
- Preparing the Oracle database for backup: Place the Oracle database in hot backup mode to ensure a consistent backup while allowing ongoing transactions.
- Initiating the snapshot backup: Once the VM backup is configured, initiate a snapshot backup to capture the state of the virtual machine, including the Oracle database.
- Restoration process: To restore the Oracle database from the backup, follow the outlined steps, including updating IP addresses and starting the database listener.
- Final steps and verification: Verify the configuration and ensure that all necessary backups complete successfully, including the SMB file share.
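As a hedged sketch of the core steps (storage account, share, and credential file names below are placeholders; BEGIN/END BACKUP requires the database to run in ARCHIVELOG mode):

# 1. Mount the SMB share that will hold archived redo logs.
sudo mkdir -p /mnt/orabackup
sudo mount -t cifs "//<storageaccount>.file.core.windows.net/orabackup" /mnt/orabackup \
    -o credentials=/etc/smbcredentials/<storageaccount>.cred,dir_mode=0777,file_mode=0777,serverino

# 2. As the oracle OS user, put the database into hot (online) backup mode.
sudo su - oracle -c 'sqlplus -s "/ as sysdba"' <<'SQL'
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER DATABASE BEGIN BACKUP;
SQL

# 3. Trigger the on-demand Azure VM (snapshot) backup, e.g. from the Azure portal.

# 4. Take the database out of backup mode once the snapshot phase has completed.
sudo su - oracle -c 'sqlplus -s "/ as sysdba"' <<'SQL'
ALTER DATABASE END BACKUP;
SQL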
Azure Files NFS Encryption In Transit for SAP on Azure Systems

Azure Files NFS volumes now support encryption in transit via TLS. With this enhancement, Azure Files NFS v4.1 offers the robust security that modern enterprises require, without compromising performance, by ensuring all traffic between clients and servers is fully encrypted. Azure Files NFS data can now be encrypted end to end: at rest, in transit, and across the network.

Using stunnel, an open-source TLS wrapper, Azure Files encrypts the TCP stream between the NFS client and Azure Files with strong AES-GCM encryption, without needing Kerberos. This ensures data confidentiality while eliminating the need for complex setups or external authentication systems like Active Directory.

The AZNFS utility package simplifies encrypted mounts by installing and setting up stunnel on the client (Azure VMs). The AZNFS mount helper mounts the NFS shares with TLS support. The mount helper initializes a dedicated stunnel client process for each storage account's IP address. The stunnel client process listens on a local port for inbound traffic and then redirects the encrypted NFS client traffic to port 2049, where the NFS server is listening.

The AZNFS package runs a background job called aznfswatchdog. It ensures that stunnel processes are running for each storage account and cleans up after all shares from a storage account are unmounted. If a stunnel process terminates unexpectedly, the watchdog process restarts it. For more details, refer to the following document: How to encrypt data in transit for NFS shares.

Availability in Azure regions

All regions that support Azure Premium Files now support encryption in transit.

Supported Linux releases

For the SAP on Azure environment, Azure Files NFS encryption in transit (EiT) is available for the following operating system releases:
- SLES for SAP 15 SP4 onwards
- RHEL for SAP 8.6 onwards (EiT is currently not supported for file systems managed by Pacemaker clusters on RHEL.)

Refer to SAP Note 1928533 for operating system supportability for SAP on Azure systems.

How to deploy Encryption in Transit (EiT) for Azure Files NFS shares

Refer to the SAP on Azure deployment planning guide about using Azure Premium Files NFS and SMB for SAP workloads. As described in the planning guide, the following are the supported uses of Azure Files NFS shares for SAP workloads, and EiT can be used for all of these scenarios:
- sapmnt volume for a distributed SAP system
- transport directory for an SAP landscape
- /hana/shared for HANA scale-out (review carefully the considerations for sizing /hana/shared, as an appropriately sized /hana/shared volume contributes to the system's stability)
- file interface between your SAP landscape and other applications

Deploy the Azure Files NFS storage account. Refer to the standard documentation for creating the Azure Files storage account, file share, and private endpoint: Create an NFS Azure file share. Note: EiT can be enforced for all file shares in the storage account by enabling the 'secure transfer required' option.

Deploy the mount helper (AZNFS) package on the Linux VM. Follow the instructions for your Linux distribution to install the package.

Create the directories to mount the file shares:
mkdir -p <full path of the directory>

Mount the NFS file share. Refer to the section for mounting the Azure Files NFS EiT file share in Linux VMs. To mount the file share permanently, add the mount commands to '/etc/fstab':
vi /etc/fstab

sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 aznfs noresvport,vers=4,minorversion=1,sec=sys,_netdev 0 0

# Mount the file systems
mount -a

Notes:
- The file system above is an example to illustrate the mount command syntax.
- When adding an NFS mount entry to /etc/fstab, the fstype is normally "nfs". However, to use the AZNFS mount helper and EiT, we need the fstype "aznfs", which is not known to the operating system; at boot time the server tries to mount these entries before the watchdog is active, and they may fail. Always add the "_netdev" option to the /etc/fstab entries to make sure the shares are mounted on reboot only after the required services (such as the network) are active.
- The "notls" option can be added to the mount command if you don't want to use EiT but still want to use the AZNFS mount helper to mount the file system. Note that EiT and non-EiT methods cannot be mixed for different Azure Files NFS file systems in the same Azure VM; mount commands may fail if both methods are used in the same VM.
- The mount helper supports private-endpoint-based connections for Azure Files NFS EiT.
- If the SAP VM is custom-domain joined, the custom DNS FQDN or short names defined in DNS can be used for the file share in '/etc/fstab'. To verify hostname resolution, check with the 'nslookup <hostname>' and 'getent hosts <hostname>' commands.

Mount the NFS file share as a Pacemaker cluster resource for SAP Central Services.

In a high availability setup of SAP Central Services, the file system may be used as a resource in the Pacemaker cluster and needs to be mounted using Pacemaker cluster commands. In the Pacemaker commands that set up the file system as a cluster resource, change the mount type from 'nfs' to 'aznfs'. It is also recommended to use '_netdev' in the options parameter (an illustrative resource definition is sketched below).
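A hypothetical SLES example, modeled on the Filesystem resource definitions in the ASCS/ERS guides (resource name, share path, and mount point are placeholders); note fstype='aznfs' and the '_netdev' option:

sudo crm configure primitive fs_NW1_ASCS Filesystem \
    device='sapnfs.file.core.windows.net:/sapnfsafs/NW1ascs' \
    directory='/usr/sap/NW1/ASCS00' fstype='aznfs' \
    options='noresvport,vers=4,minorversion=1,sec=sys,_netdev' \
    op start timeout=60s interval=0 \
    op stop timeout=60s interval=0 \
    op monitor interval=20s timeout=40s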
The following are the SAP Central Services setup scenarios in which Azure Files NFS is used as a Pacemaker resource and Azure Files NFS EiT can be used:
- Azure VMs high availability for SAP NW on SLES with NFS on Azure Files
- Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files

For SUSE Linux: SLES 15 SP4 (for SAP) and higher releases recognize 'aznfs' as a file system type in the Pacemaker resource agent. SUSE recommends the simple mount approach for high availability setups of SAP Central Services, in which all file systems are mounted using '/etc/fstab' only.

For RHEL Linux: RHEL 8.6 (for SAP) and higher releases will recognize 'aznfs' as a file system type in the Pacemaker resource agent. At the time of writing this blog, 'aznfs' is not yet recognized by the Filesystem resource agent (RA) on RHEL, so this setup can't be used at the moment.

For SAP HANA scale-out with HSR setup

Azure Files NFS EiT can be used for SAP HANA scale-out with HSR setups as described in the following documents:
- SAP HANA scale-out with HSR and Pacemaker on SLES
- SAP HANA scale-out with HSR and Pacemaker on RHEL

Mount the '/hana/shared' file system with EiT by defining the file system type as 'aznfs' in '/etc/fstab'. It is also recommended to use '_netdev' in the options parameter.

For SUSE Linux: In the "Create file system resource" section for the SAP HANA high availability "SAPHanaSR-ScaleOut" package, where a dummy file system cluster resource is created to monitor and report failures of the '/hana/shared' file system, continue to follow the documented steps as-is with 'fstype=nfs4'. The '/hana/shared' file system will still use EiT as defined in '/etc/fstab'. For the SAP HANA high availability "SAPHanaSR-angi" package, no further actions are needed to use Azure Files NFS EiT.

For RHEL Linux: In the "Create file system resource" section, replace the file system type 'nfs' with 'aznfs' in the Pacemaker resource configuration for the '/hana/shared' file systems.

Validation of in-transit data encryption for Azure Files NFS

Refer to the "Verify that the in-transit data encryption succeeded" section to check and confirm that EiT is working successfully.

Summary

Go ahead with EiT! Simplified deployment of encryption in transit for Azure Files Premium NFS (locally redundant storage / zone-redundant storage) strengthens the security footprint of production and non-production SAP on Azure environments.
SAP Web Dispatcher on Linux with High Availability Setup on Azure

1. Introduction

The SAP Web Dispatcher component is used for load balancing SAP HTTP(S) traffic among the SAP application servers. It works as a "reverse proxy" and is the entry point for HTTP(S) requests into the SAP environment, which consists of one or more SAP NetWeaver systems. This blog provides detailed guidance on setting up high availability for a standalone SAP Web Dispatcher on the Linux operating system on Azure.

There are different options to set up high availability for SAP Web Dispatcher:
- Active/passive high availability setup using a Linux Pacemaker cluster (SUSE or Red Hat) with a virtual IP/hostname defined in an Azure load balancer.
- Active/active high availability setup by deploying multiple parallel instances of SAP Web Dispatcher across different Azure virtual machines (running either SUSE or Red Hat) and distributing traffic using an Azure load balancer.

We will walk through the configuration steps for both high availability scenarios in this blog.

2. Active/Passive HA Setup of SAP Web Dispatcher

2.1. System Design

The high-level architecture diagram of an HA SAP production environment on Azure highlights the standalone SAP Web Dispatcher (WD) HA setup. In this active/passive design, the primary node of the SAP Web Dispatcher receives the user requests and transfers (and load balances) them to the backend SAP application servers. If the primary node becomes unavailable, the Linux Pacemaker cluster fails the SAP Web Dispatcher over to the secondary node.

Users connect to the SAP Web Dispatcher using the virtual hostname (FQDN) and virtual IP address defined in the Azure load balancer. The Azure load balancer health probe port is activated by the Pacemaker cluster on the primary node, so all user connections to the virtual IP/hostname are redirected by the Azure load balancer to the active SAP Web Dispatcher. The SAP Help documentation describes this HA architecture as "High Availability of SAP Web Dispatcher with External HA Software".

The following are the advantages of the active/passive SAP WD setup:
- The Linux Pacemaker cluster continuously monitors the active SAP WD node and the services running on it. In any error scenario, the active node is fenced by the Pacemaker cluster and the secondary node is made active. This ensures the best user experience around the clock.
- Complete automation of error detection and of the start/stop functionality of SAP WD.
- It is less challenging to define an application-level SLA when Pacemaker manages the SAP WD. Azure provides a VM-level SLA of 99.99% if the VMs are deployed in Availability Zones.

The following components are needed to set up HA SAP Web Dispatcher on Linux:
- A pair of SAP-certified VMs on Azure with a supported Linux operating system. Cross-Availability-Zone deployment is recommended for a higher VM-level SLA.
- Azure Files (premium) for the 'sapmnt' NFS share, which is available/mounted on both VMs for the SAP Web Dispatcher.
- An Azure load balancer for configuring the virtual IP and hostname (in DNS) of the SAP Web Dispatcher.
- A configured Linux Pacemaker cluster.
- Installation of SAP Web Dispatcher on both VMs with the same SID and system number. It is recommended to use the latest version of SAP Web Dispatcher.
- The Pacemaker resource agent configured for the SAP Web Dispatcher application.

2.2. Deployment Steps

This section provides detailed steps for an HA active/passive SAP Web Dispatcher deployment for both supported Linux operating systems (SUSE and Red Hat).
Please refer to SAP Note 1928533 for SAP on Azure certified VMs, SAPS values, and supported operating system versions for the SAP environment.

In the steps below, 'For SLES' applies to the SLES operating system and 'For RHEL' applies to the RHEL operating system. If no operating system is mentioned for a step, it applies to both. The following prefixes are also used:
[A]: Applicable to all nodes.
[1]: Applicable to node 1 only.
[2]: Applicable to node 2 only.

Deploy the VMs (of the desired SKU) in the availability zones and choose the operating system image as SLES/RHEL for SAP. The following names are used in this blog:
- Node 1: webdisp01
- Node 2: webdisp02
- Virtual hostname: eitwebdispha

Follow the standard SAP on Azure documentation for the base Pacemaker setup of the SAP Web Dispatcher VMs. Either an SBD device or the Azure fence agent can be used for fencing in the Pacemaker cluster.
- For SLES: Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure
- For RHEL: Set up Pacemaker on Red Hat Enterprise Linux in Azure

The remaining setup steps are derived from the SAP ASCS/ERS HA setup documents and the SUSE/RHEL blogs on SAP WD setup. It is highly recommended to read the following documents:
- For SLES: High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files. SUSE blog: SAP Web Dispatcher High Availability on Cloud with SUSE Linux.
- For RHEL: High availability for SAP NetWeaver on VMs on RHEL with NFS on Azure Files. RHEL blog: How to manage standalone SAP Web Dispatcher instances using the RHEL HA Add-On - Red Hat Customer Portal.

Deploy the Azure Standard Load Balancer for defining the virtual IP of the SAP Web Dispatcher. In this example, the following setup is used:
- Frontend IP: 10.50.60.45 (virtual IP of SAP Web Dispatcher)
- Backend pool: node 1 and node 2 VMs
- Health probe port: 62320 (set probeThreshold=2)
- Load balancing rule: HA ports enabled, floating IP enabled, idle timeout 30 minutes

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer; enabling TCP timestamps causes the health probes to fail. Set the "net.ipv4.tcp_timestamps" OS parameter to '0'. For details, see Load Balancer health probes. Run the following command to set this parameter, and add or update the parameter in /etc/sysctl.conf to make it permanent:

sudo sysctl net.ipv4.tcp_timestamps=0

When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address) Standard Azure load balancer, there is no outbound internet connectivity unless additional configuration is performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios.

Configure NFS for the 'sapmnt' and SAP WD instance file systems on Azure Files. Deploy the Azure Files storage account (ZRS) and create file shares for 'sapmnt' and the SAP WD instance directory (/usr/sap/SID/Wxx). Connect it to the VNet of the SAP VMs using a private endpoint.
- For SLES: Refer to the "Deploy an Azure Files storage account and NFS shares" section for detailed steps.
- For RHEL: Refer to the "Deploy an Azure Files storage account and NFS shares" section for detailed steps.

Mount the NFS volumes.

[A] For SLES: The NFS client and other required resources come pre-installed.

[A] For RHEL: Install the NFS client and other resources.
sudo yum -y install nfs-utils resource-agents resource-agents-sap

[A] Mount the NFS file system on both VMs.

Create the shared directories:

sudo mkdir -p /sapmnt/WD1
sudo mkdir -p /usr/sap/WD1/W00
sudo chattr +i /sapmnt/WD1
sudo chattr +i /usr/sap/WD1/W00

[A] Mount the file system that will not be controlled by the Pacemaker cluster:

echo "sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-sapmnt /sapmnt/WD1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 2" >> /etc/fstab
mount -a

Prepare for the SAP Web Dispatcher HA installation.

[A] For SUSE: Install the latest version of the SUSE connector:

sudo zypper install sap-suse-cluster-connector

[A] Set up host name resolution (including the virtual hostname). You can either use a DNS server or modify /etc/hosts on all nodes.

[A] Configure the SWAP file. Edit the '/etc/waagent.conf' file and change the following parameters:

ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2000

[A] Restart the agent to activate the change:

sudo service waagent restart

[A] For RHEL: Follow the SAP Notes based on the RHEL OS version:
- SAP Note 2002167 for RHEL 7.x
- SAP Note 2772999 for RHEL 8.x
- SAP Note 3108316 for RHEL 9.x

Create the SAP WD instance file system, virtual IP, and probe port resources for SAP Web Dispatcher.

[1] For SUSE:

# Keep node 2 in standby
sudo crm node standby webdisp02

# Configure file system, virtual IP, and probe resources
sudo crm configure primitive fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-su-usrsap' directory='/usr/sap/WD1/W00' fstype='nfs' options='noresvport,vers=4,minorversion=1,sec=sys' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_WD1_W00 IPaddr2 \
  params ip=10.50.60.45 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_WD1_W00 azure-lb port=62320 \
  op monitor timeout=20s interval=10

sudo crm configure group g-WD1_W00 fs_WD1_W00 nc_WD1_W00 vip_WD1_W00

Make sure that all the resources in the cluster are started and running on node 1. Check the status using the command 'crm status'.

[1] For RHEL:

# Keep node 2 in standby
sudo pcs node standby webdisp02

# Create file system, virtual IP, and probe resources
sudo pcs resource create fs_WD1_W00 Filesystem device='sapnfsafs.privatelink.file.core.windows.net:/sapnfsafs/webdisp-rh-usrsap' \
  directory='/usr/sap/WD1/W00' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
  --group g-WD1_W00

sudo pcs resource create vip_WD1_W00 IPaddr2 \
  ip=10.50.60.45 \
  --group g-WD1_W00

sudo pcs resource create nc_WD1_W00 azure-lb port=62320 \
  --group g-WD1_W00

Make sure that all the resources in the cluster are started and running on node 1. Check the status using the command 'pcs status'.

[1] Install SAP Web Dispatcher on the first node.

For RHEL: Allow access to SWPM. This rule is not permanent; if you reboot the machine, run the command again.

sudo firewall-cmd --zone=public --add-port=4237/tcp

Run SWPM:

./sapinst SAPINST_USE_HOSTNAME=<virtual hostname>

Enter the virtual hostname and instance number, provide the S/4HANA message server details for the backend connections, and continue with the SAP Web Dispatcher installation. Check the status of the SAP WD.

[1] Stop the SAP WD and disable the systemd service. This step applies only if the SAP startup framework is managed by systemd, as per SAP Note 3115048.
# Log in as the <sid>adm user
sapcontrol -nr 00 -function Stop

# Log in as the root user
systemctl disable SAPWD1_00.service

[1] Move the file system, virtual IP, and probe port resources for SAP Web Dispatcher to the second node.

For SLES:
sudo crm node online webdisp02
sudo crm node standby webdisp01

For RHEL:
sudo pcs node unstandby webdisp02
sudo pcs node standby webdisp01

NOTE: Before proceeding to the next steps, check that the resources successfully moved to node 2.

[2] Set up SAP Web Dispatcher on the second node. To set up the SAP WD on node 2, copy the following files and directories from node 1 to node 2, and perform the other tasks on node 2 as described below.

Note: Ensure that the permissions, owners, and group names of all copied items on node 2 are the same as on node 1. Before copying, save a copy of the existing files on node 2.

Files to copy:
# For SLES and RHEL
/usr/sap/sapservices
/etc/systemd/system/SAPWD1_00.service
/etc/polkit-1/rules.d/10-SAPWD1-00.rules
/etc/passwd
/etc/shadow
/etc/group
# For RHEL
/etc/gshadow

Folders to copy (after copying, rename the hostname in the environment file names):
/home/wd1adm
/home/sapadm
/usr/sap/ccms
/usr/sap/tmp

Create the 'SYS' directory in the /usr/sap/WD1 folder, and create all subdirectories and soft links as they exist on node 1.

[2] Install the SAP Host Agent:
- Extract the SAPHOSTAGENT.SAR file.
- Run the command to install it: ./saphostexec -install
- Check whether the SAP Host Agent is running successfully: /usr/sap/hostctrl/exe/saphostexec -status

[2] Start the SAP WD on node 2 and check the status:

sapcontrol -nr 00 -function StartService WD1
sapcontrol -nr 00 -function Start
sapcontrol -nr 00 -function GetProcessStatus

[1] For SLES: Update the instance profile:

vi /sapmnt/WD1/profile/WD1_W00_wd1webdispha

# Add the following lines.
service/halib = $(DIR_EXECUTABLE)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

[A] Configure the SAP users after the installation:

sudo usermod -aG haclient wd1adm

[A] Configure the keepalive parameter, and add the parameter in /etc/sysctl.conf to set the value permanently:

sudo sysctl net.ipv4.tcp_keepalive_time=300

Create the SAP Web Dispatcher resource in the cluster.

For SLES:

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_WD1_W00 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=WD1_W00_wd1webdispha \
  START_PROFILE="/usr/sap/WD1/SYS/profile/WD1_W00_wd1webdispha" \
  AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp"

sudo crm configure modgroup g-WD1_W00 add rsc_sap_WD1_W00
sudo crm node online webdisp01
sudo crm configure property maintenance-mode="false"

For RHEL:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_WD1_W00 SAPInstance \
  InstanceName=WD1_W00_wd1webdispha START_PROFILE="/sapmnt/WD1/profile/WD1_W00_wd1webdispha" \
  AUTOMATIC_RECOVER=false MONITOR_SERVICES="sapwebdisp" \
  op monitor interval=20 on-fail=restart timeout=60 \
  --group g-WD1_W00

sudo pcs node unstandby webdisp01
sudo pcs property set maintenance-mode=false

[A] For RHEL: Add firewall rules for the SAP Web Dispatcher and Azure load balancer health probe ports on both nodes:

sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62320,44300,8000}/tcp

Verify that the SAP Web Dispatcher cluster is running successfully:
- Check the "Insights" blade of the Azure load balancer in the portal. It shows that connections are redirected to one of the nodes.
Check that the backend S/4HANA connection is working using the SAP Web Dispatcher Administration link.

Run the sapwebdisp configuration check:

sapwebdisp pf=/sapmnt/WD1/profile/WD1_W00_wd1webdispha -checkconfig

Test the cluster setup

For SLES:
Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document Azure VMs high availability for SAP NetWeaver on SLES (for the ASCS/ERS cluster). We can run the following test cases (from the above link), which are applicable to the SAP WD component:
Test HAGetFailoverConfig and HACheckFailoverConfig
Manually migrate the SAP Web Dispatcher resource
Test HAFailoverToNode
Simulate node crash
Blocking network communication
Test manual restart of the SAP WD instance

For RHEL:
Pacemaker cluster testing for SAP Web Dispatcher can be derived from the document Azure VMs high availability for SAP NetWeaver on RHEL (for the ASCS/ERS cluster). We can run the following test cases (from the above link), which are applicable to the SAP WD component:
Manually migrate the SAP Web Dispatcher resource
Simulate a node crash
Blocking network communication
Kill the SAP WD process

3. Active/Active HA Setup of SAP Web Dispatcher

3.1. System Design

In this active/active setup of SAP Web Dispatcher (WD), we deploy and run parallel standalone WD instances on individual VMs in a share-nothing design, each with a different SID. To connect to the SAP Web Dispatcher, users use the single virtual hostname (FQDN)/IP defined as the front-end IP of the Azure Load Balancer. The virtual IP to hostname/FQDN mapping needs to be maintained in AD/DNS. Incoming traffic is distributed to either of the WD instances by the Azure internal load balancer. No operating system cluster setup is required in this scenario. This architecture can be deployed on either Linux or Windows operating systems.

In the ILB configuration, session persistence settings ensure that a user's successive requests are always routed from the Azure Load Balancer to the same WD as long as it is active and ready to receive connections. The SAP Help documentation describes this HA architecture as "High availability with several parallel Web Dispatchers".

The following are the advantages of the active/active SAP WD setup:
Simpler design: there is no need to set up an operating system cluster.
Two WD instances handle the requests and distribute the workload.
If one of the nodes fails, the load balancer forwards requests to the other node and stops sending requests to the failed node, so the SAP WD setup remains highly available.

We need the following components to set up active/active SAP Web Dispatcher on Linux:
A pair of SAP-certified VMs on Azure with a supported Linux operating system. Cross availability zone deployment is recommended for a higher VM-level SLA.
An Azure managed disk of the required size on each VM to create the file systems for /sapmnt and /usr/sap.
An Azure Load Balancer for configuring the virtual IP and hostname (in DNS) of the SAP Web Dispatcher.
Installation of SAP Web Dispatcher on both VMs with different SIDs. It is recommended to use the latest version of SAP Web Dispatcher.

3.2. Deployment Steps

This section provides detailed steps for the HA active/active SAP Web Dispatcher deployment on both supported Linux operating systems (SUSE Linux and Red Hat Linux). Please refer to SAP Note 1928533 for SAP on Azure certified VMs, SAPS values, and supported operating system versions for SAP environments.

3.2.1. For SUSE and RHEL Linux

Deploy the VMs (of the desired SKU) in availability zones and choose a SUSE/RHEL for SAP operating system image.
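As an illustration only, deploying a zonal VM could look like the following Azure CLI sketch. The resource group, network names, VM size, and image URN below are placeholders (assumptions, not values from this guide) and should be replaced with values that match your landscape and the currently SAP-certified SKUs:

# hypothetical example: deploy the first Web Dispatcher VM into availability zone 1
az vm create \
  --resource-group rg-sapwd \
  --name webdisp01 \
  --zone 1 \
  --size Standard_E8ds_v5 \
  --image SUSE:sles-sap-15-sp5:gen2:latest \
  --vnet-name vnet-sap --subnet snet-app \
  --admin-username azureadm \
  --generate-ssh-keys
# repeat for the second VM (for example webdisp02) with --zone 2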
Add a managed data disk to each of the VMs and create the /usr/sap and /sapmnt/<SID> file systems on it.

Install the SAP Web Dispatcher using SAP SWPM on both VMs. Both SAP WD instances are completely independent of each other and should have separate SIDs.

Perform the basic configuration check for both SAP Web Dispatchers using "sapwebdisp pf=<profile> -checkconfig". We should also check that the SAP WD Admin URL is working for both WD instances.

Deploy the Azure Standard Load Balancer for defining the virtual IP of the SAP Web Dispatcher. As a reference, the following setup is used in this deployment:

Front-end IP: 10.50.60.99 (virtual IP of the SAP Web Dispatcher)
Backend pool: Node1 and Node2 VMs
Health probe: Protocol: HTTPS; Port: 44300 (WD HTTPS port); Path: /sap/public/icman/ping; Interval: 5 seconds (set probeThreshold=2 using the Azure CLI)
Load balancing rule: Port and backend port: 44300; Floating IP: Disabled; TCP Reset: Disabled; Idle timeout: Max (30 minutes)

icman/ping is a way to ensure that the SAP Web Dispatcher is successfully connected to the backend SAP S/4HANA or SAP ERP based application servers. This check is also part of the basic configuration check of the SAP Web Dispatcher using "sapwebdisp pf=<profile> -checkconfig". If we use an HTTP(S) based health probe, the ILB connection is directed to the SAP WD only when the connection between the SAP WD and the S/4HANA or ERP application servers is working.

If we have a Java-based SAP system as the backend environment, then icman/ping is not available and an HTTP(S) path cannot be used in the health probe. In that case, we can use a TCP-based health probe (protocol value 'tcp') and use the SAP WD TCP port (for example, port 8000) in the health probe configuration.

In this setup, we used HTTPS port 44300 as the port and backend port value because that is the only port number used by the incoming/source URL. If multiple ports need to be used/allowed in the incoming URL, then we can enable 'HA Ports' in the load balancing rule instead of specifying each port.

Note: As per SAP Note 2941769, we need to set the SAP Web Dispatcher parameter wdisp/filter_internal_uris=FALSE. We also need to verify that the icman/ping URL is working for both SAP Web Dispatchers with their actual hostnames.

Define the front-end IP (virtual IP) and hostname mapping in DNS or in the /etc/hosts file.

Check that the Azure Load Balancer is routing traffic to both WD instances. In the 'Insights' section of the Azure Load Balancer, the connection health to the VMs should be green.

Validate that the SAP Web Dispatcher URL is accessible using the virtual hostname.

Perform high availability tests for the SAP WD:
Stop the first SAP WD and verify that the WD connections are working. Then start the first WD, stop the second WD, and verify that the WD connections are working.
Simulate a node crash on each of the WD VMs and verify that the WD connections are working.

3.3. SAP Web Dispatcher (active/active) for Multiple Systems

We can use the SAP WD (active/active) pair to connect to multiple backend SAP systems rather than setting up a separate SAP WD pair for each SAP backend environment. Based on the unique URL of the incoming request, with a different virtual hostname/FQDN and/or port of the SAP WD, the user request is directed to one of the SAP WD instances, and the SAP WD then determines the backend system to which it redirects and load balances the request.

The following SAP documents describe the design and the SAP-specific configuration steps for this scenario:
SAP Web Dispatcher for Multiple Systems
One SAP Web Dispatcher, Two Systems: Configuration Example

In an Azure environment, the SAP Web Dispatcher architecture for this scenario is as described below.
We can deploy this setup by defining an Azure Standard Load Balancer with multiple front-end IPs attached to one backend pool of SAP WD VMs, and by configuring the health probe and load balancing rules to associate them. When configuring an Azure Load Balancer with multiple frontend IPs pointing to the same backend pool/port, floating IP must be enabled for each load balancing rule. If floating IP is not enabled on the first rule, Azure won't allow the configuration of additional rules with different frontend IPs on the same backend port. Refer to the article Multiple frontends - Azure Load Balancer.

With floating IPs enabled on multiple load balancing rules, the frontend IPs must be added to the network interface (e.g., eth0) on both SAP Web Dispatcher VMs.

3.3.1. Deployment Steps

Deploy the VMs (of the desired SKU) in availability zones and choose a SUSE/RHEL for SAP operating system image.

Add a managed data disk to each of the VMs and create the /usr/sap and /sapmnt/<SID> file systems on it.

Install the SAP Web Dispatcher using SAP SWPM on both VMs. Both SAP WD instances are completely independent of each other and should have separate SIDs.

Deploy the Azure Standard Load Balancer with the following configuration:

Front-end IP: 10.50.60.99 (virtual IP of the SAP Web Dispatcher for redirection to S/4 or Fiori SID E10)
Backend pool: Node1 and Node2 VMs
Health probe: Protocol: TCP; Port: 8000 (WD TCP port); Interval: 5 seconds (set probeThreshold=2 using the Azure CLI)
Load balancing rule: Protocol: TCP; Port and backend port: 44300; Floating IP: Enabled; TCP Reset: Disabled; Idle timeout: Max (30 minutes)

Front-end IP: 10.50.60.101 (virtual IP of the SAP Web Dispatcher for redirection to S/4 or Fiori SID E60)
Backend pool: Node1 and Node2 VMs (same backend pool)
Health probe: same health probe as above
Load balancing rule: Protocol: TCP; Port and backend port: 44300; Floating IP: Enabled; TCP Reset: Disabled; Idle timeout: Max (30 minutes)

As described above, we define 2 front-end IPs, 2 load balancing rules, 1 backend pool, and 1 health probe. In this setup, we used HTTPS port 44300 as the port and backend port value because that is the only port number used by the incoming/source URL. If multiple ports need to be used/allowed in the incoming URL, then we can enable 'HA Ports' in the load balancing rule instead of specifying each port.

Define the front-end IP (virtual IP) and hostname mapping in DNS or in the /etc/hosts file.

Add both virtual IPs to the network interface of the SAP WD VMs. Make sure the additional IPs are added permanently and do not disappear after a VM reboot.
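As a quick, non-persistent illustration (a sketch only, using the example addresses and the eth0 interface shown later in this section), the secondary IPs can be added with standard iproute2 tooling; the distribution-specific methods for making them survive a reboot are referenced next:

# on both Web Dispatcher VMs (not reboot-safe; use the SLES/RHEL methods referenced below for persistence)
sudo ip addr add 10.50.60.99/26 dev eth0
sudo ip addr add 10.50.60.101/26 dev eth0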
For SLES, refer to the "alternative workaround" section in Automatic Addition of Secondary IP Addresses in Azure.
For RHEL, refer to the solution using the "nmcli" command in How to add multiple IP range in RHEL9.

Output of "ip addr show" on SAP WD VM1:

>> ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 60:45:bd:73:bd:14 brd ff:ff:ff:ff:ff:ff
    inet 10.50.60.87/26 brd 10.50.60.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6245:bdff:fe73:bd14/64 scope link
       valid_lft forever preferred_lft forever

Output of "ip addr show" on SAP WD VM2:

>> ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 60:45:bd:73:b1:92 brd ff:ff:ff:ff:ff:ff
    inet 10.50.60.93/26 brd 10.50.60.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.99/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.50.60.101/26 brd 10.50.60.127 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6245:bdff:fe73:b192/64 scope link
       valid_lft forever preferred_lft forever

Update the instance profile of both SAP WDs:

#-----------------------------------------------------------------------
# Back-end system configuration
#-----------------------------------------------------------------------
wdisp/system_0 = SID=E10, MSHOST=e10ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.99:*
wdisp/system_1 = SID=E60, MSHOST=e60ascsha, MSPORT=8100, SSL_ENCRYPT=1, SRCSRV=10.50.60.101:*

Stop and start the SAP WD on VM1 and VM2.

Note: With the above SRCSRV parameter values, only incoming requests arriving on 10.50.60.99 (or its hostname) are forwarded to E10, and only requests arriving on 10.50.60.101 (or its hostname) are forwarded to E60. If we also want requests that use the actual IP or hostname of the SAP WD VMs to reach the SAP backend systems, then we need to add those IPs or hostnames (separated by semicolons) to the value of the SRCSRV parameter.

Perform the basic configuration check for both SAP Web Dispatchers using "sapwebdisp pf=<profile> -checkconfig". We should also check that the SAP WD Admin URL is working for both WD instances.

In the Azure portal, in the 'Insights' section of the Azure Load Balancer, we can see that the connection status to the SAP WD VMs is healthy.

SAP on Azure Product Announcements Summary – SAP Sapphire 2025
Introduction

Today at Sapphire, we made an array of exciting announcements that strengthen the Microsoft-SAP partnership. I'd like to share additional details that complement these announcements as well as give updates on further product innovation. With over three decades of close collaboration and co-innovation with SAP, we continue to deliver RISE with SAP on Azure and integrations with SAP S/4HANA Public Cloud, allowing customers to innovate with services from both SAP BTP and Microsoft. Our new integrations enhance security through multi-layer cloud protection for SAP and non-SAP workloads, while our AI and Copilot platform provides unified analytics to improve decision-making for customers.

Samsung C&T's Engineering & Construction Group is a leader in both the domestic and international construction industries. It recently embarked on an ERP cloud transformation, transitioning its existing ERP system, which was optimized for the local environment, to RISE with SAP on Azure to support its global business expansion.

"Samsung C&T's successful transition to RISE with SAP on Azure serves as a best practice for other Samsung Group affiliates considering cloud-based ERP adoption. It also demonstrates that even highly localized operations can be integrated into a cloud-based environment that supports global standards."
Aidan Nam, Former Vice President, Corporate System Team, Samsung C&T

SAP on Azure also offers AI, Data, and Security solutions that enhance customers' investments and help unlock valuable information stored within ERP systems. When Danfoss, a global leader in energy-efficient solutions, began searching for new security tools for its business-critical SAP infrastructure, it quickly leveraged the Microsoft Sentinel solution for SAP applications to find potential malicious activity and deploy multilayered protection around its expanding core infrastructure, thereby achieving scalable security visibility.

"With Microsoft Sentinel and the Microsoft Sentinel solution for SAP applications, we've centralized our security logs and gained a single pane of glass with which we can monitor our SAP systems,"
Kevin Cai, IT Specialist in the Security Operations Center at Danfoss

We are pleased to announce additional SAP on Azure product updates and details to further help customers innovate on the most trusted cloud for SAP.

Simplified onboarding of the SAP BTP estate with the new agentless data connector for the Microsoft Sentinel solution for SAP.
Microsoft Defender for Endpoint for SAP applications is now fully SAP HANA aware, offering unparalleled protection for SAP S/4HANA environments.
Public preview of SAP OData as a knowledge source, making it easy to add content from SAP systems to Copilot Studio.
The new storage- and memory-optimized Medium Memory Mbv3 VM series (Mbsv3 and Mbdsv3) is now SAP certified, delivering compute capabilities with IOPS performance of up to 650K.
The Mv3 Very High Memory series now features an expanded range of SAP-certified VM sizes, spanning from 24TB to 32TB of memory and scaling up to 1,792 vCPUs.
General availability of SAP ASE (Sybase) database backup support on Azure Backup.
SAP Deployment Automation Framework now supports validation of SAP deployments on Azure with the public preview of the SAP Testing Automation Framework (STAF), automating the high availability testing process to ensure SAP system reliability and availability.
The Inventory Checks for SAP Workbook in Azure Center for SAP Tools now comes with new dashboards for enhanced visibility.

Let's dive into the summary of product updates and services.

Extend and Innovate

Microsoft Sentinel Solution for SAP

Business applications pose a unique security challenge: they hold highly sensitive information that makes them prime targets for attacks. Attackers can compromise newly discovered unprotected SAP systems within three hours. Microsoft offers best-in-class security solution support for SAP business applications with Microsoft Sentinel. The new agentless data connector is our first-party solution that reuses customers' SAP BTP estate for drastically simplified onboarding. In addition, new strategic third-party solutions have been added to the Microsoft Sentinel content hub by SAP SE and other ISVs, making Sentinel the most effective SIEM for SAP workloads:

SAP LogServ: RISE add-on for SAP ECS internal logs (infrastructure, database, etc.) (Generally Available)
SAP Enterprise Threat Detection (Preview)
SecurityBridge (Generally Available)

Microsoft Defender for Endpoint for SAP applications

We are thrilled to announce a major milestone made possible through the deep collaboration between SAP and Microsoft: Microsoft Defender for Endpoint (MDE) is now the first NextGen antivirus solution that is SAP HANA aware. This joint innovation allows organizations like COFCO International to protect their SAP landscapes seamlessly and securely, without disruption. This groundbreaking capability sets MDE apart in the cybersecurity landscape, offering unparalleled protection for SAP S/4HANA environments, all without interfering with mission-critical operations. Thanks to close engineering collaboration, MDE has been carefully trained to recognize SAP HANA binaries and data files. Specialized detection training ensures MDE accurately identifies these critical components and treats them as known, trusted entities, combining world-class cybersecurity with SAP-native awareness.

API Management

SAP Principal Propagation (for simplicity often also referred to as SSO) is the gold standard for app integration, especially when it comes to third-party apps such as Microsoft Power Platform. We proudly announce that SSO is now password-less with Azure. Microsoft Entra ID Managed Identity works seamlessly with SAP workloads such as RISE, SuccessFactors, and more. Cut your maintenance efforts for SAP SSO in half and become more secure in doing so. Find more details on this blog.

Teams Integration

In addition to the availability of the SAP Joule agent in Teams and Copilot, the "classic" integration of Teams with products like SAP S/4HANA Public Cloud is available as well. What started as "share links to the business context (apps) in chats" has now evolved to Adaptive Card-based Loop components, chat, voice and video call integrations in contact cards, and To Dos in Teams. Users of SAP S/4HANA Public Cloud can stay in their flow of work and access their business-critical data from within SAP S/4HANA Public Cloud or connected Teams applications.

Copilot Studio – SAP OData Support in Knowledge Sources

Knowledge Sources in Copilot Studio enhance generative answers by using data from Power Platform, Dynamics 365, websites, and external systems. This enables agents to offer relevant information and insights to customers. Today, we announce the public preview of SAP OData as a new knowledge source. Customers and partners can now add content from SAP systems to Copilot Studio.
Users can query the latest status of Sales Orders in SAP S/4HANA, view pending Invoices from ECC, or query information about employees from SAP SuccessFactors. All you need to do is connect the relevant SAP OData services as a knowledge source in Copilot Studio. Copilot Studio does not duplicate the data; it analyzes the data structure and creates the relevant queries on demand whenever a user asks a related question. The user context is always preserved, ensuring that roles and permissions in the SAP system are taken into account. Head over to the product documentation to read more and get started.

New SAP Certified Compute and Storage

Thousands of organizations today trust the Azure M-series virtual machines to run some of their largest mission-critical SAP workloads, including SAP HANA.

Very High Memory Mv3 VM Series

We are excited to unveil updates to our Mv3 Very High Memory (VHM) series with the addition of a 24TB VM, a testament to our ongoing commitment to innovation. Building on our past successes, this series integrates customer insights and industry advancements to deliver unmatched performance and efficiency. It features advanced capabilities for diverse workloads, powered by 4th generation Intel® Xeon® Platinum 8490H processors, which offer faster processing speeds and better price-performance. You can learn more about our new Mv3 VHM series here. Below is a summary of the recently released Mv3 VHM SKUs.

(New) Standard_M896ixds_24_v3: Designed for S/4HANA workloads, with 896 cores and SMT disabled for optimal SAP performance. It is SAP certified for OLTP (S/4HANA) Scale-Up/4-node Scale-Out and OLAP (BW4H) Scale-Up operations.
Standard_M896ixds_32_v3: Designed for S/4HANA workloads, with 896 cores and SMT disabled for optimal SAP performance. It is SAP certified for OLTP (S/4HANA) Scale-Up/4-node Scale-Out and OLAP (BW4H) Scale-Up operations.
Standard_M1792ixds_32_v3: Designed for S/4HANA workloads, with 1,792 cores. It is SAP certified for OLTP (S/4HANA) Scale-Up/2-node Scale-Out and OLAP (BW4H) Scale-Up operations.

These VM sizes provide robust memory and CPU power, ensuring exceptional handling of large-scale in-memory databases. With 200 Gbps bandwidth and adaptable storage options such as Premium Disk and Azure NetApp Files (ANF), these VMs deliver speed and flexibility for SAP HANA configurations.

Medium Memory Mbv3 VM Series

The new Mbv3 series (Mbsv3 and Mbdsv3), released in September 2024 and featuring both storage-optimized and memory-optimized sizes, is SAP certified as of March 2025. The new Mbv3 VMs are based on 4th generation Intel® Xeon® Scalable processors, scale for workloads of up to 4TB, and come with an NVMe interface for higher remote disk storage performance. The series offers up to 650,000 IOPS, providing a 5x improvement in network throughput over the previous M-series families, and up to 10 GBps of remote disk storage bandwidth, providing a 2.5x improvement in remote storage bandwidth over the previous M-series families. Details of the SAP certified compute Mbv3 VMs are available here.

SAP on Azure Software Products and Services

Azure Backup for SAP

We are pleased to announce the general availability of backup support for the SAP ASE database running on Azure virtual machines using Azure Backup. SAP ASE databases are mission-critical workloads that require a low recovery point objective (RPO) and a fast recovery time objective (RTO).
This backup service offers zero-infrastructure backup and restore of SAP ASE databases with Azure Backup enterprise management capabilities.

Key benefits of SAP ASE database backup:
15-minute RPO with point-in-time recovery capability.
Striping to increase backup throughput between the ASE virtual machine (VM) and the Recovery Services vault.
Support for cost-effective backup policies and ASE native compression to lower backup storage costs.
Multiple database restore options, including Alternate Location Restore (system refresh), Original Location Restore, and Restore as Files.
A Recovery Services vault that provides security capabilities such as immutability, soft delete, and multi-user authorization.

SAP Testing Automation Framework (STAF)

While deployment automation frameworks like the SAP Deployment Automation Framework (SDAF) have streamlined system implementation, the critical testing phase has largely remained a manual bottleneck, until now. We are introducing the SAP Testing Automation Framework (STAF), a new framework (currently in public preview) that automates high-availability (HA) testing for SAP deployments on Azure. STAF currently focuses on testing HA configurations for SAP HANA and SAP Central Services. Importantly, STAF is a cross-distribution solution supporting both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL), reflecting our commitment to serving the diverse SAP on Azure customer base.

STAF uses a modular architecture with Ansible for orchestration and custom modules for validation. It ensures business continuity by validating configurations and recovery mechanisms before systems go live, reducing risks, boosting efficiency, and ensuring compliance with standards. You can start leveraging its capabilities today by visiting the project on GitHub at https://github.com/azure/sap-automation-qa. To learn more about the framework, please visit our blog: Introducing SAP Testing Automation Framework (STAF).

Azure Center for SAP solutions Tools and Frameworks

We are pleased to introduce three new dashboards for Azure Inventory Checks for SAP, enhancing visibility into Azure infrastructure and security. These dashboards offer a more structured, visual approach to monitoring health and compliance. Here are the new dashboards at a glance:

Summary Dashboard: Offers a snapshot of your Azure landscape with results from 21 key infrastructure checks critical for SAP workloads. It highlights your environment's readiness and identifies areas needing attention.
Extended Report Dashboard: Presents the Inventory Checks for SAP in a user-friendly dashboard layout, with enhanced navigation and filtering.
AzSecurity Dashboard: Presents 10 key Azure security checks to provide insights into configurations and identify vulnerabilities, ensuring compliance and safety.

These dashboards transform raw data into actionable insights, allowing customers to quickly assess their SAP infrastructure on Azure, identify misconfigurations, track improvements, and prepare confidently for audits and reviews.

SAP + Microsoft Co-Innovations

Microsoft and SAP are continually innovating to facilitate business transformation for our customers. This year, we are strengthening our partnership in several areas, including Business Suite, AI, Data, Cloud ERP, Security, and SAP BTP, among others. Please check out our blog to learn more about the significant announcements we are making this year at SAP Sapphire.