Recent Discussions
A Comprehensive Guide to Multi-Identity Provider Integration with SAP IAS
Check out my blog post, where I provide a detailed, step-by-step guide to implementing Single Sign-On (SSO) through SAP Identity Authentication Service (IAS) across multiple Identity Providers. Addressing an urgent and ongoing need, the blog takes a practical, detailed approach to implementing SSO in a cloud-native environment: Deciphering Seamless SAML Single Sign-On: A Comprehensive Guide to Multi-Identity Provider Integration with SAP IAS as Your Proxy for S/4 HANA and Beyond (Part 1) https://blogs.sap.com/2023/11/15/deciphering-seamless-saml-single-sign-on-a-comprehensive-guide-to-multi-identity-provider-integration-with-sap-ias-as-your-proxy-for-s-4-hana-and-beyond-part-1/ Thanks, Amin
Issue - Extracting Data from SAP using CDC connector
Hi, I'm encountering an issue with the full and incremental load. While the initial loads process successfully, the delta loads don't. Although I can see the delta records in the staging area, they aren't being merged into the Azure SQL database. Can you assist? Below is the error:
Operation on target CDS View Extraction failed: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'SQLDB': Incorrect syntax near '393088964'.","Details":"shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near '393088964'.\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:265)\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1673)\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:907)\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:802)\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7627)\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:3912)\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:268)\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:242)\n\tat shaded."}
SAP S/4HANA choose best IaaS
Hello folks, I need some help and advice from you. I have the requirement below and need to know what is best to choose:

Type | System | Usage | CPU SAPS | RAM in GB | Needed storage
S/4HANA-APP | S4H-DEV | DEV | 8000 | 16 | 200 GB
S/4HANA-APP | S4H-QAS | QAS | 8000 | 24 | 200 GB
S/4HANA-APP | S4H-PRD | PRD | 6000 | 16 | 200 GB
S/4HANA-APP | S4H-PRD | PRD | 16000 | 64 | 200 GB
S/4HANA-DB | HANA-DEV | DEV | HANA-certified appliance of 256 GB RAM or equivalent certified TDI
S/4HANA-DB | HANA-QAS | QAS | HANA-certified appliance of 384 GB RAM or equivalent certified TDI
S/4HANA-DB | HANA-PRD | PRD | HANA-certified appliance of 512 GB RAM or equivalent certified TDI
PO/Sybase | PO-DEV | DEV | 8000 | 16 | 256 GB
PO/Sybase | PO-QAS | QAS | 12000 | 32 | 512 GB
PO/Sybase | PO-PRD | PRD | 16000 | 64 | 1 TB
Services | SAP-router | PRD | 4000 | 8 | 20 GB
Services | SolMan | PRD | 8000 | 24 | 800 GB
Services | ADS-PRD | PRD | 6000 | 16 | 500 GB
Services | Jump-box | PRD | 6000 | 16 | 2 TB
Total: 114 cores, 1,392 GiB RAM, 3.6 TB storage

First I went to SAP and checked the certified hardware and TDI options; then I looked at the Azure IaaS platform (about 118 certified VM types) and selected one of them: M128ds_v2. I feel this one fits my needs. Does this give me one virtual machine, or can I run all of the VMs above on it? Thank you in advance.
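A minimal Python sketch of how the totals in the table above can be tallied; the M128ds_v2 figures mentioned in the comments are an assumption to double-check against the current Azure documentation, and the sketch is only an illustration, not a sizing recommendation:

```python
# Tally sketch for the requirement table above (non-HANA rows carry explicit
# SAPS/RAM/storage figures; the HANA nodes are sized by certified appliance RAM).
app_rows = [                      # (system, CPU SAPS, RAM GB, storage GB)
    ("S4H-DEV", 8000, 16, 200), ("S4H-QAS", 8000, 24, 200),
    ("S4H-PRD", 6000, 16, 200), ("S4H-PRD", 16000, 64, 200),
    ("PO-DEV", 8000, 16, 256),  ("PO-QAS", 12000, 32, 512),
    ("PO-PRD", 16000, 64, 1024),
    ("SAP-router", 4000, 8, 20), ("SolMan", 8000, 24, 800),
    ("ADS-PRD", 6000, 16, 500),  ("Jump-box", 6000, 16, 2048),
]
hana_nodes_ram_gb = [256, 384, 512]          # DEV / QAS / PRD certified sizes

total_saps = sum(saps for _, saps, _, _ in app_rows)
total_ram = sum(ram for _, _, ram, _ in app_rows) + sum(hana_nodes_ram_gb)
total_storage_gb = sum(st for *_, st in app_rows)

print(f"Non-HANA SAPS: {total_saps}")         # HANA DB SAPS are not listed above
print(f"RAM incl. HANA nodes: {total_ram} GiB")
print(f"Storage excl. HANA data/log: {total_storage_gb} GiB")
# Assumption to verify: M128ds_v2 is a single VM (roughly 128 vCPU / 2 TiB RAM
# class). It is one machine, not a pool; whether each row gets its own VM is a
# separate design decision (production HANA is normally placed on its own
# dedicated, certified VM).
```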
Setup SAP on 3 SAP VMs with one disk as shared in Azure
Hi members, we are migrating our on-premises SAP servers to Azure. As we are all aware, some SAP mount points and file systems, such as the transport directory (/usr/sap/trans) and /sapmnt, have to be available across all three systems. We tried some options, but no luck. Could you please let me know the steps, or point me to useful URLs? Environment: HANA 2 and SUSE Linux. Thank you very much.
SAP Landscape sizing and volume consolidation with ANF
This blog does not claim to be all-embracing and should not be seen as a single source of truth. I only want to open a much broader sizing discussion and present a different view on this topic. The second part will explain how volume consolidation works.
Sizing an SAP landscape is usually a very difficult task because there are many different, sometimes unknown, parameters and values to take into consideration. Most sizing tools only look at a single system. This is fine for the VM (CPU) sizing; however, when it comes to an optimized storage design, most tools do not see the complete SAP landscape and are therefore not optimized for the best TCO for the customer. Even when a storage design looks more expensive at first glance, it can be the basis of a much better TCO once all IT costs are taken into consideration. Storage changes and optimizations in particular are usually very complex tasks that sometimes even require longer system downtimes. To avoid unnecessary outages, the SAP landscape needs a very flexible storage environment that allows the customer to grow and to react quickly to changes or new application requirements. All this together guarantees an optimized TCO and a smooth, reliable SAP landscape for our customers. Most of the effort and cost goes into landscape management and administration; only 25% of the overall cost goes into the infrastructure investment. Source: https://flylib.com/books/en/4.91.1.14/1/
No support for NVAs in the data path! For performance and latency reasons it is not supported to configure a network virtual appliance (NVA) in the data path from the SAP application server to the database, nor from the database server to ANF. This is also stated in SAP Note 2731110: https://launchpad.support.sap.com/#/notes/2731110
Performance tuning
To optimize an SAP landscape it is essential to continuously monitor the used capacity (CPU, network and storage) and evaluate the business needs against this outcome, so that you can align and optimize quickly to meet the business requirements. IT must catch up with the business, not the other way around. It is a continuous process: Monitor -> Evaluate -> Adjust -> Monitor ...
Storage landscape sizing based on Azure NetApp Files (ANF)
Before the new "Manual QoS Capacity Pool" feature (public preview) was introduced, there was a fixed performance ratio per volume: depending on the capacity pool QoS, we got 16, 64 or 128 MB/s per terabyte of volume size. With Manual QoS, the storage sizing is much more flexible than with the previous fixed ratio, because the throughput can now be set per volume. Even small volumes can benefit from higher throughput, which helps to optimize the overall design. The challenge is to find a good mix of "slow" and "fast" volumes in the Manual QoS capacity pool, and this gets much easier with larger capacity pools. I will give some sizing examples to demonstrate how easy it is when we focus on the landscape and not on a single-system design.
ANF storage QoS classes
In ANF, three different storage classes are available. Note that the QoS setting of the pool only governs the performance quota; the data is always written to the same ANF backend.
We differentiate between:

Capacity QoS | Performance per terabyte of volume size
Standard | 16 MB/s
Premium | 64 MB/s
Ultra | 128 MB/s

Of course, different costs are associated with the different QoS classes. The cost calculator is available at https://azure.microsoft.com/en-us/pricing/details/netapp/

Storage sizing: a different approach
To find the optimal capacity pool size, we first calculate the individual storage requirements and then integrate those numbers into the capacity pool calculation. This presents the "big picture". The big benefit of ANF is that nothing is written in stone: size and performance can be adapted dynamically during normal operation, and changing the size or performance quota requires no downtime for the customer.
System RAM vs. HANA DB size: it is essential to understand that the system RAM cannot be taken as the HANA DB size. Space in RAM is required for OS operations, HANA delta tables, HANA temporary tables and so on, so the actual DB size in memory is about 50% of the RAM. This is also the "golden sizing rule" for SAP HANA. Source: SAP TDIv5 memory guide.
To calculate the required storage for a system, the sizing table below helps estimate the overall storage requirements. What matters is not the single data, log, shared or backup volume, only the total value. For larger VMs (4 TB and above) these values are very rough estimations, and the backup concept has a massive impact on the size of the backup volume.
Shared volume: shared between a group of systems, such as DEV, QAS and PRD; it also contains /usr/sap.
Backup volume: shared between all instances (1x DB data + 2x log volume + X).
The snapshot reserve is already included in the data volume, and RAM size does not equal DB size.

VM Main Memory | Data Vol (GB) | Log Vol (GB) | Shared (GB) | Backup (GB) | Total (GB)
256 GB | 300 | 300 | 300 | 900 | 1800
512 GB | 500 | 300 | 500 | 1100 | 2400
1024 GB | 1000 | 500 | 1000 | 2000 | 4500
2048 GB | 2000 | 500 | 2000 | 3000 | 7500
4096 GB | 4000 | 500 | 4000 | 5000 | 13500
6192 GB | 6000 | 500 | 6000 | 7000 | 19500
Table 1 - overall storage requirement

Volume performance and sizing
As a basis for an initial design, we estimate a performance quota for sandbox, DEV, QAS and PRD systems (we stick to the SAP HANA storage KPIs only for PRD systems). If customers also use their QAS systems for performance testing with the same dataset, it makes sense to design the QAS storage performance accordingly. This can and must be adapted dynamically if the customer requires it. SAP provides KPIs only for data and log volumes, and those KPIs are the same for all database sizes; this cannot be adopted as general guidance for productive environments, because the required throughput deviates heavily from the KPIs for larger systems. The DB startup time depends on reading the data volume, for example:
1 TB DB at 400 MB/s read: startup time +/- 20 min
2 TB DB at 400 MB/s read: startup time +/- 40 min
6 TB DB at 400 MB/s read: startup time +/- 1 h 20 min (at least here we see a discrepancy with the minimum KPI)
These MINIMAL requirements are meant for PRD systems: in general, 250 MB/s write for the log volume and 400 MB/s for the data volume. It is important to understand that we need to provide more throughput for larger systems. As mentioned, this is a starting point.
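Before the per-system KPI tables below, a minimal Python sketch makes the quota arithmetic above concrete. The numbers come from the QoS class table; the function names are purely illustrative. With the classic (auto) QoS pools a volume's throughput is fixed per terabyte of volume size, while a manual QoS pool lets you redistribute the same total budget freely across volumes.

```python
# Throughput quotas for Azure NetApp Files as described above.
MBPS_PER_TB = {"Standard": 16, "Premium": 64, "Ultra": 128}

def auto_qos_volume_throughput(volume_size_tb: float, tier: str) -> float:
    """Auto QoS: the volume quota is strictly proportional to its size."""
    return volume_size_tb * MBPS_PER_TB[tier]

def manual_qos_pool_budget(pool_size_tb: float, tier: str) -> float:
    """Manual QoS: the same total budget, but freely distributable per volume."""
    return pool_size_tb * MBPS_PER_TB[tier]

print(auto_qos_volume_throughput(2, "Premium"))   # 128 MB/s for a 2 TB Premium volume
print(manual_qos_pool_budget(35, "Premium"))      # 2240 MB/s to split across volumes
```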
System Type (256 GB + 512 GB RAM) | % of KPI | Data Volume | Log Volume
Sandbox | 25% | 100 MB/s | 50 MB/s
DEV | 25% | 100 MB/s | 50 MB/s
QAS | 50% | 200 MB/s | 125 MB/s
PRD | 100% | 400 MB/s | 250 MB/s
Table 2 - throughput per volume

System Type (1024 GB RAM) | % of KPI | Data Volume | Log Volume
Sandbox | 25% | 100 MB/s | 50 MB/s
DEV | 25% | 150 MB/s | 75 MB/s
QAS | 50% | 250 MB/s | 125 MB/s
PRD | 100% | 500 MB/s | 250 MB/s
Table 3 - throughput per volume (startup time for a 0.5 TB DB: +/- 15 min)

System Type (2048 GB RAM) | % of KPI | Data Volume | Log Volume
Sandbox | 25% | 150 MB/s | 100 MB/s
DEV | 25% | 150 MB/s | 100 MB/s
QAS | 50% | 300 MB/s | 150 MB/s
PRD | 100% | 600 MB/s | 300 MB/s
Table 4 - throughput per volume (startup time for a 1.2 TB DB: +/- 30 min)

System Type (4 TB + 6 TB RAM) | % of KPI | Data Volume | Log Volume
Sandbox | 25% | 200 MB/s | 100 MB/s
DEV | 25% | 200 MB/s | 100 MB/s
QAS | 50% | 400 MB/s | 200 MB/s
PRD | 100% | 800 MB/s | 400 MB/s
Table 5 - throughput per volume (startup time for a 3 TB DB: +/- 60 min)

Sizing a landscape with 10 systems
So what could an ANF storage design for 10 HANA databases look like? Let's assume we have 4x 256 GB (M32ls) DEV, 4x 1 TB (M64s) QAS and 2x 1 TB (M64s) PRD systems.

System type | Storage requirements (Table 1) | Performance requirements (Tables 2-5)
DEV | 4x 1800 GB = 7 TB | Data = 4x 100 MB/s; Log = 4x 50 MB/s
QAS | 4x 4500 GB = 18 TB | Data = 4x 250 MB/s; Log = 4x 150 MB/s
PRD | 2x 4500 GB = 9 TB | Data = 2x 500 MB/s; Log = 2x 300 MB/s
Backup | - | DEV = 100 MB/s, QAS = 200 MB/s, PRD = 500 MB/s
Shared | - | DEV = 50 MB/s, QAS = 50 MB/s, PRD = 100 MB/s

Total storage = 34 TB (from Table 1). Total throughput = 3,800 MB/s (data and log) + 800 MB/s (backup, estimated) + 250 MB/s (shared, estimated) = approx. 4,800 MB/s.
Translated to an ANF capacity pool:
Premium: 35 TB x 64 MB/s = 2,240 MB/s (we make backup a bit smaller so that it fits the calculation; it can grow on demand, but that takes some time)
Ultra: 20 TB x 128 MB/s = 2,560 MB/s
Total: 4,800 MB/s
Cost estimation: https://azure.microsoft.com/us-en/pricing/calculator/?service=netapp
Mix of Premium and Ultra: 35 TB x 64 MB/s = 2,240 MB/s + 20 TB x 128 MB/s = 2,560 MB/s, giving 4,800 MB/s
Only Ultra (34 TB needed): 38 TB x 128 MB/s = 4,864 MB/s
Conclusion: in this case it is much more efficient to choose Ultra over Premium. There is only very little overprovisioning and the deployment is very easy because everything sits in one pool. The volumes will of course be distributed over several controllers.

Consolidation of ANF volumes
Consolidating HANA log volumes: one additional option is to share log volumes. Log volumes are not of interest for backup scenarios, since they contain open files (the database log files) that cannot be backed up properly. Not all databases create the same amount of log information or require the same throughput on the log volume, so it can be very beneficial to share the log volume among several database systems: the group of databases writing into the shared log volume benefits from a much higher combined throughput.
1 PRD system => 250 MB/s
2 PRD systems => +125 MB/s (+50%) = 375 MB/s
3 PRD systems => +125 MB/s = 500 MB/s
4 PRD systems => +125 MB/s = 625 MB/s
5 PRD systems => +125 MB/s = 750 MB/s

How to create a consolidated structure with ANF
The main reason for consolidation is to achieve more with less: less administration and fewer resources, but more performance. Since the performance of ANF is related to the size of the volume, it is essential to create a large volume to benefit from this performance quota. Create a meaningful directory structure in this volume to keep a good overview of the installed SAP systems. From the application node's view this structure is basically invisible.
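Before moving on to the structural details, here is a minimal Python sketch that reproduces the two calculations above: the 10-system capacity-pool comparison and the shared log-volume quota. The per-system figures are taken from the example; a small rounding difference against the 4,800 MB/s figure is expected.

```python
# 10-system landscape from the example: (count, capacity GB, data MB/s, log MB/s)
systems = {
    "DEV": (4, 1800, 100, 50),
    "QAS": (4, 4500, 250, 150),
    "PRD": (2, 4500, 500, 300),
}
backup_mbps, shared_mbps = 800, 250          # estimates used in the example

capacity_tb = sum(n * gb for n, gb, _, _ in systems.values()) / 1000
demand_mbps = (sum(n * (d + l) for n, _, d, l in systems.values())
               + backup_mbps + shared_mbps)
print(f"Capacity: {capacity_tb:.1f} TB, throughput demand: {demand_mbps} MB/s")

MBPS_PER_TB = {"Premium": 64, "Ultra": 128}
ultra_only_tb = demand_mbps / MBPS_PER_TB["Ultra"]          # ~38 TB Ultra-only pool
mixed_mbps = 35 * MBPS_PER_TB["Premium"] + 20 * MBPS_PER_TB["Ultra"]  # 2240 + 2560
print(f"Ultra-only pool size: {ultra_only_tb:.0f} TB, Premium+Ultra mix: {mixed_mbps} MB/s")

def shared_log_quota_mbps(prod_systems: int) -> int:
    """250 MB/s for the first production system, +125 MB/s for each additional one."""
    return 250 + 125 * (prod_systems - 1)

print([shared_log_quota_mbps(n) for n in range(1, 6)])      # [250, 375, 500, 625, 750]
```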
Understanding how ANF is structured and how it needs to be configured
Before you can create volumes in ANF for your SAP environment, you need to create a NetApp account, then a capacity pool, and finally the volume inside the capacity pool. More info: https://docs.microsoft.com/en-gb/azure/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy#capacity_pools
Design concept of multiple SAP systems in one ANF volume
The basic idea is to gain performance and lower the administration overhead compared with single volumes or default Azure storage. Customers tend to separate non-prod and prod environments, and this can certainly be done here as well; but instead of managing sometimes tens of volumes, you only need to manage two or maybe three. The example shows only two SAP systems, but this can be applied at a very large scale: create a single volume, for example "non-prod", and create simple directories for every SAP system in that main volume. For SAP, those nested mount points are completely invisible.
Deployment of ANF with multiple instances
Data volumes: if you plan to use snapshot-based backups and cloning, shared data volumes are not a good idea. With shared volumes a snapshot revert is no longer supported, because you would also overwrite the data of the other SAP instances sharing the volume. For all other volumes, consolidation is always a good idea. If a single file from one instance needs to be restored, you can always go into the (hidden) snapshot directory and copy that file out of the snapshot back to its original location.
A shared landscape can look like this: two DEV, two QAS and two PRD systems in an optimized volume deployment. Another interesting idea is to consolidate volumes by SID. In this case you benefit from the fact that a snapshot captures all three areas (data, log and shared) together, and the performance is also shared among the three areas. There is some more work to do here before you can clone/refresh the HANA database with this approach.
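To illustrate the by-SID consolidation idea above, here is a small Python sketch of what the nested directory layout could look like. The SIDs, mount root and directory names are purely hypothetical placeholders, not a prescribed structure.

```python
# One consolidated volume per SID: data, log and shared live as sub-directories,
# so a single volume snapshot captures all three areas together.
def sid_volume_layout(sid: str, volume_mount: str) -> dict:
    return {
        f"/hana/data/{sid}":   f"{volume_mount}/data",
        f"/hana/log/{sid}":    f"{volume_mount}/log",
        f"/hana/shared/{sid}": f"{volume_mount}/shared",
    }

for sid in ("DH1", "QH1"):                               # hypothetical SIDs
    for hana_path, nested_dir in sid_volume_layout(sid, f"/mnt/anf_{sid.lower()}").items():
        print(f"{hana_path} -> {nested_dir}")
```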
Horizontal strategy for SAP Migration
Dear all, I have a query regarding the horizontal strategy for migrating SAP landscapes, in which we migrate all the development systems together, then quality, and finally production. While this migration is in progress, how do we maintain the transport landscape? For example, if DEV has been migrated to Azure and QAS is still on premises, they need to share a common transport directory. How can this be achieved? Does anyone have a real use case where they faced this? Taking a transport "freeze" for the entire duration of the migration is not an option for us. Thanks in advance!
WEBINAR: Extend SAP solutions using Microsoft Power Platform
Extending the scope of SAP solutions is crucial for companies to gain and maintain a competitive advantage. Join this webinar to learn more about Microsoft's Power Platform and how you can benefit from creating and connecting apps and workflows to your SAP environment. WHEN: Wednesday, March 10th 2021, 10:00 AM to 11:00 AM Pacific Time. Register here: SAP Webinar | Microsoft Power Apps
Holger Bruchelt
Backup and Disaster Recovery of SAP HANA workloads on Azure with Actifio
In this post, we will look at how SAP HANA workloads hosted on Azure and the Actifio Copy Data Platform can work together to back up, restore and bring up SAP HANA databases from Azure Blob object storage in minutes. We propose reference architectures that leverage Actifio's integrations with Microsoft Azure Blob object storage and Azure managed disks to deliver cost-optimized or performance-optimized DR. Enterprises run SAP HANA for their mission-critical applications, and Azure compute is a class-leading IaaS platform for running enterprise workloads. SAP applications are susceptible to data corruption, accidental deletion, and security threats such as ransomware attacks; therefore, every SAP HANA installation needs to be supported by an enterprise backup and disaster recovery solution.
Actifio and SAP HANA
Actifio Virtual Data Pipeline technology enables businesses to protect, access, and move their data faster and more efficiently, simply by decoupling data from the underlying storage. The traditional way of backing up SAP HANA is the Backint API. The Backint API sends backups either to a local storage cache on production (data is copied onto the backup server) or to an NFS mount point. Optionally, this data may be copied further to additional storage media in a deduplicated format. Actifio is an SAP Backint certified solution; refer to SAP Note 2031547.
An alternative to the Backint API approach is to use a solution that invokes the same savepoint API that SAP HANA uses internally to flush data consistently from memory to disk. Once the data is flushed to disk, a snapshot of the underlying disk is taken to ensure that no new writes land on it. Then, in conjunction with a bitmap connector that keeps track of changed blocks, the solution transfers only the changed blocks from production to the backup server. Since the data is captured in an application-consistent native format, recoveries and cloning data for test/dev become much easier with this approach. Actifio uses this savepoint API approach to capture SAP HANA data in an efficient, incremental-forever manner, which is used for backup, recovery, migration, and cloning. Customer benefits are:
Reduced performance impact: incremental-forever backups reduce the I/O and network impact on the production environment.
Short DB cloning times for test/dev: instant DB cloning is possible because the captured data is mounted as virtual full copies, enabling users to self-service provision multiple copies of multi-TB SAP HANA databases simultaneously.
Migration to Azure: migrating a multi-tier application like SAP S/4HANA involves migrating the database and all the other application components running in VMs and on physical servers to Azure. Actifio can capture databases, VMs, physical servers and file systems running on-premises or in any public cloud and migrate them to Azure.
Instant access to backups: because the data is captured in a native, application-consistent format, you can recover in minutes by mounting virtual full copies of the SAP HANA database.
Reference architectures
A complete Actifio deployment, as depicted in the diagram above (Fig. 2), involves provisioning the following components.
Actifio Global Manager (AGM): provides a single pane of glass to manage all Actifio Sky appliances.
AGM provides a central place to create and manage enterprise-wide Service Level Agreement (SLA) templates and to configure, monitor, and troubleshoot the Actifio copy data infrastructure. Additionally, AGM supports a rich security model that enables fine-grained control over resource organization and Role-Based Access Control (RBAC).
Actifio Sky: enables you to capture data from production systems, manage it in the most efficient way possible, and use virtual copies of the data for business requirements such as backup, test and dev, analytics, and AI.
Actifio Connector: a small-footprint, lightweight service running on SAP HANA instances and other application servers. The connector discovers the application context and integrates with application-specific APIs to provide a consistent application backup. Actifio connectors are OS-specific, not application-specific.
These components come together to deliver end-to-end backup and recovery functionality. In the reference architecture above, the Actifio connector resides on the SAP HANA instance and the other application server instances.
We recommend the reference architecture below for protection against corruption, data loss, and human or programmatic errors. In this reference architecture, we recommend deploying the Actifio Sky appliance in the same region and the same virtual network as the SAP production landscape. Backup repositories can be configured on Azure managed disk and/or Azure Blob storage. In a typical deployment, the most recent points in time (PITs) are stored on Azure managed disk due to the network proximity and high-performance media, while older PITs are tiered to Azure Blob for long-term retention. For additional resiliency, Actifio recommends configuring the Azure Blob target in a different region. The key Actifio feature is the Instant Access capability: backup data from Azure managed disk or Azure Blob can be shared in just a few seconds for immediate access. Combining the instant access capability with application-aware recovery workflows enables the database admin to spin up fully functional virtual copies of the HANA database from backup in a matter of minutes.
Reference architecture for disaster recovery
We recommend the reference architecture below to protect SAP HANA applications running in Azure from disaster; it delivers protection from region-wide physical and logical failures. The critical elements of this reference architecture are:
The SAP HANA database and the other Azure instances and file systems forming the SAP landscape are all protected by the Actifio appliance in the primary region.
A second Actifio appliance is installed in the second region.
The Actifio solution is configured with one or more data replication policies:
OnVault: by setting up an OnVault pool (which uses Azure Blob as a backup repository), data can be periodically copied to an Azure Blob container in another region.
StreamSnap: StreamSnap replication allows copying data from an Actifio appliance in one region to another region, into a storage pool consisting of Azure managed disk.
Testing results
Target restore VM details: M128s (128 vCPU, 2048 GB RAM), HANA 2 SPS 04, data size 1.3 TB.
Source database Actifio configuration: Actifio AGM on Standard D4s v3 (4 vCPU, 16 GiB memory); Actifio Sky on Standard D8s v3 (8 vCPU, 32 GiB memory).
Database mount (backup image) timings: 10 minutes 40 seconds to mount 1.3 TB of data. The database mount timing includes the time to start up HANA in the target VM. Once the database is mounted, all the data is accessible.
Before unmounting the backup image, the data needs to be copied to the target database volume; this can happen in the background while the system remains functional.
Summary
Actifio's incremental-forever, application-consistent, native-format capture capabilities provide low RPO, low RTO, and shorter cloning times for SAP HANA. An LVM (Logical Volume Manager) disk configuration is a prerequisite for leveraging Actifio VDP. The Actifio Backint backup solution can be used for non-LVM storage options such as Azure NetApp Files (ANF) or when mdadm is used. This blog focuses on Actifio VDP for SAP HANA; we are working on a whitepaper that will cover Actifio data protection for other databases (AnyDB).
Authored by: Siddhartha Rabindran, Principal SAP Technical Specialist, Microsoft; Anbu Govindasamy, Principal SAP Cloud Solution Architect, Microsoft; Raj Hosamani, Senior Product Manager, Actifio; Srikanth Shetkar, Actifio GO Ops Lead
How to manage Linux OS user id/group centrally
Dear all, we have many SAP systems running on Azure Linux VMs. We now have a requirement to manage all the OS users and groups centrally. For example, an SAP system on Linux has the user ID sidadm, which is a member of the group sapsys; we don't want that user ID and group to be local to one particular VM. We know that on Linux these user IDs and groups can be managed by an LDAP server, but what is the best practice in Azure for managing them? Any suggestions and recommendations? Thanks in advance. Debi Prasad Pal
Proximity Placement Groups in Azure: Testing the Impact on Query Performance in HANA
Hello all, I have worked in SAP performance optimization for many years and would like to share a case study about a performance issue on Azure, for the benefit of other users working in similar areas.
Network latency in SAP environments: network latency is a very important metric to measure, with a direct impact on the runtime of jobs and user transactions. Many SAP applications (custom and standard) run queries as part of their application logic that require multiple round trips to the database, depending on the program design; an example is a SELECT ... UP TO 1 ROWS inside a loop. A slight increase in latency multiplies by the number of loop iterations fetching records from the database, as the small sketch below illustrates. During a migration of SAP to an IaaS or on-premises provider, it is difficult to identify such a change at the application level if the change per record is very small, yet it can add up many-fold when the logic runs in a loop. If the change is large, it is picked up by many metrics in Solution Manager, such as an increase in DB response time, so it can be identified easily.
The detailed case study is published on SAP Blogs at Proximity Placement Groups in Azure: Testing the Impact on Query Performance in HANA
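As a rough illustration of the latency point above, here is a minimal Python sketch; the round-trip count and latency delta are hypothetical figures for illustration, not measurements from the case study.

```python
# Extra runtime caused purely by a change in per-round-trip network latency,
# e.g. a SELECT ... UP TO 1 ROWS executed once per loop iteration.
def added_runtime_seconds(round_trips: int, latency_delta_ms: float) -> float:
    return round_trips * latency_delta_ms / 1000.0

# 500,000 loop iterations and +0.3 ms per database round trip
print(added_runtime_seconds(500_000, 0.3))   # -> 150.0 seconds of additional runtime
```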
Quick Guide to SAP on Azure SLA and OLA
SAP service managers, owners and leaders for SAP on Azure usually ask these questions: "What is Microsoft's Service Level Agreement (SLA) to us for SAP on Azure?" and "What does this Microsoft SLA mean to us? Does it encompass everything SAP, up to the application layer?" Please see the detailed article: Quick Guide to SAP on Azure SLA and OLA. Regards, Chin Lai
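One way to reason about the second question: the platform SLA applies to individual infrastructure components, while the availability the SAP application layer experiences is at best the product of every component in the chain, and the pieces you operate yourself (OS, HANA, SAP kernel, operations) are covered by your own OLA rather than the cloud SLA. A minimal sketch with purely hypothetical availability figures; the actual SLA numbers are in the linked article and the official Azure SLA pages.

```python
# Composite availability across a chain of serially dependent components
# (all figures below are hypothetical placeholders, not published SLAs).
component_slas = {
    "HANA DB VMs (zone / availability-set pair)": 0.9999,
    "SAP application server VMs": 0.9995,
    "Shared storage / NFS": 0.9999,
}

composite = 1.0
for sla in component_slas.values():
    composite *= sla

print(f"Best-case composite infrastructure availability: {composite:.4%}")
# Anything above the infrastructure (OS, database, SAP application operations)
# falls under your own OLA, not the cloud provider's SLA.
```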
Recent Blogs
- Efficient and fast data protection and recovery of SAP HANA running on Azure NetApp Files with SnapCenter: introduction, configuration, backup and restore. Ensure the highest level of data protection ... (Nov 05, 2024)
- In the ever-evolving world of cloud computing, maximizing performance and efficiency is crucial for businesses leveraging virtual machines (VMs) on platforms like Microsoft Azure, especially for high... (Nov 01, 2024)