User Profile
Amin_Omidy
Iron Contributor
Joined 3 years ago
Recent Discussions
A Comprehensive Guide to Multi-Identity Provider Integration with SAP IAS
Check out my blog post, where I provide a detailed, step-by-step guide to implementing Single Sign-On (SSO) through SAP Identity Authentication Service (IAS) across multiple Identity Providers. Addressing an urgent and ongoing need, the post takes a practical, hands-on approach to implementing SSO in a cloud-native environment.

Deciphering Seamless SAML Single Sign-On: A Comprehensive Guide to Multi-Identity Provider Integration with SAP IAS as Your Proxy for S/4 HANA and Beyond (Part 1)
https://blogs.sap.com/2023/11/15/deciphering-seamless-saml-single-sign-on-a-comprehensive-guide-to-multi-identity-provider-integration-with-sap-ias-as-your-proxy-for-s-4-hana-and-beyond-part-1/

Thanks,
Amin
Re: SAP S/4HANA choose best IaaS

Hi RaadAlrawi,

I believe it should be the M-series for sure, to give you better flexibility for caching.

For Azure standard storage, the possible cache types are:
- None
- Read
- Read/Write

To get consistent and deterministic performance, set the caching on standard storage to None for all disks that contain DBMS-related data files, log and redo files, and tablespaces. The caching of the base VHD can remain at the default.

For Azure premium storage, the following caching options exist:
- None
- Read
- Read/Write
- None + Write Accelerator (Azure M-series VMs only)
- Read + Write Accelerator (Azure M-series VMs only)

For premium storage, the recommendation is to use Read caching for the data files of the SAP database and no caching for the disks that hold the log files. For M-series deployments, use Azure Write Accelerator for your DBMS deployment.

Reference: https://learn.microsoft.com/en-us/azure/virtual-machines/workloads/sap/dbms_guide_general

For CPU sizing, the general guidance is to target an average CPU utilization of 65%; if average utilization is below 30%, the CPU is considered oversized. In particular, SAP Note 1872170 documents the reports /SDF/HDB_SIZING and ZNEWHDB_SIZE, which help estimate the memory and disk space requirements for the database tables of Business Suite on HANA and S/4HANA.

Thanks,
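If it helps, here is a minimal sketch of how those caching settings could be applied with the Azure SDK for Python (azure-mgmt-compute). The resource group, VM name, and LUN assignments are placeholders I made up for illustration, and toggling Write Accelerator in place may require the VM to be deallocated, so treat this as a sketch of the settings rather than a ready-to-run script:

```python
# Sketch only: placeholder names, assumed LUN layout.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"
resource_group = "rg-sap-prod"        # assumed resource group
vm_name = "vm-hana-m128s"             # assumed M-series VM name

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Fetch the VM and adjust caching per the guidance above:
# data disks -> ReadOnly, log disks -> None + Write Accelerator (M-series only).
vm = client.virtual_machines.get(resource_group, vm_name)
for disk in vm.storage_profile.data_disks:
    if disk.lun in (0, 1):            # assumed LUNs holding data files
        disk.caching = "ReadOnly"
        disk.write_accelerator_enabled = False
    elif disk.lun in (2, 3):          # assumed LUNs holding log/redo files
        disk.caching = "None"
        disk.write_accelerator_enabled = True   # premium disks on M-series only

# Apply the change.
poller = client.virtual_machines.begin_create_or_update(resource_group, vm_name, vm)
poller.result()
```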
Re: Can we use Azure Premium File share for Windows Failover Cluster with Zone Redundancy

Hi Samanta,

If you are referring to a Premium file share for a NetWeaver failover cluster on Windows, you can't use blob storage and you can't use zone redundancy; you should use only LRS:
https://learn.microsoft.com/en-us/training/modules/implement-ha-sap-netweaver-anydb/7-exercise-set-up-windows-server-failover-cluster

If you are referring to a failover cluster for the database layer on Windows, you need third-party cluster software in Azure, and I doubt there is any for HANA on Windows; for SQL Server you can use a Windows cluster via SQL Server Always On:
https://learn.microsoft.com/en-us/training/modules/implement-high-availability-for-sap-workloads-azure/10-explore-sql-server-availability

For SAP enqueue replication (ERS) you must use some sort of third-party software on Windows to enable cluster failover, which Windows Server does not support by itself.

Limitation of shared disks in Azure (Microsoft quote):
- Using Windows Failover Cluster Service with a shared disk configuration for the DBMS layer is NOT supported in Azure VMs. Instead, to provide high availability on Windows, customers should consider non-shared disk solutions such as SQL Server Always On, Oracle Data Guard, or HANA System Replication.
- SIOS DataKeeper Cluster Edition is a third-party solution that you can use to emulate a shared disk for your cluster replication on Windows Server. Shared locally attached disks are not natively supported on Azure VMs.

Thanks,
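As an illustration of that LRS-only point, here is a minimal sketch (Azure SDK for Python, azure-mgmt-storage) of creating a premium FileStorage account with the Premium_LRS SKU. The resource group, account name, and region are placeholders, and this only provisions the storage account, not the file share or the cluster itself:

```python
# Sketch only: placeholder names and region.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

subscription_id = "<subscription-id>"
resource_group = "rg-sap-netweaver"   # assumed resource group
account_name = "sapascsshare01"       # assumed name, must be globally unique

client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.storage_accounts.begin_create(
    resource_group,
    account_name,
    StorageAccountCreateParameters(
        location="westeurope",        # assumed region
        kind="FileStorage",           # premium file shares require the FileStorage kind
        sku=Sku(name="Premium_LRS"),  # LRS, not ZRS, per the guidance above
    ),
)
account = poller.result()
print(account.primary_endpoints.file)
```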