For example, a highly scalable configuration would be a DS15v2 database server running SQL Server 2016 with Buffer Pool Extension and Accelerated Networking enabled, plus six D13v2 application servers:
DS15v2 database server running SQL Server 2016 SP1 with Buffer Pool Extension and Accelerated Networking enabled: 110 GB of memory for the SQL Server cache (SQL Max Memory) and another ~200 GB of Buffer Pool Extension
6 x D13v2, each with two SAP instances of 50 work processes and PHYS_MEMSIZE set to 50%, for a total of 600 work processes (6 D13v2 VMs * 2 instances per VM * 50 work processes per instance = 600)
The SAPS value for such a 3-tier configuration is around 100,000 SAPS: 30,000 SAPS for the DB layer (DS15v2) and 70,000 SAPS for the application layer (6 x D13v2).
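The work process and SAPS arithmetic above can be checked with a few lines (the SAPS figures are the estimates quoted in the text, not benchmark results):

```python
# Work process and SAPS arithmetic for the example 3-tier configuration.
app_vms = 6             # D13v2 application server VMs
instances_per_vm = 2    # SAP instances per VM
wp_per_instance = 50    # work processes per instance

total_work_processes = app_vms * instances_per_vm * wp_per_instance
print(total_work_processes)  # 600

db_saps = 30_000        # DS15v2 database layer (estimate from the text)
app_saps = 70_000       # 6 x D13v2 application layer (estimate from the text)
print(db_saps + app_saps)    # 100000
```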
3. Multiple ASCS or SQL Server Availability Groups on a Single Internal Load Balancer
Prior to the release of multiple frontend IP addresses for an Internal Load Balancer (ILB), each SAP ASCS required a dedicated 2-node cluster.
Example: a customer with SAP ECC, BW, SCM, EP, PI, SolMan, GRC and NWDI would need 8 separate 2-node clusters, a total of 16 small VMs for the SAP ASCS layer. With the release of the multiple ILB frontend IP address feature, only 2 small VMs are now required.
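The VM savings are simple arithmetic, sketched here for the eight-system landscape in the example:

```python
# Before: one dedicated 2-node cluster per SAP system's ASCS.
sap_systems = 8          # ECC, BW, SCM, EP, PI, SolMan, GRC, NWDI
nodes_per_cluster = 2

vms_before = sap_systems * nodes_per_cluster  # one cluster per system
vms_after = nodes_per_cluster                 # one shared 2-node cluster
print(vms_before, vms_after)  # 16 2
```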
A single Internal Load Balancer can now bind multiple frontend IP addresses. These frontend IP addresses can listen on different ports, such as the unique port assigned to each AlwaysOn Availability Group listener, or on the same port, such as port 445 used for Windows file shares.
A script with the PowerShell commands to set the ILB configuration is available.
Note: It is now possible to assign a frontend IP address on the ILB for the Windows Cluster internal cluster IP (the IP address used by the cluster itself). Assigning the cluster's IP address to the ILB allows the cluster admin tool and other utilities to run remotely.
Up to 30 frontend IP addresses can be allocated to a single ILB. The default in Azure is 5; a support request can be created to have this limit increased.
The PowerShell commands used to set this configuration are included in the script referenced above.
4. Encrypted Storage Accounts, Azure Key Vault for SQL TDE Keys & Azure Disk Encryption (ADE)
SQL Server supports Transparent Data Encryption (TDE). SQL Server keys can be stored securely inside the Azure Key Vault. SQL Server 2014 and earlier can retrieve keys from the Azure Key Vault with a free utility; SQL Server 2016 onwards natively supports the Azure Key Vault. It is generally recommended to encrypt a database while loading data with R3load, as the overhead involved is only ~5%. Applying TDE after an import is possible, but this will take a long time on large databases. The recommended cipher is AES-256. Backups are encrypted on TDE systems.
Azure Disk Encryption (ADE) is a technology similar to BitLocker. It is preferable not to use ADE on disks holding DBMS data files, temp files or log files; the recommended technology to secure SQL Server (or other DBMS) data files at rest is TDE (or the native DBMS encryption tool). It is strongly recommended not to use SQL Server TDE and ADE disks in combination: this may create a large overhead and is a scenario that has not been tested. ADE is useful for encrypting the OS boot disk.
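As a rough illustration of why applying TDE after an import is slow on large databases, consider the background encryption scan; the scan rate below is an illustrative assumption, not a measured figure:

```python
def tde_scan_hours(db_size_gb, scan_mb_per_sec=100):
    """Estimate hours for the TDE background encryption scan.

    scan_mb_per_sec is an illustrative assumption; real throughput
    depends on storage, CPU and the concurrent workload.
    """
    seconds = db_size_gb * 1024 / scan_mb_per_sec
    return seconds / 3600

# At an assumed 100 MB/s, a 10 TB database takes roughly 29 hours to
# encrypt after the fact, versus ~5% overhead when loading with R3load.
print(round(tde_scan_hours(10 * 1024), 1))  # 29.1
```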
A new scenario, Azure-to-Azure ASR, will be released in Q1. Scenarios replicating Hyper-V, physical or VMware servers to Azure are already Generally Available.
The key differentiators between Azure Site Recovery (ASR) and competing technologies are:
Azure Site Recovery substantially lowers the cost of DR solutions. Virtual machines are not charged for unless there is an actual DR event (such as fire, flood, power loss or a test failover). No Azure compute cost is charged for VMs that are synchronizing to Azure; only the storage cost is charged.
Azure Site Recovery allows customers to perform non-disruptive DR tests. ASR test failovers copy all the ASR resources to a test region and start up all the protected infrastructure in a private test network; this eliminates any issues with duplicate Windows computer names. Another important capability is that test failovers do not stop, impair or disrupt VM replication from on-premises to Azure: a test failover takes a "snapshot" of all the VMs and other objects at a particular point in time.
The resiliency and redundancy built into Azure far exceed what most customers and hosters are able to provide. Azure blob storage stores at least 3 independent copies of data, thereby eliminating the chance of data loss even in the event of a failure on a single storage node.
ASR "Recovery Plans" allow customers to create sequenced DR failover/failback procedures, or runbooks. For example, a customer might create an ASR Recovery Plan that first starts up Active Directory servers (to provide authentication and DNS services), then executes a PowerShell script to perform a recovery on DB servers, then starts up SAP Central Services and finally starts the SAP application servers. This allows "push button" DR.
Azure Site Recovery is a heterogeneous solution: it works with Windows and Linux, and works well with SQL Server, Oracle, Sybase and DB2.
9. Use ARM Deployments, Use Dv2 VMs, Single VM SLA and Use Premium Storage for DBMS only
Most of the new enhanced features discussed in this blog are ARM only features. These features are not available on old ASM deployments. It is therefore strongly recommended to only deploy ARM based systems and to migrate ASM systems to ARM.
Azure D-Series v2 VM types have fast, powerful Haswell processors that are significantly faster than the original D-Series.
All customers should use Premium Storage for the Production DBMS servers and for non-production systems. Premium Storage should also be used for Content Server, TREX, LiveCache, Business Objects and other IO-intensive non-NetWeaver file-based or DBMS-based applications. Premium Storage is of no benefit on SAP application servers. Standard Storage can be used for database backups or for storing archive files or interface files.
Azure now offers a financially backed SLA for single VMs. Previously an SLA was only offered for VMs in an availability set. Improvements in online patching and reliability technologies allow Microsoft to offer this feature.
10. Sizing Solutions for Azure – Don't Just Map Current VM CPU & RAM Sizing
There are a few important factors to consider when developing the sizing solution for SAP on Azure:
1. Unlike on-premises deployments, there is no requirement to provide a large sizing buffer for expected growth or changed requirements over the lifetime of the hardware. For example, when purchasing new hardware for an on-premises system it is normal to buy sufficient resources to allow the hardware to last 3-4 years. On Azure this is not required: if additional CPU, RAM or storage is needed after 6 months, it can be provisioned immediately.
2. Unlike most on-premises deployments on Intel servers, Azure VMs do not use Hyper-Threading (as at November 2016). This means the per-thread performance of Azure VMs is significantly higher than most on-premises deployments; D-Series v2 delivers more than 1,500 SAPS per thread.
3. If the current on-premises SAP application server is running on 8 CPUs and 56 GB of RAM, this does not automatically mean a D13v2 is required. Instead it is recommended to:
a. Measure the CPU, RAM, network and disk utilization
b. Identify the CPU generation on-premises; Azure infrastructure is renewed and refreshed more frequently than most customer deployments.
c. Factor in the CPU generation and the average resource utilization, and try to use a smaller VM.
4. If switching from a 2-tier to a 3-tier configuration, it is recommended to review this SAP Note. SAP systems are not supported on Azure until this SAP Note is fully implemented.
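The approach in points 2 and 3 can be sketched as a simple utilization-based calculation. The VM catalog and the per-core adjustment factor below are illustrative assumptions, not official sizing figures:

```python
# Hypothetical, simplified sizing helper: pick a target VM from measured
# utilization rather than raw on-premises CPU/RAM counts.
CATALOG = [            # (name, vCPUs, RAM in GB) - illustrative subset
    ("D12v2", 4, 28),
    ("D13v2", 8, 56),
    ("D14v2", 16, 112),
]

def suggest_vm(onprem_cpus, cpu_utilization, ram_in_use_gb, core_factor=1.5):
    """core_factor models newer, faster cores (assumption: one D-Series
    v2 core replaces ~1.5 older on-premises cores)."""
    needed_cpus = onprem_cpus * cpu_utilization / core_factor
    for name, cpus, ram in CATALOG:
        if cpus >= needed_cpus and ram >= ram_in_use_gb:
            return name
    return CATALOG[-1][0]  # fall back to the largest size in the catalog

# An 8-CPU server averaging 40% CPU with 20 GB of RAM in use may not
# need a like-for-like D13v2 at all:
print(suggest_vm(8, 0.40, 20))  # D12v2
```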
12. Upload R3load Dump Files with AzCopy, RoboCopy or FTP
The diagram below shows the recommended topology for exporting a system from an existing datacenter and importing on Azure.
The SAP Migration Monitor includes built-in functionality to transfer dump files with FTP.
Some customers and partners have developed their own scripts to copy the dump files with Robocopy.
AzCopy can also be used; this tool does not require a VPN or ExpressRoute to be set up, as AzCopy writes directly to the storage account.
13. Use the Latest Windows Image & SQL Server Service Pack + Cumulative Update
The latest Windows Server image includes all important updates and patches. It is recommended to use the latest Windows Server OS image available in the Azure Gallery.
The latest DBMS versions and patches are recommended.
We do not generally recommend deploying SQL Server 2008 R2 or earlier for any SAP system. SQL Server 2012 should only be used for systems that cannot be patched to support more recent SQL Server releases.
SQL Server 2014 has been supported by SAP for some time and is already in widespread deployment amongst SAP customers, both on-premises and on Azure.
SQL Server 2016 is supported by SAP for SAP_BASIS 7.31 and higher releases and has already been successfully deployed in Production at several large customers including a major global energy company. Support for SQL 2016 for Basis 7.00 to 7.30 is due soon.
The latest SQL Server Service Packs and Cumulative updates can be downloaded from
Below is a recommended Checklist for customers and partners to follow when migrating SAP applications to Azure.
1. Survey and inventory the current SAP landscape. Identify the SAP Support Package levels and determine if patching is required to support the target DBMS. In general, the operating system compatibility is determined by the SAP kernel and the DBMS compatibility is determined by the SAP_BASIS patch level.
Build a list of SAP OSS Notes that need to be applied in the source system, such as updates for SMIGR_CREATE_DDL. Consider upgrading the SAP kernels in the source systems to avoid a large change during the migration to Azure (e.g. if a system is running an old 7.41 kernel, update to the latest 7.45 on the source system).
2. Develop the High Availability and Disaster Recovery solution. Build a PowerPoint that details the HA/DR concept; the diagram should break the solution into the DB layer, the ASCS layer and the SAP application server layer. Separate solutions might be required for standalone components such as TREX or LiveCache.
3. Develop a Sizing & Configuration document that details the Azure VM types and storage configuration: how many Premium disks, how many data files, how data files are distributed across disks, usage of Storage Spaces, NTFS format size = 64 kb. Also document Backup/Restore and DBMS configuration such as memory settings, Max Degree of Parallelism and traceflags.
4. Network design document, including VNet, Subnet, NSG and UDR configuration.
5. Security and hardening concept. Remove Internet Explorer, create an Active Directory container for SAP service accounts and servers, and apply a firewall policy blocking all but a limited number of required ports.
6. Create an OS/DB Migration Design document detailing the Package & Table splitting concept, number of R3loads, SQL Server traceflags, Sorted/Unsorted, Oracle RowID setting, SMIGR_CREATE_DDL settings, Perfmon counters (such as BCP Rows/sec & BCP throughput kb/sec, CPU, memory), RSS settings, Accelerated Networking settings, Log File configuration, BPE settings, TDE configuration
7. Create a "Flight Plan" graph showing the progress of the R3load export/import on each test cycle. This allows the migration consultant to validate whether tunings and changes improve R3load export or import performance. X axis = number of packages complete; Y axis = hours. The flight plan is also critical during the production migration, so that planned progress can be compared against actual progress and any problem identified early.
8. Create a performance testing plan. Identify the top ~20 online reports, batch jobs and interfaces. Document the input parameters (such as date range, sales office, plant, company code etc.) and runtimes on the original source system, and compare them to the runtimes on Azure. If there are performance differences, run SAT, ST05 and other SAP tools to identify inefficient statements.
9. SAP BW on SQL Server: check this blog site regularly for new features for BW systems, including column store.
10. Audit the deployment and configuration: ensure cluster timeouts, kernels, network settings and NTFS format size are all consistent with the design documents. Set perfmon counters on important servers to record basic health parameters every 90 seconds. Audit that the SAP servers are in a separate AD container and that the container has a policy applied with the firewall configuration.
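The "Flight Plan" in step 7 reduces to a planned-versus-actual series that can be checked at each checkpoint; a minimal sketch with made-up sample numbers:

```python
# Packages completed after each elapsed hour (sample data, made up).
planned = [0, 40, 90, 150, 220, 300]
actual  = [0, 35, 80, 130, 210, 295]

def behind_schedule(planned, actual, tolerance=15):
    """Return the hours at which actual progress trails the plan by more
    than `tolerance` packages, so problems are identified early."""
    return [hour for hour, (p, a) in enumerate(zip(planned, actual))
            if p - a > tolerance]

print(behind_schedule(planned, actual))  # [3]
```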
Content from third party websites, SAP and other sources is reproduced in accordance with fair use: criticism, comment, news reporting, teaching, scholarship, and research.