Update: On 12 September 2024 we announced the General Availability of Hyperscale elastic pools. For more details, please read the GA announcement.
We are excited to announce the public preview (in selected regions) of premium-series hardware for Azure SQL Database Hyperscale elastic pools.
Price-performance optimization with Hyperscale elastic pools
Azure SQL Database Hyperscale elastic pools (“Hyperscale elastic pools”) enable software as a service (SaaS) developers to optimize the price-performance ratio for a group of Hyperscale databases by combining the cloud-native, highly scalable Hyperscale architecture with the cost-effectiveness of elastic pools. Each database in a Hyperscale elastic pool retains its isolation from a security perspective, while benefiting from the cost-effectiveness of resource pooling.
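To illustrate resource pooling, here is a minimal PowerShell sketch, assuming the Az.Sql module and hypothetical resource and database names, that moves existing Hyperscale databases into a shared elastic pool so they draw on the pool’s resources:

```powershell
# Hypothetical names; assumes Az.Sql is installed and you are signed in (Connect-AzAccount).
$rg     = "my-example-rg"
$server = "my-example-sql-svr"

# Move two existing Hyperscale databases into the pool so they share its compute.
foreach ($db in @("tenant_db_01", "tenant_db_02")) {
    Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $server `
        -DatabaseName $db -ElasticPoolName "my_hs_pool"
}
```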
Higher performance and scalability with new hardware configurations
Hyperscale elastic pools were initially released in public preview with support for standard-series (“Gen5”) hardware. Now, with the premium-series (“PRMS”) hardware configurations for Hyperscale elastic pools, workloads benefit from the latest generation of Intel® Xeon® Scalable processors (3rd generation, Ice Lake) and AMD EPYC™ 7763v (Milan) processors. These options have been generally available for Hyperscale single databases for some time.
PRMS and MOPRMS hardware
- With premium-series (“PRMS”) hardware, Hyperscale elastic pools scale up to 128 vCores, bringing a tremendous amount of compute to your most demanding workloads. We are also introducing a new 64-vCore option, a useful “stepping stone” between the existing 40-vCore and 80-vCore options. Hyperscale elastic pools on PRMS hardware cost the same as on standard-series hardware.
- With premium-series, memory-optimized (“MOPRMS”) hardware, we offer twice the memory-to-vCore ratio of the other hardware configurations. This means that Hyperscale elastic pools on MOPRMS hardware can serve demanding analytical workloads, which are typically memory intensive. Pricing for MOPRMS hardware is accordingly higher. For more information on the resource limits for Hyperscale elastic pools with the new hardware configurations, refer to the documentation; a PowerShell sketch for creating pools on the new hardware follows this list.
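As a quick illustration, here is a hedged PowerShell sketch that creates pools on the new hardware. Resource names are hypothetical; “PRMS” matches the compute generation value used later in this post, while “MOPRMS” is our assumption for the memory-optimized value, so please verify it against the documentation:

```powershell
# Create a 64-vCore premium-series Hyperscale pool (hypothetical resource names).
New-AzSqlElasticPool -ResourceGroupName "my-example-rg" -ServerName "my-example-sql-svr" `
    -ElasticPoolName "prms_pool" -Edition "Hyperscale" -ComputeGeneration "PRMS" -VCore 64

# "MOPRMS" is assumed here as the value for premium-series, memory-optimized hardware.
New-AzSqlElasticPool -ResourceGroupName "my-example-rg" -ServerName "my-example-sql-svr" `
    -ElasticPoolName "moprms_pool" -Edition "Hyperscale" -ComputeGeneration "MOPRMS" -VCore 8
```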
See PRMS in action!
Here are some test numbers from a CPU-intensive, OLTP-like workload showing the performance of PRMS elastic pools. For this test, we first ran the workload on a Hyperscale elastic pool (on standard-series hardware) with 8 databases in it. Then we scaled the elastic pool to premium-series hardware and ran the exact same workload again. We then compared the average requests per second (RPS) between the two tests. The results are summarized below.
| SLO | # of active DBs | Max vCores per DB | # of connections per database | RPS, standard-series ("Gen5") | RPS, premium-series ("PRMS") | % improvement in RPS for PRMS |
| --- | --- | --- | --- | --- | --- | --- |
| 24 vCore | 4 | 4 | 40 | 39,933 (67% CPU) | 49,037 (66% CPU) | 22.8% |
| 40 vCore | 4 | 8 | 80 | 85,491 (80% CPU) | 98,776 (68% CPU) | 15.5% |
| 80 vCore | 8 | 10 | 120 | 156,898 (100% CPU) | 203,613 (84% CPU) | 29.8% |
For such CPU-intensive workloads, PRMS hardware delivers a notable improvement in the overall throughput of this OLTP-like workload, while still leaving some CPU capacity to spare (as seen in the % CPU usage).
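If you run a similar comparison, one way to observe pool-level CPU usage over time is through Azure Monitor metrics, for example with the Az.Monitor module (the resource ID below uses placeholder names):

```powershell
# Placeholder subscription, resource group, server, and pool names.
$poolId = "/subscriptions/<subscription-id>/resourceGroups/my-example-rg" +
          "/providers/Microsoft.Sql/servers/my-example-sql-svr/elasticPools/my_hs_pool"

# Average CPU percentage for the pool over the last hour, in 5-minute intervals.
Get-AzMetric -ResourceId $poolId -MetricName "cpu_percent" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -TimeGrain ([TimeSpan]::FromMinutes(5)) -AggregationType Average
```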
See MOPRMS in action!
Similarly, we have seen notable improvements in the performance of analytical workloads involving fewer, but much more expensive, queries that use highly parallel execution plans and access large amounts of data. To demonstrate this, we ran 4 instances of the HammerDB TPROC-H workload concurrently against 4 scale factor 30 (SF30) TPROC-H databases in a single 24-vCore elastic pool. The elastic pool itself contained 8 databases (of which 4 were active at any given time) and was configured with a maximum of 6 vCores per database.
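For reference, this pool configuration can be expressed in PowerShell roughly as follows (hypothetical names; the per-database cap uses the DatabaseVCoreMax parameter):

```powershell
# A 24-vCore standard-series Hyperscale pool capped at 6 vCores per database.
New-AzSqlElasticPool -ResourceGroupName "my-example-rg" -ServerName "my-example-sql-svr" `
    -ElasticPoolName "tproch_pool" -Edition "Hyperscale" -ComputeGeneration "Gen5" `
    -VCore 24 -DatabaseVCoreMin 0 -DatabaseVCoreMax 6
```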
Two tests were done: one with standard-series hardware and one with MOPRMS. This workload touches a large amount of data and hence exercises the data I/O path heavily, so it performs much better when data is in memory or on the local high-speed SSD of the elastic pool compute.
We can also compare elastic pool level resource usage between the two tests. First, as seen in the chart below, data I/O was the dominant resource usage pattern in the Gen5 tests; consequently, CPU usage was relatively low:
In contrast, the MOPRMS tests showed negligible data I/O. SQL Query Store wait statistics (second chart below) show that parallelism and CPU were the dominant resource usage patterns, indicating that the workload was able to run with data in cache throughout:
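If you want to inspect such wait statistics in your own databases, one way is to aggregate Query Store wait data with a short T-SQL query, run here via Invoke-Sqlcmd from the SqlServer module (server, database, and credentials are placeholders):

```powershell
# Placeholder server, database, and credentials; assumes the SqlServer module.
Invoke-Sqlcmd -ServerInstance "my-example-sql-svr.database.windows.net" `
    -Database "tproch_db_01" -Username "<user>" -Password "<password>" -Query @"
SELECT TOP (10)
       ws.wait_category_desc,
       SUM(ws.total_query_wait_time_ms) AS total_wait_ms
FROM sys.query_store_wait_stats AS ws
GROUP BY ws.wait_category_desc
ORDER BY total_wait_ms DESC;
"@
```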
To recap, we expect PRMS and MOPRMS to offer significant performance benefits for many workloads. As always, performance results for your workloads will vary, and we encourage similar performance testing to determine the improvement for your specific workload.
Getting Started
The new hardware configurations are available in a variety of regions. For the current list, please refer to the documentation.
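You can also check what a given region supports from PowerShell; the exact output shape varies by Az.Sql version, so treat this as a starting point:

```powershell
# Lists SQL Database capabilities (editions, service objectives, hardware families)
# for a region; look for Hyperscale elastic pool support on PRMS/MOPRMS hardware.
Get-AzSqlCapability -LocationName "eastus"
```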
Create a new PRMS Hyperscale elastic pool
To create a new Hyperscale elastic pool using premium-series (PRMS) hardware, click the “Change configuration” link on the main “Create SQL Elastic pool” page in the Azure portal:
In the next screen, select the appropriate hardware as per your workload’s needs.
Here’s how you can use PowerShell to create a new 8-vCore Hyperscale elastic pool with PRMS hardware:
```powershell
New-AzSqlElasticPool -ResourceGroupName "my-example-rg" -ServerName "my-example-sql-svr" `
    -ElasticPoolName "my_hs_pool" -Edition "Hyperscale" -ComputeGeneration "PRMS" -VCore 8
```
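To verify the result, you can read the pool back; note that property names such as Family and Capacity may vary slightly across Az.Sql versions:

```powershell
# Confirm the pool's edition, hardware family, and vCore capacity (hypothetical names).
Get-AzSqlElasticPool -ResourceGroupName "my-example-rg" -ServerName "my-example-sql-svr" `
    -ElasticPoolName "my_hs_pool" | Select-Object ElasticPoolName, Edition, Family, Capacity
```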
Update an existing Hyperscale elastic pool from standard-series hardware to premium-series hardware
An existing Hyperscale elastic pool can be changed to premium-series hardware, subject to regional availability. Simply click the “Change configuration” link within the Configure blade in the Azure portal:
Then, in the “SQL Hardware configuration” page (as shown before), select Premium-series or Premium-series, memory optimized according to the needs of your workload. The scaling operation runs asynchronously, and you can expect only minimal impact on the workload at the very end, as described here.
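The same hardware change can be scripted with Set-AzSqlElasticPool. Here is a sketch with hypothetical names; as noted earlier, “MOPRMS” is our assumption for the memory-optimized compute generation value:

```powershell
# Scale an existing Hyperscale pool to premium-series hardware.
Set-AzSqlElasticPool -ResourceGroupName "my-example-rg" -ServerName "my-example-sql-svr" `
    -ElasticPoolName "my_hs_pool" -ComputeGeneration "PRMS"

# Or to premium-series, memory optimized ("MOPRMS" assumed; verify in the documentation).
Set-AzSqlElasticPool -ResourceGroupName "my-example-rg" -ServerName "my-example-sql-svr" `
    -ElasticPoolName "my_hs_pool" -ComputeGeneration "MOPRMS"
```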
Conclusion
PRMS and MOPRMS hardware for Hyperscale elastic pools provide the best performance yet for your cloud-native applications. We encourage you to evaluate them in this preview release. We are eager to hear about your experiences and feedback, either here as a comment or at the SQL feedback site. And if you have any questions or suggestions, do leave us a comment and we will get back to you as soon as possible!