Introducing the Azure SQL hub: A simpler, guided entry into Azure SQL
Choosing the right Azure SQL service can be challenging. To make this easier, we built the Azure SQL hub, a new home for everything related to Azure SQL in the Azure portal. Whether you're new to Azure SQL or an experienced user, the hub helps you find the right service quickly and decide with confidence, without disrupting your existing workflows.

For existing users: Your current workflows remain unchanged. The only visible update is a streamlined navigation pane where you access Azure SQL resources.

For new users: Start from the Azure SQL hub home page. Get personalized recommendations by answering a few quick questions or chatting with Azure portal Copilot. Or compare services side by side and explore key resources, all without leaving the portal.

One way to find the hub is through the portal navigation; searching for "azure sql" in the main search box or in the marketplace is also an efficient way to get to the Azure SQL hub.

Answer a few questions to get our recommendation, and use Copilot to refine your requirements. Get a detailed side-by-side comparison without leaving the hub.

Still deciding? Explore a selection of Azure SQL services for free. This option takes you straight to the resource creation page with a pre-applied free offer.

Try the Azure SQL hub today in the Azure portal, and share your feedback in the comments!

Convert geo-replicated databases to Hyperscale
We're excited to introduce the next improvement in Hyperscale conversion: a new preview feature that allows customers to convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can now modernize their database architecture with Hyperscale while maintaining business continuity.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers told us about the complex steps they needed to perform to convert a database to Hyperscale when the database is geo-replicated by active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that's no longer necessary. This improvement lets customers preserve their cross-region disaster recovery or read scale-out configurations while still converting to Hyperscale, which minimizes downtime and operational complexity.

This feature is especially valuable for applications that rely on failover group endpoints for connectivity. Before this improvement, if the application needed to remain available during conversion, the connection string had to be modified as part of the conversion because the failover group and its endpoints had to be removed. With this new improvement, the conversion process is optimized for minimal disruption, with telemetry showing that the majority of cutover times are under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition. Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

Customer feedback

Throughout the private preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer at Nintex, says:

We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly, allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale's ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity.

Get started

The good news is that there are no changes needed to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method remains the same as the existing conversion process for non-geo-replicated databases. All you need to make sure is that:

You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica.
If a chained geo-replication configuration exists, it must be removed before starting the conversion to Hyperscale. Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale.
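Before starting, it can help to confirm how many geo-secondaries the primary currently has. Here is a minimal T-SQL sketch; it assumes you are connected to the geo-primary database and uses the sys.dm_geo_replication_link_status view that Azure SQL Database exposes for geo-replication links.

-- Sketch: list the geo-replication links of the current (primary) database.
-- Expect exactly one row (a single geo-secondary) before converting to Hyperscale;
-- if the listed secondary has links of its own (geo-replica chaining), remove them first.
SELECT
    partner_server,
    partner_database,
    role_desc,               -- role of the local database (PRIMARY here)
    replication_state_desc,  -- e.g. CATCH_UP when the link is healthy
    replication_lag_sec
FROM sys.dm_geo_replication_link_status;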
Once the above requirements are satisfied, you can use any of the following methods to initiate the conversion process. Conversion to Hyperscale must be initiated from the primary geo-replica. The following commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

T-SQL:
ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;

PowerShell:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover

Azure CLI:
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover

Here are some notable details of this improvement:

The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.
All database configurations, such as maintenance window, zone redundancy, and backup storage redundancy, remain the same as before; both the geo-primary and the geo-secondary inherit their own earlier configuration.
A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible; however, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.
If the geo-secondary database is in an elastic pool before conversion, it is taken out of the pool and might need to be added back to a Hyperscale elastic pool separately after the conversion.

The deployment of the feature is in progress and is expected to be completed in all Azure regions in a few weeks. If you see the error "Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>'" while converting the primary to Hyperscale, wait for this new improvement to become available in your region. If you don't want to use this preview capability yet, make sure to remove any geo-replication configuration before converting your databases to Hyperscale.

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simpler steps, less downtime, and a geo-secondary that remains available during the conversion. We encourage you to try this new capability and provide your valuable feedback to help us refine this feature for general availability. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Public Preview - Data Virtualization for Azure SQL Database
Data virtualization, now in public preview for Azure SQL Database, enables you to leverage the full power of Transact-SQL (T-SQL) to seamlessly query external data from Azure Data Lake Storage Gen2 or Azure Blob Storage. It eliminates the need for data duplication or ETL processes, allowing for faster analysis and insights. Integrate external data, such as CSV, Parquet, or Delta files, with your relational database while maintaining the original data format and avoiding unnecessary data movement. Present integrated data to applications and reports as a standard SQL object or through a normal SELECT command. Data Virtualization for Azure SQL Database supports SAS tokens, managed identity, and user identity for secure access.

Data Virtualization for Azure SQL Database introduces and expands support for:

Database Scoped Credential.
External Data Source.
External File Format, with support for Parquet, CSV, and Delta.
External Tables.
OPENROWSET.
Metadata functions and JSON functions.

For enhanced security and flexibility, Data Virtualization for Azure SQL Database supports three authentication methods:

Shared access signature.
Managed identity (system-assigned managed identity and user-assigned managed identity).
User identity.

Key Benefits

Just like in SQL Server 2022 and Azure SQL Managed Instance, the key benefits of Data Virtualization for Azure SQL Database are:

Seamless data access: Query external CSV, Parquet, and Delta Lake tables using T-SQL as if they were native tables within Azure SQL Database, allowing you to off-load cold data while keeping it easily accessible.
Enhanced productivity: Reduce the time and effort required to integrate and analyze data from multiple sources.
Cost efficiency: Minimize the need for data replication and the storage costs associated with traditional data integration methods.
Real-time insights: Enable real-time data querying and insights without delays caused by data movement or synchronization.
Security: Leverage SQL Server security features for granular permissions, credential management, and control.

Example

Data Virtualization for Azure SQL Database is based on the same core principles as SQL Server's PolyBase feature, with support for Azure Data Lake Storage Gen2 (prefix adls://) and Azure Blob Storage (prefix abs://). The following example uses Azure Open Datasets, specifically the NYC yellow taxi trip records open data set, which allows public access. For private data sources, customers can leverage multiple authentication methods such as SAS tokens, managed identity, and user identity.
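For a private storage account (unlike the public open data set used below), you would typically create a database scoped credential and reference it from the external data source. The following is a minimal, illustrative sketch: the storage account, container, and SAS token are placeholders, and the exact credential options supported by this preview should be verified against the data virtualization documentation for your scenario.

-- Illustrative sketch for a private Azure Blob Storage container (placeholder names).
-- A database master key is required before creating a database scoped credential.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Option 1: shared access signature (the SECRET must not include the leading '?').
CREATE DATABASE SCOPED CREDENTIAL MyStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';

-- Option 2: managed identity (the server's identity needs read access, e.g. Storage Blob Data Reader, on the account).
-- CREATE DATABASE SCOPED CREDENTIAL MyStorageCredential WITH IDENTITY = 'Managed Identity';

-- Reference the credential from the external data source.
CREATE EXTERNAL DATA SOURCE MyPrivateDataSource
WITH (
    LOCATION = 'abs://<container>@<storageaccount>.blob.core.windows.net',
    CREDENTIAL = MyStorageCredential
);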
-- Create an external data source over Azure Blob Storage (abs)
CREATE EXTERNAL DATA SOURCE NYCTaxiExternalDataSource
WITH (LOCATION = 'abs://nyctlc@azureopendatastorage.blob.core.windows.net');

-- Use OPENROWSET to read Parquet files from the external data source
SELECT TOP 10 *
FROM OPENROWSET(
    BULK '/yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = 'NYCTaxiExternalDataSource',
    FORMAT = 'parquet'
) AS filerows;

-- Or use external tables
CREATE EXTERNAL FILE FORMAT DemoFileFormat
WITH (FORMAT_TYPE = PARQUET);

-- Create the external table
CREATE EXTERNAL TABLE tbl_TaxiRides (
    vendorID VARCHAR(100) COLLATE Latin1_General_BIN2,
    tpepPickupDateTime DATETIME2,
    tpepDropoffDateTime DATETIME2,
    passengerCount INT,
    tripDistance FLOAT,
    puLocationId VARCHAR(8000),
    doLocationId VARCHAR(8000),
    startLon FLOAT,
    startLat FLOAT,
    endLon FLOAT,
    endLat FLOAT,
    rateCodeId SMALLINT,
    storeAndFwdFlag VARCHAR(8000),
    paymentType VARCHAR(8000),
    fareAmount FLOAT,
    extra FLOAT,
    mtaTax FLOAT,
    improvementSurcharge VARCHAR(8000),
    tipAmount FLOAT,
    tollsAmount FLOAT,
    totalAmount FLOAT
)
WITH (
    LOCATION = 'yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = NYCTaxiExternalDataSource,
    FILE_FORMAT = DemoFileFormat
);

SELECT TOP 10 * FROM tbl_TaxiRides;

You can also use these capabilities in combination with other metadata functions like sp_describe_first_result_set, filename(), and filepath() (see the sketch after the Getting started note below).

Getting started

Data Virtualization for Azure SQL Database is currently available in select regions, with broader availability coming soon across all Azure regions. It is based on the same core principles as SQL Server's PolyBase feature. To learn more and get started, see the Data Virtualization for Azure SQL Database documentation.
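As a follow-up to the metadata functions mentioned above, here is a minimal illustrative sketch built on the same NYCTaxiExternalDataSource. It assumes filepath() and filename() behave here as they do for OPENROWSET in other SQL offerings (returning the values matched by the path wildcards and the source file name); treat it as a sketch to validate against the documentation for this preview.

-- Sketch: surface partition folder values and the source file name alongside the data.
SELECT TOP 10
    filerows.filepath(1) AS puYear,    -- value matched by the first wildcard in the BULK path
    filerows.filepath(2) AS puMonth,   -- value matched by the second wildcard
    filerows.filename()  AS sourceFile,
    filerows.tpepPickupDateTime,
    filerows.totalAmount
FROM OPENROWSET(
    BULK '/yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = 'NYCTaxiExternalDataSource',
    FORMAT = 'parquet'
) AS filerows;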
Azure SQL Database Hyperscale - Enhanced Performance Features Are Now Generally Available!

Increased Transaction Log Generation Rate to 150 MiB/s

The transaction log generation rate for Azure SQL Database Hyperscale, now at 150 MiB/s, delivers faster data processing and better handling of write-intensive workloads. Customers who participated in the public preview reported remarkable results, including significantly faster data loads into the database. One key area of impact was the ability to move to Hyperscale from other databases smoothly and efficiently while maintaining performance standards.

Continuous Priming for Optimal Performance

Continuous priming optimizes performance during failovers by ensuring that secondary replicas are primed with the "hot pages" of the primary replica. This innovation has drastically reduced read latencies for secondary replicas. For example, internal customer Azure DevOps has enabled continuous priming and observed substantial benefits in their RBPEX utilization, leading to enhanced operational efficiency.

Customer Success Stories

Throughout the public preview, customers reported faster bulk data imports, real-time data ingestion, and smoother transitions to Hyperscale, thanks to the increased transaction log generation rate. Similarly, continuous priming decreased read latencies for secondary replicas, benefiting read-heavy workloads and failovers. Azure DevOps serves as a prime example of success, leveraging continuous priming to optimize RBPEX usage effectively.

What does this mean for you?

These features, now available to all customers, mark a significant step forward in database technology. Whether you're considering a move to Hyperscale or looking to optimize your current setup, these enhancements offer powerful tools to meet your growing data and performance needs. The feedback from our customers during the public preview has been invaluable, and we are excited to see how these features will continue to drive success across diverse use cases.

Next Steps

We invite you to explore these new capabilities and fully leverage what Azure SQL Database Hyperscale has to offer. Please note that these features are being rolled out region by region and will be available in all regions by the end of June. As always, we are here to support you every step of the way. Share your feedback, questions, or success stories by emailing us at sqlhsfeedback AT microsoft DOT com. Together, we can continue to evolve and innovate in the world of data management.
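To see whether a workload is approaching the log generation cap (and would therefore benefit from the higher 150 MiB/s rate), one option is to watch the log-write resource consumption reported by the database. A minimal sketch follows; it uses the sys.dm_db_resource_stats view, which reports avg_log_write_percent as a percentage of the tier's log rate limit over recent 15-second intervals.

-- Sketch: recent log-write utilization relative to the service tier's log rate limit.
-- Sustained values near 100 indicate the workload is throttled by the log generation rate.
SELECT TOP 20
    end_time,
    avg_log_write_percent,
    avg_cpu_percent,
    avg_data_io_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;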
Simplified & lower pricing for Azure SQL Database and Azure SQL Managed Instance backup storage

Today, as you deploy your Azure SQL database or Azure SQL managed instance, one of the important decisions to make is the choice of backup storage redundancy (BSR). This choice matters because the availability of your database depends on the availability of your backups. Here's why: consider a scenario where your database has high availability configured via zone redundancy, but your backups are configured as non-zone redundant. In the event of a zonal failure, your database fails over to another zone within the region, but your backups won't, because of their storage setting. In the new zone, the backup service attempts to back up your database but cannot reach the backups in the zone where the failure happened, causing the transaction log to fill up and eventually impacting the availability of the database itself.

As you create an Azure SQL database, the choices for backup storage redundancy are:

Locally Redundant Storage (LRS)
Zone Redundant Storage (ZRS)
Geo Redundant Storage (GRS)
Geo Zone Redundant Storage (GZRS)

Each of these storage types provides different levels of durability, resiliency, and availability for your databases and database backups. Not surprisingly, each storage type also has a different price, and the price increases significantly as the protection level increases, with GZRS costing almost 4-5x LRS. Choosing between resilience and cost optimization is an extremely difficult choice that the database owner must make.

We are thrilled to announce that, starting from November 1, 2024, backup storage pricing is streamlined and simplified across Azure SQL Database and Azure SQL Managed Instance. Bonus – we even reduced the prices 😊

The price changes apply to the backup storage redundancy configuration for both point-in-time and long-term retention backups, across the following tiers:

Azure SQL Database: General Purpose, Business Critical, Hyperscale
Azure SQL Managed Instance: General Purpose, Business Critical, Next Generation General Purpose (preview)

As we made the changes, these were the principles we adhered to:

No price increase.
BSR pricing for ZRS is reduced to match the BSR pricing for LRS.
BSR pricing for GZRS is reduced to match the BSR pricing for GRS.
BSR pricing for GRS/GZRS will be 2x that of LRS/ZRS.

Type of backups | What is changing
PITR | BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance, except for the Azure SQL Database Hyperscale service tier. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance.
LTR | BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance.
As an example, let's take East US as the region and look at the backup storage redundancy pricing for point-in-time restore storage before and after the changes.

For the General Purpose and Business Critical service tiers, the pricing is now:

Backup Storage Redundancy | Current price | New price | Price change
LRS | $0.10 | $0.10 | None
ZRS | $0.125 | $0.10 | 20% less
GRS | $0.20 | $0.20 | None
GZRS | $0.34 | $0.20 | 41% less

For the Hyperscale service tier, the new pricing is:

Backup Storage Redundancy | Current price | New price | Price change
LRS | $0.08 | $0.08 | None
ZRS | $0.10 | $0.10 | None
GRS | $0.20 | $0.20 | None
GZRS | $0.34 | $0.20 | 41% less

Similarly, backup storage redundancy prices for long-term retention backups in East US are now:

Backup Storage Redundancy | Current price | New price | Price change
LRS | $0.025 | $0.025 | None
ZRS | $0.0313 | $0.025 | 20% less
GRS | $0.05 | $0.05 | None
GZRS | $0.0845 | $0.05 | 41% less

As a customer, the decision now becomes much easier:

If you need regional resiliency, choose Zone Redundant Storage (ZRS).
If you need regional and/or geo resiliency, choose Geo Zone Redundant Storage (GZRS).
If the Azure region does not support availability zones, choose Locally Redundant Storage for regional resiliency and Geo Redundant Storage for geo resiliency, respectively.

Please note: The Azure pricing page and Azure pricing calculator will be updated with these new prices soon. The actual pricing meters have already been updated. Additionally, the LTR pricing change for Hyperscale will be in effect from January 1, 2025.
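If you decide to change the redundancy for new deployments, the setting can be specified at database creation time. Below is a minimal T-SQL sketch; it assumes the BACKUP_STORAGE_REDUNDANCY option of CREATE DATABASE in Azure SQL Database (the same choice is available in the portal, in PowerShell via -BackupStorageRedundancy, and in Azure CLI via --backup-storage-redundancy), so verify the exact option names and accepted values against the current documentation for your tier.

-- Sketch: create a General Purpose database with zone-redundant backup storage.
-- Run while connected to the master database of the logical server.
-- Assumed accepted values: 'LOCAL', 'ZONE', 'GEO', 'GEOZONE'.
CREATE DATABASE WideWorldImporters
(
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_Gen5_2'
)
WITH BACKUP_STORAGE_REDUNDANCY = 'ZONE';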
Conversion to Hyperscale: Now generally available with enhanced efficiency

We are excited to announce the general availability (GA) of the latest improvements in the Azure SQL Database conversion process to Hyperscale. These improvements bring shorter downtime, better control, and more visibility to the Hyperscale conversion process, making it easier and more efficient for our customers to switch to Hyperscale.

Key enhancements

We received feedback from customers about longer-than-expected downtime, lack of visibility, and unpredictable cutover time during database conversion to Hyperscale. In response, we have made key improvements in this area.

1. Shorter cutover time

Prior to this improvement, the cutover time depended on the database size and workload. With the improvement, we have significantly reduced the average cutover time (the effective unavailability of the database for the application) from about six minutes, sometimes extending to thirty minutes, to less than one minute.

2. Higher log generation rate

By improving the synchronization mechanisms between source and destination while the conversion is in progress, we now support a higher log generation rate on the source database, ensuring that the conversion can complete successfully with write-intensive workloads. This enhancement ensures a smoother and faster migration experience, even for high-transaction-rate environments. We now support up to 50 MiB/s log generation rate on the source database during conversion. Once converted to Hyperscale, the supported log generation rate is 100 MiB/s, with a higher rate of 150 MiB/s in preview.

3. Manual cutover

One of the most significant improvements is the introduction of a customer-controlled cutover mode called manual cutover. This gives customers more control over the conversion process, enabling them to schedule and manage the cutover at a time of their choice. You can perform the cutover within 24 hours once the conversion process reaches the "Ready to cutover" state.

4. Enhanced progress reporting

Improved progress reporting capabilities provide detailed insights into the conversion process. Customers can now monitor the migration status in real time, with clear visibility into each step of the process. Progress reporting is available via T-SQL, REST API, PowerShell, Azure CLI, or in the Azure portal. Detailed progress information about conversion phases provides greater transparency and control over the process.

How to use it?

All the improvements are applied automatically. One exception is the manual cutover mode, where you need to use a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API when initiating the conversion process. The Azure portal also provides a new option to select manual cutover, as shown in the image below. Granular progress reporting is available irrespective of the cutover mode.

One of our customers said: "The migration to Hyperscale using the improvements was much easier than expected. The customer-controlled cutover and detailed progress reporting made the process seamless and efficient."

For more information, see our documentation: Convert a Database to Hyperscale - Azure SQL Database | Microsoft Learn

Conclusion

We are thrilled to bring these enhancements to our customers and look forward to seeing how they will transform their Hyperscale conversion experience. This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility.
You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Improving the conversion to Hyperscale with greater efficiency
Update: On 09 April 2025 we announced the general availability of this improvement. For more details, please read the GA announcement.

We are thrilled to announce the latest improvement in the Azure SQL Database conversion process to Hyperscale. This update, now available in public preview, streamlines the database conversion process, reducing downtime and offering greater control and visibility to our customers. Let's dive into the enhancements and what they mean for you.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers reported longer than expected downtime during conversion, no insight into the current state of the conversion, and unpredictable cutover time. We acted on the feedback and made several key improvements in this release.

1. Shorter cutover time

One of the most significant enhancements in this improvement is the reduction in cutover times when converting a database to Hyperscale. This improvement ensures that the cutover process is faster and more efficient, minimizing downtime and connectivity disruptions. Based on our telemetry, the 99th percentile of cutover time has been reduced from about 6 minutes to less than about 1 minute.

2. Support for higher log generation rate

During the migration, the new process supports a higher log generation rate on the source database, ensuring that the conversion can handle more write-intensive workloads and complete successfully. In the previous version, database conversions were not able to finish when log generation on the source was more than 8 MiB/s continuously throughout the conversion process. With this improvement, we can now support up to 50 MiB/s log generation on the source and still succeed. This was achieved by improving the synchronization mechanisms between source and destination while the conversion is in progress.

3. Manual cutover

A new option introduced with this improvement is manual cutover. This allows customers to initiate the conversion process and pause it when the database is ready to cut over to Hyperscale, giving them up to 72 hours to perform the cutover at a time that best suits their operational needs. If the manual cutover is not completed within the given timeframe, the process is automatically canceled and the database remains on the original service tier, without any data loss. If the new parameter is not passed, the cutover experience is the same as before, i.e., automatic cutover as soon as the Hyperscale database is ready.

4. Granular progress reporting

Another improvement in the conversion process is that customers can now monitor the entire conversion at a granular level. Whether using T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal, detailed progress information about conversion phases is available, providing greater transparency and control over the process.

Customer feedback

Throughout the private preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. John Nafa, Cloud Architect, Asurgent says:

The new database conversion experience from Microsoft has been incredibly seamless and efficient, making the conversion process to Azure SQL Database Hyperscale smooth and straightforward. The progress reporting and manual cutover features were especially valuable, providing real-time insights and ensuring a smooth transition. It's been a pleasure working with this improvement, and I'm excited to see it become available to a wider audience.
Get started

Of the four key improvements mentioned above, most are automatic. To use the manual cutover option, you need to pass a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API when initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode.

Use manual cutover

Let us go through the new options available in the various interfaces with this improvement.

Azure portal

To use manual cutover, a new Cutover mode option is provided in the Azure portal. The following screenshot shows the steps to convert a database to Hyperscale; the new option is shown under step 3. If you have not seen this option in the Azure portal yet, don't worry. We are enabling this capability now and you can expect it to be available within days.

Commands

A new parameter has been introduced in each interface to initiate the conversion process with the manual cutover option. The following commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore serverless Hyperscale database.

T-SQL:
ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_S_Gen5_8') WITH MANUAL_CUTOVER;

PowerShell:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_S_Gen5_8" -ManualCutover

Azure CLI:
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_S_Gen5_8 --manual-cutover

Note: Additional parameters such as backup storage redundancy or zone redundancy can also be added. Refer to the documentation for Set-AzSqlDatabase (Az.Sql) | Microsoft Learn (for PowerShell) and az sql db | Microsoft Learn (for Azure CLI). The REST API also has new properties, manualCutover and performCutover. Refer to Databases - Create Or Update - REST API (Azure SQL Database) | Microsoft Learn for more details.

Monitor the conversion progress

The progress of this conversion can be monitored using various interfaces. Detailed phase information is available in the dynamic management view (DMV) sys.dm_operation_status for those using T-SQL. Similar command options are available for PowerShell and Azure CLI users.

Azure portal

The progress of the conversion process can be seen by clicking the Details hyperlink, as shown in the screenshot at step 2.

Monitoring programmatically

As a part of this improvement, we have introduced new columns in the sys.dm_operation_status DMV called phase_code, phase_desc, and phase_info, which are populated during the conversion process. Refer to the documentation for more details. Similarly, new output columns, phase and phaseInformation, are available under OperationPhaseDetails if using PowerShell and under operationPhaseDetails if using Azure CLI. Here is a quick reference for the commands in the various interfaces.
T-SQL:
SELECT state_desc, phase_code, phase_desc,
       JSON_VALUE(phase_info, '$.currentStep') AS currentStep,
       JSON_VALUE(phase_info, '$.totalSteps') AS totalSteps,
       phase_info, start_time, error_code, error_desc, last_modify_time
FROM sys.dm_operation_status
WHERE resource_type = 0
  AND operation = 'ALTER DATABASE'
  AND major_resource_id = 'WideWorldImporters';

PowerShell:
(Get-AzSqlDatabaseActivity -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters").OperationPhaseDetails

Azure CLI:
(az sql db op list --resource-group ResourceGroup01 --server contososerver --database WideWorldImporters | ConvertFrom-Json).operationPhaseDetails.phase

Perform manual cutover

The manual cutover, if specified, can be performed using the Azure portal or programmatically, ensuring a smooth transition to the Hyperscale tier.

Azure portal

The cutover can be initiated from the same screen where the progress of the conversion is reported.

Commands

Here is a quick reference of the commands to perform the cutover.

T-SQL:
ALTER DATABASE WideWorldImporters PERFORM_CUTOVER;

PowerShell:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -PerformCutover

Azure CLI:
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --perform-cutover

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility. We encourage you to try these features and provide your valuable feedback to help us refine this feature for general availability. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Azure SQL Database Hyperscale – lower, simplified pricing!
Azure is a cloud platform designed to simplify building powerful and economical modern applications. Azure SQL Database Hyperscale is a leading relational database service offering for cloud-born applications. In addition to a rock-solid relational database foundation, Hyperscale offers several exciting modern developer features, like REST and GraphQL endpoints, JSON data support, and external API invocation. Hyperscale was built leveraging core cloud capabilities and offers auto-scaling, multi-tiered high-performance storage, independently scalable compute, read scale-out, predictable and quick operations like database copy, and much more!

We want to ensure that all customers use Hyperscale for their application, no matter what size. Today, we're excited to announce changes to the way Hyperscale is priced. In most cases, you will see significantly lower costs, allowing you to invest the resultant savings in the resources you need to build AI-ready applications, increase the resiliency of your databases, and many other benefits unique to Hyperscale. Let's take a deeper look at this exciting announcement!

What is changing?

We are reducing the price of compute by $0.10 USD per vCore per hour (some exceptions are listed later in this post), which in many cases can be up to 35% less than the pre-announcement ("current rate") compute cost. The storage cost for Hyperscale has also been aligned with the market for developer databases and with the pricing of other Azure Database offerings, while not charging for I/O operations. The new pricing will take effect and be displayed on the Azure SQL Database pricing page and Azure pricing calculator on December 15th.

Examples of the pricing change

Here are some examples to illustrate how the new pricing works compared to the existing pricing. Note that all costs are estimated and assume a 730-hour month.

Case 1: Hyperscale single DB with 6-vCore provisioned compute, 0 HA replicas, and 50 GB of allocated storage, East US

Cost component | Existing pricing | New pricing
Compute cost | USD 1237.85 | USD 800.05
Storage cost | USD 5.00 | USD 12.50
Total cost | USD 1242.90 | USD 812.55, saving 35%

Case 2: Hyperscale single DB with 12-vCore provisioned compute, 1 HA replica, and 200 GB of allocated storage, East US

Cost component | Existing pricing | New pricing
Compute cost | USD 4075.90 | USD 3200.20
Storage cost | USD 20.00 | USD 50.00
Total cost | USD 4095.90 | USD 3250.20, saving 21%

Case 3: Hyperscale single DB with 32-vCore provisioned compute, 1 HA replica, and 18 TB of allocated storage, East US

Cost component | Existing pricing | New pricing
Compute cost | USD 10869.08 | USD 8533.88
Storage cost | USD 1,843.20 | USD 4,608.00
Total cost | USD 12712.28 | USD 13141.88, 3% higher

Conclusion

In conclusion, these pricing changes for Hyperscale are aligned with our mission to provide the best features, with the highest performance and scalability, at a great price for all our customers. Our team is here to assist with any questions you may have about these changes. Please leave a comment on this blog and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Frequently Asked Questions

When does the change take effect, and what does it impact?

The pricing changes take effect on December 15th, 2023 at 00:00 hours UTC. The changes will apply to the following resources created on or after December 15th, 2023:

Any newly created Hyperscale single provisioned compute databases.
Any (existing or new) Hyperscale single serverless databases (currently in preview).
Any (existing or new) Hyperscale elastic pools (currently in preview).
Any newly created Hyperscale elastic pooled databases (currently in preview).

What happens to my existing Hyperscale resources?

To start with, nothing changes until December 15th, 2023. Here's what will happen starting December 15th, 2023:

To provide a seamless experience without any impact on existing workloads, all existing Hyperscale single databases with provisioned compute created before December 15th, 2023, will continue to be billed at the existing rates for a period of up to 3 years (ending December 14th, 2026). Customers will receive further notice of the pricing change to their Hyperscale databases in advance of December 14th, 2026.
All existing Hyperscale single databases with serverless compute (currently in preview) will automatically switch to the new pricing starting December 15th, 2023.
All existing Hyperscale elastic pools (currently in preview) will automatically switch to the new pricing starting December 15th, 2023.

What if I update / change a database to a different deployment option?

Hyperscale allows seamless, rapid scaling of the database compute. You can also scale a Hyperscale database to move from provisioned compute to serverless compute (or the other way around). You can add an existing Hyperscale database to an elastic pool or move an elastic pooled database out of the elastic pool to a single database. Here's how your costs are impacted if you perform any of these changes on or after December 15th, 2023:

Change: A Hyperscale (serverless single, or elastic pooled) database is changed to a Hyperscale single database with provisioned compute.
Impact: The final cost of the database will be based on when the database was created. If the database was created prior to December 15th, 2023, it will be billed as per the existing pricing. If the database was created on or after December 15th, 2023, it will be billed as per the new pricing.

Change: A Hyperscale database is changed to a Hyperscale single database with serverless compute.
Impact: The database will be billed with the new pricing.

Change: A Hyperscale database is added to an elastic pool on or after December 15th.
Impact: The database's storage will be charged as per the "new" storage pricing. There is no separate compute cost for a database in an elastic pool.

Change: A Hyperscale single database with provisioned compute is scaled up or down, has high-availability replicas added or removed, or has its hardware family changed.
Impact: The pricing model remains as it was before the scaling operation. The actual costs for compute resources will change based on the scaling operation (for example, they will increase if the database is scaled up or replicas are added). Do note that if you are using reserved capacity ("reservations") and are changing the hardware family, you will need to exchange those reservations to align to the new hardware family. The costs of storage resources associated with the single database will not change due to the scaling operation itself.

Change: Any copies of a Hyperscale database created as a Hyperscale single database on or after December 15th, 2023.
Impact: The database copies will use the new pricing, regardless of when the original database was created.

Change: Any new single database created via restore or geo-replication operations on or after December 15th, 2023.
Impact: The new database will use the new pricing, regardless of when the original database was created.

Change: Any non-Hyperscale database is updated to Hyperscale on or after December 15th, 2023.
Impact: The new database will use the new pricing, regardless of when it was originally created.

See the summarized tables below for a quick reference.

Single databases

Hyperscale single databases with provisioned compute:
Before December 15th, 2023: existing provisioned compute price and existing storage prices (all databases).
On or after December 15th, 2023, for databases created or migrated to Hyperscale before December 15th, 2023: existing provisioned compute price and existing storage prices.
On or after December 15th, 2023, for databases created or migrated to Hyperscale on or after December 15th, 2023: new provisioned compute price and new storage prices.

Hyperscale single databases with serverless compute:
Before December 15th, 2023: existing serverless compute price and existing storage prices (all databases).
On or after December 15th, 2023: new serverless compute price and new storage prices, regardless of when the database was created or migrated to Hyperscale.

Elastic pools and pooled databases

Hyperscale elastic pools (compute is charged per pool; storage is charged per database):
Before December 15th, 2023: existing provisioned compute price (all pools).
On or after December 15th, 2023: new provisioned compute price, regardless of when the elastic pool was created.

Hyperscale elastic pooled databases (storage only; there is no separate compute cost for a pooled database):
Before December 15th, 2023: existing storage prices (all pooled databases).
On or after December 15th, 2023: new storage prices, regardless of when the database was created or migrated to Hyperscale.

Can I continue to use reservations for Hyperscale?

With reservations, you make a commitment to use SQL Database for a period of one or three years to get a significant discount on the compute (vCores) costs. There are no changes to the compute (vCores) pricing, and you can continue to use reserved capacity ("reservations") for Hyperscale single databases with provisioned compute and Hyperscale elastic pools.

How can I move my existing Hyperscale databases to the new pricing?

Currently, there is no in-built support to switch pricing for existing Hyperscale databases. However, you can consider one of the redeployment methods (database copy, point-in-time restore, or geo-replication) to create a new "copy" of the existing Hyperscale database (a minimal T-SQL sketch of the copy approach appears a little further below). The newly created "copy" of the database will be billed at the new pricing. If you do decide to go down this path, do consider creating the new database with zone redundancy, as described here.

Do you have any projections of likely costs when converting non-Hyperscale DBs to Hyperscale?

We recommend you use the Azure pricing calculator to compare the base cost for compute and storage. However, the cost of backups can vary depending on the nature of the workload and the backup retention settings configured. Databases in the (DTU) Basic, Standard, and Premium service tiers include backup in the base cost. When converting such databases to Hyperscale, keep in mind that backups in Hyperscale can be a significant factor in overall cost. It is only possible to determine this after sufficient testing with realistic workloads on Hyperscale, and we strongly recommend you do such testing before converting DTU service tier databases to Hyperscale.
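To illustrate the database copy route mentioned above, here is a minimal T-SQL sketch. The database and server names are placeholders; the statement is run while connected to the master database of the target logical server, and the copy is a new database billed at the new pricing.

-- Sketch: create a copy of an existing Hyperscale database on the same logical server.
-- Run from the master database of the logical server.
CREATE DATABASE WideWorldImporters_copy
AS COPY OF WideWorldImporters;

-- To copy across logical servers, qualify the source with its server name:
-- CREATE DATABASE WideWorldImporters_copy AS COPY OF contososerver.WideWorldImporters;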
Does the reduction in compute price apply to all subscription offer types?

In the case of dev/test subscriptions and related offer types, including Enterprise Dev/Test, where you were already not paying for license costs, there will not be a further reduction in the price of compute. For such subscriptions, the storage costs for Hyperscale resources will still follow the guidelines in the "When does the change take effect, and what does it impact?" and "What happens to my existing Hyperscale resources?" sections of this blog post.

Can I still use Azure Hybrid Benefit for Hyperscale?

The change in price per vCore is achieved by eliminating the software license fee for Hyperscale resources. Hence, Azure Hybrid Benefit no longer applies to the Hyperscale tier, except for Hyperscale single databases with provisioned compute which were created prior to December 15, 2023. Even for those older databases, Azure Hybrid Benefit can only be used until December 14, 2026. Note that specifying values of BasePrice or LicenseIncluded for the LicenseType parameter in APIs, SDKs, PowerShell, or the CLI is only relevant for Hyperscale single databases with provisioned compute which were created prior to December 15, 2023. These values are effectively ignored for all other types of Hyperscale resources.

Current limitations and known issues

Consider a database which was created originally as a non-Hyperscale database prior to December 15th, 2023. If this database is then migrated to Hyperscale on or after December 15th, 2023, the Cost Summary section of the Azure portal will incorrectly show the "old pricing" for this database. The Cost Summary section in the Azure portal is only intended to be an estimate; we recommend you rely on the Cost Management section in the Azure portal to review actual costs.
The Azure portal cost summary view in the greater China regions does not show the updated pricing information accurately. This is only a display issue and does not impact billing in any way. Please refer to the pricing calculator or the pricing page for accurate pricing information.