Database compatibility level 170 in Azure SQL Database and SQL database in Microsoft Fabric
Database compatibility level 170 is now the default for new databases created in Azure SQL Database and SQL database in Microsoft Fabric. The alignment of SQL versions to default compatibility levels is as follows:

100: SQL Server 2008 and Azure SQL Database
110: SQL Server 2012 and Azure SQL Database
120: SQL Server 2014 and Azure SQL Database
130: SQL Server 2016 and Azure SQL Database
140: SQL Server 2017 and Azure SQL Database
150: SQL Server 2019 and Azure SQL Database
160: SQL Server 2022, Azure SQL Database and SQL database in Microsoft Fabric
170: Azure SQL Database and SQL database in Microsoft Fabric

Once this new database compatibility default goes into effect, if you still wish to use database compatibility level 160 (or lower), please follow the instructions detailed here: View or Change the Compatibility Level of a Database. For example, you may wish to ensure that new databases created on the same logical server use the same compatibility level as other Azure SQL databases, so that query optimization and execution behavior stays consistent across the development, QA, and production versions of your databases. With this example in mind, we recommend that any database configuration scripts in use explicitly designate the COMPATIBILITY_LEVEL rather than rely on the defaults, in order to ensure consistent application behavior. For new databases supporting new applications, we recommend using the latest compatibility level, 170.

For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article Change the Database Compatibility Mode and Use the Query Store. Note that this article refers to database compatibility level 130 and SQL Server, but the same methodology applies to database compatibility level 170 for SQL Server and Azure SQL Database.

To determine the current database compatibility level, query the compatibility_level column of the sys.databases system catalog view:

SELECT [name], compatibility_level FROM sys.databases;

There may be a few questions that this announcement has not directly answered, such as:

What do you mean by "database compatibility level 170 is now the default"?
If you create a new database and don't explicitly designate the COMPATIBILITY_LEVEL, database compatibility level 170 will be used.

Does Microsoft automatically update the database compatibility level for existing databases?
No. We do not update the database compatibility level for existing databases. This is up to you as the owner of your database to do at your own discretion. That said, we highly recommend that you plan to move to the latest database compatibility level in order to leverage the improvements it enables.

I created a logical server before 170 was the default database compatibility level. What impact does this have?
The master database of your logical server will reflect the database compatibility level that was the default when the logical server was created. Even so, new databases created on this logical server will use database compatibility level 170 if the database compatibility level is not explicitly specified. The master database compatibility level cannot be changed without recreating the logical server, but having the master database at an older database compatibility level will not impact user database behavior.
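To set the compatibility level explicitly, for example from the database configuration scripts recommended above, the documented ALTER DATABASE syntax can be used; the database name below is a placeholder:

ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 170;

-- Confirm the change took effect
SELECT [name], compatibility_level FROM sys.databases WHERE [name] = 'YourDatabase';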
Would the database compatibility level change to 170 if I restore a database from a point-in-time backup taken before the default changed?
No. We will preserve the compatibility level that was in effect when the backup was performed.

Simplified & lower pricing for Azure SQL Database and Azure SQL Managed Instance backup storage
Today, as you deploy your Azure SQL database or Azure SQL managed instance, one of the important decisions to make is the choice of backup storage redundancy (BSR). It is an important decision because the availability of your database depends on the availability of your backups. Here's why. Consider a scenario where your database has high availability configured via zone redundancy, but your backups are configured as non-zone redundant. In the event of a failure in the zone, your database fails over to another zone within the region; however, your backups won't, because of their storage setting. In the new zone, the backup service attempts to back up your database but cannot reach the backups in the zone where the failure happened, causing the logs to fill up and eventually impacting the availability of the database itself.

As you create the Azure SQL database, the choices for backup storage redundancy are:

- Locally Redundant Storage (LRS)
- Zone Redundant Storage (ZRS)
- Geo Redundant Storage (GRS)
- Geo Zone Redundant Storage (GZRS)

Each of these storage types provides different levels of durability, resiliency, and availability for your databases and database backups. Not surprisingly, each storage type also has a different price, and the price increases significantly as the protection level increases, with GZRS costing almost 4-5x LRS. Choosing between resilience and cost optimization is an extremely difficult choice that the database owner must make.

We are thrilled to announce that, starting from Nov 01, 2024, backup storage pricing is streamlined and simplified across Azure SQL Database and Azure SQL Managed Instance. Bonus – we even reduced the prices 😊

The price changes apply to the Backup Storage Redundancy configuration for both point-in-time and long-term retention backups, across the following tiers of Azure SQL Database and Azure SQL Managed Instance:

- Azure SQL Database: General Purpose, Business Critical, Hyperscale
- Azure SQL Managed Instance: General Purpose, Business Critical, Next Generation General Purpose (preview)

As we made the changes, these were the principles we adhered to:

- No price increase
- BSR pricing for ZRS is reduced to match the BSR pricing for LRS
- BSR pricing for GZRS is reduced to match the BSR pricing for GRS
- BSR pricing for GRS/GZRS will be 2x that of LRS/ZRS

What is changing, by type of backup:

- Point-in-time restore (PITR) backups: BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance, except for the Azure SQL Database Hyperscale service tier. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance.
- Long-term retention (LTR) backups: BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance.
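If you want to change the backup storage redundancy of an existing database from T-SQL, a minimal sketch follows. The database name is a placeholder, and the exact BACKUP_STORAGE_REDUNDANCY clause and accepted values should be confirmed against the ALTER DATABASE documentation for Azure SQL Database:

-- Switch the database's backups to zone-redundant storage
-- (documented values include 'LOCAL', 'ZONE', 'GEO', 'GEOZONE')
ALTER DATABASE [YourDatabase] MODIFY BACKUP_STORAGE_REDUNDANCY = 'ZONE';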
As an example, let's take East US as the region and look at the pricing for backup storage redundancy for point-in-time restore storage before and after the changes.

For the General Purpose and Business Critical service tiers, the pricing would now be:

- LRS: $0.10 current, $0.10 new (no change)
- ZRS: $0.125 current, $0.10 new (20% less)
- GRS: $0.20 current, $0.20 new (no change)
- GZRS: $0.34 current, $0.20 new (41% less)

For the Hyperscale service tier, the new pricing would now be:

- LRS: $0.08 current, $0.08 new (no change)
- ZRS: $0.10 current, $0.10 new (no change)
- GRS: $0.20 current, $0.20 new (no change)
- GZRS: $0.34 current, $0.20 new (41% less)

Similarly, backup storage redundancy prices for long-term retention backups in East US would be as follows:

- LRS: $0.025 current, $0.025 new (no change)
- ZRS: $0.0313 current, $0.025 new (20% less)
- GRS: $0.05 current, $0.05 new (no change)
- GZRS: $0.0845 current, $0.05 new (41% less)

As a customer, the decision now becomes much easier for you:

- If you need regional resiliency: choose Zone Redundant Storage (ZRS).
- If you need regional and/or geo resiliency: choose Geo Zone Redundant Storage (GZRS).
- If the Azure region does not support availability zones, then choose Locally Redundant Storage for regional resiliency and Geo Redundant Storage for geo resiliency, respectively.

Please note: The Azure pricing page and Azure pricing calculator will be updated with these new prices soon. The actual pricing meters have already been updated. Additionally, the LTR pricing change for Hyperscale will be in effect from January 1, 2025.

Conversion to Hyperscale: Now generally available with enhanced efficiency
We are excited to announce the general availability (GA) of the latest improvements in the Azure SQL Database conversion process to Hyperscale. These improvements bring shorter downtime, better control, and more visibility to the Hyperscale conversion process, making it easier and more efficient for our customers to switch to Hyperscale.

Key enhancements

We received feedback from customers about longer-than-expected downtime, lack of visibility, and unpredictable cutover time during database conversion to Hyperscale. In response, we have made key improvements in this area.

1. Shorter cutover time
Prior to this improvement, the cutover time depended on the database size and workload. With the improvement, we have significantly reduced the average cutover time (the effective unavailability of the database for the application) from about six minutes, sometimes extending to thirty minutes, to less than one minute.

2. Higher log generation rate
By improving the synchronization mechanisms between source and destination while the conversion is in progress, we now support a higher log generation rate on the source database, ensuring that the conversion can complete successfully with write-intensive workloads. This enhancement ensures a smoother and faster migration experience, even for high-transaction-rate environments. We now support up to a 50 MiB/s log generation rate on the source database during conversion. Once converted to Hyperscale, the supported log generation rate is 100 MiB/s, with a higher rate of 150 MiB/s in preview.

3. Manual cutover
One of the most significant improvements is the introduction of a customer-controlled cutover mode called manual cutover. This allows customers to have more control over the conversion process, enabling them to schedule and manage the cutover at the time of their choice. You can perform the cutover within 24 hours once the conversion process reaches the "Ready to cutover" state.

4. Enhanced progress reporting
Improved progress reporting capabilities provide detailed insights into the conversion process. Customers can now monitor the migration status in real time, with clear visibility into each step of the process. Progress reporting is available via T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal. Detailed progress information about conversion phases provides greater transparency and control over the process.

How to use it?

All the improvements are applied automatically. One exception is the manual cutover mode, where you need to use a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API when initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode.

One of our customers said: "The migration to Hyperscale using the improvements was much easier than expected. The customer-controlled cutover and detailed progress reporting made the process seamless and efficient."

For more information, see our documentation: Convert a Database to Hyperscale - Azure SQL Database | Microsoft Learn

Conclusion

We are thrilled to bring these enhancements to our customers and look forward to seeing how they will transform their Hyperscale conversion experience. This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility.
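For quick reference, here is a minimal T-SQL sketch of the manual cutover flow; the database name and service objective are illustrative:

-- Start the conversion to Hyperscale and hold at the "Ready to cutover" state
ALTER DATABASE [WideWorldImporters]
    MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8')
    WITH MANUAL_CUTOVER;

-- Monitor progress, including the conversion phase details
SELECT state_desc, phase_desc, phase_info, last_modify_time
FROM sys.dm_operation_status
WHERE operation = 'ALTER DATABASE' AND major_resource_id = 'WideWorldImporters';

-- Complete the switch to Hyperscale within the 24-hour window
ALTER DATABASE [WideWorldImporters] PERFORM_CUTOVER;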
You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Improving the conversion to Hyperscale with greater efficiency
Update: On 09 April 2025 we announced the general availability of this improvement. For more details, please read the GA announcement.

We are thrilled to announce the latest improvement in the Azure SQL Database conversion process to Hyperscale. This update, now available in public preview, streamlines the database conversion process, reducing downtime and offering greater control and visibility to our customers. Let's dive into the enhancements and what they mean for you.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers complained about longer-than-expected downtime during conversion, no insight into the current state of the conversion, and unpredictable cutover time. We acted on the feedback and made several key improvements in this release.

1. Shorter cutover time
One of the most significant enhancements in this improvement is the reduction in cutover times when converting a database to Hyperscale. This improvement ensures that the cutover process is faster and more efficient, minimizing downtime and connectivity disruptions. Based on our telemetry, the 99th percentile cutover time has been reduced from ~6 minutes to less than ~1 minute.

2. Support for a higher log generation rate
During the migration, the new process supports a higher log generation rate on the source database, ensuring that the conversion can handle more write-intensive workloads and complete successfully. In the previous version, we saw that database conversions were not able to finish when log generation on the source was more than 8 MiB/s continuously throughout the conversion process. With this improvement, we can now support up to 50 MiB/s log generation on the source and still succeed. This was achieved by improving the synchronization mechanisms between source and destination while the conversion is in progress.

3. Manual cutover
A new option introduced with this improvement is manual cutover. This allows customers to initiate the conversion process and pause it when the database is ready to cut over to Hyperscale, giving them up to 72 hours to perform the cutover at a time that best suits their operational needs. If the manual cutover is not completed within the given timeframe, the process is automatically canceled and the database remains on the original service tier, without any data loss. If the new parameter is not passed, the cutover experience remains the same as earlier, i.e. automatic cutover as soon as the Hyperscale database is ready.

4. Granular progress reporting
Another improvement in the conversion process is that customers can now monitor the entire conversion process at a granular level. Whether using T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal, detailed progress information about conversion phases is available, providing greater transparency and control over the process.

Customer feedback

Throughout the private preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. John Nafa, Cloud Architect, Asurgent, says: "The new database conversion experience from Microsoft has been incredibly seamless and efficient, making the conversion process to Azure SQL Database Hyperscale smooth and straightforward. The progress reporting and manual cutover features were especially valuable, providing real-time insights and ensuring a smooth transition. It's been a pleasure working with this improvement, and I'm excited to see it become available to a wider audience."
Get started

Most of the four key improvements mentioned above are applied automatically. To utilize the manual cutover option, you need to use a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API while initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode.

Use manual cutover

Let us go through the new options available in various interfaces with this improvement.

Azure portal
To use manual cutover, a new Cutover mode option is provided in the Azure portal as part of the steps to convert a database to Hyperscale. If you have not seen this option in the Azure portal yet, don't worry. We are enabling this capability now and you can expect it to be available within days.

Commands
A new parameter has been introduced in each interface to initiate the conversion process with the manual cutover option. The following commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore serverless Hyperscale database.

T-SQL:
ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_S_Gen5_8') WITH MANUAL_CUTOVER

PowerShell:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_S_Gen5_8" -ManualCutover

Azure CLI:
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_S_Gen5_8 --manual-cutover

Note: Additional parameters, such as backup storage redundancy or zone redundancy, can also be added. Refer to the documentation Set-AzSqlDatabase (Az.Sql) | Microsoft Learn (for PowerShell) and az sql db | Microsoft Learn (for Azure CLI). The REST API also has new properties, manualCutover and performCutover. Refer to Databases - Create Or Update - REST API (Azure SQL Database) | Microsoft Learn for more details.

Monitor the conversion progress

The progress of the conversion can be monitored using various interfaces. Detailed phase information is available in the dynamic management view (DMV) sys.dm_operation_status for those using T-SQL. Similar command options are available for PowerShell and Azure CLI users.

Azure portal
Progress of the conversion process can be seen in the Azure portal by clicking the Details hyperlink.

Monitoring programmatically
As a part of this improvement, we have introduced new columns in the sys.dm_operation_status DMV called phase_code, phase_desc and phase_info, which are populated during the conversion process. Refer to the documentation for more details. Similarly, new output columns, phase and phaseInformation, are available under OperationPhaseDetails if using PowerShell, and under operationPhaseDetails if using Azure CLI. Here is the quick reference for commands via various interfaces.
T-SQL:
SELECT state_desc, phase_code, phase_desc, JSON_VALUE(phase_info, '$.currentStep') AS currentStep, JSON_VALUE(phase_info, '$.totalSteps') AS totalSteps, phase_info, start_time, error_code, error_desc, last_modify_time FROM sys.dm_operation_status WHERE resource_type = 0 AND operation='ALTER DATABASE' AND major_resource_id = 'WideWorldImporters'

PowerShell:
(Get-AzSqlDatabaseActivity -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters").OperationPhaseDetails

Azure CLI:
(az sql db op list --resource-group ResourceGroup01 --server contososerver --database WideWorldImporters | ConvertFrom-Json).operationPhaseDetails.phase

Perform manual cutover

The manual cutover, if specified, can be performed using the Azure portal or programmatically, ensuring a smooth transition to the Hyperscale tier.

Azure portal
The cutover can be initiated from the same screen where the progress of the conversion is reported.

Commands
Here is the quick reference of the commands to perform the cutover.

T-SQL:
ALTER DATABASE WideWorldImporters PERFORM_CUTOVER

PowerShell:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -PerformCutover

Azure CLI:
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --perform-cutover

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility. We encourage you to try these features and provide your valuable feedback and help us refine this feature for general availability. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

General availability: Database compatibility level 160 in Azure SQL Database
Database compatibility level 160 is now the default for new databases created in Azure SQL Database across all public regions. The alignment of SQL versions to default compatibility levels is as follows:

100: SQL Server 2008 and Azure SQL Database
110: SQL Server 2012 and Azure SQL Database
120: SQL Server 2014 and Azure SQL Database
130: SQL Server 2016 and Azure SQL Database
140: SQL Server 2017 and Azure SQL Database
150: SQL Server 2019 and Azure SQL Database
160: SQL Server 2022 and Azure SQL Database

For details about which features compatibility level 160 enables, please see Intelligent query processing in SQL databases. The IQP family includes multiple features that improve the performance of existing workloads with minimal or no implementation effort.

Once this new database compatibility default goes into effect, if you still wish to use database compatibility level 150 (or lower), please follow the instructions detailed here: View or Change the Compatibility Level of a Database. For example, you may wish to ensure that new databases created on the same logical server use the same compatibility level as other Azure SQL databases to ensure consistent query optimization and execution behavior across development, QA and production versions of your databases. With this example in mind, we recommend that any database configuration scripts in use explicitly designate the COMPATIBILITY_LEVEL rather than rely on the defaults, in order to ensure consistent application behavior. For new databases supporting new applications, we recommend using the latest compatibility level, 160.

For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article Change the Database Compatibility Mode and Use the Query Store. Note that this article refers to database compatibility level 130 and SQL Server, but the same methodology applies to database compatibility level 160 for SQL Server and Azure SQL Database.

To determine the current database compatibility level, query the compatibility_level column of the sys.databases system catalog view:

SELECT name, compatibility_level FROM sys.databases;

OK, we believe that there may be a few questions that we have not directly answered with this announcement. Maybe questions such as:

What do you mean by "database compatibility level 160 is now the default"?
If you create a new database and don't explicitly designate COMPATIBILITY_LEVEL, database compatibility level 160 will be used.

Does Microsoft automatically update the database compatibility level for existing databases?
No. We do not update the database compatibility level for existing databases. This is up to you as the owner of your database to do at your own discretion. That said, we highly recommend that you plan on moving to the latest database compatibility level in order to leverage the latest improvements that it enables.

I created a logical server before 160 was the default database compatibility level. What impact does this have?
The master database of your logical server will reflect the database compatibility level that was the default when the logical server was created. Even so, new databases created on this logical server will use database compatibility level 160 if the database compatibility level is not explicitly specified.
The master database compatibility level cannot be changed without recreating the logical server. Having the master database at an older database compatibility level will not impact user database behavior.

Would the database compatibility level change to 160 if I restore a database from a point-in-time backup taken before the default changed?
No. We will preserve the compatibility level that was in effect when the backup was performed.

Azure SQL Database Hyperscale – lower, simplified pricing!
Azure is a cloud platform designed to simplify building powerful and economical modern applications. Azure SQL Database Hyperscale is a leading relational database service offering for cloud-born applications. In addition to a rock-solid relational database foundation, Hyperscale offers several exciting modern developer features, such as REST and GraphQL endpoints, JSON data support, and external API invocation. Hyperscale was built leveraging core cloud capabilities and offers auto-scaling, multi-tiered high-performance storage, independently scalable compute, read scale-out, predictable and quick operations like database copy, and much more!

We want to ensure that all customers use Hyperscale for their application – no matter what size. Today, we're excited to announce changes to the way Hyperscale is priced. In most cases, you will see significantly lower costs – allowing you to invest the resultant savings in the resources you need to build AI-ready applications, increase the resiliency of your databases, and many other benefits unique to Hyperscale. Let's take a deeper look at this exciting announcement!

What is changing?

We are reducing the price of compute by $0.10 USD per vCore per hour (some exceptions are listed later in this post), which in many cases can be up to 35% less than the pre-announcement ("current rate") compute cost. The storage cost for Hyperscale has also been aligned with the market for developer databases and the pricing for other Azure Database offerings, while not charging for I/O operations. The new pricing will take effect and be displayed on the Azure SQL Database pricing page and Azure pricing calculator on December 15th.

Examples of the pricing change

Here are some examples to illustrate how the new pricing works compared to the existing pricing. Note that all costs are estimated and assume a 730-hour month.

Case 1: Hyperscale single DB with 6-vCore provisioned compute, 0 HA replicas and 50 GB of allocated storage, East US
- Compute cost: USD 1237.85 existing, USD 800.05 new
- Storage cost: USD 5.00 existing, USD 12.50 new
- Total cost: USD 1242.90 existing, USD 812.55 new, saving 35%

Case 2: Hyperscale single DB with 12-vCore provisioned compute, 1 HA replica and 200 GB of allocated storage, East US
- Compute cost: USD 4075.90 existing, USD 3200.20 new
- Storage cost: USD 20.00 existing, USD 50.00 new
- Total cost: USD 4095.90 existing, USD 3250.20 new, saving 21%

Case 3: Hyperscale single DB with 32-vCore provisioned compute, 1 HA replica and 18 TB of allocated storage, East US
- Compute cost: USD 10869.08 existing, USD 8533.88 new
- Storage cost: USD 1843.20 existing, USD 4608.00 new
- Total cost: USD 12712.28 existing, USD 13141.88 new, 3% higher

Conclusion

In conclusion, these pricing changes for Hyperscale are aligned with our mission to provide the best features, with the highest performance and scalability, at a great price for all our customers. Our team is here to assist with any questions you may have about these changes. Please leave a comment on this blog and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Frequently Asked Questions

When does the change take effect, and what does it impact?
The pricing changes take effect on December 15th, 2023 at 00:00 hours UTC. The changes will apply to the following resources created on or after December 15th, 2023:
- Any newly created Hyperscale single provisioned compute databases.
- Any (existing or new) Hyperscale single serverless databases (currently in preview).
- Any (existing or new) Hyperscale elastic pools (currently in preview).
- Any newly created Hyperscale elastic pooled databases (currently in preview).

What happens to my existing Hyperscale resources?
To start with, nothing changes until December 15th, 2023. Here's what will happen starting December 15th, 2023:
- To provide a seamless experience without any impact on existing workloads, all existing Hyperscale single databases with provisioned compute created before December 15th, 2023, will continue to be billed at the existing rates for a period of up to 3 years (ending December 14th, 2026). Customers will receive further notice of the pricing change to their Hyperscale databases in advance of December 14th, 2026.
- All existing Hyperscale single databases with serverless compute (currently in preview) will automatically switch to the new pricing starting December 15th, 2023.
- All existing Hyperscale elastic pools (currently in preview) will automatically switch to the new pricing starting December 15th, 2023.

What if I update / change a database to a different deployment option?
Hyperscale allows seamless, rapid scaling of the database compute. You can also scale a Hyperscale database to move from provisioned compute to serverless compute (or the other way around). You can add an existing Hyperscale database to an elastic pool, or move an elastic pooled database out of the elastic pool to a single database. Here's how your costs are impacted if you perform any of these changes on or after December 15th, 2023:

- Change: A Hyperscale (serverless single, or elastic pooled) database is changed to a Hyperscale single database with provisioned compute. Impact: The final cost of the database will be based on when the database was created. If the database was created prior to December 15th, 2023, it will be billed as per the existing pricing. If the database was created on or after December 15th, 2023, it will be billed as per the new pricing.
- Change: A Hyperscale database is changed to a Hyperscale single database with serverless compute. Impact: The database will be billed with the new pricing.
- Change: A Hyperscale database is added to an elastic pool on or after December 15th. Impact: The database's storage will be charged as per the "new" storage pricing. There is no separate compute cost for a database in an elastic pool.
- Change: A Hyperscale single database with provisioned compute is scaled up or down, has high-availability replicas added or removed, or has its hardware family changed. Impact: The pricing model remains as it was before the scaling operation. The actual costs for compute resources will change based on the scaling operation (for example, they will increase if the database is scaled up, or replicas are added). Do note that if you are using reserved capacity ("reservations") and are changing the hardware family, you will need to exchange those reservations to align to the new hardware family. The costs of storage resources associated with the single database will not change due to the scaling operation itself.
- Change: Any copies of a Hyperscale database created as a Hyperscale single database on or after December 15th, 2023. Impact: The database copies will use the new pricing, regardless of when the original database was created.
- Change: Any new single database created via restore or geo-replication operations on or after December 15th, 2023. Impact: The new database will use the new pricing, regardless of when the original database was created.
- Change: Any non-Hyperscale database is updated to Hyperscale on or after December 15th, 2023.
Impact: The new database will use the new pricing, regardless of when it was originally created.

See the summary below for a quick reference.

Single databases

Hyperscale single databases with provisioned compute:
- Before December 15th, 2023: existing provisioned compute price; existing storage prices.
- On or after December 15th, 2023, for a database created or migrated to Hyperscale before December 15th, 2023: existing provisioned compute price; existing storage prices.
- On or after December 15th, 2023, for a database created or migrated to Hyperscale after December 15th, 2023: new provisioned compute price; new storage prices.

Hyperscale single databases with serverless compute:
- Before December 15th, 2023: existing serverless compute price; existing storage prices.
- On or after December 15th, 2023, whether the database was created or migrated to Hyperscale before or after that date: new serverless compute price; new storage prices.

Elastic pools and pooled databases

Hyperscale elastic pools:
- Compute: existing provisioned compute price before December 15th, 2023; new provisioned compute price on or after December 15th, 2023, whether the pool was created before or after that date.
- Storage: N/A – storage is charged per database.

Hyperscale elastic pooled databases:
- Compute: N/A (charged per elastic pool).
- Storage: existing storage prices before December 15th, 2023; new storage prices on or after December 15th, 2023, whether the database was created or migrated to Hyperscale before or after that date.

Can I continue to use reservations for Hyperscale?
With reservations, you make a commitment to use SQL Database for a period of one or three years to get a significant discount on the compute (vCores) costs. There are no changes to the compute (vCores) pricing, and you can continue to use reserved capacity ("reservations") for Hyperscale single databases with provisioned compute and Hyperscale elastic pools.

How can I move my existing Hyperscale databases to the new pricing?
Currently, there is no built-in support to switch pricing for existing Hyperscale databases. However, you can consider one of the redeployment methods (database copy, point-in-time restore, or geo-replication) to create a new "copy" of the existing Hyperscale database. The newly created "copy" of the database will be billed at the new pricing. If you do decide to go down this path, consider creating the new database with zone redundancy, as described here.

Do you have any projections of likely costs when converting non-Hyperscale DBs to Hyperscale?
We recommend you use the Azure pricing calculator to compare the base cost for compute and storage. However, the cost of backups can vary depending on the nature of the workload and the backup retention settings configured. Databases in the (DTU) Basic, Standard and Premium service tiers include backup in the base cost. When converting such databases to Hyperscale, keep in mind that backups in Hyperscale can be a significant factor in overall cost. It is only possible to determine this after sufficient testing with realistic workloads on Hyperscale, and we strongly recommend you do such testing before converting DTU service tier databases to Hyperscale.
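Returning to the redeployment approach mentioned above, here is a minimal T-SQL sketch of the database copy method; the server and database names are placeholders. As noted earlier, a copy created on or after December 15th, 2023 is billed at the new pricing because it is a newly created database:

-- Run in the master database of the logical server that will host the copy
CREATE DATABASE WideWorldImporters_copy
AS COPY OF contososerver.WideWorldImporters;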
Does the reduction in compute price apply to all subscription offer types?
In the case of dev/test subscriptions and related offer types, including Enterprise dev/test, where you were already not paying for license costs, there will not be a further reduction in the price of compute. For such subscriptions, the storage costs for Hyperscale resources will still be based on the guidelines in the "When does the change take effect, and what does it impact?" and "What happens to my existing Hyperscale resources?" sections in this blog post.

Can I still use Azure Hybrid Benefit for Hyperscale?
The change in price per vCore is achieved by eliminating the software license fee for Hyperscale resources. Hence, Azure Hybrid Benefit no longer applies to the Hyperscale tier, except for Hyperscale single databases with provisioned compute which were created prior to December 15, 2023. Even for those older databases, Azure Hybrid Benefit can only be used until December 14, 2026. Note that specifying values of BasePrice or LicenseIncluded for the LicenseType parameter in APIs / SDKs / PowerShell / CLI is only relevant for Hyperscale single databases with provisioned compute which were created prior to December 15, 2023. These values are effectively ignored for all other types of Hyperscale resources.

Current limitations and known issues
- Consider a database which was created originally as a non-Hyperscale database prior to December 15th, 2023. If this database is then migrated to Hyperscale on or after December 15th, 2023, the Cost Summary section of the Azure portal will incorrectly show the "old pricing" for this database. The Cost Summary section in the Azure portal is only intended to be an estimate. We recommend you rely on the Cost Management section in the Azure portal to review actual costs.
- The Azure portal cost summary view in the greater China regions does not show the updated pricing information accurately. This is only a display issue and does not impact the billing in any way. Please refer to the pricing calculator or the pricing page for accurate pricing information.

ABORT_QUERY_EXECUTION query hint - public preview
We are pleased to announce the public preview of a new query hint, ABORT_QUERY_EXECUTION. The hint is intended to be used as a Query Store hint to let administrators block future execution of known problematic queries, for example non-essential queries causing high resource consumption and affecting application workloads. The hint is now available in Azure SQL Database for all databases without restrictions. The hint will later be available in Azure SQL Managed Instance with the always-up-to-date update policy, as well as in a future version of SQL Server. For more information, see Block future execution of problematic queries in documentation.

Frequently Asked Questions

Is this supported by Microsoft Support during public preview?
Yes, just like other query hints.

How do I use this?
Use Query Store catalog views or the Query Store UI in SSMS to find the query ID of the query you want to block and execute sys.sp_query_store_set_hints specifying that query ID as a parameter. For example:

EXEC sys.sp_query_store_set_hints @query_id = 17, @query_hints = N'OPTION (USE HINT (''ABORT_QUERY_EXECUTION''))';

What happens when a query with this hint is executed?
This hint is intended to be used as a Query Store hint but can be specified directly as well. In either case, the query fails immediately with error 8778, severity 16: Query execution has been aborted because the ABORT_QUERY_EXECUTION hint was specified.

How do I unblock a query?
Remove the hint by executing sys.sp_query_store_clear_hints with the query ID value of the query you want to unblock passed via the @query_id parameter.

Can I block a query that is not in Query Store?
No. At least one execution of the query must be recorded in Query Store. That query execution does not have to be successful. This means that a query that started executing but was canceled or timed out can be blocked too.

When I add the hint, does it abort any currently executing queries?
No. The hint only aborts future query executions. You can use KILL to abort currently executing queries.

What permissions are required to use this?
As with all other Query Store hints, the ALTER permission on the database is required to set and clear the hint.

Can I block all queries matching a query hash?
Not directly. As with all other Query Store hints, you must use a query ID to set and clear a hint. However, you can create automation that will periodically find all new query IDs matching a given query hash and block them.

Can I find all blocked queries in Query Store?
Yes, by executing the following query:

SELECT qsh.query_id, q.query_hash, qt.query_sql_text
FROM sys.query_store_query_hints AS qsh
INNER JOIN sys.query_store_query AS q ON qsh.query_id = q.query_id
INNER JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
WHERE UPPER(qsh.query_hint_text) LIKE '%ABORT[_]QUERY[_]EXECUTION%'

Where do I send feedback about this hint?
The preferred feedback channel is via https://aka.ms/sqlfeedback. Feedback sent that way is public and can be voted and commented on by other SQL community members. You can also leave comments on this blog post or email us at intelligentqp@microsoft.com.

Enhanced Server Audit for Azure SQL Database: Greater Performance, Availability and Reliability
We are excited to announce a significant update to the server audit feature for Azure SQL Database. We have re-architected major portions of SQL Auditing, resulting in increased availability and reliability of server audits. As an added benefit, we have achieved closer feature alignment with SQL Server and Azure SQL Managed Instance. Database auditing remains unchanged. In the remainder of this blog article, we cover:

- Functional changes
- Changes affecting customers
- Sample queries
- Call for action
- Implementation and notifications
- Time-based filtering

Functional changes

In the current design, when server audit is enabled, it triggers a database-level audit and executes one audit session for each database. With the new architecture, enabling server audit will create one extended event session at the server level that captures audit events for all databases. This optimizes memory and CPU and is consistent with how auditing works in SQL Server and Azure SQL Managed Instance.

Changes affecting customers

- Folder structure change for the storage account
- Folder structure change for read-only replicas
- Permissions required to view audit logs

One of the primary changes involves the folder structure for audit logs stored in storage account containers. Previously, server audit logs were written to separate folders, one for each database, with the database name serving as the folder name. With the new update, all server audit logs will be consolidated into a single folder, which is the 'master' folder. This behavior is the same as Azure SQL Managed Instance and SQL Server.

For read-only database replicas, which previously had their logs stored in a read-only folder, those logs will now also be written into the master folder. You can retrieve these logs by filtering on the new column 'is_secondary_replica_true'. Please note that the audit logs generated after deployment will adhere to the new folder structure, while the existing audit logs will stay in their current folders until their retention periods expire.

Sample queries

To help you adopt these changes in your workflows, here are some sample queries.

To query audit logs for a specific database called "test":
- Current: SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default)
- New: SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default) WHERE database_name = 'test';

To query audit logs for the test database from a read-only replica:
- Current: SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/RO/07_06_40_590_0.xel', default, default)
- New: SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default) WHERE is_secondary_replica_true = 'true';

Permissions to view audit logs:
- Current: Control database on the user database
- New: Control database on the master database

Implementation and notifications

We are rolling out this change region by region. Subscription owners will receive notifications with the subject "Update your scripts to point to a new folder for server level audit logs" for each region as the update is implemented.
It is important to update any scripts that retrieve audit logs based on the database-name folder structure for the specific region. Note that this change applies only to server-level auditing; database auditing remains unchanged.

Call for action

These actions apply only to customers who are using storage account targets. No action is needed for customers using Log Analytics or Event Hubs.

- Folder references: Change the reference for audit logs from the database-name folder to the master folder, and use filters to retrieve logs for a specific database.
- Read-only database replicas: Update references for audit logs from the read-only replica folder to the master folder, and filter using the new column as shown in the examples.
- Permissions: Ensure you have the necessary control server permissions to review the audit logs for each database using fn_get_audit_file.

Manual queries

This update also applies to manual queries where you use fn_get_audit_file to retrieve audit logs from the storage account.

Time-based filtering

To enhance your ability to query audit logs using filters, consider using efficient time-based filtering with the fn_get_audit_file_v2 function. This function allows you to retrieve audit log data with improved filtering capabilities. For more details, refer to the official documentation here.
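As an illustration of time-based filtering, here is a minimal sketch using sys.fn_get_audit_file_v2 against the sample container used earlier in this post. The parameter list shown (file pattern, initial file name, initial offset, start time, end time) and the time window values are assumptions to be confirmed against the documentation:

SELECT event_time, action_id, database_name, server_principal_name, statement
FROM sys.fn_get_audit_file_v2(
    'https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/',
    DEFAULT, DEFAULT,
    '2023-01-29 07:00:00',   -- start of the time window
    '2023-01-29 08:00:00')   -- end of the time window
WHERE database_name = 'test';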