Azure SQL Database
General Availability of license-free standby replica for Azure SQL Database
We are excited to announce the general availability of the license-free standby replica for Azure SQL Database, which lets you save on licensing costs by designating your secondary disaster recovery database as a standby replica. License costs typically make up about 40% of the total cost of a database, so with a license-free standby replica the secondary is roughly 40% less expensive.

To protect the database powering your application from regional failures and to achieve business continuity, it is crucial to enable disaster recovery. In some industries, having disaster recovery in place and conducting regular drills is a mandatory compliance requirement. One of the biggest hindrances to enabling disaster recovery has been cost, because the secondary database is used mainly in the event of a disaster.

When a secondary database replica is used only for disaster recovery, with no workloads running on it and no applications connecting to it, you can save on licensing costs by designating the database as a standby replica. Microsoft provides the number of vCores licensed to the primary database at no extra charge under the failover rights benefit in the product licensing terms for the standby replica. You are still billed for the compute and storage that the secondary database uses.

The standby database replica must only be used for disaster recovery. The only activities permitted on the standby database are:

- Perform maintenance operations, such as CHECKDB
- Connect monitoring applications
- Run disaster recovery drills

You can designate one secondary in the single database deployment model as a license-free standby replica, in the General Purpose and Business Critical service tiers with provisioned compute. You can configure a license-free standby replica using the Azure portal, PowerShell, or the CLI.

Additional capabilities added for the general availability release:

- Perform an in-place update of a geo replica to a standby replica using the portal and REST API.
- Assign a standby replica while creating a failover group using the portal and REST API.
- Estimate the cost of a standby replica by using the Azure pricing calculator and selecting Standby replica in the Disaster Recovery dropdown.

For comprehensive details on the license-free standby replica, including limitations and frequently asked questions, please refer to the documentation.
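The standby designation mentioned above can also be scripted. The following is a minimal PowerShell sketch of creating a geo-secondary designated as a standby replica; it assumes the Az.Sql module and that the -SecondaryType parameter of New-AzSqlDatabaseSecondary accepts the value "Standby", and all resource names are placeholders.

    # Minimal sketch: create a geo-secondary and designate it as a license-free standby replica.
    # Assumes Az.Sql is installed and -SecondaryType "Standby" is supported; all names are placeholders.
    Connect-AzAccount

    New-AzSqlDatabaseSecondary `
        -ResourceGroupName "primary-rg" `
        -ServerName "primary-sqlserver" `
        -DatabaseName "AppDb" `
        -PartnerResourceGroupName "dr-rg" `
        -PartnerServerName "dr-sqlserver" `
        -SecondaryType "Standby" `
        -AllowConnections "No"

If the parameter or value is not available in your Az.Sql version, the same designation can be applied from the Azure portal as described above.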
How to enable Auditing in Azure SQL Databases to Storage account and store logs in JSON format

In today's data-driven world, auditing is a crucial aspect of database management. It helps ensure compliance, security, and operational efficiency. Azure SQL Database offers robust auditing capabilities, and in this blog we'll explore how to enable auditing with a storage target in JSON format. This approach simplifies access to audit logs without the need for specialized tools like SQL Server Management Studio (SSMS).

Step 1: Enable Database-Level Auditing with Azure Monitor

The first step is to enable database-level auditing using Azure Monitor. This can be achieved through a REST API call.

Request:

    PUT https://management.azure.com/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/test_rg/providers/Microsoft.Sql/servers/test-sv2/databases/test_db/extendedAuditingSettings/default?api-version=2021-11-01
    Host: management.azure.com
    Content-Length: 249

    {
      "properties": {
        "state": "Enabled",
        "auditActionsAndGroups": [
          "FAILED_DATABASE_AUTHENTICATION_GROUP",
          "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP",
          "BATCH_COMPLETED_GROUP"
        ],
        "isAzureMonitorTargetEnabled": true
      }
    }

Explanation:
- state: enables auditing.
- auditActionsAndGroups: specifies the audit action groups to capture.
- isAzureMonitorTargetEnabled: enables Azure Monitor integration.

Step 2: Create Database-Level Diagnostic Setting

Next, you need to create a diagnostic setting for storing logs in JSON format. This is done using another REST API call.

Request:

    PUT https://management.azure.com/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/test_rg/providers/Microsoft.Sql/servers/test-sv2/databases/test_db/providers/Microsoft.Insights/diagnosticSettings/testDiagnosti1c?api-version=2021-05-01-preview
    Content-type: application/json
    Host: management.azure.com
    Content-Length: 414

    {
      "properties": {
        "storageAccountId": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/test_rg/providers/Microsoft.Storage/storageAccounts/teststorage2",
        "metrics": [],
        "logs": [
          {
            "category": "sqlsecurityauditevents",
            "enabled": true,
            "retentionPolicy": {
              "enabled": false,
              "days": 0
            }
          }
        ]
      }
    }

Key details:
- storageAccountId: specifies the target storage account.
- category: choose sqlsecurityauditevents to store SQL audit events.
- retentionPolicy: retention is disabled in this configuration.

Step 3: Run a Sample Query

To generate audit logs, execute a simple query:

    SELECT 1;

Step 4: Check the JSON Files in the Storage Account

Finally, navigate to the Azure storage account specified in Step 2. Open the insights-logs-sqlsecurityauditevents container and download the generated JSON file to review the audit logs.

Advantages:
- Audit logs are stored in JSON format, making them easily readable without the need for specialized tools like SQL Server Management Studio (SSMS).

Limitations:
- This approach is only applicable to database-level auditing; server-level auditing is not supported.
- Retention policy settings are not functional in this configuration.

By following this blog, you can enable auditing for Azure SQL Database that generates JSON files directly in an Azure storage account, streamlining access to and analysis of audit data.
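Step 4 can also be scripted. Below is a small PowerShell sketch that downloads the generated JSON blobs from the insights-logs-sqlsecurityauditevents container and parses a few records; the resource group, storage account name, and local folder are placeholders taken from the example above.

    # Sketch: download and read the JSON audit logs produced by the diagnostic setting in Step 2.
    # Resource group, storage account, and local folder are placeholders.
    $ctx = (Get-AzStorageAccount -ResourceGroupName "test_rg" -Name "teststorage2").Context

    # Download every blob in the audit events container to a local folder.
    Get-AzStorageBlob -Container "insights-logs-sqlsecurityauditevents" -Context $ctx |
        Get-AzStorageBlobContent -Destination ".\auditlogs" -Force

    # Diagnostic logs written to storage are typically newline-delimited JSON records;
    # show the first few records from one of the downloaded files.
    Get-ChildItem ".\auditlogs" -Recurse -Filter *.json | Select-Object -First 1 |
        Get-Content | Select-Object -First 3 |
        ForEach-Object { $_ | ConvertFrom-Json }

The parsing step assumes one JSON record per line; adjust it if your files are formatted differently.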
Stream data changes from Azure SQL Managed Instance and Azure SQL Database – private preview of CES

We're excited to share that Change Event Streaming (CES), the capability to stream data changes directly from your SQL database into Azure Event Hubs, is now also available in private preview for Azure SQL Managed Instance. CES enables near real-time data integration and event-driven architectures with minimal overhead, helping you keep systems in sync, power real-time analytics, and track changes as they happen.

Whether you're working with Azure SQL Managed Instance or Azure SQL Database, you can now join the private preview and:

- Try out the new functionality early.
- Collaborate with the product team.
- Share feedback that helps shape the final product.

To apply for the private preview today, send an email to sqlcesfeedback [at] microsoft [dot] com or fill in the form at https://aka.ms/sql-ces-signup. More on CES is available in the previous blog post.

More useful resources:

- Free Azure SQL Database
- Free Azure SQL Managed Instance
- Azure SQL – Year 2024 in review
- Azure SQL YouTube channel
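This announcement does not include setup steps, but since Event Hubs is the streaming destination, the following hedged PowerShell sketch shows one way to provision an Event Hubs namespace and event hub that a CES preview configuration could target. The resource names, region, and SKU are placeholders, and the CES-side configuration itself is provided as part of the preview materials rather than here.

    # Sketch: provision an Event Hubs namespace and an event hub to act as a CES destination.
    # Requires the Az.EventHub module; resource names, region, and SKU are placeholders.
    New-AzResourceGroup -Name "ces-demo-rg" -Location "westeurope"

    New-AzEventHubNamespace -ResourceGroupName "ces-demo-rg" `
        -Name "ces-demo-namespace" `
        -Location "westeurope" `
        -SkuName "Standard"

    New-AzEventHub -ResourceGroupName "ces-demo-rg" `
        -NamespaceName "ces-demo-namespace" `
        -Name "sql-change-events"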
Database compatibility level 170 in Azure SQL Database and SQL database in Microsoft Fabric

The alignment of SQL Server versions to default compatibility levels is as follows:

- 100: SQL Server 2008 and Azure SQL Database
- 110: SQL Server 2012 and Azure SQL Database
- 120: SQL Server 2014 and Azure SQL Database
- 130: SQL Server 2016 and Azure SQL Database
- 140: SQL Server 2017 and Azure SQL Database
- 150: SQL Server 2019 and Azure SQL Database
- 160: SQL Server 2022, Azure SQL Database, and SQL database in Microsoft Fabric
- 170: Azure SQL Database and SQL database in Microsoft Fabric

Once this new database compatibility default goes into effect, if you still wish to use database compatibility level 160 (or lower), please follow the instructions detailed in View or Change the Compatibility Level of a Database. For example, you may wish to ensure that new databases created on the same logical server use the same compatibility level as other Azure SQL databases, to ensure consistent query optimization and execution behavior across development, QA, and production versions of your databases.

With this example in mind, we recommend that any database configuration scripts explicitly designate the COMPATIBILITY_LEVEL rather than rely on the defaults, in order to ensure consistent application behavior. For new databases supporting new applications, we recommend using the latest compatibility level, 170. For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article Change the Database Compatibility Mode and Use the Query Store. Note that this article refers to database compatibility level 130 and SQL Server, but the same methodology applies to database compatibility level 170 for SQL Server and Azure SQL Database.

To determine the current database compatibility level, query the compatibility_level column of the sys.databases system catalog view:

    SELECT [name], compatibility_level FROM sys.databases;

There may be a few questions that we have not directly answered with this announcement, such as:

What do you mean by "database compatibility level 170 is now the default"?
If you create a new database and don't explicitly designate the COMPATIBILITY_LEVEL, database compatibility level 170 will be used.

Does Microsoft automatically update the database compatibility level for existing databases?
No. We do not update the database compatibility level for existing databases. This is up to you as the owner of your database, at your own discretion. That said, we highly recommend that you plan on moving to the latest database compatibility level in order to leverage the latest improvements it enables.

I created a logical server before 170 was the default database compatibility level. What impact does this have?
The master database of your logical server reflects the database compatibility level that was the default when the logical server was created. New databases created on this logical server will use database compatibility level 170 if the compatibility level is not explicitly specified, even though the master database has an older compatibility level. The master database compatibility level cannot be changed without recreating the logical server. Having the master database at an older compatibility level does not impact user database behavior.
Would the database compatibility level change to 170 if I restore a database from a point-in-time backup taken before the default changed?
No. We preserve the compatibility level that was in effect when the backup was performed.
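To pin a database to a specific compatibility level explicitly, as recommended above, here is a minimal sketch using Invoke-Sqlcmd from the SqlServer PowerShell module. The server name, database name, and credentials are placeholders, and the same ALTER DATABASE statement can of course be run from any T-SQL client.

    # Sketch: explicitly set, then verify, the database compatibility level.
    # Requires the SqlServer module; server, database, and credentials are placeholders.
    $server   = "contososerver.database.windows.net"
    $database = "WideWorldImporters"

    Invoke-Sqlcmd -ServerInstance $server -Database $database `
        -Username "sqladmin" -Password "<placeholder-password>" `
        -Query "ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 170;"

    # Confirm the change using the catalog view referenced above.
    Invoke-Sqlcmd -ServerInstance $server -Database $database `
        -Username "sqladmin" -Password "<placeholder-password>" `
        -Query "SELECT [name], compatibility_level FROM sys.databases WHERE [name] = N'WideWorldImporters';"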
Announcing General Availability of Azure SQL Database Capabilities for Microsoft Copilot in Azure

This week, the Azure Infrastructure team announced the general availability (GA) of Copilot in Azure. We're excited to share that Copilot in Azure includes capabilities for Azure SQL Database that help streamline database operations, troubleshoot issues, and optimize performance, all designed to enhance productivity and simplify complex tasks. Capabilities available in this GA release include:

Intelligent Troubleshooting

Copilot can provide guidance for common SQL errors, ensuring faster resolution and less frustration. For example:

- Diagnose SQL errors (e.g., 10928, 18456, and 40613) with clear explanations and actionable steps to address them.
- Identify and resolve issues related to database scaling and replication to geo secondaries.
- Assist with login problems, helping users regain access swiftly.

Performance Optimization

Copilot helps users better understand and manage their database performance:

- Determine whether your database is nearing its storage capacity or hitting its IO limit.
- Analyze database connection timeouts and provide recommendations to optimize connection settings.

Configuration

Copilot in Azure simplifies configuration-related tasks by offering:

- Guidance on selecting the appropriate service tier for your database.
- Clear directions for creating and using the correct connection string to ensure seamless connectivity.

Security and Data Management

Data security is paramount, and Copilot assists with tasks such as:

- Troubleshooting issues with Transparent Data Encryption (TDE), ensuring your data remains protected.
- Providing insights into replication issues and offering solutions to secure and synchronize data across geo secondaries.

Using Copilot in Azure

Copilot in Azure is available in the Azure portal and the Azure mobile app by pressing the Copilot icon in the navigation bar. To start a conversation with Copilot in Azure, just type or speak your query or command in natural language. For example, "Help me set up geo redundancy for my database" or "Is my database hitting its IO limit?"

Tip: Where you are in the Azure portal influences how Copilot interprets your prompts. For example, when you are on the page of an Azure SQL database in the portal and ask a question, Copilot provides information relevant to that database. Conversely, if you ask Copilot about a database other than the one whose page you are on, the answer is less clear and more likely to be a poor one.

Changes to Preview Capabilities

While many exciting features are part of this GA release, we also want to be transparent about changes made since the Public Preview to two specific capabilities:

- Natural Language to SQL (NL2SQL): This experience within the Azure portal query editor for Azure SQL Database translated natural language into T-SQL queries.
- DMV / Catalog View Queries: In some cases, Copilot would execute queries against your database and include the results as part of the response to a prompt.

Recently, my colleague Erin Stellato announced that Copilot is coming to SQL Server Management Studio. Because SSMS can connect to all versions of SQL Server wherever it's running, Copilot in SSMS is the best experience for NL2SQL and for prompts and answers based on connected database context. To see demos and learn more about the roadmap, watch Erin's recent SQL Conf() session, A Developer's Guide to Using Copilot in SSMS. And, of course, watch this space for updates about when Copilot in SSMS will be available for Public Preview.
Looking Ahead

The release of SQL Database capabilities for Copilot in Azure represents a step forward in harnessing AI for database management. Our goal is to help you be more productive and solve problems faster with Copilot's help within an intelligent and intuitive interface. We're excited to see how you leverage these capabilities to be more efficient, and we welcome your feedback as we continue refining and expanding Copilot's potential.

Related Links

- Microsoft Copilot in Azure: Microsoft Copilot in Azure Overview | Microsoft Learn
- Microsoft Copilot in Azure with Azure SQL Database: https://aka.ms/sqlcopilot
- FAQ: https://aka.ms/sqlcopilot-faq
Simplified & lower pricing for Azure SQL Database and Azure SQL Managed Instance backup storage

Today, as you deploy your Azure SQL database or Azure SQL managed instance, one of the important decisions to make is the choice of backup storage redundancy (BSR). It is an important decision because the availability of your database depends on the availability of your backups. Here's why. Consider the scenario where your database has high availability configured via zone redundancy, but your backups are configured as non-zone redundant. In the event of a failure in the zone, your database fails over to another zone within the region, but your backups do not, because of their storage setting. In the new zone, the backup service attempts to back up your database but cannot reach the backups in the zone where the failure happened, causing the log to become full and eventually impacting the availability of the database itself.

As you create an Azure SQL database, the choices for backup storage redundancy are:

- Locally Redundant Storage (LRS)
- Zone Redundant Storage (ZRS)
- Geo Redundant Storage (GRS)
- Geo Zone Redundant Storage (GZRS)

Each of these storage types provides a different level of durability, resiliency, and availability for your databases and database backups. Not surprisingly, each storage type also has a different price, and the price increases significantly as the protection level increases, with GZRS costing almost 4-5x LRS. Choosing between resilience and cost optimization is an extremely difficult choice that the database owner must make.

We are thrilled to announce that, starting from November 1, 2024, backup storage pricing is streamlined and simplified across Azure SQL Database and Azure SQL Managed Instance. Bonus: we even reduced the prices 😊

The price changes apply to the backup storage redundancy configuration for both point-in-time and long-term retention backups, across the following tiers:

- Azure SQL Database: General Purpose, Business Critical, Hyperscale
- Azure SQL Managed Instance: General Purpose, Business Critical, Next Generation General Purpose (preview)

As we made the changes, we adhered to the following principles:

- No price increase
- BSR pricing for ZRS is reduced to match the BSR pricing for LRS
- BSR pricing for GZRS is reduced to match the BSR pricing for GRS
- BSR pricing for GRS/GZRS will be 2x that of LRS/ZRS

What is changing, by type of backup:

- PITR: BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance, except for the Azure SQL Database Hyperscale service tier. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance.
- LTR: BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance.
As an example, let's take East US as the region and look at the pricing for backup storage redundancy for point-in-time (PITR) backup storage before and after the change.

For the General Purpose and Business Critical service tiers, the pricing is now:

    Backup storage redundancy   Current price   New price   Price change
    LRS                         $0.10           $0.10       None
    ZRS                         $0.125          $0.10       20% less
    GRS                         $0.20           $0.20       None
    GZRS                        $0.34           $0.20       41% less

For the Hyperscale service tier, the new pricing is:

    Backup storage redundancy   Current price   New price   Price change
    LRS                         $0.08           $0.08       None
    ZRS                         $0.10           $0.10       None
    GRS                         $0.20           $0.20       None
    GZRS                        $0.34           $0.20       41% less

Similarly, backup storage redundancy prices for long-term retention (LTR) backups in East US are as follows:

    Backup storage redundancy   Current price   New price   Price change
    LRS                         $0.025          $0.025      None
    ZRS                         $0.0313         $0.025      20% less
    GRS                         $0.05           $0.05       None
    GZRS                        $0.0845         $0.05       41% less

As a customer, the decision now becomes much easier for you:

- If you need regional resiliency: choose Zone Redundant Storage (ZRS).
- If you need regional and/or geo resiliency: choose Geo Zone Redundant Storage (GZRS).
- If the Azure region does not support availability zones, choose Locally Redundant Storage for regional resiliency and Geo Redundant Storage for geo resiliency, respectively.

Please note: the Azure pricing page and Azure pricing calculator will be updated with these new prices soon. The actual pricing meters have already been updated. Additionally, the LTR pricing change for Hyperscale will be in effect from January 1, 2025.
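The redundancy choice discussed above is set per database. As an illustration, here is a hedged PowerShell sketch that picks zone-redundant backup storage for a new database and switches an existing database to geo-zone redundancy. It assumes the -BackupStorageRedundancy parameter on New-AzSqlDatabase and Set-AzSqlDatabase (with expected values Local, Zone, Geo, and GeoZone), and all resource names are placeholders.

    # Sketch: choose backup storage redundancy at creation time, then change it for an existing database.
    # Assumes -BackupStorageRedundancy is available in your Az.Sql version; names are placeholders.
    New-AzSqlDatabase -ResourceGroupName "app-rg" -ServerName "contososerver" `
        -DatabaseName "NewAppDb" `
        -Edition "GeneralPurpose" -ComputeGeneration "Gen5" -VCore 2 `
        -BackupStorageRedundancy "Zone"

    Set-AzSqlDatabase -ResourceGroupName "app-rg" -ServerName "contososerver" `
        -DatabaseName "ExistingAppDb" `
        -BackupStorageRedundancy "GeoZone"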
Conversion to Hyperscale: Now generally available with enhanced efficiency

We are excited to announce the general availability (GA) of the latest improvements in the Azure SQL Database conversion process to Hyperscale. These improvements bring shorter downtime, better control, and more visibility to the Hyperscale conversion process, making it easier and more efficient for our customers to switch to Hyperscale.

Key enhancements

We received feedback from customers about longer-than-expected downtime, lack of visibility, and unpredictable cutover time during database conversion to Hyperscale. In response, we have made key improvements in this area.

1. Shorter cutover time

Prior to this improvement, the cutover time depended on the database size and workload. With the improvement, we have significantly reduced the average cutover time (the effective unavailability of the database to the application) from about six minutes, sometimes extending to thirty minutes, to less than one minute.

2. Higher log generation rate

By improving the synchronization mechanisms between source and destination while the conversion is in progress, we now support a higher log generation rate on the source database, ensuring that the conversion can complete successfully with write-intensive workloads. This enhancement ensures a smoother and faster migration experience, even for high-transaction-rate environments. We now support up to a 50 MiB/s log generation rate on the source database during conversion. Once converted to Hyperscale, the supported log generation rate is 100 MiB/s, with a higher rate of 150 MiB/s in preview.

3. Manual cutover

One of the most significant improvements is the introduction of a customer-controlled cutover mode called manual cutover. This gives customers more control over the conversion process, enabling them to schedule and manage the cutover at the time of their choice. You can perform the cutover within 24 hours once the conversion process reaches the "Ready to cutover" state.

4. Enhanced progress reporting

Improved progress reporting capabilities provide detailed insights into the conversion process. Customers can now monitor the migration status in real time, with clear visibility into each step of the process. Progress reporting is available via T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal. Detailed progress information about conversion phases provides greater transparency and control over the process.

How to use it?

All the improvements are applied automatically. One exception is the manual cutover mode, where you need to use a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API when initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode.

One of our customers said: "The migration to Hyperscale using the improvements was much easier than expected. The customer-controlled cutover and detailed progress reporting made the process seamless and efficient."

For more information, see our documentation: Convert a Database to Hyperscale - Azure SQL Database | Microsoft Learn

Conclusion

We are thrilled to bring these enhancements to our customers and look forward to seeing how they will transform their Hyperscale conversion experience. This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility.
You can contact us by commenting on this blog post, and we'll be happy to get back to you. Alternatively, you can email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
Improving the conversion to Hyperscale with greater efficiency

Update: On 09 April 2025 we announced the general availability of this improvement. For more details, please read the GA announcement.

We are thrilled to announce the latest improvement in the Azure SQL Database conversion process to Hyperscale. This update, now available in public preview, streamlines the database conversion process, reducing downtime and offering greater control and visibility to our customers. Let's dive into the enhancements and what they mean for you.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers complained about longer-than-expected downtime during conversion, no insight into the current state of the conversion, and unpredictable cutover time. We acted on the feedback and made several key improvements in this release.

1. Shorter cutover time

One of the most significant enhancements in this improvement is the reduction in cutover times when converting a database to Hyperscale. This improvement ensures that the cutover process is faster and more efficient, minimizing downtime and connectivity disruptions. Based on our telemetry, the 99th percentile cutover time has been reduced from about 6 minutes to less than about 1 minute.

2. Support for a higher log generation rate

During the migration, the new process supports a higher log generation rate on the source database, ensuring that the conversion can handle more write-intensive workloads and complete successfully. In the previous version, database conversions were not able to finish when log generation on the source was more than 8 MiB/s continuously throughout the conversion process. With this improvement we can now support up to 50 MiB/s of log generation on the source and still succeed. This was achieved by improving the synchronization mechanisms between source and destination while the conversion is in progress.

3. Manual cutover

A new option introduced with this improvement is manual cutover. This allows customers to initiate the conversion process and pause it when the database is ready to cut over to Hyperscale, giving them up to 72 hours to perform the cutover at a time that best suits their operational needs. If the manual cutover is not completed within the given timeframe, the process is automatically canceled and the database remains on the original service tier, without any data loss. If the new parameter is not passed, the cutover experience is the same as before, i.e. automatic cutover as soon as the Hyperscale database is ready.

4. Granular progress reporting

Another improvement in the conversion process is that customers can now monitor the entire conversion process at a granular level. Whether using T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal, detailed progress information about conversion phases is available, providing greater transparency and control over the process.

Customer feedback

Throughout the private preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. John Nafa, Cloud Architect, Asurgent, says: "The new database conversion experience from Microsoft has been incredibly seamless and efficient, making the conversion process to Azure SQL Database Hyperscale smooth and straightforward. The progress reporting and manual cutover features were especially valuable, providing real-time insights and ensuring a smooth transition. It's been a pleasure working with this improvement, and I'm excited to see it become available to a wider audience."
Get started

Most of the four key improvements mentioned above are applied automatically. To use the manual cutover option, you need to pass a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API when initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode.

Use manual cutover

Let us go through the new options available in the various interfaces with this improvement.

Azure portal

To use manual cutover, a new Cutover mode option is provided in the Azure portal as part of the steps to convert a database to Hyperscale. If you have not seen this option in the Azure portal yet, don't worry; we are enabling this capability now, and you can expect it to be available within days.

Commands

A new parameter has been introduced in each interface to initiate the conversion process with the manual cutover option. The following sample commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore serverless Hyperscale database.

T-SQL:

    ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_S_Gen5_8') WITH MANUAL_CUTOVER

PowerShell:

    Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_S_Gen5_8" -ManualCutover

Azure CLI:

    az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_S_Gen5_8 --manual-cutover

Note: additional parameters, such as backup storage redundancy or zone redundancy, can also be added. Refer to the documentation for Set-AzSqlDatabase (Az.Sql) | Microsoft Learn (for PowerShell) and az sql db | Microsoft Learn (for Azure CLI). The REST API also has new properties, manualCutover and performCutover. Refer to Databases - Create Or Update - REST API (Azure SQL Database) | Microsoft Learn for more details.

Monitor the conversion progress

Progress during the conversion can be monitored using various interfaces. Detailed phase information is available in the dynamic management view (DMV) sys.dm_operation_status for those using T-SQL. Similar command options are available for PowerShell and Azure CLI users.

Azure portal

The progress of the conversion process can be seen in the Azure portal by clicking the Details hyperlink on the database's deployment notification.

Monitoring programmatically

As part of this improvement, we have introduced new columns in the sys.dm_operation_status DMV, called phase_code, phase_desc, and phase_info, which are populated during the conversion process. Refer to the documentation for more details. Similarly, new output columns, phase and phaseInformation, are available under OperationPhaseDetails if using PowerShell, and under operationPhaseDetails if using Azure CLI. Here is the quick reference for commands via the various interfaces:
T-SQL:

    SELECT state_desc, phase_code, phase_desc,
           JSON_VALUE(phase_info, '$.currentStep') AS currentStep,
           JSON_VALUE(phase_info, '$.totalSteps') AS totalSteps,
           phase_info, start_time, error_code, error_desc, last_modify_time
    FROM sys.dm_operation_status
    WHERE resource_type = 0
          AND operation = 'ALTER DATABASE'
          AND major_resource_id = 'WideWorldImporters'

PowerShell:

    (Get-AzSqlDatabaseActivity -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters").OperationPhaseDetails

Azure CLI:

    (az sql db op list --resource-group ResourceGroup01 --server contososerver --database WideWorldImporters | ConvertFrom-Json).operationPhaseDetails.phase

Perform manual cutover

The manual cutover, if specified, can be performed using the Azure portal or programmatically, ensuring a smooth transition to the Hyperscale tier.

Azure portal

The cutover can be initiated from the same screen where the progress of the conversion is reported.

Commands

Here is the quick reference for the commands to perform the cutover:

T-SQL:

    ALTER DATABASE WideWorldImporters PERFORM_CUTOVER

PowerShell:

    Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -PerformCutover

Azure CLI:

    az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --perform-cutover

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility. We encourage you to try these features and provide your valuable feedback to help us refine this feature for general availability. You can contact us by commenting on this blog post, and we'll be happy to get back to you. Alternatively, you can email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
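To tie the commands above together, here is a hedged PowerShell sketch that starts a conversion with manual cutover, polls the reported phase, and then performs the cutover. The readiness check is an assumption, since the exact phase string is not listed in this post; inspect the OperationPhaseDetails output in your environment for the real value, and note that -AsJob is used so the long-running conversion call returns immediately.

    # Sketch: initiate conversion with manual cutover, poll the phase, then perform the cutover.
    # Resource names are placeholders; the readiness check below is an assumption, so inspect
    # OperationPhaseDetails in your environment for the exact phase value to match.
    $rg = "ResourceGroup01"; $server = "contososerver"; $db = "WideWorldImporters"

    Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName $db `
        -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_S_Gen5_8" -ManualCutover -AsJob

    do {
        Start-Sleep -Seconds 60
        $phase = (Get-AzSqlDatabaseActivity -ResourceGroupName $rg -ServerName $server -DatabaseName $db).OperationPhaseDetails.Phase
        Write-Output "Current phase: $phase"
    } until ($phase -like "*Cutover*")   # placeholder readiness check

    Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName $db -PerformCutover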
General availability: Database compatibility level 160 in Azure SQL Database

Database compatibility level 160 is now the default for new databases created in Azure SQL Database across all public regions. The alignment of SQL Server versions to default compatibility levels is as follows:

- 100: SQL Server 2008 and Azure SQL Database
- 110: SQL Server 2012 and Azure SQL Database
- 120: SQL Server 2014 and Azure SQL Database
- 130: SQL Server 2016 and Azure SQL Database
- 140: SQL Server 2017 and Azure SQL Database
- 150: SQL Server 2019 and Azure SQL Database
- 160: SQL Server 2022 and Azure SQL Database

For details about which features compatibility level 160 enables, please see Intelligent query processing in SQL databases. The IQP family includes multiple features that improve the performance of existing workloads with minimal or no implementation effort.

Once this new database compatibility default goes into effect, if you still wish to use database compatibility level 150 (or lower), please follow the instructions detailed in View or Change the Compatibility Level of a Database. For example, you may wish to ensure that new databases created on the same logical server use the same compatibility level as other Azure SQL databases, to ensure consistent query optimization and execution behavior across development, QA, and production versions of your databases.

With this example in mind, we recommend that any database configuration scripts explicitly designate the COMPATIBILITY_LEVEL rather than rely on the defaults, in order to ensure consistent application behavior. For new databases supporting new applications, we recommend using the latest compatibility level, 160. For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article Change the Database Compatibility Mode and Use the Query Store. Note that this article refers to database compatibility level 130 and SQL Server, but the same methodology applies to database compatibility level 160 for SQL Server and Azure SQL Database.

To determine the current database compatibility level, query the compatibility_level column of the sys.databases system catalog view:

    SELECT name, compatibility_level FROM sys.databases;

There may be a few questions that we have not directly answered with this announcement, such as:

What do you mean by "database compatibility level 160 is now the default"?
If you create a new database and don't explicitly designate COMPATIBILITY_LEVEL, database compatibility level 160 will be used.

Does Microsoft automatically update the database compatibility level for existing databases?
No. We do not update the database compatibility level for existing databases. This is up to you as the owner of your database, at your own discretion. That said, we highly recommend that you plan on moving to the latest database compatibility level in order to leverage the latest improvements it enables.

I created a logical server before 160 was the default database compatibility level. What impact does this have?
The master database of your logical server reflects the database compatibility level that was the default when the logical server was created. New databases created on this logical server will use database compatibility level 160 if the compatibility level is not explicitly specified, even though the master database has an older compatibility level.
The master database compatibility level cannot be changed without recreating the logical server. Having the master database at an older compatibility level does not impact user database behavior.

Would the database compatibility level change to 160 if I restore a database from a point-in-time backup taken before the default changed?
No. We preserve the compatibility level that was in effect when the backup was performed.