Azure SQL Database
Database compatibility level 170 in Azure SQL Database and SQL database in Microsoft Fabric
The alignment of SQL versions to default compatibility levels is as follows:

- 100: SQL Server 2008 and Azure SQL Database
- 110: SQL Server 2012 and Azure SQL Database
- 120: SQL Server 2014 and Azure SQL Database
- 130: SQL Server 2016 and Azure SQL Database
- 140: SQL Server 2017 and Azure SQL Database
- 150: SQL Server 2019 and Azure SQL Database
- 160: SQL Server 2022, Azure SQL Database and SQL database in Microsoft Fabric
- 170: SQL Server 2025 (Preview), Azure SQL Database and SQL database in Microsoft Fabric

For details about which features compatibility level 170 enables, please see what is new under database compatibility 170. The intelligent query processing (IQP) family also includes multiple features that improve the performance of existing workloads with minimal or no implementation effort.

Once this new database compatibility default goes into effect, if you still wish to use database compatibility level 160 (or lower), please follow the instructions detailed in View or Change the Compatibility Level of a Database. For example, you may wish to ensure that new databases created on the same logical server use the same compatibility level as your other Azure SQL databases, so that query optimization and execution behavior remain consistent across the development, QA, and production versions of your databases. With this example in mind, we recommend that any database configuration scripts explicitly designate the COMPATIBILITY_LEVEL rather than rely on the default, in order to ensure consistent application behavior (see the example below).

For new databases supporting new applications, we recommend using the latest compatibility level, 170. For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article Change the Database Compatibility Mode and Use the Query Store. Note that this article refers to database compatibility level 130 and SQL Server, but the same methodology applies to database compatibility level 170 for SQL Server and Azure SQL Database.

To determine the current database compatibility level, query the compatibility_level column of the sys.databases system catalog view:

SELECT [name], compatibility_level FROM sys.databases;
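For example, a database configuration script can pin the compatibility level with a single statement; the database name below is a placeholder:

-- Explicitly set the compatibility level instead of relying on the server default
ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 170;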
There may be a few questions that we have not directly answered with this announcement, such as:

What do you mean by "database compatibility level 170 is now the default"?
If you create a new database and don't explicitly designate the COMPATIBILITY_LEVEL, database compatibility level 170 will be used.

Does Microsoft automatically update the database compatibility level for existing databases?
No. We do not update the database compatibility level for existing databases. This is up to you, as the owner of the database, at your own discretion. That said, we highly recommend that you plan to move to the latest database compatibility level in order to leverage the improvements it enables.

I created a logical server before 170 was the default database compatibility level. What impact does this have?
The master database of your logical server will reflect the database compatibility level that was the default when the logical server was created. New databases created on this logical server will still use database compatibility level 170 if a compatibility level is not explicitly specified, even though the master database is at an older compatibility level. The master database compatibility level cannot be changed without recreating the logical server, but having the master database operate at an older compatibility level does not affect user database behavior.

Would the database compatibility level change to 170 if I restore a database from a point-in-time backup taken before the default changed?
No. We will preserve the compatibility level that was in effect when the backup was performed.

🔐 Public Preview: Backup Immutability for Azure SQL Database LTR Backups
The Ransomware Threat Landscape

Ransomware attacks have become one of the most disruptive cybersecurity threats in recent years. These attacks typically follow a destructive pattern:

- Attackers gain unauthorized access to systems.
- They encrypt or delete critical data.
- They demand ransom in exchange for restoring access.

Organizations without secure, tamper-proof backups are often left with no choice but to pay the ransom or suffer significant data loss. This is where immutable backups play a critical role in defense.

🛡️ What Is Backup Immutability?

Backup immutability ensures that once a backup is created, it cannot be modified or deleted for a specified period. This guarantees:

- Protection against accidental or malicious deletion.
- Assurance that backups remain intact and trustworthy.
- Compliance with regulatory requirements for data retention and integrity.

🚀 Azure SQL Database LTR Backup Immutability (Public Preview)

Microsoft has introduced backup immutability for Long-Term Retention (LTR) backups in Azure SQL Database, now available in public preview. This feature allows organizations to apply Write Once, Read Many (WORM) policies to LTR backups stored in Azure Blob Storage.

Key features:

- Time-based immutability: Locks backups for a defined duration (e.g., 30 days).
- Legal hold immutability: Retains backups indefinitely until a legal hold is explicitly removed.
- Tamper-proof storage: Backups cannot be deleted or altered, even by administrators.

This ensures that LTR backups remain secure and recoverable, even in the event of a ransomware attack.

📜 Regulatory Requirements for Backup Immutability

Many global regulations mandate immutable storage to ensure data integrity and auditability. Here are some key examples (region, regulation, requirement):

- USA, SEC Rule 17a-4(f): Requires broker-dealers to store records in WORM-compliant systems.
- USA, FINRA: Mandates financial records be preserved in a non-rewriteable, non-erasable format.
- USA, HIPAA: Requires healthcare organizations to ensure the integrity and availability of electronic health records.
- EU, GDPR: Emphasizes data integrity and the ability to demonstrate compliance through audit trails.
- Global, ISO 27001 and PCI-DSS: Require secure, tamper-proof data retention for audit and compliance purposes.

Azure's immutable storage capabilities help organizations meet these requirements by ensuring that backup data remains unchanged and verifiable.

🕒 Time-Based vs. Legal Hold Immutability

⏱️ Time-based immutability
- Locks data for a predefined period (e.g., 30 days).
- Ideal for routine compliance and operational recovery.
- Automatically expires after the retention period.

📌 Legal hold immutability
- Retains data indefinitely until the hold is explicitly removed.
- Used in legal investigations, audits, or regulatory inquiries.
- Overrides time-based policies to ensure data preservation.

Both types can be applied to Azure SQL LTR backups, offering flexibility and compliance across different scenarios.

🧩 How Immutability Protects Against Ransomware

Immutable backups are a critical component of a layered defense strategy:

- Tamper-proof: Even if attackers gain access, they cannot delete or encrypt immutable backups.
- Reliable recovery: Organizations can restore clean data from immutable backups without paying ransom.
- Compliance-ready: Meets regulatory requirements for data retention and integrity.

By enabling immutability for Azure SQL LTR backups, organizations can significantly reduce the risk of data loss and ensure business continuity.
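The LTR immutability policy itself is configured through the Azure SQL LTR settings described in the documentation linked below. As a conceptual illustration only, the sketch below shows what a time-based WORM policy looks like at the Azure Blob Storage layer using the Az.Storage PowerShell module; the resource names are placeholders and this is not the LTR-specific configuration path:

# Illustration: lock a blob container with a 30-day time-based immutability (WORM) policy
Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName "rg-backups" `
    -StorageAccountName "ltrbackupstore" `
    -ContainerName "ltr-backups" `
    -ImmutabilityPeriod 30
# A legal hold works the same way conceptually: it remains on the container until it is explicitly cleared.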
✅ Final Thoughts

The public preview of backup immutability for Azure SQL Database LTR backups is a major step forward in ransomware resilience and regulatory compliance. With support for both time-based and legal hold immutability, Azure empowers organizations to:

- Protect critical data from tampering or deletion.
- Meet global compliance standards.
- Recover quickly and confidently from cyberattacks.

Immutability is not just a feature; it is a foundational pillar of modern data protection. Documentation is available at Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn.

Configure Auditing for Azure SQL Database series - Part 2
In the previous blog, we covered how to configure server-level and database-level auditing for Azure SQL Database with default settings. In this blog we will see how you can manage auditing using PowerShell cmdlets.

PowerShell cmdlets to manage auditing for Azure SQL Database:

- Get-AzSqlServerAudit - Get the server auditing policy
- Get-AzSqlDatabaseAudit - Get the database auditing policy
- Set-AzSqlDatabaseAudit - Create or update the database auditing policy
- Set-AzSqlServerAudit - Create or update the server auditing policy
- Remove-AzSqlDatabaseAudit - Remove the database auditing policy
- Remove-AzSqlServerAudit - Remove the server auditing policy

Refer to the manage Azure SQL Database auditing using Azure PowerShell documentation for more details.

Sometimes you might have a requirement to customize auditing for Azure SQL Database to meet compliance needs, achieve parity with your on-premises servers, or simply audit specific details to meet business requirements. Using PowerShell cmdlets, you can configure auditing to capture specific action groups and use predicate expressions to filter on queries, logins, schemas, and databases.

Review the existing configuration

Before we modify the audit settings, let us verify the existing server and database audit configuration.

1. Set the context to your subscription:
Set-AzContext -Subscription "xxx-xxxx-xxxx-xxxx"

2. Check the server audit configuration:
Get-AzSqlServerAudit -ResourceGroupName "resource group name" -ServerName "servername"

You can see that the server audit is enabled with the default action groups and a Log Analytics workspace as the target.

3. Check the database audit configuration:
Get-AzSqlDatabaseAudit -ResourceGroupName "resource group name" -ServerName "servername" -DatabaseName "database name"

The database audit is disabled: no action groups are configured and all targets are in the disabled state.

Here we customize the audit settings based on a few sample scenarios; you can extend them further based on your requirements.

Scenario 1: Exclude a specific login from auditing.
Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -BlobStorageTargetState Enabled -StorageAccountResourceId "xxxxxx" -PredicateExpression "[server_principal_name] != 'dba'"

Scenario 2: Exclude a login and a specific database from auditing.

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -BlobStorageTargetState Enabled -StorageAccountResourceId "xxxxxx" -PredicateExpression "[server_principal_name] != 'dbadmin' and [database_name] != 'audit'"

Scenario 3: Exclude SELECT statements from auditing.

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "sravstestdb" -PredicateExpression "statement not like '[select ]%'"
Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "sravstestdb" -PredicateExpression "statement not like 'exec sp_executesql N''SELECT%'''"

Scenario 4: Audit different action groups instead of the default ones.

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -LogAnalyticsTargetState Enabled -WorkspaceResourceId "/subscriptions/fa58cf66-caaf-xxxxxxx-xxxxxxxxx/resourceGroups/test/providers/Microsoft.OperationalInsights/workspaces/test" -AuditActionGroup "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP","FAILED_DATABASE_AUTHENTICATION_GROUP","DATABASE_OBJECT_CHANGE_GROUP"

Scenario 5: Use a predicate expression to exclude transaction-related actions from the batch completed action group.

Set-AzSqlDatabaseAudit -ResourceGroupName "test" -ServerName "test" -LogAnalyticsTargetState Enabled -WorkspaceResourceId "/subscriptions/fa58cf66-caaf-xxxxxxx-xxxxxxxxx/resourceGroups/test/providers/Microsoft.OperationalInsights/workspaces/test" -AuditActionGroup "BATCH_COMPLETED_GROUP" -PredicateExpression "additional_information not like '%<transaction_info>%'"

Scenario 6: Use a predicate expression to exclude an action from a given action group. The example below filters RPC completed events out of the batch completed action group. To get the action id, please refer to Filter SQL Server Audit on action_id.

Set-AzSqlServerAudit -ResourceGroupName "RESOURCE_GROUP" -ServerName "SERVER_NAME" -BlobStorageTargetState Enabled -StorageAccountResourceId "STORAGE_ACCOUNT_RESOURCE_ID" -PredicateExpression "action_id != 541934418"

Note: Both Set-AzSqlServerAudit and Set-AzSqlDatabaseAudit overwrite the existing configuration, so when you modify the policy you need to include the default action groups if you still want to audit them (see the example below).

Review the server audit settings again; you can see that auditing is now configured with the custom action groups and filters. In this blog we have successfully configured the server audit with the required audit action groups and filter expressions.
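Because both Set cmdlets replace the entire policy, a call that customizes the filter while still auditing the default action groups (BATCH_COMPLETED_GROUP, SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP and FAILED_DATABASE_AUTHENTICATION_GROUP) might look like the sketch below; the server, resource group and workspace values are placeholders:

# Re-specify the default action groups alongside the custom predicate expression
Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" `
    -LogAnalyticsTargetState Enabled `
    -WorkspaceResourceId "/subscriptions/<subscription-id>/resourceGroups/test/providers/Microsoft.OperationalInsights/workspaces/test" `
    -AuditActionGroup "BATCH_COMPLETED_GROUP","SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP","FAILED_DATABASE_AUTHENTICATION_GROUP" `
    -PredicateExpression "[server_principal_name] != 'dba'"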
Convert geo-replicated databases to Hyperscale

Update: On 22 October 2025 we announced the general availability of this improvement.

We're excited to introduce the next improvement in Hyperscale conversion: a new feature that allows customers to convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can now modernize their database architecture with Hyperscale while maintaining business continuity.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers complained about the complex steps they needed to perform to convert a database to Hyperscale when the database is geo-replicated by active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that's no longer necessary. This improvement allows customers to preserve their cross-region disaster recovery or read scale-out configurations while still converting to Hyperscale, which helps minimize downtime and operational complexity.

This feature is especially valuable for applications that rely on failover group endpoints for connectivity. Before this improvement, if the application needed to remain available during the conversion, the connection string had to be modified as part of the conversion because the failover group and its endpoints had to be removed. With this new improvement, the conversion process is optimized for minimal disruption, with telemetry showing the majority of cutover times under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition. Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

Customer feedback

Throughout the preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer from Nintex, says: "We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly, allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale's ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity."

Get started

The good news is that there are no changes needed to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method remains the same as the existing conversion process that works for non-geo-replicated databases. All you need to do is make sure that:

- You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica.
- If a chained geo-replication configuration exists, it must be removed before starting the conversion to Hyperscale.
Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale.

Once the above requirements are satisfied, you can use any of the following methods to initiate the conversion process. Conversion to Hyperscale must be initiated from the geo-primary. The following commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

T-SQL:
ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;

PowerShell:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover

Azure CLI:
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover

Here are some notable details of this improvement:

- The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.
- All database configurations such as maintenance window, zone resiliency, and backup redundancy remain the same as before (i.e., both geo-primary and geo-secondary inherit their own earlier configuration).
- A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible; however, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.
- If the geo-secondary database is in an elastic pool before the conversion, it is taken out of the pool and might need to be added back to a Hyperscale elastic pool separately after the conversion.
- The deployment of the feature is in progress and is expected to be completed in all Azure regions in a few weeks. If you see the error (Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>') while converting the primary to Hyperscale, wait for this new improvement to become available in your region.
- If you don't want to use this capability, make sure to remove any geo-replication configuration before converting your databases to Hyperscale.

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simpler steps, less downtime, and a geo-secondary that stays available during the conversion. We encourage you to try this capability and provide your valuable feedback to help us refine this feature. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
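When the manual cutover option is used, the conversion waits at the cutover point until you complete it. A minimal sketch of completing and then monitoring the operation with T-SQL, assuming the documented PERFORM_CUTOVER clause and the sys.dm_operation_status view, follows; the database name reuses the sample above:

-- Run on the geo-primary once you are ready to complete the manual cutover
ALTER DATABASE WideWorldImporters PERFORM_CUTOVER;

-- Monitor conversion progress from the master database of the logical server
SELECT operation, state_desc, percent_complete, start_time, last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'WideWorldImporters'
ORDER BY start_time DESC;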
Multiple geo-replicas for Azure SQL Hyperscale is now in public preview

Active geo-replication for Azure SQL is a business continuity solution that lets you perform quick disaster recovery of individual databases if there is a regional disaster or a large-scale outage. Until now, up to four geo-secondaries could be created for an Azure SQL database, except for Hyperscale. We're excited to announce that Azure SQL Database Hyperscale support for up to four geo-replicas is now available in public preview. This enhancement gives you greater flexibility for disaster recovery, regional scale-out, and migration scenarios.

What's New?

- Create up to four read-only geo-replicas for each Hyperscale database, in the same or different Azure regions.
- Use the additional geo-replicas to add read scale-out capabilities to additional regions.
- More flexibility to facilitate database migrations and zone redundancy changes.

How to Get Started

Getting started with multiple geo-replicas in Azure SQL Hyperscale is straightforward. In the Azure portal, you can add multiple geo-replicas using the same process that you currently use to add a single geo-replica.

1. Go to your Hyperscale database in the Azure portal.
2. Open the "Replicas" blade under "Data management".
3. Click "Create replica". TIP: If you want the geo-replica enabled for zone redundancy, remember to click on "Configure database" in the "Compute + storage" section during this step. The zone redundancy setting is not automatically inherited from the primary database when creating the geo-replica.
4. Select the target server.
5. Review and create.

Creating a geo-replica can also be done with command line tools like PowerShell. Example:

New-AzSqlDatabaseSecondary -DatabaseName mydb1 -ServerName sqlserver1 -ResourceGroupName myrg -PartnerResourceGroupName myrg -PartnerServerName sqlserver2 -AllowConnections "All"

Performing a Failover

Performing a failover in the portal is the same process with multiple geo-replicas. Click on the ellipses in the row of the replica to which you want to fail over and choose Failover or Forced failover. For PowerShell, use the Set-AzSqlDatabaseSecondary cmdlet. Example:

Set-AzSqlDatabaseSecondary -DatabaseName mydb1 -PartnerResourceGroupName myrg -ServerName sqlserver2 -ResourceGroupName myrg -Failover

Limitations & Notes

- You can create up to four geo-replicas per database.
- Geo-replicas must be created on different logical servers.
- Chaining (creating a geo-replica of a geo-replica) is not supported.
- If you would like zone redundancy enabled for the geo-replica, you must configure that setting during the create geo-replica process, since the setting is not automatically inherited from the primary database.

Learn More:
- Hyperscale secondary replicas - Azure SQL Database | Microsoft Learn
- Active Geo-Replication - Azure SQL Database | Microsoft Learn

Geo-Replication and Transparent Data Encryption Key Management in Azure SQL Database
Transparent data encryption (TDE) in Azure SQL with customer-managed key (CMK) enables the Bring Your Own Key (BYOK) scenario for data protection at rest. With customer-managed TDE, the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing of operations on keys. Geo-replication and failover groups in Azure SQL Database create readable secondary replicas across different regions to support disaster recovery and high availability. There are several considerations when configuring geo-replication for Azure SQL logical servers that use TDE with customer-managed keys. This blog post provides detailed information about setting up geo-replication for TDE-enabled databases.

Understanding the difference between a TDE protector and a server/database key

To understand the geo-replication considerations, I first need to explain the roles of the TDE protector and a server key. The TDE protector is the key responsible for encrypting the Database Encryption Key (DEK), which in turn encrypts the actual database files and transaction logs. It sits at the top of the encryption hierarchy. A server or database key is a broader concept that refers to any key registered at the server or database level in Azure SQL. One of the registered server or database keys is designated as the TDE protector. Multiple keys can be registered, but only one acts as the active protector at any given time.

Geo-replication considerations

Azure Key Vault considerations

In active geo-replication and failover group scenarios, the primary and secondary SQL logical servers can be linked to an Azure Key Vault in any region; they do not need to be located in the same region. Connecting both SQL logical servers to the same key vault reduces the potential for discrepancies in key material that can occur when using separate key vaults. Azure Key Vault incorporates multiple layers of redundancy to ensure the continued availability of keys and key vaults in the event of service or regional failures. The following diagram represents a paired-region configuration (primary and secondary) for an Azure Key Vault cross-failover with Azure SQL set up for geo-replication using a failover group.

Azure SQL considerations

The primary consideration is to ensure that the server or database keys are present on both the primary and secondary SQL logical servers or databases, and that appropriate permissions have been granted for these keys within Azure Key Vault. The TDE protector used on the primary does not need to be identical to the one used on the secondary; it is sufficient to have the same key material available on both the primary and secondary systems. You can add keys to a SQL logical server with the Azure portal, PowerShell, Azure CLI, or the REST API. Assign user-assigned managed identities (UMI) to the primary and secondary SQL servers for flexibility across regions and failover scenarios, and grant Get, WrapKey, and UnwrapKey permissions to these identities on the key vault. For Azure Key Vaults using Azure RBAC for access configuration, the server identity needs the Key Vault Crypto Service Encryption User role to be able to use the key for encryption and decryption operations.
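As a sketch, granting those permissions to a server identity with Azure PowerShell could look like the following; the vault name, object ID, and scope are placeholders, and only the block that matches your vault's permission model (access policies or Azure RBAC) applies:

# Access policy model: grant the key permissions TDE needs (Get, WrapKey, UnwrapKey)
Set-AzKeyVaultAccessPolicy -VaultName "ContosoKeyVault" `
    -ObjectId "<server-identity-object-id>" `
    -PermissionsToKeys get,wrapKey,unwrapKey

# Azure RBAC model: assign the Key Vault Crypto Service Encryption User role on the vault
New-AzRoleAssignment -ObjectId "<server-identity-object-id>" `
    -RoleDefinitionName "Key Vault Crypto Service Encryption User" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/ContosoKeyVault"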
Different encryption key combinations

Based on interactions with customers and analysis of livesite incidents involving geo-replication with TDE CMK, we noticed that many clients do not configure the primary and secondary servers with the same TDE protector, due to differing compliance requirements. In this chapter, I will explain how you can set up a failover group with two different TDE protectors.

This scenario uses the TDECMK key as the TDE protector on the primary server (tdesql) and the TDECMK2 key as the TDE protector on the secondary server (tdedr). As a side note, you can also enable the Auto-rotate key option to allow end-to-end, zero-touch rotation of the TDE protector.

Both logical SQL servers have access to the Azure Key Vault keys. The next step is to create the failover group and replicate one database called ContosoHR. The setup of the failover group failed because the CMK key from the primary server was (intentionally) not added to the secondary server. Adding a server key can easily be done with the Add-AzSqlServerKeyVaultKey command. For example:

Add-AzSqlServerKeyVaultKey -KeyId 'https://xxxxx.vault.azure.net/keys/TDECMK/01234567890123456789012345678901' -ServerName 'tdedr' -ResourceGroupName 'TDEDemo'

After the failover group setup, the sample database on the secondary server becomes available and uses the primary's CMK (TDECMK), regardless of the secondary's CMK configuration. You can verify this by running the following query on both the primary and the secondary server.

SELECT
    DB_NAME(db_id()) AS database_name,
    dek.encryption_state,
    dek.encryption_state_desc,   -- SQL 2019+ / Azure SQL
    dek.key_algorithm,
    dek.key_length,
    dek.encryptor_type,          -- CERTIFICATE (service-managed) or ASYMMETRIC KEY (BYOK/CMK)
    dek.encryptor_thumbprint
FROM sys.dm_database_encryption_keys AS dek
WHERE database_id <> 2;

database_name  encryption_state_desc  encryptor_type  encryptor_thumbprint
ContosoHR      ENCRYPTED              ASYMMETRIC KEY  0xC8F041FB93531FA26BF488740C9AC7D3B5827EF5

The encryptor_type column shows ASYMMETRIC KEY, which means the database uses a CMK for encryption. The encryptor_thumbprint should match on both the primary and secondary servers, indicating that the secondary database is encrypted using the CMK from the primary server. The TDE protector on the secondary server (TDECMK2) becomes active only in the event of a failover, when the primary and secondary server roles are reversed.

As illustrated in the image below, the roles of my primary and secondary servers have been reversed. If the above query is executed again following a failover, the encryptor_thumbprint value will be different, which indicates that the database is now encrypted using the TDE protector (TDECMK2) from the secondary server.

database_name  encryption_state_desc  encryptor_type  encryptor_thumbprint
ContosoHR      ENCRYPTED              ASYMMETRIC KEY  0x788E5ACA1001C87BA7354122B7D93B8B7894918D

As previously mentioned, please ensure that the server or database keys are available on both the primary and secondary SQL logical servers or databases. Additionally, verify that the appropriate permissions for these keys have been granted within Azure Key Vault. This scenario is comparable to other configurations, such as:

- The primary server uses a service-managed key (SMK) while the secondary server uses a CMK.
- The primary server uses a CMK while the secondary server uses an SMK.

Conclusion

Understanding the distinction between the TDE protector and server keys is essential for geo-replication with Azure SQL.
The TDE protector encrypts the database encryption key, while server keys refer to any key registered at the server or database level in Azure SQL. For a successful geo-replication setup and failover, all necessary keys must be created and available on both the primary and secondary servers. It is possible, and in certain cases required, to configure different TDE protectors on the replicas, as long as the key material is available on each server.

MSSQL Extension for VS Code: Fabric Integration and GitHub Copilot Slash Commands (Public Preview)
The MSSQL extension for VS Code continues to evolve, delivering features that make SQL development more integrated, more consistent, and more developer-friendly. In version v1.36.0, we're announcing the Public Preview of Fabric Connectivity (Browse), SQL Database in Fabric Provisioning, and GitHub Copilot Slash Commands: three capabilities that bring Microsoft Fabric and AI-powered assistance directly into your development workflow inside Visual Studio Code.

What's new in MSSQL extension for VS Code v1.36

This release introduces three major capabilities designed to streamline the SQL development experience:

- Fabric Connectivity (Public Preview) – Browse and connect to Fabric workspaces directly from the Connection dialog using Microsoft Entra ID, with tree-view navigation and search.
- SQL Database in Fabric Provisioning (Public Preview) – Create new SQL databases in Fabric directly from the Deployments page, with instant connection in VS Code.
- GitHub Copilot Slash Commands (Public Preview) – Use structured slash commands in GitHub Copilot Agent Mode to connect, explore schemas, and run queries directly from chat.

In addition to these major features, this release includes multiple quality and performance improvements:

- Enhanced performance and usability of the query results grid.
- Fixed accessibility issues affecting error messages and UI feedback.
- Addressed edge-case errors in GitHub Copilot Agent Mode when switching connections.

Fabric Connectivity (Public Preview)

The MSSQL extension for Visual Studio Code now includes Fabric connectivity, making it easier than ever to browse and connect to your Fabric workspaces directly from your development environment. This new integration eliminates the friction of connection strings and brings Fabric resources into the familiar VS Code experience. With this update, you can authenticate once, browse all your workspaces in an intuitive tree view, and establish connections to your Fabric resources in just a few clicks, all without leaving VS Code.

Key highlights

- Dedicated Fabric Experience – A new "Fabric" connection type joins the existing "Parameters" and "Azure" options in the connection dialog.
- Seamless Authentication – Leverage Microsoft Entra ID for secure, one-click authentication with persistent sign-in.
- Intelligent Workspace Browsing – Explore Fabric workspaces in an intuitive tree view that loads resources on demand.
- Smart Search & Discovery – Find workspaces quickly with real-time search and relevant results at the top.
- Cross-extension Support – Use the "Open in MSSQL" option directly from the Fabric extension or Portal.

SQL database in Fabric Provisioning (Public Preview)

With Fabric provisioning, you can now create SQL databases in Fabric without leaving VS Code. The new provisioning flow allows you to authenticate, select or create a workspace, name your database, and connect instantly, all in under three minutes. By bringing provisioning directly into the Deployments page, this experience eliminates the need to use the Fabric Portal or Azure Portal, reducing context switching and making early prototyping significantly faster.

Key highlights

- Simple Provisioning Flow – Authenticate, select or create a workspace, name your database, and create.
- Immediate Connection – The new database is automatically added to your connections for instant querying.
- Consistent Experience – The provisioning flow aligns with other backend options, like local SQL Server containers.
- Capacity Awareness – Disabled workspaces are clearly shown with guidance if capacity isn't available.

GitHub Copilot Slash Commands (Public Preview)

The MSSQL extension now contributes slash commands to GitHub Copilot Agent Mode, giving you a faster, more structured way to interact with your databases directly in chat. Instead of typing full prompts, you can invoke commands like /connect, /listDatabases, or /runQuery to perform common tasks with less effort and greater predictability.

Key highlights

- Connection management – Establish, switch, or disconnect database connections with structured commands.
- Schema exploration – List schemas, tables, views, and functions, or generate schema diagrams directly in chat.
- Query execution – Run queries or request optimizations with simple commands.

Other updates

- Connection reliability – Fixed issues that prevented users from establishing connections.
- Fabric connectivity stability – Resolved query failures when working with Fabric tables.
- Authentication persistence – Fixed a bug where saved passwords were lost after failed connections.
- Performance & memory – Addressed excessive memory usage and unresponsive queries.

Conclusion

The v1.36 release marks a major milestone by introducing Fabric Connectivity (Browse), SQL Database in Fabric Provisioning, and GitHub Copilot Slash Commands in Public Preview. Together, these capabilities simplify connectivity, accelerate prototyping, and bring AI-powered assistance directly into your SQL development workflows in VS Code. Combined with important reliability fixes, this update makes the MSSQL extension more powerful, stable, and developer-friendly than ever.

If there's something you'd love to see in a future update, here's how you can contribute:

- 💬 GitHub discussions – Share your ideas and suggestions to improve the extension.
- ✨ New feature requests – Request missing capabilities and help shape future updates.
- 🐞 Report bugs – Help us track down and fix issues to make the extension more reliable.

Want to see these features in action?

- Fabric Connectivity demo
- SQL database in Fabric provisioning demo
- Full playlist of demos

Thanks for being part of the journey; happy coding! 🚀

Everything you need to know about TDE key management for database restore
Transparent data encryption (TDE) in Azure SQL with customer-managed key (CMK) supports Bring Your Own Key (BYOK) for data protection at rest and facilitates separation of duties in key and data management. With customer-managed TDE, the user manages the lifecycle of keys (creation, upload, rotation, deletion), usage permissions, and auditing of key operations. The key used for encrypting the Database Encryption Key (DEK), known as the TDE protector, is an asymmetric key managed by the customer and stored in Azure Key Vault (AKV).

Once a database is encrypted with TDE using a Key Vault key, new backups are encrypted with the same TDE protector. Changing the TDE protector does not update old backups to use the new protector. To restore a backup encrypted with a Key Vault TDE protector, ensure that the key material is accessible to the target server. The TDE feature was designed with the requirement that both the current and previous TDE protectors are necessary for successful restores. It is recommended to retain all previous versions of the TDE protector in the key vault to enable the restore of database backups. This blog post will provide detailed information on which keys should be available for a database restore and the reasons why they are necessary.

Encryption of the transaction log file

To understand which keys are required for a point-in-time restore, it is necessary to first explain how transaction log encryption operates. The SQL Server Database Engine divides each physical log file into several virtual log files (VLFs). Each VLF has its own header. Encrypting the entire log file in one single sweep is not possible, so each VLF is encrypted individually and the encryptor information is stored in the VLF header. When the log manager needs to read a particular VLF for recovery, it uses the encryptor information in the VLF header to locate the encryptor and decrypt the VLF.

Unencrypted transaction log

Consider the following sequence of blocks as the logical log file, where each block represents a virtual log file (VLF). Initially, we are in VLF1, and the current Log Sequence Number (LSN) is within VLF1.

Transparent data encryption enabled

When TDE is enabled on the database, the current VLF is filled with non-operational log records, and a new VLF (VLF2) is created. Each VLF has one header containing the encryptor information, so whenever the encryptor information changes, the log rolls over to the next VLF boundary. The subsequent VLF will contain the new DEK (DEK_1) and the thumbprint of the encryptor of the DEK in the header. Any additions to the log file will be added to VLF2 and will be encrypted. When VLF2 reaches capacity, a new VLF (VLF3) will be generated. Since encryption is enabled, the new VLF will contain the DEK and its information in its header, and it will also be encrypted.

Key rotation

When a new DEK is generated or its encryptor changes, the log rolls over to the next VLF boundary. The new VLF (VLF4) will contain the updated DEK and encryptor information. For example, if a new DEK (DEK_2) is generated via key rotation in the Azure portal, VLF3 will fill with non-operational commands before VLF4 is created and encrypted by the new DEK.

A database can use multiple keys at a single time

Currently, for server and database level CMK, after a key rotation some of the active VLFs may still be encrypted with an old key. As key rotations are allowed before these VLFs are flushed, the database can end up using multiple keys simultaneously.
To ensure that, at any point in time, the database is using only the current encryption protector (primary generation p) and the old encryption protector (generation p-1), we used the following approach:

- Block a protector change operation when there is any active VLF using an old thumbprint different from the current encryption protector. When a customer attempts a protector change, or the key is being auto-rotated, we verify whether there are any VLFs using the old thumbprint that are "active". If so, the protector change is blocked.
- If there are no active VLFs using the old thumbprint, we take a log backup to flush the inactive VLFs, then rotate the protector and wait for it to fully rotate.

This approach ensures that the database uses no more than two keys at any given time.

Example

- Time t0: the database is created without encryption.
- Time t1: the database is protected by Thumbprint A.
- Time t2: the database protector is rotated to Thumbprint B.
- Time t3: the customer requests a protector change to Thumbprint C. We check the active VLFs; they are using Thumbprint A, so we block the change. This ensures that the database is currently using only Thumbprint A and Thumbprint B.
- Time t4: the customer requests a protector change to Thumbprint C.
  a. We check the active VLFs, and none of them are using Thumbprint A.
  b. We solicit a log backup, which should flush the inactive VLFs using Thumbprint A.
  c. We then rotate the encryption protector and succeed the operation only when both (b) and (c) are fully complete.
  This ensures that after time t4, the database is using only Thumbprint B and Thumbprint C.

Point-in-time restore

Based on the information above, it is evident that multiple keys are necessary for a restore if key rotations have taken place. To restore a backup encrypted with a TDE protector from Azure Key Vault, ensure that the key material is accessible to the target server. Therefore, we recommend that you keep the old versions of the TDE protector in Azure Key Vault, so database backups can be restored. To identify missing keys, run the Get-AzSqlServerKeyVaultKey cmdlet for the target server, or Get-AzSqlInstanceKeyVaultKey for the target managed instance, to return the list of available keys (see the example after the timeline below). To ensure all backups can be restored, make sure the target server for the restore has access to all of the keys needed. These keys don't need to be marked as the TDE protector.

Backed-up log files remain encrypted with the original TDE protector, even if it was rotated and the database is now using a new TDE protector. At restore time, both keys are needed to restore the database. If the log file uses a TDE protector stored in Azure Key Vault, this key is needed at restore time, even if the database has been changed to use service-managed TDE in the meantime.

Point-in-time restore example

When a customer wants to restore data to a specific point in time (tx), they will need the current encryption protector (p) and the old encryption protector (p-1) from the period [tx-8 days] to [tx]. The reason for using tx-8 is that there is a full backup every 7 days, so we expect to have a complete backup within the last 8 days. Because VLFs may remain active with the earlier key, the system is designed to also use the two latest thumbprints (p-2 and p-3) from outside the buffer period.

Consider the following timeline: the PITR request is made for 8/20/2025 (tx), at which point Thumbprint D (p) is active. To ensure we have a full backup, we subtract 8 days, bringing us to 8/12/2025 (tx-8), when Thumbprint C (p-1) is active. Since VLFs might still be active with the previous key, we also need Thumbprint B (p-2) and Thumbprint A (p-3). The required thumbprints for this point-in-time restore are A, B, C, and D.
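A sketch of checking which Key Vault keys the target logical server can see and registering a missing one before attempting the restore; the server name, resource group, vault name, and key version below are placeholders:

# List the Key Vault keys currently registered on the target logical server
Get-AzSqlServerKeyVaultKey -ResourceGroupName "TDEDemo" -ServerName "targetserver"

# Register a missing key (for example, an older TDE protector version) on the target server
Add-AzSqlServerKeyVaultKey -ResourceGroupName "TDEDemo" -ServerName "targetserver" `
    -KeyId "https://contosovault.vault.azure.net/keys/TDECMK/<key-version>"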
Conclusion

To restore a backup encrypted with a Key Vault TDE protector, it is essential to ensure that the key material is accessible to the target server. It is recommended to retain all old versions of the TDE protector in the key vault to facilitate the restore of database backups.

Introducing the Azure SQL hub: A simpler, guided entry into Azure SQL
Choosing the right Azure SQL service can be challenging. To make this easier, we built the Azure SQL hub, a new home for everything related to Azure SQL in the Azure portal. Whether you're new to Azure SQL or an experienced user, the hub helps you quickly find the right service and decide, without disrupting your existing workflows.

For existing users: your current workflows remain unchanged. The only visible update is a streamlined navigation pane where you access Azure SQL resources.

For new users: start from the Azure SQL hub home page. Get personalized recommendations by answering a few quick questions or chatting with Azure portal Copilot, or compare services side by side and explore key resources, all without leaving the portal.

This is one way to find it:

Searching for "azure sql" in the main search box or in the marketplace is also an efficient way to get to the Azure SQL hub.

- Answer a few questions to get our recommendation, and use Copilot to refine your requirements.
- Get a detailed side-by-side comparison without leaving the hub.
- Still deciding? Explore a selection of Azure SQL services for free. This option takes you straight to the resource creation page with a pre-applied free offer.

Try the Azure SQL hub today in the Azure portal, and share your feedback in the comments!