Azure SQL Database
Lessons Learned #538: Printing One Line Per Field in T-SQL in Azure SQL Database
While working on a case involving a table with a huge number of columns, I needed to dump records as one line per field. Happy to share the script that proved useful. For example, I wanted the data presented this way:

Id: 123
Status: Open
Title: Connectivity issue

Running the following T-SQL, I was able to obtain the information in that shape:

;WITH src AS
(
    SELECT TOP (10000) *
    FROM dbo.Incidents
    ORDER BY IncidentId
)
SELECT
    s.IncidentId,
    f.ord,
    CONCAT(f.field, ': ', COALESCE(f.value, '')) AS line
FROM src AS s
CROSS APPLY (VALUES
    ( 1, N'IncidentId', CONVERT(nvarchar(50), s.IncidentId)),
    ( 2, N'Status',     REPLACE(REPLACE(CONVERT(nvarchar(max), s.Status),   CHAR(13), ' '), CHAR(10), ' ')),
    ( 3, N'Path',       REPLACE(REPLACE(CONVERT(nvarchar(max), s.Path),     CHAR(13), ' '), CHAR(10), ' ')),
    ( 4, N'Title',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Title),    CHAR(13), ' '), CHAR(10), ' ')),
    ( 5, N'Severity',   REPLACE(REPLACE(CONVERT(nvarchar(max), s.Severity), CHAR(13), ' '), CHAR(10), ' ')),
    ( 6, N'IsResolved', CONVERT(nvarchar(10), s.IsResolved)),
    ( 7, N'ResolvedAt', CONVERT(varchar(19), s.ResolvedAt, 126)), -- ISO 8601
    ( 8, N'Owner',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Owner),    CHAR(13), ' '), CHAR(10), ' ')),
    ( 9, N'Source',     REPLACE(REPLACE(CONVERT(nvarchar(max), s.Source),   CHAR(13), ' '), CHAR(10), ' ')),
    (10, N'Tags',       REPLACE(REPLACE(CONVERT(nvarchar(max), s.Tags),     CHAR(13), ' '), CHAR(10), ' ')),
    (11, N'Notes',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Notes),    CHAR(13), ' '), CHAR(10), ' '))
) AS f(ord, field, value)
ORDER BY s.IncidentId, f.ord;
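If you prefer each record collapsed into a single multi-line text value instead of one row per field, the same pattern can be combined with STRING_AGG. This is a minimal sketch built on the same dbo.Incidents columns used above (only a few fields shown); adjust the field list to your table.

;WITH src AS
(
    SELECT TOP (10000) * FROM dbo.Incidents ORDER BY IncidentId
),
lines AS
(
    SELECT s.IncidentId, f.ord,
           CONCAT(f.field, ': ', COALESCE(f.value, N'')) AS line
    FROM src AS s
    CROSS APPLY (VALUES
        (1, N'IncidentId', CONVERT(nvarchar(50), s.IncidentId)),
        (2, N'Status',     CONVERT(nvarchar(max), s.Status)),
        (3, N'Title',      CONVERT(nvarchar(max), s.Title))
    ) AS f(ord, field, value)
)
SELECT IncidentId,
       -- one CR/LF-separated block of "Field: value" lines per incident
       STRING_AGG(line, CHAR(13) + CHAR(10)) WITHIN GROUP (ORDER BY ord) AS record_text
FROM lines
GROUP BY IncidentId
ORDER BY IncidentId;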
Public Preview: Backup Immutability for Azure SQL Database LTR Backups

The Ransomware Threat Landscape

Ransomware attacks have become one of the most disruptive cybersecurity threats in recent years. These attacks typically follow a destructive pattern:
- Attackers gain unauthorized access to systems.
- They encrypt or delete critical data.
- They demand ransom in exchange for restoring access.

Organizations without secure, tamper-proof backups are often left with no choice but to pay the ransom or suffer significant data loss. This is where immutable backups play a critical role in defense.

What Is Backup Immutability?

Backup immutability ensures that once a backup is created, it cannot be modified or deleted for a specified period. This guarantees:
- Protection against accidental or malicious deletion.
- Assurance that backups remain intact and trustworthy.
- Compliance with regulatory requirements for data retention and integrity.

Azure SQL Database LTR Backup Immutability (Public Preview)

Microsoft has introduced backup immutability for Long-Term Retention (LTR) backups in Azure SQL Database, now available in public preview. This feature allows organizations to apply Write Once, Read Many (WORM) policies to LTR backups stored in Azure Blob Storage.

Key features:
- Time-based immutability: locks backups for a defined duration (e.g., 30 days).
- Legal hold immutability: retains backups indefinitely until a legal hold is explicitly removed.
- Tamper-proof storage: backups cannot be deleted or altered, even by administrators.

This ensures that LTR backups remain secure and recoverable, even in the event of a ransomware attack.

Regulatory Requirements for Backup Immutability

Many global regulations mandate immutable storage to ensure data integrity and auditability. Here are some key examples:

Region | Regulation          | Requirement
USA    | SEC Rule 17a-4(f)   | Requires broker-dealers to store records in WORM-compliant systems.
USA    | FINRA               | Mandates financial records be preserved in a non-rewriteable, non-erasable format.
USA    | HIPAA               | Requires healthcare organizations to ensure the integrity and availability of electronic health records.
EU     | GDPR                | Emphasizes data integrity and the ability to demonstrate compliance through audit trails.
Global | ISO 27001, PCI-DSS  | Require secure, tamper-proof data retention for audit and compliance purposes.

Azure's immutable storage capabilities help organizations meet these requirements by ensuring that backup data remains unchanged and verifiable.

Time-Based vs. Legal Hold Immutability

Time-based immutability:
- Locks data for a predefined period (e.g., 30 days).
- Ideal for routine compliance and operational recovery.
- Automatically expires after the retention period.

Legal hold immutability:
- Retains data indefinitely until the hold is explicitly removed.
- Used in legal investigations, audits, or regulatory inquiries.
- Overrides time-based policies to ensure data preservation.

Both types can be applied to Azure SQL LTR backups, offering flexibility and compliance across different scenarios.

How Immutability Protects Against Ransomware

Immutable backups are a critical component of a layered defense strategy:
- Tamper-proof: even if attackers gain access, they cannot delete or encrypt immutable backups.
- Reliable recovery: organizations can restore clean data from immutable backups without paying ransom.
- Compliance-ready: meets regulatory requirements for data retention and integrity.

By enabling immutability for Azure SQL LTR backups, organizations can significantly reduce the risk of data loss and ensure business continuity.
Final Thoughts

The public preview of backup immutability for Azure SQL Database LTR backups is a major step forward in ransomware resilience and regulatory compliance. With support for both time-based and legal hold immutability, Azure empowers organizations to:
- Protect critical data from tampering or deletion.
- Meet global compliance standards.
- Recover quickly and confidently from cyberattacks.

Immutability is not just a feature; it is a foundational pillar of modern data protection. Documentation is available at Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn.
Configure Auditing for Azure SQL Database series - Part2

In the previous blog, we covered how to configure server-level and database-level auditing for Azure SQL Database with default settings. In this blog we will see how you can manage auditing using PowerShell cmdlets.

PowerShell cmdlets to manage auditing for Azure SQL Database:
- Get-AzSqlServerAudit - Get the server auditing policy
- Get-AzSqlDatabaseAudit - Get the database auditing policy
- Set-AzSqlDatabaseAudit - Create or update the database auditing policy
- Set-AzSqlServerAudit - Create or update the server auditing policy
- Remove-AzSqlDatabaseAudit - Remove the database auditing policy
- Remove-AzSqlServerAudit - Remove the server auditing policy

Refer to the "Manage Azure SQL Database auditing using Azure PowerShell" documentation for more details.

Sometimes you might have a requirement to customize auditing for Azure SQL Database to meet compliance, to achieve parity with your on-premises servers, or simply to audit specific details for business needs. Using PowerShell cmdlets, you can configure auditing to audit specific action groups and use predicate expressions to filter queries, logins, schemas, and databases. Before we modify the audit settings, let us verify the existing server and database audit configuration.

Review the existing configuration

1. Set the context to your subscription:

   Set-AzContext -Subscription "xxx-xxxx-xxxx-xxxx"

2. Check the server audit configuration:

   Get-AzSqlServerAudit "resource group name" "servername"

   You can see the server audit is enabled with the default action groups and the target set to a Log Analytics workspace.

3. Check the database audit configuration:

   Get-AzSqlDatabaseAudit "resource group name" "servername" "database name"

   The database audit is disabled: no action groups are configured and all targets are in a disabled state.

Here we discuss customizing audit settings based on a few sample scenarios; you can extend them further based on your requirements.

Scenario 1: Exclude a specific login from auditing.
Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -BlobStorageTargetState Enabled -StorageAccountResourceId "xxxxxx" -PredicateExpression "[server_principal_name] != 'dba'"

Scenario 2: Exclude a login and a specific database from auditing.

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -BlobStorageTargetState Enabled -StorageAccountResourceId "xxxxxx" -PredicateExpression "[server_principal_name] != 'dbadmin' and [database_name] != 'audit'"

Scenario 3: Exclude SELECT statements from auditing.

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "sravstestdb" -PredicateExpression "statement not like '[select ]%'"

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "sravstestdb" -PredicateExpression "statement <> 'exec sp_executesql N''SELECT%'''"

Scenario 4: Audit different action groups instead of the default ones.

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -LogAnalyticsTargetState Enabled -WorkspaceResourceId "/subscriptions/fa58cf66-caaf-xxxxxxx-xxxxxxxxx/resourceGroups/test/providers/Microsoft.OperationalInsights/workspaces/test" -AuditActionGroup "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP","FAILED_DATABASE_AUTHENTICATION_GROUP","DATABASE_OBJECT_CHANGE_GROUP"

Scenario 5: Use a predicate expression to exclude transaction-related actions from the batch completed action group.

Set-AzSqlDatabaseAudit -ResourceGroupName "test" -ServerName "test" -LogAnalyticsTargetState Enabled -WorkspaceResourceId "/subscriptions/fa58cf66-caaf-xxxxxxx-xxxxxxxxx/resourceGroups/test/providers/Microsoft.OperationalInsights/workspaces/test" -AuditActionGroup "BATCH_COMPLETED_GROUP" -PredicateExpression "additional_information not like '%<transaction_info>%'"

Scenario 6: Use a predicate expression to exclude a specific action from a given action group. The example below filters RPC completed events out of the batch completed action group. To get the action id, refer to "Filter SQL Server Audit on action_id".

Set-AzSqlServerAudit -ResourceGroupName "RESOURCE_GROUP" -ServerName "SERVER_NAME" -BlobStorageTargetState Enabled -StorageAccountResourceId "STORAGE_ACCOUNT_RESOURCE_ID" -PredicateExpression "action_id != 541934418"

Note: Both Set-AzSqlServerAudit and Set-AzSqlDatabaseAudit overwrite the existing configuration, so when you modify it you need to include the default action groups if you still want to audit them.

Review the server audit settings: you can see auditing is now configured with the custom action groups and filters. In this blog we have successfully configured the server audit with the required audit action groups and filter expressions.
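Because the Set-* cmdlets overwrite the existing policy, a practical pattern is to re-specify the default action groups together with your custom filter. The snippet below is a sketch using the placeholder names from the scenarios above; the three groups listed are the documented defaults for Azure SQL auditing.

# Re-apply the default action groups while adding a custom predicate,
# so the defaults are not lost when the policy is overwritten.
Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" `
    -LogAnalyticsTargetState Enabled `
    -WorkspaceResourceId "/subscriptions/<subscription-id>/resourceGroups/test/providers/Microsoft.OperationalInsights/workspaces/test" `
    -AuditActionGroup "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP","FAILED_DATABASE_AUTHENTICATION_GROUP","BATCH_COMPLETED_GROUP" `
    -PredicateExpression "[server_principal_name] != 'dba'"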
Convert geo-replicated databases to Hyperscale

Update: On 22 October 2025 we announced the General Availability of this improvement.

We're excited to introduce the next improvement in Hyperscale conversion: a new feature that allows customers to convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can now modernize their database architecture with Hyperscale while maintaining business continuity.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers complained about the complex steps they needed to perform to convert a database to Hyperscale when the database is geo-replicated by active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that's no longer necessary. This improvement allows customers to preserve their cross-region disaster recovery or read scale-out configurations and still convert to Hyperscale, which minimizes downtime and operational complexity.

This feature is especially valuable for applications that rely on failover group endpoints for connectivity. Before this improvement, if the application needed to remain available during conversion, the connection string required modifications as part of the conversion because the failover group and its endpoints had to be removed. With this new improvement, the conversion process is optimized for minimal disruption, with telemetry showing the majority of cutover times under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition. Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

Customer feedback

Throughout the preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer from Nintex, says:

"We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly; allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale's ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity."

Get started

The good news is that there are no changes needed to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method remains the same as the existing conversion process that works for non-geo-replicated databases. All you need is to make sure that:

- You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica.
- If a chained geo-replication configuration exists, it must be removed before starting the conversion to Hyperscale.
- Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale.

Once the above requirements are satisfied, you can use any of the following methods to initiate the conversion process. Conversion to Hyperscale must be initiated from the primary geo-replica. The following commands show how to convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

T-SQL:
ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;

PowerShell:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover

Azure CLI:
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover

Here are some notable details of this improvement:

- The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.
- All database configurations such as maintenance window, zone resiliency, backup redundancy, etc. remain the same as before (i.e., both geo-primary and geo-secondary inherit from their own earlier configuration).
- A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible; however, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.
- If the geo-secondary database is in an elastic pool before conversion, it is taken out of the pool and might need to be added back to a Hyperscale elastic pool separately after the conversion.

The deployment of the feature is in progress and is expected to be completed in all Azure regions in a few weeks. If you see the error "Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>'" while converting a primary to Hyperscale, wait for this new improvement to become available in your region. If you don't want to use this capability, make sure to remove any geo-replication configuration before converting your databases to Hyperscale.

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simpler steps, less downtime, and a geo-secondary that stays available during the conversion. We encourage you to try this capability and provide your valuable feedback to help us refine this feature. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
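As a follow-up to the manual cutover option shown in the commands above, here is a hedged T-SQL sketch for monitoring the conversion and completing the cutover. The PERFORM_CUTOVER syntax and the sys.dm_operation_status view are assumed to apply to your server and region; verify against current documentation before relying on them.

-- Track the conversion of the database (run in the master database of the logical server).
SELECT major_resource_id, operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'WideWorldImporters'
ORDER BY start_time DESC;

-- When the database reports it is ready for cutover, complete the manual cutover.
ALTER DATABASE WideWorldImporters PERFORM_CUTOVER;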
Lessons Learned #537: Copilot Prompts for Troubleshooting on Azure SQL Database

We had the opportunity to share our experience in several community sessions on how SSMS Copilot can help across multiple phases of troubleshooting. In this article, I would like to share a set of prompts we found useful in those sessions and show how to apply them to an example query. During a performance incident, we captured the following query, generated by Power BI:

SELECT TOP (1000001) *
FROM (
    SELECT
        [t2].[Fiscal Month Label] AS [c38],
        SUM([t5].[Total Excluding Tax]) AS [a0],
        SUM([t5].[Total Including Tax]) AS [a1]
    FROM (
        SELECT
            [$Table].[Sale Key] AS [Sale Key], [$Table].[City Key] AS [City Key],
            [$Table].[Customer Key] AS [Customer Key], [$Table].[Bill To Customer Key] AS [Bill To Customer Key],
            [$Table].[Stock Item Key] AS [Stock Item Key], [$Table].[Invoice Date Key] AS [Invoice Date Key],
            [$Table].[Delivery Date Key] AS [Delivery Date Key], [$Table].[Salesperson Key] AS [Salesperson Key],
            [$Table].[WWI Invoice ID] AS [WWI Invoice ID], [$Table].[Description] AS [Description],
            [$Table].[Package] AS [Package], [$Table].[Quantity] AS [Quantity],
            [$Table].[Unit Price] AS [Unit Price], [$Table].[Tax Rate] AS [Tax Rate],
            [$Table].[Total Excluding Tax] AS [Total Excluding Tax], [$Table].[Tax Amount] AS [Tax Amount],
            [$Table].[Profit] AS [Profit], [$Table].[Total Including Tax] AS [Total Including Tax],
            [$Table].[Total Dry Items] AS [Total Dry Items], [$Table].[Total Chiller Items] AS [Total Chiller Items],
            [$Table].[Lineage Key] AS [Lineage Key]
        FROM [Fact].[Sale] AS [$Table]
    ) AS [t5]
    INNER JOIN (
        SELECT
            [$Table].[Date] AS [Date], [$Table].[Day Number] AS [Day Number], [$Table].[Day] AS [Day],
            [$Table].[Month] AS [Month], [$Table].[Short Month] AS [Short Month],
            [$Table].[Calendar Month Number] AS [Calendar Month Number], [$Table].[Calendar Month Label] AS [Calendar Month Label],
            [$Table].[Calendar Year] AS [Calendar Year], [$Table].[Calendar Year Label] AS [Calendar Year Label],
            [$Table].[Fiscal Month Number] AS [Fiscal Month Number], [$Table].[Fiscal Month Label] AS [Fiscal Month Label],
            [$Table].[Fiscal Year] AS [Fiscal Year], [$Table].[Fiscal Year Label] AS [Fiscal Year Label],
            [$Table].[ISO Week Number] AS [ISO Week Number]
        FROM [Dimension].[Date] AS [$Table]
    ) AS [t2]
        ON [t5].[Delivery Date Key] = [t2].[Date]
    GROUP BY [t2].[Fiscal Month Label]
) AS [MainTable]
WHERE ( NOT([a0] IS NULL) OR NOT([a1] IS NULL) )

I structure the investigation in three areas:

Analysis - understand the data model, sizes, and relationships.

- List all tables in the 'Fact' and 'Dimension' schemas with space usage in MB and number of rows, the names of the tables, and their relations among them. Please provide a textual representation for all relations.
- List all foreign key relationships between tables in the 'Fact' and 'Dimension' schemas, showing the cardinality and referenced columns.
- Could you please let me know what is the meaning of every table?
- Describe all schemas in this database, listing the number of tables and views per schema.
- Create a textual data model (ER-style) representation showing how all Fact and Dimension tables are connected.

Maintenance plan check - verify statistics freshness, index health/fragmentation, partition layout, and data quality.

- List all statistics in the database that have not been updated in the last 7 days, showing table name, number of rows, and last update date.
- List all indexes in the database with fragmentation higher than 30%, including table name, index name, and page count.
- Please provide the T-SQL to rebuild all indexes in ONLINE mode and UPDATE STATISTICS for all tables that use automatic statistics.
- Check for fact table rows that reference dimension keys which no longer exist (broken foreign key integrity).
- Find queries that perform table scans on large tables where no indexes are used, based on recent execution plans.

Performance improvements - simplify/reshape the query and consider indexed views, columnstore, partitioning, and missing indexes.

In this part, I would like to spend more time on these prompts; for example, the following ones help me understand the performance issue, simplify the query text, and explain what the query is doing.

- Identify the longest-running query in the last 24 hours and provide the full text of the query.
- Please simplify the query.
- Explain the query.
- Explain in plain language what the following SQL query does, including the purpose of each subquery and the final WHERE clause.
- Show a histogram of data distribution for key columns used in joins or filters, such as SaleDate, ProductCategory, or Region.

Finally, using the following prompt I could find a lot of useful information on how to improve the execution of this query:

Analyze the following SQL query and provide a detailed performance review tailored for Azure SQL Database Hyperscale and Power BI DirectQuery scenarios. For each recommendation, estimate the potential performance improvement as a percentage (e.g. query runtime reduction, I/O savings, etc.).
1. Could this query benefit from a schemabound indexed view or a materialized view? Estimate the performance gain if implemented.
2. Is there any missing index on the involved tables that would improve join or filter efficiency? Include the suggested index definition and expected benefit.
3. Would using a clustered or nonclustered columnstore index on the main fact table improve performance? Estimate the potential gain in query time or storage.
4. Could partitioning the fact table improve performance by enabling partition elimination? If so, suggest the partition key and scheme, and estimate improvement.
5. Are current statistics sufficient for optimal execution plans? Recommend updates if needed and estimate impact.
6. Does this query preserve query folding when used with Power BI DirectQuery? If not, identify what breaks folding and suggest how to fix it.
7. Recommend any query rewrites or schema redesigns, along with estimated performance improvements for each.

I got a lot of improvement suggestions from it:

- Evaluated a schemabound indexed view that pre-aggregates by month (see Reference Implementations), then pointed Power BI to the view.
- Ensured clustered columnstore on Fact.Sale; considered a targeted rowstore NCI on [Delivery Date Key] INCLUDE ([Total Excluding Tax], [Total Including Tax]) when columnstore alone wasn't sufficient.
- Verified statistics freshness on join/aggregate columns and enabled incremental stats for partitions.
- Checked partitioning by date to leverage elimination for common slicers.
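For reference, a hand-written simplification of the captured Power BI query (redundant inner projections removed, and the TOP row-limit guard omitted) could look like the sketch below. This is an assumed equivalent for illustration only, so validate that it returns the same results before using it to compare plans; the same shape is also a natural candidate for the schemabound indexed view mentioned above.

SELECT
    d.[Fiscal Month Label],
    SUM(s.[Total Excluding Tax]) AS [Total Excluding Tax],
    SUM(s.[Total Including Tax]) AS [Total Including Tax]
FROM [Fact].[Sale] AS s
INNER JOIN [Dimension].[Date] AS d
    ON s.[Delivery Date Key] = d.[Date]
GROUP BY d.[Fiscal Month Label]
-- mirrors the outer WHERE NOT([a0] IS NULL) OR NOT([a1] IS NULL)
HAVING SUM(s.[Total Excluding Tax]) IS NOT NULL
    OR SUM(s.[Total Including Tax]) IS NOT NULL;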
Multiple geo-replicas for Azure SQL Hyperscale is now in public preview

Active geo-replication for Azure SQL is a business continuity solution that lets you perform quick disaster recovery of individual databases if there is a regional disaster or a large-scale outage. Up to four geo-secondaries could be created for an Azure SQL database, except for Hyperscale, until now. We're excited to announce that Azure SQL Database Hyperscale support for up to four geo-replicas is available now in public preview. This enhancement gives you greater flexibility for disaster recovery, regional scale-out, and migration scenarios.

What's New?

- Create up to four read-only geo-replicas for each Hyperscale database, in the same or different Azure regions.
- Use the additional geo-replicas to add read scale-out capabilities to additional regions.
- More flexibility to facilitate database migrations and zone redundancy changes.

How to Get Started

Getting started with multiple geo-replicas in Azure SQL Hyperscale is straightforward. In the Azure portal, you can add multiple geo-replicas using the same process that you currently use to add a geo-replica.

1. Go to your Hyperscale database in the Azure portal.
2. Open the "Replicas" blade under "Data management".
3. Click "Create replica". Tip: if you want the geo-replica enabled for zone redundancy, remember to click on "Configure database" in the "Compute + storage" section during this step. The zone redundancy setting is not automatically inherited from the primary database when creating the geo-replica.
4. Select the target server.
5. Review and create.

Creating a geo-replica can also be done with command-line tools like PowerShell. Example:

New-AzSqlDatabaseSecondary -DatabaseName mydb1 -ServerName sqlserver1 -ResourceGroupName myrg -PartnerResourceGroupName myrg -PartnerServerName sqlserver2 -AllowConnections "All"

Performing a Failover

Performing a failover in the portal is the same process with multiple geo-replicas. Click on the ellipses in the row of the replica to which you want to fail over and choose Failover or Forced failover. For PowerShell, use the Set-AzSqlDatabaseSecondary cmdlet. Example:

Set-AzSqlDatabaseSecondary -DatabaseName mydb1 -PartnerResourceGroupName myrg -ServerName sqlserver2 -ResourceGroupName myrg -Failover

Limitations & Notes

- You can create up to four geo-replicas per database.
- Geo-replicas must be created on different logical servers.
- Chaining (creating a geo-replica of a geo-replica) is not supported.
- If you would like zone redundancy enabled for the geo-replica, you must configure that setting during the create geo-replica process, since the setting is not automatically inherited from the primary database.

Learn More:
- Hyperscale secondary replicas - Azure SQL Database | Microsoft Learn
- Active Geo-Replication - Azure SQL Database | Microsoft Learn
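As an addendum to the PowerShell examples above, the same create and failover operations can be scripted with Azure CLI. This is a sketch using the same placeholder names (myrg, sqlserver1, sqlserver2, mydb1); confirm the parameters against your installed CLI version.

# Create a readable geo-replica of mydb1 on a second logical server
az sql db replica create --resource-group myrg --server sqlserver1 --name mydb1 \
    --partner-resource-group myrg --partner-server sqlserver2

# Fail over by promoting the replica on the secondary server to primary
az sql db replica set-primary --resource-group myrg --server sqlserver2 --name mydb1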
Geo-Replication and Transparent Data Encryption Key Management in Azure SQL Database

Transparent data encryption (TDE) in Azure SQL with a customer-managed key (CMK) enables the Bring Your Own Key (BYOK) scenario for data protection at rest. With customer-managed TDE, the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing of operations on keys. Geo-replication and failover groups in Azure SQL Database create readable secondary replicas across different regions to support disaster recovery and high availability. There are several considerations when configuring geo-replication for Azure SQL logical servers that use TDE with CMK. This blog post provides detailed information about setting up geo-replication for TDE-enabled databases.

Understanding the difference between a TDE protector and a server/database key

To understand the geo-replication considerations, I first need to explain the roles of the TDE protector and a server key. The TDE protector is the key responsible for encrypting the database encryption key (DEK), which in turn encrypts the actual database files and transaction logs. It sits at the top of the encryption hierarchy. A server or database key is a broader concept that refers to any key registered at the server or database level in Azure SQL. One of the registered server or database keys is designated as the TDE protector. Multiple keys can be registered, but only one acts as the active protector at any given time.

Geo-replication considerations

Azure Key Vault considerations

In active geo-replication and failover group scenarios, the primary and secondary SQL logical servers can be linked to an Azure Key Vault in any region; they do not need to be located in the same region. Connecting both SQL logical servers to the same key vault reduces the potential for discrepancies in key material that can occur when using separate key vaults. Azure Key Vault incorporates multiple layers of redundancy to ensure the continued availability of keys and key vaults in the event of service or regional failures. The following diagram represents a paired-region configuration (primary and secondary) for an Azure Key Vault cross-failover with Azure SQL set up for geo-replication using a failover group.

Azure SQL considerations

The primary consideration is to ensure that the server or database keys are present on both the primary and secondary SQL logical servers or databases, and that appropriate permissions have been granted for these keys within Azure Key Vault. The TDE protector used on the primary does not need to be identical to the one used on the secondary; it is sufficient to have the same key material available on both the primary and secondary systems. You can add keys to a SQL logical server with the Azure portal, PowerShell, Azure CLI, or REST API. Assign user-assigned managed identities (UMI) to the primary and secondary SQL servers for flexibility across regions and failover scenarios, and grant Get, WrapKey, and UnwrapKey permissions to these identities on the key vault. For Azure Key Vaults using Azure RBAC for access configuration, the Key Vault Crypto Service Encryption User role is needed by the server identity to be able to use the key for encryption and decryption operations.
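To check that the same key material is registered on both logical servers, and to grant the Key Vault Crypto Service Encryption User role mentioned above, something like the following can be used. This is a sketch with the placeholder names used later in this post (resource group TDEDemo, servers tdesql and tdedr); the object ID and key vault resource ID are placeholders you must supply, and you should verify that the object ID you pass is the server's managed identity.

# List the Key Vault keys registered on the primary and on the secondary logical server
Get-AzSqlServerKeyVaultKey -ResourceGroupName 'TDEDemo' -ServerName 'tdesql'
Get-AzSqlServerKeyVaultKey -ResourceGroupName 'TDEDemo' -ServerName 'tdedr'

# Grant the server identity permission to use the key (RBAC-enabled vault)
New-AzRoleAssignment -ObjectId '<server-identity-object-id>' `
    -RoleDefinitionName 'Key Vault Crypto Service Encryption User' `
    -Scope '<key-vault-resource-id>'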
Different encryption key combinations

Based on interactions with customers and analysis of livesite incidents involving geo-replication with TDE CMK, we noticed that many clients do not configure the primary and secondary servers with the same TDE protector, due to differing compliance requirements. In this chapter, I will explain how you can set up a failover group with two different TDE protectors. This scenario uses the TDECMK key as the TDE protector on the primary server (tdesql) and the TDECMK2 key as the TDE protector on the secondary server (tdedr). As a side note, you can also enable the auto-rotate key option to allow end-to-end, zero-touch rotation of the TDE protector.

Both logical SQL servers have access to the Azure Key Vault keys. The next step is to create the failover group and replicate one database, called ContosoHR. The setup of the failover group failed because the CMK key from the primary server was (intentionally) not added to the secondary server. Adding a server key can easily be done with the Add-AzSqlServerKeyVaultKey command. For example:

Add-AzSqlServerKeyVaultKey -KeyId 'https://xxxxx.vault.azure.net/keys/TDECMK/01234567890123456789012345678901' -ServerName 'tdedr' -ResourceGroupName 'TDEDemo'

After the failover group setup, the sample database on the secondary server becomes available and uses the primary's CMK (TDECMK), regardless of the secondary's CMK configuration. You can verify this by running the following query on both the primary and the secondary server:

SELECT
    DB_NAME(db_id()) AS database_name,
    dek.encryption_state,
    dek.encryption_state_desc,   -- SQL 2019+ / Azure SQL
    dek.key_algorithm,
    dek.key_length,
    dek.encryptor_type,          -- CERTIFICATE (service-managed) or ASYMMETRIC KEY (BYOK/CMK)
    dek.encryptor_thumbprint
FROM sys.dm_database_encryption_keys AS dek
WHERE database_id <> 2;

database_name | encryption_state_desc | encryptor_type | encryptor_thumbprint
ContosoHR     | ENCRYPTED             | ASYMMETRIC KEY | 0xC8F041FB93531FA26BF488740C9AC7D3B5827EF5

The encryptor_type column shows ASYMMETRIC KEY, which means the database uses a CMK for encryption. The encryptor_thumbprint should match on both the primary and secondary servers, indicating that the secondary database is encrypted using the CMK from the primary server. The TDE protector on the secondary server (TDECMK2) becomes active only in the event of a failover, when the primary and secondary server roles are reversed. As illustrated in the image below, the roles of my primary and secondary servers have been reversed. If the above query is executed again following a failover, the encryptor_thumbprint value will be different, which indicates that the database is now encrypted using the TDE protector (TDECMK2) from the secondary server.

database_name | encryption_state_desc | encryptor_type | encryptor_thumbprint
ContosoHR     | ENCRYPTED             | ASYMMETRIC KEY | 0x788E5ACA1001C87BA7354122B7D93B8B7894918D

As previously mentioned, please ensure that the server or database keys are available on both the primary and secondary SQL logical servers or databases, and verify that the appropriate permissions for these keys have been granted within Azure Key Vault. This scenario is comparable to other configurations, such as:
- The primary server uses SMK while the secondary server uses CMK.
- The primary server uses CMK while the secondary server uses SMK.

Conclusion

Understanding the distinction between the TDE protector and server keys is essential for geo-replication with Azure SQL.
The TDE protector encrypts the database encryption key, while server keys refer to any key registered at the server or database level in Azure SQL. For successful geo-replication setup and failover, all necessary keys must be created and available on both the primary and secondary servers. It is possible, and in certain cases required, to configure different TDE protectors on replicas, as long as the key material is available on each server.
General Availability - DMS's PowerShell, Azure CLI, and Python SDK

We're excited to announce the General Availability (GA) of the DMS client tools - PowerShell, Azure CLI, Python SDK, and more. This milestone unlocks efficient, stable, and scalable automation options for database migration workflows, making it easier than ever to integrate DMS into your DevOps pipelines and enterprise migration strategies.

Introduction

With the general availability of the DMS client tools (PowerShell, Azure CLI, Python SDK), users can now use the stable releases of:
- PowerShell module 1.0.0 (https://www.powershellgallery.com/packages/Az.DataMigration/1.0.0)
- Azure CLI extension 1.0.0 (https://learn.microsoft.com/en-us/cli/azure/datamigration?view=azure-cli-latest)
- DMS V2 APIs (version 2025-06-30)
- SDKs for multiple languages (listed below)

SDK releases:
- .NET: https://www.nuget.org/packages/Azure.ResourceManager.DataMigration/1.0.0
- Java: https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-datamigration/1.1.0
- Go: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/datamigration/armdatamigration/v2
- Python: azure-mgmt-datamigration (PyPI)
- JavaScript: https://www.npmjs.com/package/@azure/arm-datamigration/v/3.0.0

What's New?

Three new commands have been introduced in the latest releases of the SDK, PowerShell module, and CLI extension, as outlined below.

New CLI commands:
- az datamigration sql-db retry - Retry the failed SQL DB migrations.
- az datamigration sql-managed-instance delete - Delete Azure SQL MI's Database Migration resource.
- az datamigration sql-vm delete - Delete Azure SQL VM's Database Migration resource.

New PowerShell commands:
- Invoke-AzDataMigrationRetryToSqlDb - Retry the failed SQL DB migrations.
- Remove-AzDataMigrationToSqlManagedInstance - Delete Azure SQL MI's Database Migration resource.
- Remove-AzDataMigrationToSqlVM - Delete Azure SQL VM's Database Migration resource.

Conclusion

With this GA / stable release, users can now:
- Use the tools to configure and execute migrations with full control.
- Automate migrations: DevOps teams can embed migration steps into CI/CD pipelines.
- Integrate DMS into custom applications and orchestration tools.

These tools support all the DMS migration scenarios, from simple lift-and-shift operations to complex logical migrations, while ensuring stability and repeatability.

For more details, refer to:
- Documentation: Migrate databases at scale using Azure PowerShell / CLI
- PowerShell: Az.DataMigration Module
- Azure CLI: az datamigration
- Python SDK: azure-mgmt-datamigration (PyPI)
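To pick up the GA versions locally, the standard install/update commands for each tool can be used. These are generic package-manager commands shown as a sketch; the version numbers are the ones listed above.

# PowerShell: install or update the Az.DataMigration module
Install-Module -Name Az.DataMigration -RequiredVersion 1.0.0 -Scope CurrentUser

# Azure CLI: add or update the datamigration extension
az extension add --name datamigration --upgrade

# Python: install the management SDK
pip install azure-mgmt-datamigration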
Lessons Learned #536: Error the database type is not understood by the data reader.

This week we worked on a support case where our customer received the message "50006: Error the database type is not understood by the data reader" when exporting to a bacpac using SQL Server Management Studio. We also observed that other tools showed the same message.

In SQL Server Management Studio, we faced the following error screen:

Could not export schema and data from database. (Microsoft.SqlServer.Dac)
ADDITIONAL INFORMATION:
50006: Error the database type '' is not understood by the data reader. (Microsoft.Data.Tools.Schema.Sql)

As part of our investigation, we validated that the native export tool SqlPackage did not produce the same error. This indicated that the issue was not directly related to the database itself, but rather to the version of SSMS (or the specific client tool being used). After further analysis, we identified the root cause: this error is caused by a limitation in SSMS v20 (and some third-party export/backup tools), which do not support the JSON data type in SQL database columns. As a result, any attempt to export or back up a database containing JSON columns with these versions will fail.

In our case, the issue was resolved after the customer upgraded to SSMS v21, which includes support for the JSON data type during bacpac exports. After the upgrade, the export process worked as expected.
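Before choosing an export tool, it can help to check whether the database actually contains columns that use the native JSON data type. The query below is a sketch against the catalog views; it assumes the new native type is registered in sys.types under the name 'json' on your Azure SQL database, so verify the type name in your environment.

-- Find columns declared with the native json data type
SELECT
    s.name  AS schema_name,
    t.name  AS table_name,
    c.name  AS column_name
FROM sys.columns AS c
JOIN sys.tables  AS t  ON c.object_id    = t.object_id
JOIN sys.schemas AS s  ON t.schema_id    = s.schema_id
JOIN sys.types   AS ty ON c.user_type_id = ty.user_type_id
WHERE ty.name = N'json'
ORDER BY schema_name, table_name, column_name;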