<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure Database Support Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/bg-p/AzureDBSupport</link>
    <description>Azure Database Support Blog articles</description>
    <pubDate>Tue, 28 Apr 2026 00:13:40 GMT</pubDate>
    <dc:creator>AzureDBSupport</dc:creator>
    <dc:date>2026-04-28T00:13:40Z</dc:date>
    <item>
      <title>How to recover Azure SQL Managed Instance access when the admin SQL login is disabled</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-recover-azure-sql-managed-instance-access-when-the-admin/ba-p/4513905</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Overview&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Some of you have probably run into the issue of losing administrative access to Azure SQL Managed Instance because the SQL login(s) are disabled, or because the login is no longer a member of the &lt;STRONG&gt;sysadmin &lt;/STRONG&gt;fixed server role. Since we have received several cases on the matter, I'll explain one possible way to recover access in this scenario by using a Microsoft Entra administrator. Azure SQL Managed Instance supports both SQL logins and Microsoft Entra authentication, and the Microsoft Entra administrator for the managed instance can be set from the Azure portal, PowerShell, the Azure CLI, or the REST API.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Symptom&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Customers might see an error similar to the following when trying to connect with the SQL login:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Login failed for user 'sqlmiadmin'. Reason: The account is disabled. (Microsoft SQL Server, Error: 18470)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Connection ID: 'XYZ'&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario Details&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In the scenario discussed here, the SQL login(s) on the managed instance were either disabled, or their membership in the &lt;STRONG&gt;sysadmin &lt;/STRONG&gt;fixed server role was removed. As a result, there was no remaining SQL principal available with sufficient permissions to reverse the change directly. One possible recovery method is to configure, or reconfigure, a Microsoft Entra administrator for the managed instance and use that path to regain administrative access. Setting the Microsoft Entra administrator enables Microsoft Entra authentication for Azure SQL Managed Instance.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Root Cause&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The issue occurs because the required T-SQL statements can only be executed by principals that already have sufficient server-level permissions:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;ALTER LOGIN ... ENABLE requires ALTER ANY LOGIN. If the target login is a member of sysadmin, enabling or disabling that login also requires CONTROL SERVER.&lt;/LI&gt;
&lt;LI&gt;ALTER SERVER ROLE [sysadmin] ADD MEMBER ... can only be executed by a principal that is already a member of sysadmin or of that same fixed server role. CONTROL SERVER and ALTER ANY SERVER ROLE are not sufficient for adding members to a fixed server role.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Because of this, when no existing SQL principal can perform these actions, or when all SQL logins are disabled, an alternate administrative path is needed.&lt;/P&gt;
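&lt;P&gt;As a quick check before attempting recovery, you can list the server-level permissions your current principal actually holds (a minimal sketch; run it from any session that can still connect):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Lists the server-level permissions of the current principal.
-- Look for ALTER ANY LOGIN and CONTROL SERVER in the output.
SELECT permission_name
FROM fn_my_permissions(NULL, 'SERVER');
GO&lt;/LI-CODE&gt;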
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Resolution&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;One possible way to resolve the issue is to configure a Microsoft Entra administrator for the managed instance by following the steps in &lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-configure?view=azuresql&amp;amp;tabs=azure-portal#azure-sql-database-and-azure-synapse-analytics" target="_blank" rel="noopener"&gt;Configure Microsoft Entra Authentication - Azure SQL Database &amp;amp; SQL Managed Instance &amp;amp; Azure Synapse Analytics | Microsoft Learn&lt;/A&gt;. In the Azure portal, go to the SQL managed instance resource, open &lt;STRONG&gt;Microsoft Entra ID&lt;/STRONG&gt; under &lt;STRONG&gt;Settings&lt;/STRONG&gt;, choose &lt;STRONG&gt;Set admin&lt;/STRONG&gt;, select the required user or group, and then select &lt;STRONG&gt;Save&lt;/STRONG&gt;. The same article also documents that you can remove the current Microsoft Entra admin and set it again if needed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After the Microsoft Entra administrator is configured, connect to the managed instance by using a supported Microsoft Entra authentication method, such as Microsoft Entra Password or Microsoft Entra MFA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If the issue is that the SQL login is disabled, run the following T-SQL after connecting with a principal that has the required permissions:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;ALTER LOGIN [sqlmiadmin] ENABLE;
GO&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If the issue is that the login is no longer a member of &lt;STRONG&gt;sysadmin&lt;/STRONG&gt;, run the following T-SQL:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;ALTER SERVER ROLE [sysadmin] ADD MEMBER [sqlmiadmin];
GO&lt;/LI-CODE&gt;
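&lt;P&gt;To confirm the change took effect, you can check the login's role membership afterward (a small verification sketch; the login name is the example name used above):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Returns 1 if the login is a member of sysadmin, 0 if not
SELECT IS_SRVROLEMEMBER('sysadmin', 'sqlmiadmin') AS is_sysadmin;
GO&lt;/LI-CODE&gt;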
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The Microsoft Entra administrator might already be configured while its Microsoft Entra login is also disabled. In that case, you can remove the Microsoft Entra administrator from the Azure portal and set it again. This refreshes the Microsoft Entra administrator configuration for the managed instance and restores the Microsoft Entra authentication path.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Additional Resources&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-configure?view=azuresql&amp;amp;tabs=azure-portal#azure-sql-database-and-azure-synapse-analytics" target="_blank" rel="noopener"&gt;Configure Microsoft Entra Authentication - Azure SQL Database &amp;amp; SQL Managed Instance &amp;amp; Azure Synapse Analytics | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/authentication-microsoft-entra-connect-to-azure-sql?view=azuresql" target="_blank" rel="noopener"&gt;Connect with Microsoft Entra Authentication - Azure SQL Database &amp;amp; SQL Managed Instance &amp;amp; Azure Synapse Analytics | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-login-transact-sql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;ALTER LOGIN (Transact-SQL) - SQL Server | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-server-role-transact-sql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;ALTER SERVER ROLE (Transact-SQL) - SQL Server | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Disclaimer&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Please note that products, features, and configuration options discussed in this article are subject to change. This article reflects the state of Azure SQL Managed Instance as of April 2026.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I hope you found this article helpful. Please feel free to share your feedback in the comments section.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2026 12:05:50 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-recover-azure-sql-managed-instance-access-when-the-admin/ba-p/4513905</guid>
      <dc:creator>Abdullah_Qtaishat</dc:creator>
      <dc:date>2026-04-23T12:05:50Z</dc:date>
    </item>
    <item>
      <title>Azure SQL (LTR): You Don’t Need to Copy LTR Backups Across Regions to Restore Them</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-ltr-you-don-t-need-to-copy-ltr-backups-across-regions/ba-p/4513774</link>
      <description>&lt;H3&gt;&lt;STRONG&gt;Summary&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Customers sometimes attempt to &lt;STRONG&gt;copy Azure SQL Long-Term Retention (LTR) backups across regions&lt;/STRONG&gt; using &lt;STRONG&gt;Copy-AzSqlDatabaseLongTermRetentionBackup&lt;/STRONG&gt;, only to hit the error:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;LongTermRetentionMigrationRequestNotSupported&lt;/STRONG&gt;&lt;BR /&gt;&lt;EM&gt;LTR backup migration copy feature is not supported on subscription&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This blog clarifies &lt;STRONG&gt;why this happens&lt;/STRONG&gt;, &lt;STRONG&gt;when LTR backup copy is actually supported&lt;/STRONG&gt;, and most importantly &lt;STRONG&gt;the correct and supported way to restore an LTR backup into a different region without copying it&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;The Common Scenario&lt;/H2&gt;
&lt;P&gt;A customer has:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;An &lt;STRONG&gt;LTR backup stored in Region A&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;A need to &lt;STRONG&gt;restore the database into Region B&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;The assumption that the LTR backup must first be copied cross-region&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;They attempt:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Copy-AzSqlDatabaseLongTermRetentionBackup&lt;/LI-CODE&gt;
&lt;P&gt;and immediately receive a platform validation error stating the feature isn’t supported on their subscription.&lt;/P&gt;
&lt;H2&gt;Why This Error Happens&lt;/H2&gt;
&lt;P&gt;The key misunderstanding is &lt;STRONG&gt;what the LTR backup copy API is actually for&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H3&gt;Copy-AzSqlDatabaseLongTermRetentionBackup is&amp;nbsp;&lt;STRONG&gt;NOT a general-purpose feature&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;This API is:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Backend-gated&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Allowlist-only&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Intended only for region decommissioning scenarios&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In other words:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;It is &lt;STRONG&gt;not&lt;/STRONG&gt; supported for normal customer-driven migrations&lt;/LI&gt;
&lt;LI&gt;There is &lt;STRONG&gt;no portal toggle or feature registration&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Subscriptions are only allowlisted &lt;STRONG&gt;when Microsoft is retiring a region&lt;/STRONG&gt;, and LTR backups must be preserved elsewhere.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Because of this, most subscriptions will receive:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;LongTermRetentionMigrationRequestNotSupported&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;The Correct &amp;amp; Supported Solution&lt;/H2&gt;
&lt;H3&gt;&lt;STRONG&gt;Good news:&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;You do NOT need to copy the LTR backup to another region to restore it there.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure SQL allows you to:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Restore an LTR backup directly to any Azure SQL logical server, in any region.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H3&gt;Supported Approach: Restore LTR Backup Directly&lt;/H3&gt;
&lt;P&gt;Use &lt;STRONG&gt;Restore-AzSqlDatabase&lt;/STRONG&gt; with the -FromLongTermRetentionBackup switch.&lt;/P&gt;
&lt;H3&gt;Example (PowerShell)&lt;/H3&gt;
&lt;LI-CODE lang="powershell"&gt;Restore-AzSqlDatabase `
    -FromLongTermRetentionBackup `
    -ResourceId $ltrBackup.ResourceId `
    -ServerName $serverName `
    -ResourceGroupName $resourceGroup `
    -TargetDatabaseName "Test" `
    -ServiceObjectiveName P1&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;This works&amp;nbsp;&lt;STRONG&gt;across regions&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;No backend enablement required&lt;/LI&gt;
&lt;LI&gt;Fully supported and documented&lt;/LI&gt;
&lt;/UL&gt;
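&lt;P&gt;The &lt;STRONG&gt;$ltrBackup&lt;/STRONG&gt; variable in the example above is typically populated by listing the available LTR backups first. A possible sketch (the location, server, and database values are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# List LTR backups for the source database in its original region
$ltrBackups = Get-AzSqlDatabaseLongTermRetentionBackup `
    -Location "RegionA" `
    -ServerName "source-server" `
    -DatabaseName "SourceDb"

# Pick the most recent backup; its ResourceId feeds Restore-AzSqlDatabase
$ltrBackup = $ltrBackups | Sort-Object BackupTime -Descending | Select-Object -First 1&lt;/LI-CODE&gt;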
&lt;H2&gt;How This Works (Important Concept)&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;LTR backups are stored in geo-redundant storage&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;The restore operation does &lt;STRONG&gt;not depend on the original region&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;The platform automatically handles data access and restore placement&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;So, while the &lt;STRONG&gt;backup physically originated in Region A&lt;/STRONG&gt;, you are free to restore it to &lt;STRONG&gt;Region B, C, or any supported Azure region&lt;/STRONG&gt; without copying it first.&lt;/P&gt;
&lt;H2&gt;When Is LTR Backup Copy Actually Used?&lt;/H2&gt;
&lt;P&gt;Only in this scenario:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Microsoft-initiated region decommissioning&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In that case:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;LTR backups must be relocated to remain available&lt;/LI&gt;
&lt;LI&gt;Subscriptions are temporarily &lt;STRONG&gt;allowlisted&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Copy-AzSqlDatabaseLongTermRetentionBackup&lt;/STRONG&gt; is enabled at the backend&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Outside of this scenario, the API is &lt;STRONG&gt;intentionally restricted&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Key Takeaways&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;You can restore an LTR backup to any region directly&lt;/LI&gt;
&lt;LI&gt;You do not need (and usually cannot use) LTR backup copy&lt;/LI&gt;
&lt;LI&gt;Backup copy is gated and reserved for region retirement scenarios&lt;/LI&gt;
&lt;LI&gt;Use &lt;STRONG&gt;Restore-AzSqlDatabase -FromLongTermRetentionBackup&lt;/STRONG&gt; instead&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Final Recommendation for Customers&lt;/H2&gt;
&lt;P&gt;If customers encounter this error:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Reassure them this is &lt;STRONG&gt;not a misconfiguration or permission issue&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Explain that &lt;STRONG&gt;LTR restore is the correct solution&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Avoid escalation for feature enablement unless a &lt;STRONG&gt;region retirement&lt;/STRONG&gt; is involved&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 22 Apr 2026 18:31:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-ltr-you-don-t-need-to-copy-ltr-backups-across-regions/ba-p/4513774</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-22T18:31:59Z</dc:date>
    </item>
    <item>
      <title>Azure Data Sync: Fixing “Cannot find the user ‘DataSync_executor’” When Creating a New Sync Group</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-data-sync-fixing-cannot-find-the-user-datasync-executor/ba-p/4513757</link>
      <description>&lt;H2&gt;Summary&lt;/H2&gt;
&lt;P&gt;When creating a &lt;STRONG&gt;new Azure SQL Data Sync group&lt;/STRONG&gt;, customers may encounter the following error during setup—even when no active sync groups exist:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;“Failed to perform data sync operation: Cannot find the user 'DataSync_executor', because it does not exist or you do not have permission.”&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This failure typically occurs during&amp;nbsp;&lt;STRONG&gt;certificate and symmetric key creation&lt;/STRONG&gt; as Azure attempts to grant permissions to the DataSync_executor role. In this post, we’ll walk through:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;STRONG&gt;common scenario&lt;/STRONG&gt; where this issue appears&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Why cleanup scripts alone may not fix it&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;A &lt;STRONG&gt;supported, reliable resolution approach&lt;/STRONG&gt; to restore Data Sync successfully&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;The Problem Scenario&lt;/H2&gt;
&lt;P&gt;A customer attempts to create a &lt;STRONG&gt;brand-new Azure SQL Data Sync group&lt;/STRONG&gt; (hub + members), but the operation fails with an error similar to:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Cannot find the user 'DataSync_executor', because it does not exist or you do not have permission. Creating certificate Creating symmetric key Granting permission to [DataSync_executor] on certificate&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Key observations from affected cases:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;No active sync group exists&lt;/LI&gt;
&lt;LI&gt;Cleanup scripts (including &lt;STRONG&gt;Data Sync complete cleanup.sql&lt;/STRONG&gt;) were already executed&lt;/LI&gt;
&lt;LI&gt;The failure persists even after retrying the setup&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Why This Happens&lt;/H2&gt;
&lt;P&gt;Azure SQL Data Sync depends on &lt;STRONG&gt;system-managed database roles&lt;/STRONG&gt; that must be created and configured &lt;STRONG&gt;only by the Azure Data Sync service itself&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;If these roles (or related permissions) are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Missing&lt;/LI&gt;
&lt;LI&gt;Partially deleted&lt;/LI&gt;
&lt;LI&gt;Left in an inconsistent state&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;then Data Sync may fail while attempting to create certificates or grant required permissions.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt;&lt;BR /&gt;Manually creating or partially restoring these roles is &lt;STRONG&gt;not supported&lt;/STRONG&gt; and often leads to repeated failures.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;How to Detect the Issue&lt;/H2&gt;
&lt;P&gt;Before troubleshooting further, confirm whether the required Data Sync roles are missing.&lt;/P&gt;
&lt;H3&gt;1. Run the Data Sync Health Checker&lt;/H3&gt;
&lt;P&gt;Ask the customer to run&amp;nbsp;&lt;A class="lia-external-url" href="https://github.com/Microsoft/AzureSQLDataSyncHealthChecker" target="_blank"&gt;&lt;STRONG&gt;Data Sync Health Checker&lt;/STRONG&gt;&lt;/A&gt;, then review SyncDB_Log.&lt;/P&gt;
&lt;P&gt;Common warnings include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;DataSync_reader IS MISSING&lt;/LI&gt;
&lt;LI&gt;DataSync_executor IS MISSING&lt;/LI&gt;
&lt;LI&gt;Missing EXECUTE/SELECT permissions on dss and TaskHosting schemas&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This confirms the root cause is&amp;nbsp;&lt;STRONG&gt;role and permission inconsistency&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Supported and Effective Resolution&lt;/H2&gt;
&lt;H3&gt;Step 1: Verify Roles Are Missing&lt;/H3&gt;
&lt;P&gt;Run the following query on &lt;STRONG&gt;each affected database&lt;/STRONG&gt; (hub and members):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT name
FROM sys.database_principals
WHERE name IN ('DataSync_executor', 'DataSync_reader');&lt;/LI-CODE&gt;
&lt;P&gt;If&amp;nbsp;&lt;STRONG&gt;no rows are returned&lt;/STRONG&gt;, the roles are missing and must be recovered by Azure Data Sync itself - not manually.&lt;/P&gt;
&lt;H3&gt;Step 2: Fully Clean Up Leftover Data Sync Objects&lt;/H3&gt;
&lt;P&gt;Do this&amp;nbsp;&lt;STRONG&gt;only if the database is not actively syncing&lt;/STRONG&gt;.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Remove roles if partially present
DROP ROLE IF EXISTS DataSync_executor;
DROP ROLE IF EXISTS DataSync_reader;

-- Drop DataSync schema
IF EXISTS (SELECT 1 FROM sys.schemas WHERE name = 'DataSync')
BEGIN
    DROP SCHEMA DataSync;
END&lt;/LI-CODE&gt;
&lt;P&gt;This ensures there are&amp;nbsp;&lt;STRONG&gt;no partial or orphaned Data Sync objects&lt;/STRONG&gt; left behind that could interfere with setup.&lt;/P&gt;
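&lt;P&gt;After the cleanup, you can optionally confirm that no Data Sync objects remain (a verification sketch):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Should return no rows once cleanup is complete
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON o.schema_id = s.schema_id
WHERE s.name = 'DataSync';&lt;/LI-CODE&gt;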
&lt;H3&gt;Step 3: Recreate the Sync Group (Critical Step)&lt;/H3&gt;
&lt;P&gt;Do&amp;nbsp;&lt;STRONG&gt;not&lt;/STRONG&gt; manually recreate roles or permissions.&lt;BR /&gt;Instead:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Delete the existing (failed) Sync Group from the &lt;STRONG&gt;Azure Portal&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Recreate the Sync Group from scratch&lt;/LI&gt;
&lt;LI&gt;Re-add the hub and member databases&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;During this process, Azure will automatically:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Recreate DataSync_executor and DataSync_reader&lt;/LI&gt;
&lt;LI&gt;Assign all required permissions&lt;/LI&gt;
&lt;LI&gt;Deploy the correct schemas, certificates, and procedures&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Key Takeaways&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;DataSync_executor and DataSync_reader are &lt;STRONG&gt;service-managed roles&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Cleanup scripts alone may not fully reset a broken state&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Manual role creation is not supported&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Deleting and recreating the Sync Group is the &lt;STRONG&gt;only reliable recovery method&lt;/STRONG&gt; once roles are missing&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Final Recommendation&lt;/H2&gt;
&lt;P&gt;If you encounter &lt;STRONG&gt;Data Sync setup failures referencing DataSync_executor&lt;/STRONG&gt;, always:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Validate role existence&lt;/LI&gt;
&lt;LI&gt;Fully clean up broken artifacts&lt;/LI&gt;
&lt;LI&gt;Let &lt;STRONG&gt;Azure Data Sync&lt;/STRONG&gt; recreate everything by rebuilding the Sync Group&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This approach consistently resolves the issue and restores a healthy Data Sync deployment.&lt;/P&gt;</description>
      <pubDate>Wed, 22 Apr 2026 17:17:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-data-sync-fixing-cannot-find-the-user-datasync-executor/ba-p/4513757</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-22T17:17:34Z</dc:date>
    </item>
    <item>
      <title>Data Migration - From SQL MI to SQL DB migration</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/data-migration-from-sql-mi-to-sql-db-migration/ba-p/4461627</link>
      <description>&lt;H1&gt;Overview&lt;/H1&gt;
&lt;P&gt;In this post, I’ll show you how to perform a data migration from Azure SQL Managed Instance (SQL MI) to Azure SQL Database (SQL DB) using a combination of schema + security migration steps and Azure Data Factory (ADF) for moving the data. In an earlier post (&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/azuredbsupport/data-migration---azure-sql-mi-and-azure-sql-db/4461457" target="_blank" rel="noopener" data-lia-auto-title="Data Migration - Azure SQL MI and Azure SQL DB | Microsoft Community Hub" data-lia-auto-title-active="0"&gt;Data Migration - Azure SQL MI and Azure SQL DB | Microsoft Community Hub&lt;/A&gt;), I briefly covered several approaches for moving data from SQL MI to SQL DB. This article focuses on an approach that works well when you want a repeatable data-movement pipeline but still need to migrate database objects and users as part of a complete cutover; ADF is a particularly good fit when the source data volume is large. The sections below walk through the key configuration steps to copy data from SQL MI to SQL DB using ADF.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: If you want to migrate in the opposite direction (Azure SQL Database → SQL Managed Instance), you can follow the same general flow in this article—just swap which platform is the source and which is the destination and adjust any feature-compatibility work accordingly.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The steps below assume that you are migrating data over a private endpoint so that all traffic remains on a secure, private network path. In this configuration, you deploy a Windows VM to host a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/data-factory/choose-the-right-integration-runtime-configuration#self-hosted-integration-runtime" target="_blank" rel="noopener"&gt;Self‑Hosted Integration Runtime&lt;/A&gt; (SHIR). If your requirements permit data movement over the public network, you can instead use the Azure Integration Runtime (Azure IR), which removes the need for a Windows VM and SHIR. In that case, Azure IR handles the data movement, and traffic flows over the public network rather than through a private endpoint.&lt;/P&gt;
&lt;H2&gt;High-level steps&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Assess compatibility and feature differences (SQL MI vs SQL DB)&lt;/LI&gt;
&lt;LI&gt;Migrate schema and database objects (tables, views, procs, functions, etc.)&lt;/LI&gt;
&lt;LI&gt;Migrate users and permissions (and map logins to users where applicable)&lt;/LI&gt;
&lt;LI&gt;Deploy Azure Data Factory (ADF)&lt;/LI&gt;
&lt;LI&gt;(If using private endpoints) Provision a Windows VM to host SHIR and register it in ADF&lt;/LI&gt;
&lt;LI&gt;Create linked services (connections) for SQL MI and SQL DB&lt;/LI&gt;
&lt;LI&gt;Run the data copy using the Copy Data tool (or a pipeline), validate, and cut over&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;Pre-migration assessment (SQL MI → SQL DB)&lt;/H1&gt;
&lt;P&gt;There is no automated assessment tool available to evaluate readiness for migrating between Azure SQL Managed Instance and Azure SQL Database. As a result, pre‑migration assessment is a manual exercise. Before migrating, you should validate high‑level compatibility, including supported features, removal of instance‑level or cross‑database dependencies, security model alignment, target service tier sizing, and application connectivity behavior. Differences in platform capabilities and architecture should be reviewed early to avoid migration or post‑migration issues.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a checklist to consider:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;☐ Feature compatibility reviewed (no unsupported MI features in use)
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/features-comparison?view=azuresql" target="_blank" rel="noopener"&gt;Compare SQL Database Engine Features - Azure SQL Database &amp;amp; Azure SQL Managed Instance | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/connect-query-ssms?view=azuresql" target="_blank" rel="noopener"&gt;SSMS: Connect and query data - Azure SQL Database &amp;amp; Azure SQL Managed Instance | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;☐ No instance‑level or cross‑database dependencies required&lt;/LI&gt;
&lt;LI&gt;☐ Security model validated (database‑scoped users and authentication)&lt;/LI&gt;
&lt;LI&gt;☐ Target SQL Database service tier identified and sized&lt;/LI&gt;
&lt;LI&gt;☐ Application connectivity updated (connection strings, retries, HA behavior)&lt;/LI&gt;
&lt;LI&gt;☐ Proof‑of‑concept testing completed with a representative database&lt;/LI&gt;
&lt;/UL&gt;
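&lt;P&gt;For the cross‑database dependency item, a query along these lines can help surface references to other databases (a sketch; only references that the engine records in &lt;STRONG&gt;sys.sql_expression_dependencies&lt;/STRONG&gt; are reported, so dynamic SQL still needs a manual review):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Lists objects that reference another database by name
SELECT
    OBJECT_SCHEMA_NAME(referencing_id) AS referencing_schema,
    OBJECT_NAME(referencing_id)        AS referencing_object,
    referenced_database_name,
    referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_database_name IS NOT NULL
  AND referenced_database_name &lt;&gt; DB_NAME();&lt;/LI-CODE&gt;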
&lt;H1&gt;Migrate schema and database objects&lt;/H1&gt;
&lt;P&gt;ADF is primarily a data movement service, so migrate your (*1) schema and programmable objects first. Common options include deploying a DACPAC (SqlPackage) to Azure SQL Database, using SQL Server Data Tools (SSDT) to publish a database project, or generating scripts from SSMS (tables, views, stored procedures, functions, synonyms, sequences, user-defined types, etc.). Ensure the target schema matches the source (data types, collations, constraints) before you start the bulk data copy.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;(*1) In the Copy Data tool step (described later), ADF can auto-create the table schemas (and views) based on the source tables. So, you can skip migrating the table schema upfront; however, programmable objects (stored procedures, functions, etc.) and other non-schema objects still have to be migrated separately.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;OL&gt;
&lt;LI&gt;Create the target Azure SQL Database (and choose the correct server, tier, and size).&lt;/LI&gt;
&lt;LI&gt;Deploy the schema and objects to SQL DB (DACPAC/SSDT/scripts).&lt;/LI&gt;
&lt;LI&gt;Create supporting objects needed for the load (filegroups aren’t applicable in SQL DB, but schemas and tables are).&lt;/LI&gt;
&lt;LI&gt;Decide when to create indexes and foreign keys: for very large loads, creating them after the data copy can significantly speed up ingestion.&lt;/LI&gt;
&lt;/OL&gt;
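&lt;P&gt;For step 2, a DACPAC round trip with SqlPackage might look like the following (run from PowerShell; server names, credentials, and file paths are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Extract the schema from the source managed instance into a DACPAC
SqlPackage /Action:Extract `
    /SourceConnectionString:"Server=source-mi.public.example.database.windows.net,3342;Database=SourceDb;User Id=sqlmiadmin;Password=***" `
    /TargetFile:"C:\temp\SourceDb.dacpac"

# Publish the DACPAC to the target Azure SQL Database
SqlPackage /Action:Publish `
    /SourceFile:"C:\temp\SourceDb.dacpac" `
    /TargetConnectionString:"Server=target-server.database.windows.net;Database=TargetDb;User Id=sqladmin;Password=***"&lt;/LI-CODE&gt;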
&lt;H1&gt;Deploy ADF&lt;/H1&gt;
&lt;P&gt;Follow the steps in &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory" target="_blank" rel="noopener"&gt;Create an Azure Data Factory - Azure Data Factory | Microsoft Learn&lt;/A&gt; to deploy ADF. After deployment, launch Azure Data Factory Studio; we’ll use it in the following sections to configure the integration runtime, linked services, and the copy activity.&lt;/P&gt;
&lt;H1&gt;Provision Windows VM – used to host SHIR&lt;/H1&gt;
&lt;P&gt;This Azure VM hosts the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/data-factory/choose-the-right-integration-runtime-configuration#self-hosted-integration-runtime" target="_blank" rel="noopener"&gt;Self-hosted Integration Runtime (SHIR)&lt;/A&gt;, which ADF uses to connect to your SQL MI and SQL DB over private networking and to run the copy operation.&lt;/P&gt;
&lt;H1&gt;Configure SHIR in Azure Data Factory and install SHIR on Azure VM&lt;/H1&gt;
&lt;P&gt;On the page for your data factory, select &lt;STRONG&gt;Launch Studio&lt;/STRONG&gt; to open &lt;STRONG&gt;Azure Data Factory Studio&lt;/STRONG&gt;. The steps below assume you are working from the Azure VM that will host SHIR. If the VM can’t download the SHIR installer (for example, it has no outbound internet access), download the installer from a machine that has internet access and then copy it to the VM.&lt;/P&gt;
&lt;P&gt;Select &lt;STRONG&gt;Manage&lt;/STRONG&gt;, and then select &lt;STRONG&gt;Integration runtimes&lt;/STRONG&gt; to open the list of integration runtimes.&lt;/P&gt;
&lt;P&gt;Select &lt;STRONG&gt;New&lt;/STRONG&gt;, which displays the &lt;STRONG&gt;Integration runtime&amp;nbsp;setup&lt;/STRONG&gt; pane.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Select &lt;STRONG&gt;Azure, Self-Hosted&lt;/STRONG&gt;, and then select &lt;STRONG&gt;Continue&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Select &lt;STRONG&gt;Self-Hosted&lt;/STRONG&gt;, and then select &lt;STRONG&gt;Continue&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Enter a name for the integration runtime, and then select &lt;STRONG&gt;Create&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;For this example, use &lt;STRONG&gt;Manual setup&lt;/STRONG&gt; (Option 2) rather than express setup. Download the integration runtime installer from the provided link.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After downloading the SHIR installer, close the page for now. Back on the &lt;STRONG&gt;Integration runtimes&lt;/STRONG&gt; page (in ADF Studio), you’ll see the new runtime listed with a status of 'Unavailable'. It will change to 'Running' after you install and register SHIR on the VM.&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Run the SHIR installer (the .msi file) and complete the installation wizard. At the end of the wizard, you’ll be prompted to register the integration runtime.&lt;/P&gt;
&lt;P&gt;To get the authentication key, return to ADF Studio, select your integration runtime on the &lt;STRONG&gt;Integration runtimes&lt;/STRONG&gt; page to open &lt;STRONG&gt;Edit integration runtime&lt;/STRONG&gt;, and then copy the key. Paste the key into the SHIR registration dialog on the VM. Select &lt;STRONG&gt;Register &lt;/STRONG&gt;to complete registration.&lt;/P&gt;
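&lt;P&gt;If you prefer scripting, the authentication keys can also be retrieved with Azure PowerShell (the resource group, data factory, and integration runtime names below are placeholders for your own values):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Returns AuthKey1 and AuthKey2 for the self-hosted integration runtime
Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName 'myResourceGroup' -DataFactoryName 'myDataFactory' -Name 'mySelfHostedIR'&lt;/LI-CODE&gt;
&lt;P&gt;Either key can be used to register the SHIR node.&lt;/P&gt;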
&lt;img /&gt;
&lt;P&gt;After registration completes, select &lt;STRONG&gt;Finish &lt;/STRONG&gt;to close the wizard. You can leave &lt;STRONG&gt;Enable remote access from intranet&lt;/STRONG&gt; unchecked unless you plan to add additional SHIR nodes for high availability and scalability. You can change this setting later if needed.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;If the registration succeeds, the integration runtime will show a status of 'Running'.&lt;/P&gt;
&lt;img /&gt;
&lt;H1&gt;Configure Linked Service (connections to source and target)&lt;/H1&gt;
&lt;P&gt;In this step, you configure connections to the source (SQL MI) and the target (Azure SQL DB).&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Important: ADF copy activities move data only; they do not create the target database. You must create the target database ahead of time.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P class=""&gt;In ADF Studio, go to &lt;STRONG&gt;Manage&lt;/STRONG&gt; &amp;gt; &lt;STRONG&gt;Linked services&lt;/STRONG&gt;. This will display the&amp;nbsp;&lt;STRONG&gt;New linked service&lt;/STRONG&gt; pane.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Note: If you already have linked services created for both SQL MI and SQL DB, you can reuse them—when configuring the Copy Data tool or a pipeline, simply select the existing linked service for the source and the existing linked service for the destination.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;In &lt;STRONG&gt;Data stores&lt;/STRONG&gt;, search for 'sql' to display the related connectors. Select the connector for SQL MI, and then select &lt;STRONG&gt;Continue&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Enter required properties to connect to SQL MI.&lt;/P&gt;
&lt;P&gt;For &lt;STRONG&gt;Connect via integration runtime&lt;/STRONG&gt;, select the self-hosted integration runtime you created earlier.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Set the remaining properties as appropriate for your environment (server name, database name, authentication, etc.).&lt;/P&gt;
&lt;P&gt;Example settings used in this demo (key properties only):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Endpoint type: Private endpoint&lt;/LI&gt;
&lt;LI&gt;Authentication type: SQL authentication (demo only—use your preferred/approved auth method)&lt;/LI&gt;
&lt;LI&gt;Trust server certificate: Selected&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Next, create a linked service for the target server (Azure SQL Database). Follow the same general steps you used for SQL MI, selecting the appropriate connector and the same SHIR (if you are using private endpoints).&lt;/P&gt;
&lt;P&gt;When finished, you should see two linked services configured—one for SQL MI and one for SQL DB.&lt;/P&gt;
&lt;img /&gt;
&lt;H1&gt;Configure Ingestion and run data migration&lt;/H1&gt;
&lt;P&gt;Now we are ready to perform the data migration. In ADF Studio, select &lt;STRONG&gt;Home&lt;/STRONG&gt;, and then select &lt;STRONG&gt;Ingest&lt;/STRONG&gt;. This will display the Copy Data tool page.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Important&lt;/STRONG&gt;: Before you start, make sure the target database exists in Azure SQL Database and that you have permissions to create/insert data into the target tables.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;For larger migrations, consider copying in batches (for example, by date/key range) and using parallelism where appropriate to improve throughput while staying within SQL DB resource limits. Plan time for validation (row counts/checksums or targeted queries) and decide how you will handle ongoing changes during cutover (for example, a final delta load or an application downtime window).&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;The Copy Data tool page&lt;/H3&gt;
&lt;P&gt;Select &lt;STRONG&gt;Built-in copy task&lt;/STRONG&gt; &amp;gt; &lt;STRONG&gt;Run once now&lt;/STRONG&gt;, and then select &lt;STRONG&gt;Continue&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select the source type and connection, and then select the table(s) to migrate.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Select &lt;STRONG&gt;Next&lt;/STRONG&gt; on the &lt;STRONG&gt;Apply filter&lt;/STRONG&gt; page.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Select the destination.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Select &lt;STRONG&gt;Next &lt;/STRONG&gt;to get to Column mapping page. Unless you want to&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping" target="_blank" rel="noopener"&gt;specify explicit mapping&lt;/A&gt; to customize column/field mapping from source to destination, select &lt;STRONG&gt;Next&lt;/STRONG&gt; to accept default settings.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;On the next &lt;STRONG&gt;Copy Data tool&lt;/STRONG&gt; page, leave the default settings and select &lt;STRONG&gt;Next&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;On the &lt;STRONG&gt;Summary&lt;/STRONG&gt; page, select &lt;STRONG&gt;Next &lt;/STRONG&gt;to start data migration.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;After selecting &lt;STRONG&gt;Next&lt;/STRONG&gt;, the migration starts.&amp;nbsp; You can select &lt;STRONG&gt;Finish&lt;/STRONG&gt; to close this page and continue monitoring the migration progress from the &lt;STRONG&gt;Monitor&lt;/STRONG&gt; blade.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Checking migration status&lt;/H1&gt;
&lt;P&gt;On the &lt;STRONG&gt;Monitor&lt;/STRONG&gt; tab in ADF Studio, you can track the status of the pipeline run and confirm that the copy activity completed successfully.&lt;/P&gt;
&lt;img /&gt;
&lt;H1&gt;Validation and cutover&lt;/H1&gt;
&lt;P&gt;After schema, security, and data are migrated, validate the target before switching applications over to SQL DB. Validation typically includes row counts, checksum sampling, and running key application queries and reports against the new database. For cutover, plan how you will handle changes occurring on SQL MI while the bulk copy is running (for example, schedule downtime, or run a final delta load if your design supports it).&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Validate schema: object counts, constraints, and (if used) post-deployment scripts.&lt;/LI&gt;
&lt;LI&gt;Validate security: users/groups exist, role memberships are correct, and the app can connect with least privilege.&lt;/LI&gt;
&lt;LI&gt;Validate data: row counts per table, spot-check aggregates, and targeted checksum comparisons where feasible.&lt;/LI&gt;
&lt;LI&gt;Cut over: update connection strings/DNS, monitor workload in SQL DB, and keep a rollback plan.&lt;/LI&gt;
&lt;/UL&gt;
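&lt;P&gt;As a minimal sketch of the row-count validation step, the following T-SQL returns row counts per user table from partition metadata; run it against both the source and the target and compare the results:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Row counts per user table (fast, metadata-based)
SELECT s.name AS schema_name, t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
JOIN sys.partitions AS p ON t.object_id = p.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
ORDER BY s.name, t.name;&lt;/LI-CODE&gt;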
&lt;H2&gt;Notes and Observations&lt;/H2&gt;
&lt;H3&gt;Migrating database objects&lt;/H3&gt;
&lt;P&gt;Although this article recommends migrating schemas and database objects before copying data, this demo relies on Azure Data Factory’s automatic schema generation for simplicity. This is useful for testing or learning ADF workflows, but it is not suitable for production migrations.&lt;/P&gt;
&lt;P&gt;The &lt;A class="lia-external-url" href="https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0" target="_blank" rel="noopener"&gt;Wide World Importers&lt;/A&gt; sample database, which was used as the source database, includes a memory‑optimized table, but ADF generated a standard table instead. This illustrates a key limitation: ADF creates tables using basic metadata (columns, data types, and keys) and does not generate full, feature‑equivalent schemas.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Best practice&lt;/STRONG&gt;: Use tools such as SSMS, SSDT, DACPAC, or BACPAC to prepare complete schemas and objects in the target database and use ADF strictly for data movement.&lt;/P&gt;
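&lt;P&gt;As one example of this approach, the schema can be extracted from the source and deployed to the target with the SqlPackage utility. The server, database, and file names below are placeholders, and the authentication options you need will depend on your environment:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Extract the schema and objects from the source SQL MI into a DACPAC
SqlPackage /Action:Extract /SourceServerName:"your-mi-host.database.windows.net" /SourceDatabaseName:"WideWorldImporters" /TargetFile:"WideWorldImporters.dacpac"

# Publish the DACPAC to the pre-created target Azure SQL Database
SqlPackage /Action:Publish /SourceFile:"WideWorldImporters.dacpac" /TargetServerName:"yourserver.database.windows.net" /TargetDatabaseName:"WideWorldImporters"&lt;/LI-CODE&gt;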
&lt;H3&gt;Online migration&lt;/H3&gt;
&lt;P&gt;Online migration keeps the source database available while data is copied to the target using a repeatable pipeline such as Azure Data Factory. Schema and security are prepared in advance, and the bulk data copy runs without requiring an extended outage.&lt;/P&gt;
&lt;P&gt;Changes made to the source during the copy aren’t automatically synchronized, so a planned cutover—such as a short downtime window or final validation step—is still required to ensure data consistency before switching applications to the target.&lt;/P&gt;
&lt;H1&gt;Summary&lt;/H1&gt;
&lt;P&gt;Migrating from SQL MI to Azure SQL Database is more than a single copy operation. A successful full migration typically includes (1) assessing feature compatibility, (2) deploying schema and objects to the target, (3) recreating users/roles/permissions, and (4) using ADF to move the data in a repeatable, monitorable way—optionally through private networking with SHIR. After the copy completes, validate schema, security, and data, then execute a planned cutover to move applications to SQL DB.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Assess and remediate MI→SQL DB compatibility gaps.&lt;/LI&gt;
&lt;LI&gt;Migrate schema + programmable objects (DACPAC/SSDT/scripts) before loading data.&lt;/LI&gt;
&lt;LI&gt;Migrate security: users, roles, role memberships, and permissions (prefer Entra ID where possible).&lt;/LI&gt;
&lt;LI&gt;Configure ADF (and SHIR if using private endpoints), then run Copy Data/pipelines for the data load.&lt;/LI&gt;
&lt;LI&gt;Validate results and cut over with a rollback plan.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 22 Apr 2026 05:25:10 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/data-migration-from-sql-mi-to-sql-db-migration/ba-p/4461627</guid>
      <dc:creator>akiohose</dc:creator>
      <dc:date>2026-04-22T05:25:10Z</dc:date>
    </item>
    <item>
      <title>Fix failover group creation errors with TDE CMK on Azure SQL Managed Instance</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/fix-failover-group-creation-errors-with-tde-cmk-on-azure-sql/ba-p/4513322</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Overview&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We have received several support cases where customers encounter the error shown below when attempting to create a failover group for Azure SQL Managed Instance. In this article, we explore one of the possible causes of this error.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Creating instance failover group failed.&lt;/P&gt;
&lt;P&gt;An unexpected error occurred while processing the request. Tracking ID: 'XYZ'&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Prerequisites Review&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This article focuses on an additional configuration requirement beyond what is documented in&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/failover-group-configure-sql-mi?view=azuresql&amp;amp;tabs=azure-portal%2Cazure-portal-modify%2Cazure-powershell-manage#configuration-requirements" target="_blank" rel="noopener"&gt;Configure a failover group - Azure SQL Managed Instance | Microsoft Learn&lt;/A&gt;:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The secondary managed instance must be empty, without any user databases.&lt;/LI&gt;
&lt;LI&gt;The configuration of your primary and secondary instance should be the same to ensure the secondary instance can sustainably process changes replicated from the primary instance, including during periods of peak activity. This includes the compute size, storage size, and service tier.&lt;/LI&gt;
&lt;LI&gt;The IP&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/failover-group-configure-sql-mi?view=azuresql&amp;amp;tabs=azure-portal%2Cazure-portal-modify%2Cazure-powershell-manage#create-virtual-network" target="_blank" rel="noopener"&gt;address range&lt;/A&gt;&amp;nbsp;for the virtual network of the primary instance must not overlap with the address range of the virtual network for the secondary managed instance, or any other virtual network peered with either the primary or secondary virtual network.&lt;/LI&gt;
&lt;LI&gt;Both instances must be in the same DNS zone. When you create your secondary managed instance, you must specify the primary instance's DNS zone ID. If you don't, the zone ID is generated as a random string when the first instance is created in each virtual network and the same ID is assigned to all other instances in the same subnet. Once assigned, the DNS zone can't be modified.&lt;/LI&gt;
&lt;LI&gt;Network Security Groups (NSG) rules for the subnets of both instances must have open inbound and outbound TCP connections for port 5022 and port range 11000-11999 to facilitate communication between the two instances.&lt;/LI&gt;
&lt;LI&gt;Managed instances should be deployed to&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/reliability/cross-region-replication-azure" target="_blank" rel="noopener"&gt;paired regions&lt;/A&gt;&amp;nbsp;for performance reasons. Managed instances that reside in geo-paired regions benefit from a significantly higher geo-replication speed compared to unpaired regions.&lt;/LI&gt;
&lt;LI&gt;Both instances must use the same&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/update-policy?view=azuresql" target="_blank" rel="noopener"&gt;update policy&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the scenario discussed here, all of the above requirements were fully satisfied.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario Details&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The primary SQL Managed Instance was configured to use customer‑managed keys (CMK) for Transparent Data Encryption (TDE), with automatic key rotation enabled.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The secondary SQL Managed Instance was configured to use service‑managed keys (SMK); alternatively, it could have been configured with a CMK different from the one used by the primary instance for database encryption.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Root Cause&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The issue occurs because all servers participating in geo‑replication must share the same key material as the encryption protector of the primary server. This requirement is documented in&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/transparent-data-encryption-byok-overview?view=azuresql&amp;amp;tabs=azurekeyvault%2Cazurekeyvaultrequirements%2Cazurekeyvaultrecommendations#geo-dr-and-customer-managed-tde" target="_blank" rel="noopener"&gt;Customer-managed transparent data encryption (TDE) - Azure SQL Database &amp;amp; Azure SQL Managed Instance &amp;amp; Azure Synapse Analytics | Microsoft Learn&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this scenario, the mismatch between CMK on the primary instance and SMK on the secondary instance caused the failover group creation to fail.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Resolution&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If there is a requirement &lt;STRONG&gt;not&lt;/STRONG&gt; to encrypt the secondary managed instance's databases with the same key as the primary (for example, to continue using SMK), you can still satisfy the failover group requirement without changing the active encryption protector on the secondary instance.&lt;/P&gt;
&lt;P&gt;To achieve this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Add the &lt;STRONG&gt;primary instance’s TDE key&lt;/STRONG&gt; to the secondary managed instance.&lt;/LI&gt;
&lt;LI&gt;Ensure that the option &lt;STRONG&gt;“Make this key the default TDE protector”&lt;/STRONG&gt; is &lt;STRONG&gt;disabled&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/OL&gt;
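&lt;P&gt;If you prefer scripting over the portal, the same step can be performed with Azure PowerShell. Adding a Key Vault key this way does not change the secondary's TDE protector (the resource names and key URI below are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Register the primary instance's key on the secondary managed instance.
# The key becomes available for failover group operations without being
# set as the default TDE protector.
Add-AzSqlInstanceKeyVaultKey -ResourceGroupName 'SecondaryResourceGroup' -InstanceName 'SecondaryManagedInstance' -KeyId 'https://contoso.vault.azure.net/keys/contosokey/01234567890123456789012345678901'&lt;/LI-CODE&gt;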
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This allows the key to be used solely for failover group operations while keeping SMK or different CMK as the active TDE protector on the secondary instance after failover group creation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After applying this fix, the failover group was created successfully.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Additional Resources&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/failover-group-configure-sql-mi?view=azuresql&amp;amp;tabs=azure-portal%2Cazure-portal-modify%2Cazure-powershell-manage#configuration-requirements" target="_blank" rel="noopener"&gt;Configure a failover group - Azure SQL Managed Instance | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/transparent-data-encryption-byok-overview?view=azuresql&amp;amp;tabs=azurekeyvault%2Cazurekeyvaultrequirements%2Cazurekeyvaultrecommendations#geo-dr-and-customer-managed-tde" target="_blank" rel="noopener"&gt;Customer-managed transparent data encryption (TDE) - Azure SQL Database &amp;amp; Azure SQL Managed Instance &amp;amp; Azure Synapse Analytics | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Disclaimer&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Please note that products, features, and configuration options discussed in this article are subject to change. This article reflects the state of Azure SQL Managed Instance as of April 2026.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We hope you found this article helpful. Please feel free to share your feedback in the comments section.&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 11:06:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/fix-failover-group-creation-errors-with-tde-cmk-on-azure-sql/ba-p/4513322</guid>
      <dc:creator>Abdullah_Qtaishat</dc:creator>
      <dc:date>2026-04-21T11:06:46Z</dc:date>
    </item>
    <item>
      <title>How to take copy only backups with SQL Managed Instance.</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-take-copy-only-backups-with-sql-managed-instance/ba-p/4512541</link>
      <description>&lt;P&gt;In Azure SQL Managed Instance, copy-only backups cannot be created for databases encrypted with service-managed Transparent Data Encryption (The default for all newly provisioned SQL MI). Service-managed TDE relies on an internal encryption key that cannot be exported, which means the backup cannot be restored outside that environment. You will need to change your SQL MI TDE service key for this process (I will explain how to do this in Step 4 below).&lt;/P&gt;
&lt;P&gt;In this article, I will explain how to prepare your Azure SQL Managed Instance (SQL MI) to take copy-only backups to Azure Blob Storage by using Managed Identity together with customer-managed encryption keys.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These steps ensure that SQL MI can encrypt the backup by using your own key and securely write it to your storage account without relying on shared keys or SAS tokens.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1: Create or Identify a Backup Encryption Key in Azure Key Vault&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;First, ensure that you have a Key Vault containing a key that will be used to encrypt the backup. It is essential to have elevated privileges over this key vault, and it's recommended to use a dedicated key for the backup encryption.&lt;/P&gt;
&lt;P&gt;Note the Key Identifier:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;https://contoso.vault.azure.net/keys/contosokey/01234567890123456789012345678901&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;as it will be required later when adding the key to SQL MI. This key will be used by SQL MI to encrypt the backup before it is written to Azure Blob Storage.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2: Grant Key Vault Permissions to the SQL MI Managed Identity&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;SQL Managed Instance uses a system-assigned managed identity. This identity must be granted permissions to use the encryption key stored in Key Vault, which allows SQL MI to perform encryption and decryption operations during backup and restore.&lt;/P&gt;
&lt;P&gt;In the Key Vault:&lt;/P&gt;
&lt;P&gt;1) Navigate to Access control (IAM) or Access policies (depending on whether RBAC or Access Policies are enabled).&lt;/P&gt;
&lt;P&gt;2) Choose the SQL MI managed identity.&lt;/P&gt;
&lt;P&gt;3) Assign it this role:&amp;nbsp; "Key Vault Crypto Service Encryption User".&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note: If you're using Access Policies mode, then the required key permissions are Get, Wrap Key, Unwrap Key.&lt;/P&gt;
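&lt;P&gt;As a sketch, the Access Policies variant of this grant can also be scripted with Azure PowerShell (the vault and instance names below are placeholders for your own values):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Look up the SQL MI system-assigned managed identity
$mi = Get-AzADServicePrincipal -DisplayName 'ContosoManagedInstanceName'

# Access Policies mode: grant Get, Wrap Key, Unwrap Key on keys
Set-AzKeyVaultAccessPolicy -VaultName 'contoso' -ObjectId $mi.Id -PermissionsToKeys get,wrapKey,unwrapKey&lt;/LI-CODE&gt;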
&lt;P&gt;&lt;STRONG&gt;Step 3: Grant Storage Permissions to the SQL MI Managed Identity&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Next, SQL MI must be authorized to write backup files to the target storage account. This role assignment allows SQL MI to create, write, and manage blobs within the specified container. No storage account keys or SAS tokens are required when using Managed Identity.&lt;/P&gt;
&lt;P&gt;In the storage account that hosts your backup container:&lt;/P&gt;
&lt;P&gt;1) Go to Access control (IAM).&lt;/P&gt;
&lt;P&gt;2) Choose the SQL MI managed identity.&lt;/P&gt;
&lt;P&gt;3) Assign it this role: "Storage Blob Data Contributor"&lt;/P&gt;
&lt;P&gt;Note: Verify that connectivity from the storage account to SQL MI is established successfully. The storage account must allow the SQL MI subnet to connect to it. Use this script to test connectivity to the storage account from SQL MI: &lt;A href="https://github.com/Azure/sqlmi/blob/main/how-to/how-to-test-tcp-connection-from-mi/TestTCPNetworkConnection.sql" target="_blank" rel="noopener"&gt;how-to-test-tcp-connection-from-mi&lt;/A&gt;&lt;/P&gt;
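&lt;P&gt;The role assignment can also be scripted with Azure PowerShell (the identity, resource group, and storage account names below are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Grant the SQL MI managed identity write access to the backup storage account
$mi = Get-AzADServicePrincipal -DisplayName 'ContosoManagedInstanceName'
$storage = Get-AzStorageAccount -ResourceGroupName 'ContosoResourceGroup' -Name 'contosostorage'
New-AzRoleAssignment -ObjectId $mi.Id -RoleDefinitionName 'Storage Blob Data Contributor' -Scope $storage.Id&lt;/LI-CODE&gt;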
&lt;P&gt;&lt;STRONG&gt;Step 4: Add the Encryption Key to SQL Managed Instance&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;After Key Vault access has been configured, the encryption key must be registered within SQL MI. You can add the key in two ways:&lt;/P&gt;
&lt;P&gt;A) From Azure portal:&lt;/P&gt;
&lt;P&gt;Navigate to the relevant SQL Managed Instance, then select the &lt;EM&gt;Security&lt;/EM&gt; option from the left-hand menu. Under &lt;EM&gt;Security&lt;/EM&gt;, open &lt;EM&gt;Transparent Data Encryption&lt;/EM&gt;. From there, you will be able to select the appropriate key from the available drop-down list. Click save once your choice is complete.&lt;/P&gt;
&lt;P&gt;B) From Azure Cloud Shell/PowerShell:&lt;/P&gt;
&lt;P&gt;First, add the key with this command:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Add-AzSqlInstanceKeyVaultKey -ResourceGroupName 'ContosoResourceGroup' -InstanceName 'ContosoManagedInstanceName' -KeyId 'https://contoso.vault.azure.net/keys/contosokey/01234567890123456789012345678901'&lt;/LI-CODE&gt;
&lt;P&gt;Then set it as the TDE protector:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-AzSqlInstanceTransparentDataEncryptionProtector -Type AzureKeyVault -InstanceName "ContosoManagedInstanceName" -ResourceGroupName "ContosoResourceGroup" -KeyId "https://contoso.vault.azure.net/keys/contosokey/01234567890123456789012345678901" -AutoRotationEnabled $true&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5: Create a Credential for the Target Blob Container&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Finally, create a SQL credential that maps the Azure Blob Storage container to SQL MI’s managed identity.&lt;/P&gt;
&lt;P&gt;Run the following statement on the SQL Managed Instance after you verify that your storage blob container has been created already:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE CREDENTIAL [https://contoso.blob.core.windows.net/myfirstcontainer]
WITH IDENTITY = 'Managed Identity';&lt;/LI-CODE&gt;
&lt;P&gt;Important notes: The credential name must exactly match the container URL. This credential is used by SQL Server during BACKUP DATABASE … TO URL.&lt;/P&gt;
&lt;P&gt;You have now successfully configured your SQL MI to take copy-only backups! This is an example of how your backup command should look:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;BACKUP DATABASE ... TO URL = '...' WITH COPY_ONLY, COMPRESSION;&lt;/LI-CODE&gt;
&lt;P&gt;The below sections will explain the common errors that you may run into while taking copy only backups.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Common errors that you may encounter when attempting copy-only backups:&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;1) Error#1:&lt;/U&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Msg 3271, Level 16, State 1, Line 1&lt;BR /&gt;A nonrecoverable I/O error occurred on file '&amp;lt;URL to bak file&amp;gt;' Backup to URL received an exception from the remote endpoint. Exception Message: Unable to connect to the remote server.&lt;BR /&gt;Msg 3013, Level 16, State 1, Line 1&lt;BR /&gt;BACKUP DATABASE is terminating abnormally.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;BR /&gt;This error is caused by:&lt;/P&gt;
&lt;P&gt;A) An IP or a port 443 block from NSGs/firewalls.&lt;/P&gt;
&lt;P&gt;B) SQL MI is not able to reach the storage blob container due to some network misconfiguration from either side.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please review your network settings in the storage account and your NSGs and firewall.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;2) Error#2:&lt;/U&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Msg 3201, Level 16, State 1, Line 1&lt;/P&gt;
&lt;P&gt;Cannot open backup device '&amp;lt;URL to the bak file&amp;gt;'. Operating system error 86(The specified network password is not correct.).&lt;/P&gt;
&lt;P&gt;Msg 3013, Level 16, State 1, Line 1&lt;/P&gt;
&lt;P&gt;BACKUP DATABASE is terminating abnormally.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This error is caused by misconfigured credentials or missing permissions. Please verify that your SQL MI managed identity has the "Storage Blob Data Contributor" role and the above CREATE CREDENTIAL query was executed with the correct URL name.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;3) Error#3:&lt;/U&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Msg 3202, Level 16, State 1, Line 1&lt;BR /&gt;Write on "&amp;lt;URL to bak file&amp;gt;"&amp;nbsp;failed: 1117(The request could not be performed because of an I/O device error.)&lt;BR /&gt;Msg 3013, Level 16, State 1, Line 1&lt;BR /&gt;BACKUP DATABASE is terminating abnormally.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This error occurs because your backup has exceeded the maximum blob size limit. A single backup file is capped at roughly 50,000 blocks × MAXTRANSFERSIZE; with the maximum 4-MB transfer size, that is about 195 GB per file. When an individual backup file grows beyond that limit, you get this error.&lt;/P&gt;
&lt;P&gt;To avoid this failure, stripe the backup across multiple files and make sure that MAXTRANSFERSIZE is set to 4 MB (4194304 bytes) in your T-SQL command.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Example:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;BACKUP DATABASE […] TO 
URL = '&amp;lt;storage URL&amp;gt;/backup/DB_part01.bak', 
[…]
URL = '&amp;lt;storage URL&amp;gt;/backup/DB_part20.bak', 
WITH
COPY_ONLY,
COMPRESSION,
MAXTRANSFERSIZE = 4194304,
BLOCKSIZE = 65536;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Disclaimer&lt;/U&gt;&lt;/STRONG&gt;&lt;BR /&gt;Please note that products and options presented in this article are subject to change. This article reflects the state of Azure SQL Managed Instance as of April 2026.&lt;/P&gt;
&lt;P&gt;I hope this article was helpful for you, please feel free to share your feedback in the comments section.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 25 Apr 2026 21:57:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-take-copy-only-backups-with-sql-managed-instance/ba-p/4512541</guid>
      <dc:creator>ahmaddaoud</dc:creator>
      <dc:date>2026-04-25T21:57:06Z</dc:date>
    </item>
    <item>
      <title>Understanding action_id discrepancies in Azure SQL Database Audit Logs (BCM vs AL / CR / DR)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-action-id-discrepancies-in-azure-sql-database/ba-p/4509932</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Overview&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;While working with Azure SQL Database auditing in enterprise environments, you may encounter an inconsistency in how the action_id field is captured across different PaaS SQL servers.&lt;/P&gt;
&lt;P&gt;In one such scenario, a customer observed:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;PaaS Server&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;action_id observed for similar DDL statements&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Server A&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;AL, CR, DR&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Server B&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BCM only&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;This inconsistency impacted downstream compliance pipelines, as the audit data was expected to be captured and interpreted uniformly across all servers.&lt;/P&gt;
&lt;P&gt;This article explains:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;How Azure SQL Auditing works by default&lt;/LI&gt;
&lt;LI&gt;What causes BCM to appear instead of AL/CR/DR&lt;/LI&gt;
&lt;LI&gt;How to standardize audit logs across PaaS servers&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;How Azure SQL Database Auditing Works&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure SQL Database auditing uses a managed and fixed audit policy at the service level.&lt;/P&gt;
&lt;P&gt;When auditing is enabled at the server level, the default auditing policy includes the following action groups:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;BATCH_COMPLETED_GROUP&lt;/LI&gt;
&lt;LI&gt;SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP&lt;/LI&gt;
&lt;LI&gt;FAILED_DATABASE_AUTHENTICATION_GROUP&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These groups audit:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;All query execution activity&lt;/LI&gt;
&lt;LI&gt;Successful authentication attempts&lt;/LI&gt;
&lt;LI&gt;Failed authentication attempts&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As a result, SQL batches, including DDL statements like CREATE, ALTER, or DROP on database objects, are captured under the BATCH_COMPLETED_GROUP and appear with action_id = BCM.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why Are AL, CR, and DR Not Captured by Default?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Audit action IDs such as AL, CR, and DR are considered Security / DDL-level audit events.&lt;/P&gt;
&lt;P&gt;These events are not included in the default Azure SQL auditing policy.&lt;/P&gt;
&lt;P&gt;Instead, they are generated only when the corresponding Security-related AuditActionGroups are explicitly enabled.&lt;/P&gt;
&lt;P&gt;For example:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AuditActionGroup&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Captures&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;DATABASE_OBJECT_CHANGE_GROUP&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;CREATE / ALTER / DROP on database objects&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;DATABASE_PRINCIPAL_CHANGE_GROUP&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;User / role changes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;DATABASE_ROLE_MEMBER_CHANGE_GROUP&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Role membership updates&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;DDL operations such as CREATE / ALTER / DROP on database objects are captured under action groups like DATABASE_OBJECT_CHANGE_GROUP.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Observed Behavior in a Newly Created Test Server&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Running the following PowerShell command on a newly provisioned logical server showed only the default audit action groups enabled.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;(Get-AzSqlServerAudit -ResourceGroupName "RGName" -ServerName "ServerName").AuditActionGroup&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;P&gt;Therefore, DDL statements were audited but recorded as action_id = BCM.&lt;/P&gt;
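&lt;P&gt;To confirm which action_id values are actually being recorded, the audit files in the storage account can be queried directly with sys.fn_get_audit_file. The sketch below is a minimal, hypothetical example — the server, database, and storage URL are placeholders, it assumes blob-based auditing and the SqlServer PowerShell module, and the token-based authentication shown is one option that may need adjusting for your Az module version:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Hypothetical example: count audit events by action_id
# Assumes the SqlServer module and permission to read the audit files
Invoke-Sqlcmd -ServerInstance "ServerName.database.windows.net" `
 -Database "DatabaseName" `
 -AccessToken (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token `
 -Query "SELECT action_id, COUNT(*) AS events
         FROM sys.fn_get_audit_file('https://&amp;lt;storage&amp;gt;.blob.core.windows.net/sqldbauditlogs/', DEFAULT, DEFAULT)
         GROUP BY action_id;"&lt;/LI-CODE&gt;
&lt;P&gt;On a server with only the default policy, this distribution is dominated by BCM entries.&lt;/P&gt;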
&lt;P&gt;&lt;STRONG&gt;Enabling AL / CR / DR Action IDs&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To capture DDL operations under their respective audit action IDs, configure the required security audit action groups at the logical server level.&lt;/P&gt;
&lt;P&gt;For example, in this customer scenario we executed the following command:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-AzSqlServerAudit 
 -ResourceGroupName "RGName" 
 -ServerName "ServerName" 
 -AuditActionGroup 
 "DATABASE_PRINCIPAL_CHANGE_GROUP",
 "DATABASE_ROLE_MEMBER_CHANGE_GROUP",
 "DATABASE_OBJECT_CHANGE_GROUP"&lt;/LI-CODE&gt;
&lt;P&gt;After applying this configuration:&lt;/P&gt;
&lt;P&gt;DDL operations were captured in the audit logs with action_id = CR, AL, and DR instead of BCM.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Ensuring Consistent Compliance Across PaaS Servers&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To standardize audit logging behavior across environments:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1: Compare AuditActionGroups&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Run the following command on all servers:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;(Get-AzSqlServerAudit -ResourceGroupName "&amp;lt;RG&amp;gt;" -ServerName "&amp;lt;ServerName&amp;gt;").AuditActionGroup&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2: Align AuditActionGroups&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Configure all servers with the same AuditActionGroup values. In this case, the following values were used:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-AzSqlServerAudit 
 -ResourceGroupName "&amp;lt;RG&amp;gt;" 
 -ServerName "&amp;lt;ServerName&amp;gt;" 
 -AuditActionGroup `
 "DATABASE_PRINCIPAL_CHANGE_GROUP",
 "DATABASE_ROLE_MEMBER_CHANGE_GROUP",
 "DATABASE_OBJECT_CHANGE_GROUP"&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3: Validate&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once aligned, similar SQL statements across all PaaS servers should now generate consistent action_id values in audit logs.&lt;/P&gt;
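&lt;P&gt;The comparison in Step 1 can also be scripted across every server so drift is caught automatically. This is a minimal sketch — the server list and resource group names are placeholders, and the first server in the list is treated as the baseline:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Compare AuditActionGroup values across a list of logical servers
$servers = @(
 @{ Rg = "RG1"; Name = "ServerA" },
 @{ Rg = "RG2"; Name = "ServerB" }
)
$baseline = (Get-AzSqlServerAudit -ResourceGroupName $servers[0].Rg -ServerName $servers[0].Name).AuditActionGroup
foreach ($s in $servers) {
 $groups = (Get-AzSqlServerAudit -ResourceGroupName $s.Rg -ServerName $s.Name).AuditActionGroup
 $diff = Compare-Object -ReferenceObject $baseline -DifferenceObject $groups
 if ($diff) { Write-Warning "$($s.Name) differs from baseline: $($diff.InputObject -join ', ')" }
 else { Write-Output "$($s.Name): matches baseline" }
}&lt;/LI-CODE&gt;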
&lt;P&gt;&lt;STRONG&gt;Accepted values for &lt;A href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.commands.sql.auditing.model.auditactiongroups" target="_blank" rel="noopener"&gt;AuditActionGroups&lt;/A&gt; are listed below. Ensure appropriate groups are enabled based on your organization’s compliance needs.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Accepted values:&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BATCH_STARTED_GROUP, BATCH_COMPLETED_GROUP, APPLICATION_ROLE_CHANGE_PASSWORD_GROUP, BACKUP_RESTORE_GROUP, DATABASE_LOGOUT_GROUP, DATABASE_OBJECT_CHANGE_GROUP, DATABASE_OBJECT_OWNERSHIP_CHANGE_GROUP, DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, DATABASE_OPERATION_GROUP, DATABASE_PERMISSION_CHANGE_GROUP, DATABASE_PRINCIPAL_CHANGE_GROUP, DATABASE_PRINCIPAL_IMPERSONATION_GROUP, DATABASE_ROLE_MEMBER_CHANGE_GROUP, FAILED_DATABASE_AUTHENTICATION_GROUP, SCHEMA_OBJECT_ACCESS_GROUP, SCHEMA_OBJECT_CHANGE_GROUP, SCHEMA_OBJECT_OWNERSHIP_CHANGE_GROUP, SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP, SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP, USER_CHANGE_PASSWORD_GROUP, LEDGER_OPERATION_GROUP, DBCC_GROUP, DATABASE_OWNERSHIP_CHANGE_GROUP, DATABASE_CHANGE_GROUP&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Links:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/powershell/module/az.sql/get-azsqlserveraudit?view=azps-15.4.0" target="_blank" rel="noopener"&gt;Get-AzSqlServerAudit (Az.Sql) | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Furldefense.com%2Fv3%2F__https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fpowershell%2Fmodule%2Faz.sql%2Fset-azsqlserveraudit%3Fview%3Dazps-15.4.0__%3B!!Nyu6ZXf5!v9WEcJe8GgZGJM30tzome-bmFQE-ncl9S0bmX2T264tPWUaZfMA-3MKTVCRyEKkCO5c2v7ldTdpKoyceg0A%24&amp;amp;data=05%7C02%7Csuagarwal%40microsoft.com%7C92497c36fc774c3a8f9c08de93b5231b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C639110603427328075%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=vCQkjF8z2J1AZZBn7tM6t4eL1HeqGinFdAoAK0XhMCI%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Set-AzSqlServerAudit (Az.Sql) | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 16 Apr 2026 11:41:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-action-id-discrepancies-in-azure-sql-database/ba-p/4509932</guid>
      <dc:creator>Sunaina_Agarwal</dc:creator>
      <dc:date>2026-04-16T11:41:59Z</dc:date>
    </item>
    <item>
      <title>Monitoring Azure SQL Data Sync Errors Using PowerShell</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/monitoring-azure-sql-data-sync-errors-using-powershell/ba-p/4511206</link>
      <description>&lt;P&gt;Azure SQL Data Sync is a powerful service that enables data synchronization between multiple databases across Azure SQL Database and on‑premises SQL Server environments. It supports hybrid architectures and distributed applications by allowing selected data to synchronize bi‑directionally between hub and member databases using a hub‑and‑spoke topology.&lt;/P&gt;
&lt;P&gt;However, one of the most common operational challenges faced by support engineers and customers using Azure SQL Data Sync is:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;❗ Lack of proactive monitoring for sync failures or errors&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;By default, Azure SQL Data Sync does not provide native alerting mechanisms that notify administrators when synchronization operations fail or encounter issues. This can result in silent data drift or synchronization delays that may go unnoticed in production environments.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll walk through how to monitor Azure SQL Data Sync activity and detect synchronization errors using Azure PowerShell commands.&lt;/P&gt;
&lt;H2&gt;Why Monitoring Azure SQL Data Sync Matters&lt;/H2&gt;
&lt;P&gt;Azure SQL Data Sync works by synchronizing data between:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Hub Database (must be Azure SQL Database)&lt;/LI&gt;
&lt;LI&gt;Member Databases (Azure SQL Database or SQL Server)&lt;/LI&gt;
&lt;LI&gt;Sync Metadata Database (stores sync configuration and logs)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;All synchronization activity—including errors, failures, and successes—is logged internally within the Sync Metadata Database and exposed through Azure SQL Sync Group logs.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Monitoring these logs enables:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Detection of sync failures&lt;/LI&gt;
&lt;LI&gt;Identification of schema mismatches&lt;/LI&gt;
&lt;LI&gt;Validation of sync completion&lt;/LI&gt;
&lt;LI&gt;Troubleshooting of sync group issues&lt;/LI&gt;
&lt;LI&gt;Verification of last successful sync activity&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Prerequisites&lt;/H2&gt;
&lt;P&gt;Before monitoring Azure SQL Data Sync activity, ensure the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure PowerShell module (&lt;STRONG&gt;Az.Sql&lt;/STRONG&gt;) is installed&lt;/LI&gt;
&lt;LI&gt;You have access to the Azure SQL Data Sync resources&lt;/LI&gt;
&lt;LI&gt;Proper authentication and subscription context are configured&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Install and import the required module if not already available:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Install Azure PowerShell module if not already installed
Install-Module -Name Az -Repository PSGallery -Force

# Import the SQL module
Import-Module Az.Sql&lt;/LI-CODE&gt;
&lt;P&gt;Authenticate to Azure:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Login to Azure
Connect-AzAccount -TenantId "&amp;lt;tenant-id&amp;gt;"

# Set subscription context
Set-AzContext -SubscriptionId "&amp;lt;subscription-id&amp;gt;"&lt;/LI-CODE&gt;
&lt;P&gt;These commands enable access to Azure SQL Sync Group monitoring operations.&lt;/P&gt;
&lt;H2&gt;Monitoring Sync Group Status&lt;/H2&gt;
&lt;P&gt;To retrieve Sync Group details, define the required variables:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Define variables
$resourceGroup = "rg-datasync-demo"
$serverName = "&amp;lt;hub-server-name&amp;gt;"
$databaseName = "HubDatabase"
$syncGroupName = "SampleSyncGroup"

# Get sync group details
Get-AzSqlSyncGroup -ResourceGroupName $resourceGroup `
-ServerName $serverName `
-DatabaseName $databaseName `
-SyncGroupName $syncGroupName | Format-List&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Note:&lt;BR /&gt;The LastSyncTime property returned by Get-AzSqlSyncGroup may sometimes display a value such as &lt;STRONG&gt;1/1/0001&lt;/STRONG&gt;, even when synchronization operations are completing successfully.&lt;BR /&gt;To obtain accurate synchronization timestamps, it is recommended to use Sync Group Logs instead.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Monitoring Sync Activity Using Logs (Recommended)&lt;/H2&gt;
&lt;P&gt;To monitor synchronization activity and retrieve detailed sync status, use:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Get sync logs for the last 24 hours
$startTime = (Get-Date).AddHours(-24).ToString("yyyy-MM-ddTHH:mm:ssZ")
$endTime = (Get-Date).ToString("yyyy-MM-ddTHH:mm:ssZ")

Get-AzSqlSyncGroupLog -ResourceGroupName $resourceGroup `
-ServerName $serverName `
-DatabaseName $databaseName `
-SyncGroupName $syncGroupName `
-StartTime $startTime `
-EndTime $endTime&lt;/LI-CODE&gt;
&lt;P&gt;This command retrieves:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sync operation timestamps&lt;/LI&gt;
&lt;LI&gt;Sync status&lt;/LI&gt;
&lt;LI&gt;Error messages&lt;/LI&gt;
&lt;LI&gt;Activity details&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Sync Group Logs provide more reliable monitoring information than the Sync Group status output alone.&lt;/P&gt;
&lt;H2&gt;Retrieving the Last Successful Sync Time&lt;/H2&gt;
&lt;P&gt;To determine the most recent successful synchronization operation:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Get the most recent successful sync timestamp
$startTime = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")
$endTime = (Get-Date).ToString("yyyy-MM-ddTHH:mm:ssZ")

Get-AzSqlSyncGroupLog -ResourceGroupName $resourceGroup `
-ServerName $serverName `
-DatabaseName $databaseName `
-SyncGroupName $syncGroupName `
-StartTime $startTime `
-EndTime $endTime |
Where-Object { $_.Details -like "*completed*" -or $_.LogLevel -eq "Success" } |
Sort-Object Timestamp -Descending |
Select-Object -First 1 Timestamp, LogLevel, Details&lt;/LI-CODE&gt;
&lt;P&gt;This helps administrators validate whether synchronization is occurring as expected across the sync topology.&lt;/P&gt;
&lt;H2&gt;Filtering for Synchronization Errors&lt;/H2&gt;
&lt;P&gt;To identify failed or problematic sync operations:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Get only error logs
Get-AzSqlSyncGroupLog -ResourceGroupName $resourceGroup `
-ServerName $serverName `
-DatabaseName $databaseName `
-SyncGroupName $syncGroupName `
-StartTime $startTime `
-EndTime $endTime |
Where-Object { $_.LogLevel -eq "Error" }&lt;/LI-CODE&gt;
&lt;P&gt;Filtering logs by error type allows for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Rapid identification of failed sync attempts&lt;/LI&gt;
&lt;LI&gt;Analysis of failure causes&lt;/LI&gt;
&lt;LI&gt;Early detection of data consistency risks&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Key Takeaways&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Data Sync does not provide native alerting for sync failures&lt;/LI&gt;
&lt;LI&gt;Sync Group Logs offer detailed monitoring of sync operations&lt;/LI&gt;
&lt;LI&gt;Get-AzSqlSyncGroupLog provides accurate timestamps and status&lt;/LI&gt;
&lt;LI&gt;Monitoring logs enables detection of silent sync failures&lt;/LI&gt;
&lt;LI&gt;PowerShell can be used to proactively monitor synchronization health&lt;/LI&gt;
&lt;/UL&gt;
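&lt;P&gt;The commands above can be combined into a small scheduled check. The sketch below is one possible approach, reusing the variables defined earlier in this post; the alerting action (here just Write-Warning) would typically be replaced with an email, webhook, or Azure Monitor integration:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Minimal proactive check: warn if any sync errors occurred in the last hour
$startTime = (Get-Date).ToUniversalTime().AddHours(-1).ToString("yyyy-MM-ddTHH:mm:ssZ")
$endTime = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")

$syncErrors = Get-AzSqlSyncGroupLog -ResourceGroupName $resourceGroup `
 -ServerName $serverName `
 -DatabaseName $databaseName `
 -SyncGroupName $syncGroupName `
 -StartTime $startTime `
 -EndTime $endTime |
 Where-Object { $_.LogLevel -eq "Error" }

if ($syncErrors) {
 Write-Warning "Sync group '$syncGroupName' reported $(@($syncErrors).Count) error(s) in the last hour."
} else {
 Write-Output "No sync errors detected for '$syncGroupName'."
}&lt;/LI-CODE&gt;
&lt;P&gt;Running this on a schedule (for example, via Azure Automation or a Task Scheduler job) closes the alerting gap described above.&lt;/P&gt;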
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/mhaywardhill/azure-sql-datasync-error-monitoring/tree/main/monitoring" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Azure SQL Data Sync Error Monitoring GitHub Repository&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-data-sync-data-sql-server-sql-database?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;What is SQL Data Sync for Azure?&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 14 Apr 2026 13:27:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/monitoring-azure-sql-data-sync-errors-using-powershell/ba-p/4511206</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-14T13:27:59Z</dc:date>
    </item>
    <item>
      <title>Fixing “There is not enough space on the disk” during Azure Data Sync initial sync (On‑prem ➜ Azure)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/fixing-there-is-not-enough-space-on-the-disk-during-azure-data/ba-p/4510866</link>
      <description>&lt;P&gt;When you run an&amp;nbsp;&lt;STRONG&gt;initial&lt;/STRONG&gt; (first-time) sync from an on‑premises SQL Server database to &lt;STRONG&gt;Azure SQL Database&lt;/STRONG&gt; using &lt;STRONG&gt;SQL Data Sync&lt;/STRONG&gt;, the local agent may fail with a disk-space error—even when the disk “looks” like it has free space. The reason is that the initial sync can generate &lt;STRONG&gt;large temporary files&lt;/STRONG&gt; in the Windows &lt;STRONG&gt;TEMP&lt;/STRONG&gt; location used by the Data Sync Agent.&lt;/P&gt;
&lt;P&gt;This post explains the symptom, what’s happening under the hood, and the most practical mitigation:&amp;nbsp;&lt;STRONG&gt;move the Data Sync Agent’s TEMP/TMP to a drive with sufficient space&lt;/STRONG&gt; and restart the service.&lt;/P&gt;
&lt;H2&gt;Symptom&lt;/H2&gt;
&lt;P&gt;During an initial sync (commonly on-premises ➜ Azure), the sync fails while applying a batch file.&lt;/P&gt;
&lt;H2&gt;Error&lt;/H2&gt;
&lt;P&gt;You may see an error similar to:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Sync failed with the exception:&lt;BR /&gt;“An unexpected error occurred when applying batch file … .batch. See the inner exception for more details. Inner exception: &lt;STRONG&gt;There is not enough space on the disk&lt;/STRONG&gt; …”&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Microsoft Learn also calls out “disk insufficient space” scenarios for SQL Data Sync and points to the&amp;nbsp;&lt;STRONG&gt;%TEMP%&lt;/STRONG&gt; directory as the key location to check.&lt;/P&gt;
&lt;H2&gt;What’s actually happening (Root Cause)&lt;/H2&gt;
&lt;H3&gt;1) Initial sync uses temp files on the agent machine&lt;/H3&gt;
&lt;P&gt;During initialization, the local agent can &lt;STRONG&gt;load data and store it as temp files&lt;/STRONG&gt; in the system temp folder. This is explicitly called out in the Azure SQL Data Sync scalability guidance.&lt;/P&gt;
&lt;H3&gt;2) The agent can generate&amp;nbsp;&lt;EM&gt;more&lt;/EM&gt; than “just the batch files”&lt;/H3&gt;
&lt;P&gt;In practice, you’ll often see:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Batch files&lt;/STRONG&gt; (e.g., sync_*.batch)&lt;/LI&gt;
&lt;LI&gt;Extra temp files under folders like &lt;STRONG&gt;MAT_ / MATS_&lt;/STRONG&gt; that are used for internal processing (commonly described as “sorting”/intermediate work).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Internal field experience shared in the Data Sync support channel highlights that the MAT/MATS files can be &lt;STRONG&gt;much larger&lt;/STRONG&gt; than the batch files—sometimes &lt;STRONG&gt;8–10× larger&lt;/STRONG&gt; than the data being synced for that table (especially during initialization).&lt;/P&gt;
&lt;H3&gt;3) Why “I still have free disk space” can be misleading&lt;/H3&gt;
&lt;P&gt;If your Data Sync Agent’s TEMP points to a system drive (often C:), it can fill quickly with temp batches + MAT/MATS files during the first sync—particularly for large tables or many tables being initialized. The Azure SQL Data Sync “large scale” guidance recommends ensuring the temp folder has enough space &lt;EM&gt;before&lt;/EM&gt; starting initialization and notes you &lt;STRONG&gt;can move TEMP/TMP&lt;/STRONG&gt; to another drive.&lt;/P&gt;
&lt;H2&gt;Mitigation (Recommended)&lt;/H2&gt;
&lt;H3&gt;Option A — Move TEMP/TMP to a larger drive (recommended)&lt;/H3&gt;
&lt;P&gt;The &lt;STRONG&gt;Microsoft Azure Blog&lt;/STRONG&gt; guidance for large-scale initialization is clear: move the temp folder by setting TEMP and TMP environment variables and &lt;STRONG&gt;restart the sync service&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H4&gt;Key point: change the variables for the&amp;nbsp;&lt;STRONG&gt;same account running the Data Sync Agent service&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Environment variables exist at &lt;STRONG&gt;user scope&lt;/STRONG&gt; and &lt;STRONG&gt;machine scope&lt;/STRONG&gt;, and the effective TEMP location depends on which account the agent service runs under.&lt;/P&gt;
&lt;P&gt;A simple PowerShell approach (run elevated) is to read and set the variables at the appropriate scope. (Example shown below uses the standard .NET environment APIs.)&lt;/P&gt;
&lt;LI-CODE lang=""&gt;# Run in Administrator mode
# Get current values
[Environment]::GetEnvironmentVariable("TEMP","User")
[Environment]::GetEnvironmentVariable("TEMP","Machine")
# Set new values (examples)
[Environment]::SetEnvironmentVariable("TEMP","D:\TempUser","User")
[Environment]::SetEnvironmentVariable("TMP" ,"D:\TempUser","User")
# or machine scope:
[Environment]::SetEnvironmentVariable("TEMP","D:\TempMachine","Machine")
[Environment]::SetEnvironmentVariable("TMP" ,"D:\TempMachine","Machine")&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt; After updating TEMP/TMP, &lt;STRONG&gt;restart the SQL Data Sync agent service&lt;/STRONG&gt; so it picks up the new environment settings.&lt;/P&gt;
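&lt;P&gt;The restart can also be scripted. The display-name filter below is an assumption — verify the actual service name on your machine (it typically contains “Data Sync”) before relying on it:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Restart the Data Sync agent service so it picks up the new TEMP/TMP values
# NOTE: the display-name filter is a guess; confirm with Get-Service first
$svc = Get-Service | Where-Object { $_.DisplayName -like "*Data Sync*" }
if ($svc) { Restart-Service -InputObject $svc -Verbose }
else { Write-Warning "No Data Sync service found - check the service name manually." }&lt;/LI-CODE&gt;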
&lt;H3&gt;Option B — If you can’t log in as the service account: update TEMP/TMP in the registry for that account&lt;/H3&gt;
&lt;P&gt;If you need to change TEMP/TMP for a specific account without interactive logon, you can update the &lt;STRONG&gt;user environment variables&lt;/STRONG&gt; stored in the registry.&lt;/P&gt;
&lt;P&gt;General Windows guidance indicates:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;User environment variables live under HKEY_CURRENT_USER\Environment (and for other users, under that user’s SID hive loaded under HKEY_USERS).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A common approach is:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Identify the service account SID (example commands such as WMIC are often used in practice).&lt;/LI&gt;
&lt;LI&gt;Open Registry Editor&lt;/LI&gt;
&lt;LI&gt;Navigate to:&lt;BR /&gt;HKEY_USERS\&amp;lt;SID&amp;gt;\Environment&lt;/LI&gt;
&lt;LI&gt;Update TEMP and TMP to a path on a drive with sufficient space.&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Restart the Data Sync service.&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
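&lt;P&gt;The steps above can be sketched in PowerShell. The SID and target path are placeholders, and this assumes the account’s registry hive is already loaded under HKEY_USERS (which is the case while the service is running as that account):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Run elevated. Update TEMP/TMP for a specific account via its HKEY_USERS hive.
$sid = "S-1-5-21-xxxxxxxxx"   # placeholder: the service account's SID
$envKey = "Registry::HKEY_USERS\$sid\Environment"
Set-ItemProperty -Path $envKey -Name "TEMP" -Value "D:\TempUser"
Set-ItemProperty -Path $envKey -Name "TMP"  -Value "D:\TempUser"
# Then restart the Data Sync agent service so the change takes effect&lt;/LI-CODE&gt;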
&lt;H3&gt;Option C — Clean up leftover sync temp files (when sync is NOT running)&lt;/H3&gt;
&lt;P&gt;In some cases, the “disk out of space” condition is caused by &lt;STRONG&gt;leftover sync files&lt;/STRONG&gt; that were not removed (for example, if something had files open during deletion). Microsoft Learn suggests manually deleting sync files from %temp% and cleaning subdirectories &lt;STRONG&gt;only when sync is not in progress&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Validation checklist (after the change)&lt;/H2&gt;
&lt;P&gt;After moving TEMP/TMP and restarting the service, confirm:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;New temp path is being used&lt;/STRONG&gt;&lt;BR /&gt;Initiate sync and check that new sync_*.batch / temp artifacts appear under the new folder.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sufficient free space exists for initialization&lt;/STRONG&gt;&lt;BR /&gt;Especially for large tables, ensure the chosen drive can accommodate temp growth during the first sync. &lt;A href="https://azure.microsoft.com/en-us/blog/sync-sql-data-in-large-scale-using-azure-sql-data-sync/" target="_blank"&gt;&amp;nbsp;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Rerun initial sync&lt;/STRONG&gt;&lt;BR /&gt;Retry the initial sync after making the change.&amp;nbsp;&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Classification&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Symptom type:&lt;/STRONG&gt; Agent side / initialization failure&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Primary root cause:&lt;/STRONG&gt; Insufficient disk space on the &lt;STRONG&gt;TEMP&lt;/STRONG&gt; location used by the Data Sync Agent during initial sync temp-file generation&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Fix type:&lt;/STRONG&gt; Configuration / operational (move TEMP/TMP to a larger drive + restart agent service)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Helpful references&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-data-sync-troubleshoot?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Troubleshoot SQL Data Sync (Microsoft Learn)&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/sync-sql-data-in-large-scale-using-azure-sql-data-sync/" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Sync SQL data in large scale using Azure SQL Data Sync (Microsoft Azure Blog)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-data-sync-agent-overview?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Data Sync Agent for SQL Data Sync (Microsoft Learn)&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 13 Apr 2026 14:31:07 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/fixing-there-is-not-enough-space-on-the-disk-during-azure-data/ba-p/4510866</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-13T14:31:07Z</dc:date>
    </item>
    <item>
      <title>Troubleshooting Azure SQL Data Sync Failure: SQL Error 8106 During Bulk Insert</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-azure-sql-data-sync-failure-sql-error-8106/ba-p/4509665</link>
      <description>&lt;P&gt;Azure SQL Data Sync is widely used to maintain consistency across distributed databases in hub–member topologies. However, synchronization may occasionally fail due to schema mismatches between participating databases — even when everything appears correctly configured at first glance.&lt;/P&gt;
&lt;P&gt;In this post, we’ll walk through a real-world troubleshooting scenario involving a Data Sync failure caused by a schema inconsistency related to an &lt;STRONG&gt;IDENTITY column&lt;/STRONG&gt;, and how it was mitigated.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Sample Error:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;&lt;EM&gt;sync_7726d6cb22124c0f901192c434f49106bd618f8ab16343b2adc03250f8367ff4\3953fb7d-1dba-4656-8150-83153d5d019b.batch. See the inner exception for more details. Inner exception: Failed to execute the command 'BulkInsertCommand' for table 'schema.table_name'; the transaction was rolled back. Ensure that the command syntax is correct. Inner exception: SqlException ID: e19b3677-d67e-4c8e-bc49-13d3df61ad0e, Error Code: -2146232060 - SqlError Number:8106, Message: SQL error with code 8106 For more information, provide tracing ID ‘92e76130-f80a-4372-9a48-ec0ede8b0288’ to customer support."&lt;/EM&gt;&amp;nbsp; &amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;Scenario Overview&lt;/H2&gt;
&lt;P&gt;A synchronization operation began failing for a specific table within an Azure SQL Data Sync group. The failure was observed during the sync process when applying changes using a batch file.&lt;/P&gt;
&lt;P&gt;The error surfaced as part of a failed &lt;STRONG&gt;&lt;EM&gt;BulkInsertCommand&lt;/EM&gt;&lt;/STRONG&gt; execution on a synced table, causing the transaction to roll back.&lt;/P&gt;
&lt;P&gt;Further investigation revealed the following SQL exception:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;SqlError Number: 8106&lt;/STRONG&gt;&lt;BR /&gt;&lt;EM&gt;Table does not have the identity property. Cannot perform SET operation.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Initial Troubleshooting Steps&lt;/H2&gt;
&lt;P&gt;Before identifying the root cause, the following actions were taken:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The affected table was removed from the sync group.&lt;/LI&gt;
&lt;LI&gt;A sync operation was triggered.&lt;/LI&gt;
&lt;LI&gt;The table was re-added to the sync group.&lt;/LI&gt;
&lt;LI&gt;Sync was triggered again.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Despite performing these steps, the issue persisted with the same error.&lt;/P&gt;
&lt;P&gt;This indicated that the failure was not related to sync metadata or temporary configuration inconsistencies.&lt;/P&gt;
&lt;H2&gt;Root Cause Analysis&lt;/H2&gt;
&lt;P&gt;After reviewing the table definitions across the sync topology, it was discovered that:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;The synchronized table had an&amp;nbsp;&lt;STRONG&gt;IDENTITY column defined on one side of the topology (Hub or Member)&lt;/STRONG&gt; but &lt;STRONG&gt;not on the other&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This schema mismatch led to the sync service attempting to apply SET IDENTITY_INSERT operations during the bulk insert phase — which failed on the database where the column lacked the identity property.&lt;/P&gt;
&lt;P&gt;Azure SQL Data Sync relies on consistent schema definitions across all participating databases. Any deviation — particularly involving identity columns — can interrupt data movement operations.&lt;/P&gt;
&lt;H2&gt;Mitigation Approach&lt;/H2&gt;
&lt;P&gt;To resolve the issue, the following corrective steps were applied:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Remove the affected table from the sync group and save the configuration.&lt;/LI&gt;
&lt;LI&gt;Refresh the sync schema.&lt;/LI&gt;
&lt;LI&gt;Recreate the table to include the appropriate IDENTITY property.&lt;/LI&gt;
&lt;LI&gt;Add the corrected table back to the sync group.&lt;/LI&gt;
&lt;LI&gt;Trigger a new sync operation.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;These steps ensured that the table definitions were aligned across all sync participants, allowing the synchronization process to proceed successfully.&lt;/P&gt;
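&lt;P&gt;This kind of mismatch can also be detected up front by comparing identity metadata on both sides of the topology. A minimal sketch, assuming the SqlServer PowerShell module and placeholder server, schema, and table names (authentication parameters are omitted; add -AccessToken or -Credential as appropriate):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Hypothetical check: list identity columns for a table on one sync participant
$query = @"
SELECT c.name AS column_name, c.is_identity
FROM sys.columns c
JOIN sys.tables t ON t.object_id = c.object_id
WHERE t.name = 'table_name' AND SCHEMA_NAME(t.schema_id) = 'schema'
"@
# Authentication parameters omitted for brevity
Invoke-Sqlcmd -ServerInstance "hub-server.database.windows.net" `
 -Database "HubDatabase" `
 -Query $query
# Repeat against the member database and compare the is_identity flags&lt;/LI-CODE&gt;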
&lt;H2&gt;Best Practices to Avoid Similar Issues&lt;/H2&gt;
&lt;P&gt;To prevent identity-related sync failures in Azure SQL Data Sync:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;✅ Ensure table schemas are &lt;STRONG&gt;identical&lt;/STRONG&gt; across all participating databases before onboarding them into a sync group.&lt;/LI&gt;
&lt;LI&gt;✅ Pay special attention to:
&lt;UL&gt;
&lt;LI&gt;IDENTITY properties&lt;/LI&gt;
&lt;LI&gt;Primary keys&lt;/LI&gt;
&lt;LI&gt;Data types&lt;/LI&gt;
&lt;LI&gt;Nullable constraints&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;✅ Always validate schema consistency when:
&lt;UL&gt;
&lt;LI&gt;Adding new tables to a sync group&lt;/LI&gt;
&lt;LI&gt;Modifying existing table definitions&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Final Thoughts&lt;/H2&gt;
&lt;P&gt;Schema mismatches — especially those involving identity columns — are a common but often overlooked cause of Data Sync failures. By ensuring consistent table definitions across your hub and member databases, you can significantly reduce the risk of synchronization errors and maintain reliable data movement across regions.&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2026 15:55:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-azure-sql-data-sync-failure-sql-error-8106/ba-p/4509665</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-08T15:55:41Z</dc:date>
    </item>
    <item>
      <title>Azure SQL Audit Columns missing? Understanding AzureDiagnostics 500-Column Limit and Spill</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-audit-columns-missing-understanding-azurediagnostics/ba-p/4508825</link>
      <description>&lt;H3&gt;👋 Introduction&lt;/H3&gt;
&lt;P&gt;Have you ever written a perfectly valid KQL query against the AzureDiagnostics table — only to find that one or more columns return &lt;STRONG&gt;empty results or don't exist at all&lt;/STRONG&gt;? You double-check the column name, verify the resource filter, and everything looks right — yet the data simply isn't there.&lt;/P&gt;
&lt;P&gt;This is one of the more &lt;STRONG&gt;silent and confusing behaviors&lt;/STRONG&gt; of the AzureDiagnostics table in Azure Monitor Log Analytics, and it doesn't just affect one specific column — &lt;STRONG&gt;it can affect any column&lt;/STRONG&gt;, including commonly queried ones like data_sensitivity_information_s, deadlock_xml_s, query_hash_s, error_message_s, and others.&lt;/P&gt;
&lt;P&gt;In this post, I'll explain exactly why this happens, how to detect it, and how to write resilient KQL queries that always retrieve your data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;🔍 The Scenario: Two Queries, Two Different Outcomes&lt;/H3&gt;
&lt;P&gt;Here's a real-world example from a customer case involving Azure SQL audit logs and data sensitivity classification.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Query 1 — The "Non-Working" Query:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics
| where Category == 'SQLSecurityAuditEvents'
| where ResourceId == '/SUBSCRIPTIONS/&amp;lt;SubscriptionID&amp;gt;/RESOURCEGROUPS/&amp;lt;RG&amp;gt;/PROVIDERS/MICROSOFT.SQL/SERVERS/&amp;lt;ServerName&amp;gt;/DATABASES/&amp;lt;DBName&amp;gt;'
| project
    event_time_t,
    statement_s,
    succeeded_s,
    affected_rows_d,
    server_principal_name_s,
    client_ip_s,
    application_name_s,
    additional_information_s,
    data_sensitivity_information_s
| order by event_time_t desc
| take 100&lt;/LI-CODE&gt;
&lt;P&gt;data_sensitivity_information_s returns &lt;STRONG&gt;empty&lt;/STRONG&gt; — or the column doesn't exist at all in the schema.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;As you can see, the issue is not restricted to a single column: another column, affected_rows_d, also failed to resolve.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Query 2 — The "Working" Query&lt;/STRONG&gt;:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| where ResourceId == "/SUBSCRIPTIONS/&amp;lt;SubscriptionID&amp;gt;/RESOURCEGROUPS/&amp;lt;RG&amp;gt;/PROVIDERS/MICROSOFT.SQL/SERVERS/&amp;lt;ServerName&amp;gt;/DATABASES/&amp;lt;DBName&amp;gt;"
| extend DataSensitivityInfo = tostring(parse_json(AdditionalFields).data_sensitivity_information)
| where isnotempty(DataSensitivityInfo)
| project event_time_t, DataSensitivityInfo
| take 100&lt;/LI-CODE&gt;
&lt;P&gt;This one&amp;nbsp;&lt;STRONG&gt;works perfectly&lt;/STRONG&gt;. The difference? It reads from AdditionalFields instead of directly from the column.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;💡 &lt;STRONG&gt;Important:&lt;/STRONG&gt; This is NOT a bug with data_sensitivity_information_s specifically. This behavior can happen to &lt;STRONG&gt;any column&lt;/STRONG&gt; in the AzureDiagnostics table — and understanding why is key to writing reliable audit queries.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;🧩 Root Cause: The AzureDiagnostics 500-Column Limit — A Default Platform Behavior&lt;/H3&gt;
&lt;P&gt;The &lt;STRONG&gt;AzureDiagnostics&lt;/STRONG&gt; table&amp;nbsp;in Azure Monitor Log Analytics is a &lt;STRONG&gt;shared, multi-resource table&lt;/STRONG&gt;. It collects diagnostic data from every Azure resource type that sends logs to your workspace — Azure SQL, Key Vault, App Service, Storage Accounts, Firewalls, and many more. Each resource type contributes its own set of columns, all landing in this single table.&lt;/P&gt;
&lt;P&gt;Because of this design, the AzureDiagnostics table enforces a &lt;STRONG&gt;hard limit of 500 columns per workspace&lt;/STRONG&gt;. This is a &lt;STRONG&gt;default, platform-wide behavior&lt;/STRONG&gt; that is not specific to any one Azure service or column, and it exists to prevent data loss once the number of active columns exceeds the 500-column limit.&lt;/P&gt;
&lt;P&gt;Here's what happens when that limit is reached:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;th&gt;Workspace Column Count&lt;/th&gt;&lt;th&gt;Behavior&lt;/th&gt;&lt;/tr&gt;&lt;tr&gt;&lt;th&gt;&lt;STRONG&gt;&amp;lt; 500 columns&lt;/STRONG&gt;&lt;/th&gt;&lt;td&gt;New columns are created normally (e.g., data_sensitivity_information_s)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;th&gt;&lt;STRONG&gt;≥ 500 columns&lt;/STRONG&gt;&lt;/th&gt;&lt;td&gt;&lt;STRONG&gt;Any new or overflow column data is silently redirected&lt;/STRONG&gt; into a dynamic JSON property bag called AdditionalFields&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;⚠️&amp;nbsp;&lt;STRONG&gt;This redirection is completely silent.&lt;/STRONG&gt; No error is thrown. No warning appears in your query results. The data is &lt;STRONG&gt;NOT lost&lt;/STRONG&gt; — it's simply stored differently, inside AdditionalFields as a JSON key-value pair.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This means that in a busy Log Analytics workspace shared across many Azure resource types, &lt;STRONG&gt;virtually any column&lt;/STRONG&gt; that exceeds the 500-column threshold will have its data moved to AdditionalFields — whether it's data_sensitivity_information_s, deadlock_xml_s, query_hash_s, error_message_s, or any other field your resource type emits.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;⚠️ &lt;EM&gt;"Because AzureDiagnostics is a single shared table, &lt;STRONG&gt;all Azure resource types&lt;/STRONG&gt; sending logs to your workspace — Azure Firewall, Key Vault, App Service, AKS, and others — contribute their columns to the &lt;STRONG&gt;same 500-column pool&lt;/STRONG&gt;. As Microsoft's documentation notes, this is precisely why AzureDiagnostics is&lt;/EM&gt; &lt;EM&gt;&lt;STRONG&gt;'much more susceptible to exceeding the 500-column limit'&lt;/STRONG&gt;&lt;/EM&gt; &lt;EM&gt;compared to other tables — your SQL columns can spill even if your SQL workload alone would never breach the limit."&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H3&gt;🔎 Step 1: Diagnose — Has Your Workspace Hit the 500-Column Limit?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Run this query in your Log Analytics workspace to check the current column count:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;// Check total column count in AzureDiagnostics
AzureDiagnostics
| getschema
| summarize
    TotalColumns = count(),
    RemainingCapacity = 500 - count(),
    LimitReached = iff(count() &amp;gt;= 500, "⚠️ YES - Data is going to AdditionalFields", "✅ NO - Under the limit")&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Then check whether a specific column you're looking for exists in the schema at all:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;// Replace the your_column_name with any column you're investigating
AzureDiagnostics 
| getschema 
| where ColumnName == "your_column_name"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If this returns&amp;nbsp;&lt;STRONG&gt;no rows&lt;/STRONG&gt;, that column's data has been redirected to AdditionalFields. The same check applies to &lt;STRONG&gt;any column&lt;/STRONG&gt; you suspect is missing.&lt;/P&gt;
&lt;P&gt;Use this query to &lt;STRONG&gt;discover all fields&lt;/STRONG&gt; currently spilled to AdditionalFields for SQL audit events:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics 
| where Category == "SQLSecurityAuditEvents" 
| where isnotempty(AdditionalFields) 
| extend ParsedAF = parse_json(AdditionalFields) 
| extend AF_Keys = bag_keys(ParsedAF) 
| mv-expand AF_Keys to typeof(string) 
| summarize SpilledColumns = make_set(AF_Keys), Count = count() by Category 
| project Category, Count, SpilledColumns&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;✅ Step 2: Fix — Use AdditionalFields to Retrieve Any Spilled Column&lt;/H3&gt;
&lt;P&gt;Once you've confirmed the spill, update your queries to extract the missing field from AdditionalFields using parse_json. This approach works for &lt;STRONG&gt;any column&lt;/STRONG&gt; that has been redirected.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Recommended Pattern — Resilient to Both States:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| extend 
    DataSensitivityInfo = coalesce(data_sensitivity_information_s,   tostring(parse_json(AdditionalFields).data_sensitivity_information)),
    QueryHash           = coalesce(query_hash_s,                     tostring(parse_json(AdditionalFields).query_hash)),
    AdditionalInfo      = coalesce(additional_information_s,         tostring(parse_json(AdditionalFields).additional_information))
| project
    event_time_t,
    statement_s,
    succeeded_s,
    server_principal_name_s,
    client_ip_s,
    application_name_s,
    DataSensitivityInfo,
    QueryHash,
    AdditionalInfo
| order by event_time_t desc
| take 100&lt;/LI-CODE&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;A shorter version:&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics 
| where Category == "SQLSecurityAuditEvents" 
| where ResourceId == "/SUBSCRIPTIONS/&amp;lt;SubscriptionID&amp;gt;/RESOURCEGROUPS/&amp;lt;RG&amp;gt;/PROVIDERS/MICROSOFT.SQL/SERVERS/&amp;lt;ServerName&amp;gt;/DATABASES/&amp;lt;DBName&amp;gt;" 
| extend DataSensitivityInfo = tostring(parse_json(AdditionalFields).data_sensitivity_information) 
| where isnotempty(DataSensitivityInfo) 
| project event_time_t, DataSensitivityInfo 
| take 100&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;💡&amp;nbsp;&lt;STRONG&gt;coalesce()&lt;/STRONG&gt; tries the direct column first. If null, it falls back to AdditionalFields. This makes your query &lt;STRONG&gt;portable across workspaces&lt;/STRONG&gt; — whether at the limit or not.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
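&lt;P&gt;One caveat: if a strongly typed column has never been created in your workspace at all, referencing it directly can fail to resolve (as in the first query above). In that situation, KQL's &lt;STRONG&gt;column_ifexists()&lt;/STRONG&gt; function, which substitutes a default value when the named column is absent from the schema, keeps the query portable. A sketch for one field:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
// column_ifexists() returns "" if the column was never created in this workspace,
// so coalesce() falls through to the AdditionalFields property bag.
| extend DataSensitivityInfo = coalesce(
    column_ifexists("data_sensitivity_information_s", ""),
    tostring(parse_json(AdditionalFields).data_sensitivity_information))
| where isnotempty(DataSensitivityInfo)
| project event_time_t, DataSensitivityInfo
| take 100&lt;/LI-CODE&gt;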
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;📚 References&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/azurediagnostics" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Azure Monitor Logs — AzureDiagnostics table reference&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/sqlsecurityauditevents" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;SQLSecurityAuditEvents table reference&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/auditing-analyze-audit-logs?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Analyze Azure SQL audit logs in Log Analytics&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/platform/resource-logs?tabs=log-analytics#send-to-log-analytics-workspace" target="_blank"&gt;Resource logs in Azure Monitor - Azure Monitor | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Apr 2026 07:11:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-audit-columns-missing-understanding-azurediagnostics/ba-p/4508825</guid>
      <dc:creator>Ashriti_Jamwal</dc:creator>
      <dc:date>2026-04-06T07:11:20Z</dc:date>
    </item>
    <item>
<title>Connect to Azure SQL Database using a custom domain name with Microsoft Entra ID authentication</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/connect-to-azure-sql-database-using-a-custom-domain-name-with/ba-p/4507679</link>
<description>&lt;P&gt;&lt;SPAN data-olk-copy-source="MailCompose"&gt;Many of us might prefer to connect to Azure SQL Server using a custom domain name (like &lt;STRONG&gt;&lt;EM&gt;devsqlserver.mycompany.com&lt;/EM&gt;&lt;/STRONG&gt;) rather than the default fully qualified domain name (&lt;STRONG&gt;&lt;EM&gt;devsqlserver.database.windows.net&lt;/EM&gt;&lt;/STRONG&gt;), often because of application-specific or compliance reasons. This article details how you can accomplish this when logging in with &lt;STRONG&gt;Microsoft Entra ID&lt;/STRONG&gt;&amp;nbsp;(for example, &lt;EM&gt;&lt;U&gt;user@mycompany.com&lt;/U&gt;&lt;/EM&gt;) in an Azure SQL Database environment. Frequently, users encounter errors similar to the one described below during this process.&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG data-olk-copy-source="MailCompose"&gt;Before you start&lt;/STRONG&gt;: If you use &lt;STRONG&gt;SQL authentication&lt;/STRONG&gt; (SQL username/password), the steps are different. Refer to the following article for that scenario:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/blog/azuredbsupport/how-to-use-different-domain-name-to-connect-to-azure-sql-db-server/1217805" target="_blank" rel="noopener"&gt;&lt;U&gt;How to use different domain name to connect to Azure SQL DB Server | Microsoft Community Hub&lt;/U&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;With SQL authentication, you can include the server name in the login (for example, &lt;EM&gt;username@&lt;/EM&gt;&lt;STRONG&gt;&lt;EM&gt;servername&lt;/EM&gt;&lt;/STRONG&gt;). With Microsoft Entra ID authentication, you don’t do that—so your custom DNS name must follow one important rule.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key requirement for Microsoft Entra ID authentication&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In an Azure SQL Database (PaaS) environment, the platform relies on the &lt;STRONG&gt;server name portion &lt;/STRONG&gt;of the Fully Qualified Domain Name (FQDN) to correctly route incoming connection requests to the appropriate logical server.&lt;/P&gt;
&lt;P&gt;When you use a custom DNS name, it is important that the name &lt;STRONG&gt;starts with the exact Azure SQL server name&lt;/STRONG&gt; (the part before &lt;EM&gt;.database.windows.net&lt;/EM&gt;).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why this is required:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Database is a &lt;STRONG&gt;multi-tenant PaaS service&lt;/STRONG&gt;, where multiple logical servers are hosted behind shared infrastructure.&lt;/LI&gt;
&lt;LI&gt;During the connection process (especially with &lt;STRONG&gt;Microsoft Entra ID authentication&lt;/STRONG&gt;), Azure SQL uses the server name extracted from the FQDN to:
&lt;UL&gt;
&lt;LI&gt;Identify the correct logical server&lt;/LI&gt;
&lt;LI&gt;Route the connection internally within the platform&lt;/LI&gt;
&lt;LI&gt;Validate the authentication context&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This behavior aligns with how Azure SQL endpoints are designed and resolved within Microsoft’s managed infrastructure.&lt;/P&gt;
&lt;P&gt;If your custom DNS name doesn’t start with the Azure SQL server name, Azure can’t route the connection to the correct server. Sign-in may fail and you might see error &lt;STRONG&gt;40532&lt;/STRONG&gt; (&lt;EM&gt;as shown above&lt;/EM&gt;). To fix this, change the custom DNS name so it starts with your Azure SQL server name.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Example:&lt;/EM&gt;&amp;nbsp;if your server is &lt;EM&gt;devsqlserver.database.windows.net&lt;/EM&gt;, your custom name must start with &lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;, for example:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;.mycompany.com&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;.contoso.com&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;.mydomain.com&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step-by-step: set up and connect&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL data-editing-info="{&amp;quot;orderedStyleType&amp;quot;:1}"&gt;
&lt;LI&gt;&lt;STRONG&gt;Pick the custom name.&lt;/STRONG&gt;&amp;nbsp;It must start with your server name. Example: use &lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;.mycompany.com&lt;/EM&gt;&amp;nbsp;(not &lt;EM&gt;othername.mycompany.com&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Create DNS records for the custom name.&lt;/STRONG&gt; Create a CNAME or DNS alias to point the custom name to your Azure SQL server endpoint (public) or to the private endpoint IP (private) as per the blog mentioned above.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Check DNS from your computer.&lt;/STRONG&gt;&amp;nbsp;Make sure &lt;EM&gt;devsqlserver.mycompany.com&lt;/EM&gt;&amp;nbsp;resolves to the right address before you try to connect.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Connect with Microsoft Entra ID.&lt;/STRONG&gt;&amp;nbsp;In SSMS/Azure Data Studio, set &lt;EM&gt;Server&lt;/EM&gt;&amp;nbsp;to your custom server name and select a Microsoft Entra ID authentication option (for example, &lt;EM&gt;Universal with MFA&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sign in and connect.&lt;/STRONG&gt;&amp;nbsp;Use your Entra ID (for example, &lt;A href="mailto:user@mycompany.com" target="_blank" rel="noopener"&gt;&lt;U&gt;&lt;EM&gt;user@mycompany.com&lt;/EM&gt;&lt;/U&gt;&lt;/A&gt;).&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;EM&gt;Example:&lt;/EM&gt;&lt;/P&gt;
&lt;img /&gt;
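&lt;P&gt;To verify step 3 from a Windows client before attempting a connection, a quick check along these lines can confirm resolution (the host name below is a placeholder for your own custom DNS name):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# 'devsqlserver.mycompany.com' is a placeholder; substitute your custom DNS name.
# For a public endpoint, the CNAME chain should end at your
# servername.database.windows.net endpoint; for a private endpoint,
# the name should resolve to the private endpoint IP.
Resolve-DnsName -Name "devsqlserver.mycompany.com"&lt;/LI-CODE&gt;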
&lt;P&gt;Also, when you connect to Azure SQL Database using a custom domain name, you might see the following error:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;“The target principal name is incorrect”&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Example&lt;/EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;This happens because Azure SQL’s SSL/TLS certificate is issued for the default server name (for example,&amp;nbsp;&lt;EM&gt;servername.database.windows.net&lt;/EM&gt;), not for your custom DNS name.&lt;/P&gt;
&lt;P&gt;During the secure connection process, the client validates that the server name you are connecting to matches the name in the certificate. Since the custom domain does not match the certificate, this validation fails, resulting in the error.&lt;/P&gt;
&lt;P&gt;This is expected behavior and is part of standard security checks to prevent connecting to an untrusted or impersonated server.&lt;/P&gt;
&lt;P&gt;To proceed with the connection, you can configure the client to trust the server certificate by:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Setting &lt;STRONG&gt;Trust Server Certificate = True&lt;/STRONG&gt; in the client settings, or&lt;/LI&gt;
&lt;LI&gt;Adding TrustServerCertificate=True in the connection string&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This bypasses the strict name validation and allows the connection to succeed. Keep in mind that trusting the server certificate weakens protection against server impersonation, so use it only when the network path to the server is otherwise trusted.&lt;/P&gt;
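&lt;P&gt;For example, in a Microsoft.Data.SqlClient (ADO.NET-style) connection string, where the server, database, and domain names below are placeholders, the setting looks like this:&lt;/P&gt;
&lt;LI-CODE&gt;Server=tcp:devsqlserver.mycompany.com,1433;Initial Catalog=mydatabase;Authentication=Active Directory Interactive;Encrypt=True;TrustServerCertificate=True;&lt;/LI-CODE&gt;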
&lt;P&gt;&lt;EM&gt;Note: &lt;/EM&gt;Please use the latest client drivers (ODBC/JDBC/.NET, etc.). In some old driver versions, the '&lt;EM&gt;TrustServerCertificate&lt;/EM&gt;' setting may not work properly, and you may still face connection issues with the same '&lt;EM&gt;target principal name is incorrect&lt;/EM&gt;' error. So, it is always better to keep drivers updated for smooth connectivity with Azure SQL.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Applies to both public and private endpoints&lt;/STRONG&gt;: This naming requirement and approach work whether you connect over the public endpoint or through a private endpoint for Azure SQL Database scenario, as long as DNS resolution for the custom name is set up correctly for your network.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2026 11:02:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/connect-to-azure-sql-database-using-a-custom-domain-name-with/ba-p/4507679</guid>
      <dc:creator>Sunil-Nair</dc:creator>
      <dc:date>2026-04-03T11:02:36Z</dc:date>
    </item>
    <item>
      <title>Script to Export All Azure SQL Whitelisted Public IPs Within Your Subscription</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/script-to-export-all-azure-sql-whitelisted-public-ips-within/ba-p/4504024</link>
      <description>&lt;P&gt;Recently, I worked on an interesting customer case where they had a requirement to export all the IP addresses that are whitelisted on the Azure SQL Servers within their subscription.&lt;/P&gt;
&lt;P&gt;Below is the script that helped the customer export all server-level (public) firewall rules (whitelisted IP ranges) for every Azure SQL logical server in a subscription into a single CSV file.&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-AzContext -SubscriptionId "sub_ID"
 
# Ensure context exists
$context = Get-AzContext
if (-not $context) {
    throw "No Azure context found. Please run Connect-AzAccount."
}
 
$subId = $context.Subscription.Id
 
# Get servers safely
$servers = Get-AzSqlServer
if (-not $servers) {
    Write-Output "No SQL Servers found in this subscription."
    return
}
 
$results = New-Object System.Collections.Generic.List[object]
 
foreach ($server in $servers) {
 
    # Skip invalid entries (prevents prompt issue)
    if (-not $server.ServerName -or -not $server.ResourceGroupName) {
        Write-Warning "Skipping invalid server object"
        continue
    }
 
    try {
        Write-Output "Processing: $($server.ServerName)"
 
        $firewallRules = Get-AzSqlServerFirewallRule `
            -ResourceGroupName $server.ResourceGroupName `
            -ServerName $server.ServerName `
            -ErrorAction Stop
 
        foreach ($rule in $firewallRules) {
            $results.Add([PSCustomObject]@{
                SubscriptionId   = $subId
                ResourceGroup    = $server.ResourceGroupName
                SqlServerName    = $server.ServerName
                FirewallRuleName = $rule.FirewallRuleName
                StartIP          = $rule.StartIpAddress
                EndIP            = $rule.EndIpAddress
            })
        }
    }
    catch {
        Write-Warning "Failed for server $($server.ServerName): $($_.Exception.Message)"
 
        $results.Add([PSCustomObject]@{
            SubscriptionId   = $subId
            ResourceGroup    = $server.ResourceGroupName
            SqlServerName    = $server.ServerName
            FirewallRuleName = "ERROR_READING_RULES"
            StartIP          = ""
            EndIP            = ""
        })
    }
}
 
# Export
$path = "./AzureSqlServer_PublicFirewallIPs.csv"
$results | Export-Csv -Path $path -NoTypeInformation
 
Write-Output "Export completed: $path"
&lt;/LI-CODE&gt;
&lt;P&gt;Once the script has executed successfully, follow the steps below to download the CSV file.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Click on Download&lt;BR /&gt;&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Provide the path as ./AzureSqlServer_PublicFirewallIPs.csv and initiate the download&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is an overview of what this PowerShell script does.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;It first sets the target subscription and captures the subscription ID.&lt;/LI&gt;
&lt;LI&gt;It retrieves all Azure SQL logical servers in that subscription.&lt;/LI&gt;
&lt;LI&gt;For each server, it fetches the server‑level firewall rules (whitelisted IP addresses and ranges).&lt;/LI&gt;
&lt;LI&gt;Each firewall rule is recorded with details such as subscription ID, resource group, server name, rule name, start IP, and end IP.&lt;/LI&gt;
&lt;LI&gt;If any server fails during retrieval (for example, due to permission issues), the script logs a warning and continues processing the remaining servers.&lt;/LI&gt;
&lt;LI&gt;Finally, all collected data is exported to a CSV file (AzureSqlServer_PublicFirewallIPs.csv) in the Cloud Shell directory.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This allows you to centrally audit and review all whitelisted IPs across Azure SQL Servers in a subscription without stopping execution due to individual server errors.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2026 10:12:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/script-to-export-all-azure-sql-whitelisted-public-ips-within/ba-p/4504024</guid>
      <dc:creator>Anuradha_A</dc:creator>
      <dc:date>2026-04-03T10:12:19Z</dc:date>
    </item>
    <item>
      <title>Understanding and Monitoring Class 2 Transactions in Azure SQL Database</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-and-monitoring-class-2-transactions-in-azure-sql/ba-p/4507671</link>
      <description>&lt;P&gt;During a recent customer engagement, we investigated sustained transaction log growth in &lt;STRONG&gt;Azure SQL Database&lt;/STRONG&gt; without obvious large user transactions. The customer was familiar with PostgreSQL diagnostics and wanted to understand how similar insights can be obtained in Azure SQL Database—especially around &lt;STRONG&gt;Class 2 (system) transactions&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;This post summarizes what we discussed, explains &lt;STRONG&gt;why Azure SQL behaves differently&lt;/STRONG&gt;, and walks through &lt;STRONG&gt;practical DMV‑based monitoring patterns&lt;/STRONG&gt; you can use today.&lt;/P&gt;
&lt;H2&gt;Azure SQL Database vs. PostgreSQL: Diagnostic Model Differences&lt;/H2&gt;
&lt;P&gt;One of the first clarifications we made is that &lt;STRONG&gt;Azure SQL Database does not expose diagnostic settings equivalent to PostgreSQL’s system‑level log diagnostics&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Azure SQL Database is a &lt;STRONG&gt;fully managed PaaS service&lt;/STRONG&gt;, and many internal operations—such as checkpoints, version store cleanup, and background maintenance—are abstracted from direct control. Instead of low‑level engine logs, Azure SQL provides &lt;STRONG&gt;cumulative Dynamic Management Views (DMVs)&lt;/STRONG&gt; that expose the &lt;EM&gt;effects&lt;/EM&gt; of system activity rather than the internal implementation.&lt;/P&gt;
&lt;H2&gt;What Are Class 2 Transactions?&lt;/H2&gt;
&lt;P&gt;In Azure SQL Database, &lt;STRONG&gt;Class 2 transactions&lt;/STRONG&gt; generally refer to &lt;STRONG&gt;system‑generated transactions&lt;/STRONG&gt;, not directly initiated by user workloads. These commonly include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Checkpoint operations&lt;/LI&gt;
&lt;LI&gt;Version store cleanup&lt;/LI&gt;
&lt;LI&gt;Ghost record cleanup&lt;/LI&gt;
&lt;LI&gt;Background metadata maintenance&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Although they are not user‑driven, these transactions still generate &lt;STRONG&gt;transaction log activity&lt;/STRONG&gt;, which can be surprising when log usage grows steadily without large user transactions.&lt;/P&gt;
&lt;H2&gt;Key DMVs to Monitor Class 2 Activity&lt;/H2&gt;
&lt;H3&gt;1. Transaction Log Usage&lt;/H3&gt;
&lt;LI-CODE lang="sql"&gt;SELECT *
FROM sys.dm_db_log_space_usage;&lt;/LI-CODE&gt;
&lt;P&gt;This DMV provides:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Total log size&lt;/LI&gt;
&lt;LI&gt;Used log space&lt;/LI&gt;
&lt;LI&gt;Used log percentage&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If log usage grows steadily &lt;STRONG&gt;without&lt;/STRONG&gt; large user transactions, it is often a signal that &lt;STRONG&gt;background system activity&lt;/STRONG&gt; (Class 2 transactions) is responsible.&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;&lt;STRONG&gt;Checkpoint Activity&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang="sql"&gt;SELECT *
FROM sys.dm_exec_requests
WHERE command = 'CHECKPOINT';&lt;/LI-CODE&gt;
&lt;P&gt;Frequent checkpoints result in:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;More frequent log flushes&lt;/LI&gt;
&lt;LI&gt;Increased system log writes&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In Azure SQL Database, &lt;STRONG&gt;checkpoint frequency is system‑managed&lt;/STRONG&gt; and cannot be tuned through configuration or diagnostic settings.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;&lt;STRONG&gt;Version Store Usage (Common Class 2 Contributor)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang="sql"&gt;SELECT *
FROM sys.dm_tran_version_store_space_usage;&lt;/LI-CODE&gt;
&lt;P&gt;High version store usage often leads to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Background cleanup tasks&lt;/LI&gt;
&lt;LI&gt;Increased system transactions&lt;/LI&gt;
&lt;LI&gt;Additional transaction log generation&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This is especially common in workloads using:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Snapshot Isolation&lt;/LI&gt;
&lt;LI&gt;Read Committed Snapshot Isolation (RCSI)&lt;/LI&gt;
&lt;LI&gt;Long‑running transactions or readers&lt;/LI&gt;
&lt;/UL&gt;
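&lt;P&gt;To check whether long-running snapshot readers are holding the version store open (and driving the resulting cleanup activity), a query along these lines can help:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Longest-running active snapshot transactions; these can delay
-- version store cleanup and inflate background system activity.
SELECT TOP (10)
    transaction_id,
    session_id,
    elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;&lt;/LI-CODE&gt;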
&lt;H2&gt;Automating Monitoring with Azure Elastic Jobs&lt;/H2&gt;
&lt;P&gt;Because these DMVs are cumulative, &lt;STRONG&gt;capturing them over time is key&lt;/STRONG&gt;. During the call, we discussed automating data collection using &lt;STRONG&gt;Azure Elastic Jobs&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Elastic Jobs allow you to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Schedule DMV snapshots&lt;/LI&gt;
&lt;LI&gt;Store historical trends&lt;/LI&gt;
&lt;LI&gt;Correlate spikes with workload patterns&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Microsoft provides full guidance on creating and managing Elastic Jobs using T‑SQL here:&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-jobs-tsql-create-manage?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Create and manage Elastic Jobs using T‑SQL&lt;/A&gt;&lt;/P&gt;
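&lt;P&gt;As a sketch of what such a job step might run (the table name is illustrative), each execution appends a timestamped snapshot of the log-space DMV to a history table that can later be trended:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- One-time setup: history table for log space snapshots (illustrative name)
IF OBJECT_ID(N'dbo.log_space_history') IS NULL
    CREATE TABLE dbo.log_space_history
    (
        collected_at_utc   datetime2(0)   NOT NULL DEFAULT (SYSUTCDATETIME()),
        total_log_size_mb  decimal(18, 2) NOT NULL,
        used_log_space_mb  decimal(18, 2) NOT NULL,
        used_log_space_pct decimal(5, 2)  NOT NULL
    );

-- Job step: append the current snapshot
INSERT INTO dbo.log_space_history
    (total_log_size_mb, used_log_space_mb, used_log_space_pct)
SELECT
    total_log_size_in_bytes / 1048576.0,
    used_log_space_in_bytes / 1048576.0,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;&lt;/LI-CODE&gt;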
&lt;H2&gt;Index Management and Class 2 Impact&lt;/H2&gt;
&lt;P&gt;Index maintenance can indirectly increase Class 2 activity by:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Increasing version store usage&lt;/LI&gt;
&lt;LI&gt;Triggering additional background cleanup&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Instead of manual index tuning, we recommended enabling &lt;STRONG&gt;Query Performance Insight – Index recommendations&lt;/STRONG&gt; in the Azure Portal. This allows Azure SQL Database to automatically:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Suggest index creation&lt;/LI&gt;
&lt;LI&gt;Suggest index removal based on real workload patterns&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Why Checkpoints Cannot Be Tuned&lt;/H2&gt;
&lt;P&gt;A common question is whether checkpoint frequency can be reduced to lower system log activity.&lt;/P&gt;
&lt;P&gt;In Azure SQL Database:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Checkpoints are &lt;STRONG&gt;engine‑managed&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;There is &lt;STRONG&gt;no diagnostic or configuration setting&lt;/STRONG&gt; to control their frequency&lt;/LI&gt;
&lt;LI&gt;This design ensures platform stability and predictable recovery behavior&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As a result, monitoring—not tuning—is the correct approach.&lt;/P&gt;
&lt;H2&gt;Practical Takeaways&lt;/H2&gt;
&lt;P&gt;From this case, the key lessons are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Not all transaction log growth is user‑driven&lt;/LI&gt;
&lt;LI&gt;Class 2 transactions are a normal part of Azure SQL Database&lt;/LI&gt;
&lt;LI&gt;DMVs provide the best visibility into system behavior&lt;/LI&gt;
&lt;LI&gt;Trend‑based monitoring is more valuable than point‑in‑time checks&lt;/LI&gt;
&lt;LI&gt;Automation via Elastic Jobs is essential for long‑term analysis&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Class 2 transactions are often misunderstood because they operate quietly in the background. By using the right DMVs and collecting data over time, you can clearly distinguish &lt;STRONG&gt;expected system behavior&lt;/STRONG&gt; from genuine workload issues.&lt;/P&gt;
&lt;P&gt;If you’re coming from PostgreSQL or on‑prem SQL Server, the key mindset shift is this:&lt;BR /&gt;&lt;STRONG&gt;Azure SQL Database exposes outcomes, not internals—and that’s by design.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 11:51:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-and-monitoring-class-2-transactions-in-azure-sql/ba-p/4507671</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-01T11:51:59Z</dc:date>
    </item>
    <item>
      <title>Understanding Azure SQL Data Sync Firewall Requirements</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-azure-sql-data-sync-firewall-requirements/ba-p/4506841</link>
      <description>&lt;H3&gt;Why IP Whitelisting Is Required and What Customers Should Know&lt;/H3&gt;
&lt;P&gt;Azure SQL Data Sync is commonly used to synchronize data between on‑premises SQL Server databases and Azure SQL Database. While the setup experience is generally straightforward, customers sometimes encounter connectivity or configuration issues that are rooted in &lt;STRONG&gt;network security and firewall behavior&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;This blog explains &lt;STRONG&gt;why Azure SQL Data Sync requires firewall exceptions&lt;/STRONG&gt;, what type of IP addresses may appear in audit logs, and how to approach this topic from a security and documentation standpoint—based on real troubleshooting discussions within the Azure SQL Data Sync ecosystem.&lt;/P&gt;
&lt;H2&gt;The Scenario: Sync Agent Configuration Fails Despite Valid Setup&lt;/H2&gt;
&lt;P&gt;A frequently reported issue occurs when the &lt;STRONG&gt;Azure SQL Data Sync Agent&lt;/STRONG&gt; (installed on an on‑premises server) fails to save its configuration. The error typically indicates that a &lt;EM&gt;valid agent key&lt;/EM&gt; is required—even when:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The agent key was freshly generated from the Azure SQL Data Sync portal&lt;/LI&gt;
&lt;LI&gt;Connection tests succeed&lt;/LI&gt;
&lt;LI&gt;The agent has been reinstalled or the server restarted&lt;/LI&gt;
&lt;LI&gt;New sync groups were created&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Despite these efforts, synchronization does not proceed until a specific &lt;STRONG&gt;public IP address&lt;/STRONG&gt; is allowed through the Azure SQL Database firewall.&lt;/P&gt;
&lt;H2&gt;Why Firewall Rules Matter for Azure SQL Data Sync&lt;/H2&gt;
&lt;P&gt;Azure SQL Database is protected by a &lt;STRONG&gt;server‑level firewall&lt;/STRONG&gt; that blocks all inbound traffic by default. Any external client—including the Data Sync Agent—must be explicitly allowed to connect.&lt;/P&gt;
&lt;P&gt;In Azure SQL Data Sync:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;STRONG&gt;Data Sync Agent&lt;/STRONG&gt; runs on‑premises&lt;/LI&gt;
&lt;LI&gt;It connects outbound over &lt;STRONG&gt;TCP port 1433&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;It uses the &lt;STRONG&gt;public endpoint&lt;/STRONG&gt; of the Azure SQL logical server&lt;/LI&gt;
&lt;LI&gt;The Azure SQL firewall must allow the &lt;STRONG&gt;public IP address&lt;/STRONG&gt; used by the agent&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If this IP is not allowed, the agent cannot complete configuration or perform synchronization operations—even if authentication and permissions are otherwise correct.&lt;/P&gt;
&lt;H2&gt;Identifying the Required IP Address&lt;/H2&gt;
&lt;P&gt;In the referenced discussion, the required IP address was identified by reviewing &lt;STRONG&gt;Azure SQL audit logs&lt;/STRONG&gt;, which revealed connection attempts being blocked at the firewall layer. Once this IP address was added to the Azure SQL server firewall rules, synchronization completed successfully.&lt;/P&gt;
&lt;P&gt;This highlights an important point:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Audit logs can be a reliable way to identify which IP address must be whitelisted when Data Sync connectivity fails.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
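&lt;P&gt;Once the blocked public IP has been identified, it can be allowed with a server‑level firewall rule. The following is a sketch using Az PowerShell; the resource group, server, rule name, and IP address are all placeholders:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Allow the Data Sync Agent's public IP through the Azure SQL server firewall.
# All names and the IP address below are placeholders.
New-AzSqlServerFirewallRule `
    -ResourceGroupName "RG01" `
    -ServerName "server01" `
    -FirewallRuleName "AllowDataSyncAgent" `
    -StartIpAddress "203.0.113.10" `
    -EndIpAddress "203.0.113.10"&lt;/LI-CODE&gt;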
&lt;H2&gt;Is This IP Address Owned by Microsoft? Can It Change?&lt;/H2&gt;
&lt;P&gt;A natural follow‑up question is whether the observed IP address is Microsoft‑owned, and whether it can change.&lt;/P&gt;
&lt;P&gt;From the discussion:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Data Sync relies on &lt;STRONG&gt;Microsoft‑managed service infrastructure&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Some outbound connectivity may originate from &lt;STRONG&gt;Azure service IP ranges&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Microsoft publishes official IP ranges and service tags for transparency&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;However, documentation does &lt;STRONG&gt;not&lt;/STRONG&gt; guarantee that a single static IP will always be used. Customers should therefore treat firewall configuration as a &lt;STRONG&gt;network security requirement&lt;/STRONG&gt;, not a one‑time exception.&lt;/P&gt;
&lt;H2&gt;Related Microsoft Resources&lt;/H2&gt;
&lt;P&gt;While Azure SQL Data Sync documentation focuses on setup and troubleshooting, firewall requirements are often implicit rather than explicitly called out.&lt;/P&gt;
&lt;P&gt;The following Microsoft resources were referenced in the discussion to help customers understand Azure service IP ownership and ranges:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/security/gateway-ip-addresses" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Gateway IP addresses – Azure Synapse Analytics&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.microsoft.com/en-us/download/details.aspx?id=56519" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Download Azure IP Ranges and Service Tags – Public Cloud&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These resources can help security teams validate Microsoft‑owned IPs and plan firewall policies accordingly.&lt;/P&gt;
&lt;H2&gt;Key Takeaways for Customers&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;✅ Azure SQL Data Sync &lt;STRONG&gt;requires firewall access&lt;/STRONG&gt; to Azure SQL Database&lt;/LI&gt;
&lt;LI&gt;✅ The &lt;STRONG&gt;public IP used by the Data Sync Agent&lt;/STRONG&gt; must be explicitly allowed&lt;/LI&gt;
&lt;LI&gt;✅ Audit logs are useful for identifying blocked IPs&lt;/LI&gt;
&lt;LI&gt;✅ IP addresses may belong to Microsoft infrastructure and &lt;STRONG&gt;can change over time&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;✅ Firewall configuration is a &lt;STRONG&gt;security prerequisite&lt;/STRONG&gt;, not an optional step&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Closing Thoughts&lt;/H2&gt;
&lt;P&gt;Azure SQL Data Sync operates securely by design, leveraging Azure SQL Database firewall protections. While this can introduce configuration challenges, understanding the &lt;STRONG&gt;network flow and firewall requirements&lt;/STRONG&gt; can significantly reduce setup friction and troubleshooting time.&lt;/P&gt;
&lt;P&gt;If you're implementing Azure SQL Data Sync in a locked‑down network environment, we recommend involving your &lt;STRONG&gt;network and security teams early&lt;/STRONG&gt; and validating firewall rules as part of the initial deployment checklist.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 10:26:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-azure-sql-data-sync-firewall-requirements/ba-p/4506841</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-03-30T10:26:52Z</dc:date>
    </item>
    <item>
      <title>Troubleshooting Azure SQL Data Sync Groups Stuck in Progressing State</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-azure-sql-data-sync-groups-stuck-in-progressing/ba-p/4506820</link>
      <description>&lt;P&gt;Azure SQL Data Sync is commonly used to synchronize data across Azure SQL Databases and on‑premises SQL Server environments. While the service works well in many scenarios, customers may occasionally encounter a situation where a&amp;nbsp;&lt;STRONG&gt;Sync Group remains stuck in a “Progressing” state&lt;/STRONG&gt; and cannot be started, stopped, or refreshed.&lt;/P&gt;
&lt;P&gt;This blog walks through a &lt;STRONG&gt;real-world troubleshooting scenario&lt;/STRONG&gt;, highlights the &lt;STRONG&gt;root cause&lt;/STRONG&gt;, and outlines &lt;STRONG&gt;practical remediation steps&lt;/STRONG&gt; based on actual support investigation and collaboration.&lt;/P&gt;
&lt;H2&gt;Problem Overview&lt;/H2&gt;
&lt;P&gt;In this scenario, the customer reported that:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;STRONG&gt;Sync Group was stuck in “Progressing”&lt;/STRONG&gt; for multiple days&lt;/LI&gt;
&lt;LI&gt;Sync operations could not be started or stopped&lt;/LI&gt;
&lt;LI&gt;Tables could not be refreshed or reconfigured&lt;/LI&gt;
&lt;LI&gt;Azure Activity Logs showed operations as &lt;EM&gt;Succeeded&lt;/EM&gt;, yet sync never progressed&lt;/LI&gt;
&lt;LI&gt;Our backend telemetry showed the Sync Group as &lt;STRONG&gt;Active&lt;/STRONG&gt;, while hub and member databases were in &lt;STRONG&gt;Reprovisioning&lt;/STRONG&gt; state&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The &lt;STRONG&gt;last successful sync occurred on XX day&lt;/STRONG&gt;, after which the sync pipeline stopped making progress.&lt;/P&gt;
&lt;H2&gt;Initial Investigation Findings&lt;/H2&gt;
&lt;P&gt;During the investigation, several key observations were made:&lt;/P&gt;
&lt;H3&gt;1. High DATA IO Utilization&lt;/H3&gt;
&lt;P&gt;Telemetry and backend checks revealed that &lt;STRONG&gt;DATA IO utilization was pegged at 100%&lt;/STRONG&gt; on one of the sync member databases starting XX day.&lt;/P&gt;
&lt;P&gt;Despite no noticeable change in application workload, the database was under sustained IO pressure, which directly impacted Data Sync operations.&lt;/P&gt;
&lt;H3&gt;2. Deadlocks During Sync Processing&lt;/H3&gt;
&lt;P&gt;Our backend telemetry showed repeated deadlock errors:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;These deadlocks were observed for multiple Sync Member IDs beginning on the same day the IO saturation started.&lt;/P&gt;
&lt;P&gt;This aligned with the hypothesis that&amp;nbsp;&lt;STRONG&gt;resource contention&lt;/STRONG&gt;, not a Data Sync service failure, was the underlying issue.&lt;/P&gt;
&lt;H3&gt;3. Metadata Database Was Healthy&lt;/H3&gt;
&lt;P&gt;The Sync metadata database was running on a &lt;STRONG&gt;serverless Azure SQL Database (1 vCore)&lt;/STRONG&gt; and showed healthy resource usage, ruling it out as a bottleneck.&lt;/P&gt;
&lt;H2&gt;Recommended Troubleshooting Steps&lt;/H2&gt;
&lt;P&gt;Based on the findings, the following steps were recommended and validated:&lt;/P&gt;
&lt;H3&gt;✅ Step 1: Address Database Resource Constraints First&lt;/H3&gt;
&lt;P&gt;Before attempting to recreate or reset the Sync Group, the focus was placed on resolving &lt;STRONG&gt;DATA IO saturation&lt;/STRONG&gt; on the affected database.&lt;/P&gt;
&lt;P&gt;Actions included:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Scaling up the database (DTUs / vCores)&lt;/LI&gt;
&lt;LI&gt;Monitoring IO utilization after scaling&lt;/LI&gt;
&lt;LI&gt;Ensuring sufficient headroom for sync operations&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This was identified as the &lt;STRONG&gt;primary remediation step&lt;/STRONG&gt;.&lt;/P&gt;
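&lt;P&gt;As a sketch, scaling up a vCore‑based database can be done with &lt;STRONG&gt;Set-AzSqlDatabase&lt;/STRONG&gt;. The names, edition, and sizes below are placeholders, not the values from this case:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Scale up the affected sync member database to relieve DATA IO pressure.
# Resource names, edition, and vCore count are placeholders.
Set-AzSqlDatabase `
    -ResourceGroupName "RG01" `
    -ServerName "server01" `
    -DatabaseName "member01" `
    -Edition "GeneralPurpose" `
    -ComputeGeneration "Gen5" `
    -VCore 8&lt;/LI-CODE&gt;
&lt;P&gt;After scaling, IO utilization should be monitored for a while before concluding whether additional headroom is needed.&lt;/P&gt;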
&lt;H3&gt;✅ Step 2: Use the Azure SQL Data Sync Health Checker&lt;/H3&gt;
&lt;P&gt;The &lt;STRONG&gt;Azure SQL Data Sync Health Checker&lt;/STRONG&gt; was recommended to validate:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sync metadata integrity&lt;/LI&gt;
&lt;LI&gt;Table-level configuration issues&lt;/LI&gt;
&lt;LI&gt;Agent and connectivity status&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;GitHub tool: &lt;A href="https://github.com/microsoft/AzureSQLDataSyncHealthChecker" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;AzureSQLDataSyncHealthChecker&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;✅ Step 3: Validate Sync Group and Agent State via PowerShell&lt;/H3&gt;
&lt;P&gt;PowerShell was used to confirm:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sync Group state&lt;/LI&gt;
&lt;LI&gt;Last successful sync time&lt;/LI&gt;
&lt;LI&gt;On‑premises Sync Agent connectivity&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Example commands used:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Get-AzureRmSqlSyncGroup `
-ResourceGroupName "ResourceGroup01" `
-ServerName "Server01" `
-DatabaseName "Database01" | Format-List&lt;/LI-CODE&gt;&lt;LI-CODE lang="powershell"&gt;Get-AzureRmSqlSyncAgent `
-ResourceGroupName "ResourceGroup01" `
-ServerName "Server01" |
Select ResourceGroupName, SyncState, LastSyncTime&lt;/LI-CODE&gt;
&lt;H2&gt;Resolution&lt;/H2&gt;
&lt;P&gt;After the customer &lt;STRONG&gt;increased the database size&lt;/STRONG&gt;, DATA IO utilization dropped, sync operations resumed normally, and the customer confirmed that the issue was resolved.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 09:59:23 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-azure-sql-data-sync-groups-stuck-in-progressing/ba-p/4506820</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-03-30T09:59:23Z</dc:date>
    </item>
    <item>
      <title>Why Long-Term Retention (LTR) Backups Don’t Attach After a PITR Restore in Azure SQL Database</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/why-long-term-retention-ltr-backups-don-t-attach-after-a-pitr/ba-p/4506815</link>
      <description>&lt;H2&gt;Summary&lt;/H2&gt;
&lt;P&gt;Customers sometimes expect that after performing a &lt;STRONG&gt;Point‑in‑Time Restore (PITR)&lt;/STRONG&gt; and renaming the restored database back to its original name, &lt;STRONG&gt;existing Long‑Term Retention (LTR) backups&lt;/STRONG&gt; will automatically appear and continue from where they left off.&lt;/P&gt;
&lt;P&gt;This behavior may have worked in older or legacy environments, but in modern Azure SQL Database deployments—especially across &lt;STRONG&gt;new servers or subscriptions&lt;/STRONG&gt;—this expectation can lead to confusion.&lt;/P&gt;
&lt;P&gt;This article explains &lt;STRONG&gt;why LTR backups do not attach to restored databases&lt;/STRONG&gt;, even if the database name is reused, and what customers should expect instead.&lt;/P&gt;
&lt;H2&gt;The Scenario&lt;/H2&gt;
&lt;P&gt;The discussion originated from a common migration pattern:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A customer has an Azure SQL Database with &lt;STRONG&gt;LTR policies configured&lt;/STRONG&gt; (for example, monthly backups retained for 10 years).&lt;/LI&gt;
&lt;LI&gt;The customer performs a &lt;STRONG&gt;Point‑in‑Time Restore (PITR)&lt;/STRONG&gt; of that database.&lt;/LI&gt;
&lt;LI&gt;After the restore, the database is &lt;STRONG&gt;renamed to match the original database name&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;The customer expects the &lt;STRONG&gt;existing LTR backups&lt;/STRONG&gt; to appear under the restored database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In legacy environments, this behavior appeared to work. However, in newer Azure SQL Database deployments, the LTR backups are &lt;STRONG&gt;not visible&lt;/STRONG&gt; after the restore and rename process.&lt;/P&gt;
&lt;H2&gt;Key Technical Detail: LTR Is Not Based on Database Name&lt;/H2&gt;
&lt;P&gt;The most important concept to understand is this:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;LTR backups are associated with the database’s logical database ID—not the database name.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Each Azure SQL Database is assigned a&amp;nbsp;&lt;STRONG&gt;unique logical database ID&lt;/STRONG&gt; at creation time. When a PITR restore is performed:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A &lt;STRONG&gt;new database is created&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;It receives a &lt;STRONG&gt;new logical database ID&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Even if you rename the database to match the original name, the &lt;STRONG&gt;logical ID remains different&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As a result, the restored database is treated as a &lt;STRONG&gt;completely new database&lt;/STRONG&gt; from an LTR perspective, and it does &lt;STRONG&gt;not inherit the historical LTR backup chain&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Why Renaming the Database Does Not Help&lt;/H2&gt;
&lt;P&gt;Renaming a database only changes its &lt;STRONG&gt;display name&lt;/STRONG&gt;. It does not change:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The logical database ID&lt;/LI&gt;
&lt;LI&gt;The internal association used by the LTR system&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Because LTR configuration and backup visibility are tied to the logical database ID, renaming alone cannot reattach historical LTR backups.&lt;/P&gt;
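&lt;P&gt;This can be observed directly: &lt;STRONG&gt;Get-AzSqlDatabase&lt;/STRONG&gt; exposes the logical ID as the &lt;STRONG&gt;DatabaseId&lt;/STRONG&gt; property. The following illustrative sketch uses placeholder names:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Compare logical database IDs; all names below are placeholders.
$original = Get-AzSqlDatabase -ResourceGroupName "RG01" -ServerName "server01" -DatabaseName "db01"
$restored = Get-AzSqlDatabase -ResourceGroupName "RG01" -ServerName "server01" -DatabaseName "db01_restored"

$original.DatabaseId   # GUID assigned when the original database was created
$restored.DatabaseId   # a different GUID; it stays different even after renaming to "db01"&lt;/LI-CODE&gt;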
&lt;H2&gt;Subscription Boundaries Matter&lt;/H2&gt;
&lt;P&gt;Another important clarification raised in the discussion:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;LTR backups are scoped to the subscription where the database was created&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;While you can restore LTR backups to a different server &lt;STRONG&gt;within the same subscription&lt;/STRONG&gt;, you &lt;STRONG&gt;cannot carry historical LTR backups across subscriptions&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If a customer migrates to a &lt;STRONG&gt;new subscription&lt;/STRONG&gt;, the historical LTR chain from the old subscription &lt;STRONG&gt;cannot be reused or reattached&lt;/STRONG&gt;. Only &lt;STRONG&gt;new LTR backups created after the move&lt;/STRONG&gt; will exist in the new subscription.&lt;/P&gt;
&lt;H2&gt;What Customers Will Observe&lt;/H2&gt;
&lt;P&gt;After a PITR restore and rename:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;✅ The database is successfully restored&lt;/LI&gt;
&lt;LI&gt;✅ LTR policies can be configured again&lt;/LI&gt;
&lt;LI&gt;❌ Historical LTR backups from the original database are &lt;STRONG&gt;not visible&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;❌ The restored database does &lt;STRONG&gt;not inherit old LTR backups&lt;/STRONG&gt;, even if the name matches&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This is &lt;STRONG&gt;expected behavior&lt;/STRONG&gt; and aligns with the current Azure SQL Database architecture.&lt;/P&gt;
&lt;H2&gt;How to Validate LTR Backups Correctly&lt;/H2&gt;
&lt;P&gt;To avoid confusion caused by portal caching or UI expectations, customers can list LTR backups programmatically using PowerShell or Azure CLI, as documented in Microsoft Learn:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure SQL Database: Manage long-term backup retention&lt;/STRONG&gt;&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/long-term-backup-retention-configure?view=azuresql&amp;amp;tabs=powershell" target="_blank"&gt;Azure SQL Database: Manage long-term backup retention - Azure SQL Database | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This confirms whether LTR backups exist for a specific logical database ID.&lt;/P&gt;
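&lt;P&gt;As a minimal sketch of the PowerShell route, &lt;STRONG&gt;Get-AzSqlDatabaseLongTermRetentionBackup&lt;/STRONG&gt; lists the LTR backups the service actually holds. The location and names below are placeholders:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# List LTR backups for a database; location and names are placeholders.
# Backups are keyed by the logical database, not the display name.
Get-AzSqlDatabaseLongTermRetentionBackup `
    -Location "westeurope" `
    -ServerName "server01" `
    -DatabaseName "db01" |
    Select-Object DatabaseName, BackupTime, BackupExpirationTime&lt;/LI-CODE&gt;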
&lt;H2&gt;Best Practices and Recommendations&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Do not rely on database renaming&lt;/STRONG&gt; to preserve LTR history.&lt;/LI&gt;
&lt;LI&gt;Treat any PITR restore as a &lt;STRONG&gt;new database&lt;/STRONG&gt; from an LTR perspective.&lt;/LI&gt;
&lt;LI&gt;If historical LTR backups must remain accessible:
&lt;UL&gt;
&lt;LI&gt;Keep the original database intact&lt;/LI&gt;
&lt;LI&gt;Restore LTR backups directly from the original database when needed&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Plan migrations carefully, especially when moving across subscriptions, as &lt;STRONG&gt;LTR history cannot be migrated&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Final Thoughts&lt;/H2&gt;
&lt;P&gt;LTR backups are a powerful compliance and recovery feature in Azure SQL Database, but they are intentionally designed to be &lt;STRONG&gt;immutable and identity‑based&lt;/STRONG&gt;, not name‑based.&lt;/P&gt;
&lt;P&gt;Understanding that &lt;STRONG&gt;logical database ID—not database name—controls LTR association&lt;/STRONG&gt; helps set correct expectations and avoids surprises during restores or migrations.&lt;/P&gt;
&lt;H2&gt;Frequently Asked Questions (FAQ)&lt;/H2&gt;
&lt;H3&gt;&lt;STRONG&gt;Q1: Why don’t my existing LTR backups appear after I restore a database using PITR?&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Because a Point‑in‑Time Restore (PITR) creates a &lt;STRONG&gt;new database with a new logical database ID&lt;/STRONG&gt;. Long‑Term Retention (LTR) backups are associated with the database’s logical ID—not its name—so the restored database does not inherit the historical LTR backup chain.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Q2: If I rename the restored database to the original name, shouldn’t the LTR backups reappear?&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;No. Renaming a database only changes its &lt;STRONG&gt;display name&lt;/STRONG&gt;. It does not change the &lt;STRONG&gt;logical database ID&lt;/STRONG&gt;, which is what LTR uses to associate backups. As a result, renaming does not reattach existing LTR backups.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Q3: This used to work in our legacy environment—why is it different now?&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;In older environments, the behavior may have appeared to work due to differences in platform implementation. In current Azure SQL Database architecture, &lt;STRONG&gt;LTR association is strictly identity‑based&lt;/STRONG&gt;, which ensures immutability, compliance, and predictable backup behavior.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Q4: Can I attach historical LTR backups to a restored database manually?&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;No. LTR backups are immutable and cannot be reattached or reassigned to a different logical database ID. This behavior is by design.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Q5: What happens if I move my database to a new subscription?&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;LTR backups are &lt;STRONG&gt;scoped to the subscription where the database was created&lt;/STRONG&gt;. If you migrate to a new subscription:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Historical LTR backups from the old subscription &lt;STRONG&gt;cannot be carried over&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Only &lt;STRONG&gt;new LTR backups created after the move&lt;/STRONG&gt; will exist in the new subscription&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;STRONG&gt;Q6: Can I still restore from my old LTR backups?&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Yes. As long as the original database (or its logical identity) still exists in the original subscription, you can restore directly from those LTR backups—even if a newer database with the same name exists elsewhere.&lt;/P&gt;
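&lt;P&gt;A hedged sketch of such a restore in Az PowerShell, using the backup's ResourceId (all names and the location are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Pick the most recent LTR backup of the original database and restore it
# to a new database. Names and location below are placeholders.
$ltr = Get-AzSqlDatabaseLongTermRetentionBackup -Location "westeurope" -ServerName "server01" -DatabaseName "db01" |
    Sort-Object BackupTime -Descending |
    Select-Object -First 1

Restore-AzSqlDatabase -FromLongTermRetentionBackup `
    -ResourceId $ltr.ResourceId `
    -ResourceGroupName "RG01" `
    -ServerName "server01" `
    -TargetDatabaseName "db01_from_ltr"&lt;/LI-CODE&gt;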
&lt;H3&gt;&lt;STRONG&gt;Q7: How can I verify which LTR backups actually exist?&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;The most reliable way is to list LTR backups programmatically using &lt;STRONG&gt;Azure PowerShell or Azure CLI&lt;/STRONG&gt;, which queries backups by logical database ID rather than relying solely on portal views.&lt;BR /&gt;Refer to the official documentation:&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/long-term-backup-retention-configure?view=azuresql&amp;amp;tabs=powershell" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Azure SQL Database – Manage long‑term backup retention&amp;nbsp;&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Q8: What is the recommended approach if we need long‑term recoverability after PITR?&lt;/STRONG&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Treat every PITR restore as a &lt;STRONG&gt;new database&lt;/STRONG&gt; from an LTR perspective&lt;/LI&gt;
&lt;LI&gt;Keep the original database intact if historical LTR backups must remain accessible&lt;/LI&gt;
&lt;LI&gt;Plan subscription migrations carefully, as LTR history cannot be migrated&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 30 Mar 2026 09:46:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/why-long-term-retention-ltr-backups-don-t-attach-after-a-pitr/ba-p/4506815</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-03-30T09:46:25Z</dc:date>
    </item>
    <item>
      <title>Azure SQL Hyperscale: Understanding PITR Retention vs Azure Portal Restore UI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-hyperscale-understanding-pitr-retention-vs-azure/ba-p/4505849</link>
      <description>&lt;H2&gt;Overview&lt;/H2&gt;
&lt;P&gt;Customers using &lt;STRONG&gt;Azure SQL Database – Hyperscale&lt;/STRONG&gt; may sometimes notice a discrepancy between the &lt;STRONG&gt;configured Point-in-Time Restore (PITR) retention period&lt;/STRONG&gt; and what the &lt;STRONG&gt;Azure Portal displays&lt;/STRONG&gt; as available restore points.&lt;/P&gt;
&lt;P&gt;In some cases:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;PITR retention is configured (for example, 7 days),&lt;/LI&gt;
&lt;LI&gt;Yet the Azure Portal only shows restore points going back a shorter period (for example, 1–2 days),&lt;/LI&gt;
&lt;LI&gt;And the restore UI may allow selecting dates &lt;STRONG&gt;earlier than the configured retention window&lt;/STRONG&gt; without immediately showing an error.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This post explains &lt;STRONG&gt;why this happens&lt;/STRONG&gt;, &lt;STRONG&gt;how to validate backup health&lt;/STRONG&gt;, and &lt;STRONG&gt;what actions to take&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Key Observation&lt;/H2&gt;
&lt;P&gt;From investigation and internal validation, this behavior is &lt;STRONG&gt;not indicative of backup data loss&lt;/STRONG&gt;. Instead, it is related to &lt;STRONG&gt;Azure Portal UI behavior&lt;/STRONG&gt;, particularly for &lt;STRONG&gt;Hyperscale databases&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;The backups themselves continue to exist and are managed correctly by the service.&lt;/P&gt;
&lt;H2&gt;Important Distinction: Portal UI vs Actual Backup State&lt;/H2&gt;
&lt;H3&gt;What the Azure Portal Shows&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;STRONG&gt;restore blade&lt;/STRONG&gt; may show fewer restore points than expected.&lt;/LI&gt;
&lt;LI&gt;The date picker may allow selecting dates &lt;STRONG&gt;outside the PITR retention window&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;No immediate validation error may appear in the UI.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;What Actually Happens&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Backup retention is &lt;STRONG&gt;enforced at the service layer&lt;/STRONG&gt;, not the portal.&lt;/LI&gt;
&lt;LI&gt;If a restore is attempted outside the valid PITR window, the operation will &lt;STRONG&gt;fail during execution&lt;/STRONG&gt;, even if the UI allows selection.&lt;/LI&gt;
&lt;LI&gt;Hyperscale backup metadata is handled differently than General Purpose or Business Critical tiers.&lt;/LI&gt;
&lt;/UL&gt;
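&lt;P&gt;Because enforcement happens at the service layer, a requested restore point can be checked against the service‑reported boundary before starting the operation. The following is a sketch with placeholder names:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Validate the requested point-in-time against the service-enforced PITR boundary
# before attempting the restore. All names are placeholders.
$db = Get-AzSqlDatabase -ResourceGroupName "RG01" -ServerName "server01" -DatabaseName "db01"
$pointInTime = (Get-Date).AddDays(-5)

if ($pointInTime -lt $db.EarliestRestoreDate) {
    Write-Warning "Requested point $pointInTime is earlier than $($db.EarliestRestoreDate); the restore would fail."
} else {
    Restore-AzSqlDatabase -FromPointInTimeBackup -PointInTime $pointInTime `
        -ResourceGroupName "RG01" -ServerName "server01" `
        -ResourceId $db.ResourceId -TargetDatabaseName "db01_restored"
}&lt;/LI-CODE&gt;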
&lt;H2&gt;Why This Happens with Hyperscale&lt;/H2&gt;
&lt;P&gt;There are a few important technical reasons:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Hyperscale backup architecture differs&lt;/STRONG&gt;&lt;BR /&gt;Hyperscale uses a distributed storage and backup model optimized for scale and fast restore, which affects how metadata is surfaced.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Some DMVs are not supported&lt;/STRONG&gt;&lt;BR /&gt;Views like sys.dm_database_backups, commonly used for backup visibility, are &lt;STRONG&gt;not supported for Hyperscale&lt;/STRONG&gt; databases.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Portal relies on metadata projections&lt;/STRONG&gt;&lt;BR /&gt;The portal restore experience depends on backend projections that may lag or behave differently for Hyperscale, leading to UI inconsistencies.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;How to Validate Backup Health (Recommended)&lt;/H2&gt;
&lt;P&gt;Instead of relying solely on the Azure Portal UI, use &lt;STRONG&gt;service-backed validation methods&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H3&gt;Option 1: PowerShell – Earliest Restore Point&lt;/H3&gt;
&lt;P&gt;You can confirm the earliest available restore point directly from the service:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Set your variables
$resourceGroupName = "RG-xxx-xxx-1"
$serverName = "sql-xxx-xxx-01"
$databaseName = "database_Prod"

# Get earliest restore point
$db = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName
$earliestRestore = $db.EarliestRestoreDate

Write-Host "Earliest Restore Point: $earliestRestore"
Write-Host "Days Available: $([math]::Round(((Get-Date) - $earliestRestore).TotalDays, 1)) days"&lt;/LI-CODE&gt;
&lt;P&gt;This reflects the&amp;nbsp;&lt;STRONG&gt;true PITR boundary&lt;/STRONG&gt; enforced by Azure SQL.&lt;/P&gt;
&lt;H3&gt;Option 2: Internal Telemetry / Backup Events (Engineering Validation)&lt;/H3&gt;
&lt;P&gt;Internal monitoring confirms:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Continuous backup events are present.&lt;/LI&gt;
&lt;LI&gt;Coverage aligns with configured PITR retention.&lt;/LI&gt;
&lt;LI&gt;Backup health remains &lt;STRONG&gt;✅ Healthy&lt;/STRONG&gt; even when the portal UI appears inconsistent.&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Key takeaway&lt;/STRONG&gt;: Backup data is intact and retention is honored.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Is There Any Risk of Data Loss?&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;No.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;There is &lt;STRONG&gt;no evidence of backup loss&lt;/STRONG&gt; or retention policy violation.&lt;BR /&gt;This is a &lt;STRONG&gt;visual/UX issue&lt;/STRONG&gt;, not a data protection issue.&lt;/P&gt;
&lt;H2&gt;Recommended Actions&lt;/H2&gt;
&lt;H3&gt;For Customers&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;✅ Trust the configured PITR retention, not just the portal display.&lt;/LI&gt;
&lt;LI&gt;✅ Use PowerShell or Azure CLI to validate restore boundaries.&lt;/LI&gt;
&lt;LI&gt;❌ Do not assume backup loss based on portal UI alone.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;For Support / Engineering&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Capture a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-portal/capture-browser-trace" target="_blank"&gt;&lt;STRONG&gt;browser network trace&lt;/STRONG&gt;&lt;/A&gt; when encountering UI inconsistencies.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Raise an incident with the &lt;STRONG&gt;Azure Portal team&lt;/STRONG&gt; for investigation and fix.&lt;/LI&gt;
&lt;LI&gt;Reference Hyperscale-specific behavior during troubleshooting.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Summary&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;th&gt;Topic&lt;/th&gt;&lt;th&gt;Status&lt;/th&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;PITR retention enforcement&lt;/td&gt;&lt;td&gt;✅ Correct&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Backup data integrity&lt;/td&gt;&lt;td&gt;✅ Safe&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Azure Portal restore UI&lt;/td&gt;&lt;td&gt;⚠️ May be misleading&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Hyperscale backup visibility&lt;/td&gt;&lt;td&gt;✅ Validate via service tools&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;Final Thoughts&lt;/H2&gt;
&lt;P&gt;Azure SQL Hyperscale continues to provide &lt;STRONG&gt;robust, reliable backup and restore capabilities&lt;/STRONG&gt;, even when the Azure Portal UI does not fully reflect the underlying state.&lt;/P&gt;
&lt;P&gt;When in doubt:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Validate via service APIs&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Rely on enforcement logic, not UI hints&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Escalate portal inconsistencies appropriately&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 26 Mar 2026 09:11:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-hyperscale-understanding-pitr-retention-vs-azure/ba-p/4505849</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-03-26T09:11:21Z</dc:date>
    </item>
    <item>
      <title>Hyperscale Azure SQL database shrink process- Tips &amp; Tricks</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/hyperscale-azure-sql-database-shrink-process-tips-tricks/ba-p/4505338</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="none"&gt;By&amp;nbsp;Tanayankar&amp;nbsp;Chakraborty &amp;amp; Dimitri Furman&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Issue&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We recently worked on a customer case involving challenges with shrinking an&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Azure SQL Database (Hyperscale)&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. The primary concern was that the shrink operation was progressing slowly and was not reclaiming storage space as expected. In some scenarios, customers also&amp;nbsp;encountered&amp;nbsp;error messages during the shrink operation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Observed Error&lt;/SPAN&gt;&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The customer noticed this error after executing the shrink command:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-8"&gt;SQL error number: 1222. Error message: Lock request time out period exceeded.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-8"&gt;Lock request time out period exceeded.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-8"&gt;Lock request time out period exceeded.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-8"&gt;Lock request time out period exceeded.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In some cases, the operation was aborted with the following message:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-8"&gt;The shrink operation was aborted because a page to be moved by shrink is in use by an active transaction on the primary replica or on one or more secondary replicas.&amp;nbsp;Retry&amp;nbsp;shrink&amp;nbsp;later.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Recommendations and Mitigation Steps&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The following best practices are recommended when performing shrink operations on large&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure SQL Database Hyperscale&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;databases:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Assess Used vs. Allocated Space&lt;/SPAN&gt;&lt;/STRONG&gt; - &amp;nbsp;&lt;SPAN data-contrast="auto"&gt;Before&amp;nbsp;initiating&amp;nbsp;the shrink operation, review the used and&amp;nbsp;allocated&amp;nbsp;space for each data file and document the results:&lt;/SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;SELECT&amp;nbsp;file_id,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;      &amp;nbsp;CAST(SUM(FILEPROPERTY(name, 'SpaceUsed')) AS&amp;nbsp;bigint) * 8 / 1024. AS&amp;nbsp;space_used_mb,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;       CAST(SUM(size) AS&amp;nbsp;bigint) * 8 / 1024. AS&amp;nbsp;space_allocated_mb&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;FROM&amp;nbsp;sys.database_files&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;WHERE&amp;nbsp;type_desc&amp;nbsp;= 'ROWS'&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;GROUP BY&amp;nbsp;file_id;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Shrink Using TRUNCATEONLY&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt; -&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Run&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;DBCC SHRINKFILE&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;with the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;TRUNCATEONLY&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;option for each data file. This can quickly reclaim unused space at the end of the file when applicable:&lt;/SPAN&gt; &lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;DBCC SHRINKFILE (&amp;lt;file_id&amp;gt;, TRUNCATEONLY);&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To retrieve all applicable file IDs:&lt;/SPAN&gt; &lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;SELECT&amp;nbsp;file_id&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;FROM&amp;nbsp;sys.database_files&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;WHERE&amp;nbsp;type_desc&amp;nbsp;= 'ROWS';&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Validate Space Reclamation&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt; - &lt;SPAN data-contrast="auto"&gt;Re-run the space usage query to confirm whether the&amp;nbsp;allocated&amp;nbsp;space has been reduced.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:278}"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Use Incremental Target Sizes&lt;/STRONG&gt; -&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;If unused&amp;nbsp;allocated&amp;nbsp;space&amp;nbsp;remains, specify a target size in gradual increments. For example, if a file is 1 GB with only 100 MB used, shrink the file incrementally:&lt;/SPAN&gt; &lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;DBCC SHRINKFILE (1,&amp;nbsp;900);&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;DBCC SHRINKFILE (1, 800);&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Continue until the desired size is reached (the target sizes above, 900 and 800, are in MB).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
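&lt;P&gt;The per-file steps above can be scripted. As a minimal sketch (illustrative only), the following generates a TRUNCATEONLY shrink statement for each data file, which can then be reviewed and executed one at a time:&lt;/P&gt;

```sql
-- Generate one DBCC SHRINKFILE ... TRUNCATEONLY statement per data file.
-- Review the generated commands before running them.
SELECT 'DBCC SHRINKFILE (' + CAST(file_id AS varchar(10)) + ', TRUNCATEONLY);' AS shrink_command
FROM sys.database_files
WHERE type_desc = 'ROWS';
```

&lt;P&gt;Re-running the space usage query from step 1 after each command shows how much allocated space was actually reclaimed.&lt;/P&gt;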
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Additional Best Practices:&lt;/SPAN&gt;&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;1. &lt;SPAN data-contrast="auto"&gt;Multiple DBCC SHRINKFILE operations can be run concurrently on the database, which may help reduce overall execution time. However, running too many shrinks in parallel causes them to block and deadlock each other, which is counterproductive. In such cases, the WAIT_AT_LOW_PRIORITY option of DBCC SHRINKFILE can mitigate locking issues.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
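&lt;P&gt;As a syntax sketch (the file ID and target size are placeholders; confirm the option's availability on your service tier in the DBCC SHRINKFILE documentation), WAIT_AT_LOW_PRIORITY is specified as follows:&lt;/P&gt;

```sql
-- Shrink file 1 to 900 MB, waiting at low priority for conflicting locks.
-- If the required locks cannot be acquired, the shrink aborts itself
-- rather than blocking other workloads.
DBCC SHRINKFILE (1, 900) WITH WAIT_AT_LOW_PRIORITY;
```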
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;2. Shrink progress can be monitored using the following query:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;SELECT command,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;      &amp;nbsp;percent_complete,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;       status,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;      &amp;nbsp;wait_resource,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;      &amp;nbsp;session_id,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;      &amp;nbsp;wait_type,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;      &amp;nbsp;blocking_session_id,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;      &amp;nbsp;cpu_time,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;       reads,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;      &amp;nbsp;CAST(((DATEDIFF(s,start_time,&amp;nbsp;GETDATE()))/3600) AS varchar) + ' hour(s), '&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;                     +&amp;nbsp;CAST((DATEDIFF(s,start_time,&amp;nbsp;GETDATE())%3600)/60 AS varchar) + 'min, '&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;                     +&amp;nbsp;CAST((DATEDIFF(s,start_time,&amp;nbsp;GETDATE())%60) AS varchar) + ' sec' AS running_time&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;FROM&amp;nbsp;sys.dm_exec_requests&amp;nbsp;AS r&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;LEFT JOIN&amp;nbsp;sys.databases&amp;nbsp;AS d&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;ON r.database_id&amp;nbsp;= d.database_id&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;WHERE r.command&amp;nbsp;IN ('DbccSpaceReclaim','DbccFilesCompact','DbccLOBCompact','DBCC');&lt;/SPAN&gt;&lt;/EM&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;3. &lt;SPAN data-contrast="auto"&gt;For future shrink operations, we recommend specifying a &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;target size equal to the space used plus approximately 3 GB&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;. Very small target size steps often result in longer execution times and reduced effectiveness due to repeated preprocessing.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;4. &lt;SPAN data-contrast="auto"&gt;Perform shrink operations during&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;off-business hours or a planned maintenance window&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;, as active workloads during peak hours can cause blocking and timeouts.&lt;/SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;5. &lt;SPAN data-contrast="auto"&gt;Databases with large LOB or columnstore data may require additional time to shrink. Rebuilding columnstore indexes can improve shrink effectiveness, as they internally use LOB storage. Also, as stated in this &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql?view=sql-server-ver17#reorganize-with--lob_compaction---on--off--" target="_blank" rel="noopener"&gt;document&lt;/A&gt;, if the database has objects with LOBs, compacting them before the shrink can help.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559740&amp;quot;:256}"&gt;6. &lt;SPAN data-contrast="auto"&gt;Index rebuilds should not be executed concurrently with shrink operations. Rebuilds require free space and may trigger file growth, which counteracts the shrink. Rebuild indexes first, then perform shrink operations; if page density is low, rebuild, and if page density is high, there is no need to rebuild.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;7. New data is always written to the file with the most empty space, according to the proportional fill algorithm. This is explained in detail &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/relational-databases/pages-and-extents-architecture-guide?view=sql-server-ver17" target="_blank"&gt;here&lt;/A&gt;. There is no one-to-one mapping between tables and files, and deadlocks may sometimes occur during parallel shrink operations.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;8. &lt;SPAN data-contrast="auto"&gt;Index rebuilds can improve shrink effectiveness, especially if page density is low; the key point is to rebuild before shrinking. It is therefore best to focus on rebuilding indexes with less than 80% page density, particularly for large tables, and to use SAMPLED mode when gathering index statistics to avoid long-running queries (see the documentation for sys.dm_db_index_physical_stats &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-physical-stats-transact-sql?view=sql-server-ver17#scanning-modes" target="_blank" rel="noopener"&gt;here&lt;/A&gt; for more information on SAMPLED mode). Use the script &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-sql/database/file-space-manage?view=azuresql-db#evaluate-index-page-density" target="_blank" rel="noopener"&gt;here&lt;/A&gt; to determine page density.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;9. &lt;SPAN data-contrast="auto"&gt;Columnstore indexes internally use LOB data types and can hinder shrinking if not rebuilt. Prioritizing columnstore index rebuilds can help, because data at the end of the files can prevent truncation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;10. &lt;SPAN data-contrast="auto"&gt;If needed, attempt&amp;nbsp;shrink&amp;nbsp;operations on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;two to three files at a time&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and&amp;nbsp;adjust parallelism cautiously while&amp;nbsp;monitoring&amp;nbsp;for blocking or deadlocks.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;References&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/blog/azuresqlblog/shrink-for-azure-sql-database-hyperscale-is-now-generally-available/4371490" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Shrink for Azure SQL Database Hyperscale is now generally available | Microsoft Community Hub&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql?view=sql-server-ver17#syntax" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;DBCC SHRINKFILE (Transact-SQL) - SQL Server | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/file-space-manage?view=azuresql-db#shrink-large-databases" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Database file space management - Azure SQL Database | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/indexes/reorganize-and-rebuild-indexes?view=sql-server-ver17" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Maintain Indexes Optimally to Improve Performance and Reduce Resource Utilization - SQL Server | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/pages-and-extents-architecture-guide?view=sql-server-ver17" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Page and Extent Architecture Guide - SQL Server | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt; &lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256,&amp;quot;469777462&amp;quot;:[6967],&amp;quot;469777927&amp;quot;:[0],&amp;quot;469777928&amp;quot;:[1]}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-physical-stats-transact-sql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;sys.dm_db_index_physical_stats (Transact-SQL) - SQL Server | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256,&amp;quot;469777462&amp;quot;:[6967],&amp;quot;469777927&amp;quot;:[0],&amp;quot;469777928&amp;quot;:[1]}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql?view=sql-server-ver17#reorganize-with--lob_compaction---on--off--" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;ALTER INDEX (Transact-SQL) - SQL Server | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:256,&amp;quot;469777462&amp;quot;:[6967],&amp;quot;469777927&amp;quot;:[0],&amp;quot;469777928&amp;quot;:[1]}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 25 Mar 2026 00:58:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/hyperscale-azure-sql-database-shrink-process-tips-tricks/ba-p/4505338</guid>
      <dc:creator>Tancy</dc:creator>
      <dc:date>2026-03-25T00:58:33Z</dc:date>
    </item>
    <item>
      <title>When Azure Portal/CLI Can’t Delete an Azure SQL DB: Check the Database Name (Unsupported Characters)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/when-azure-portal-cli-can-t-delete-an-azure-sql-db-check-the/ba-p/4505193</link>
      <description>&lt;H3&gt;Scenario (from a real service request)&lt;/H3&gt;
&lt;P&gt;A customer reported a &lt;STRONG&gt;General Purpose (Gen5, 2 vCores)&lt;/STRONG&gt; Azure SQL Database that was &lt;STRONG&gt;incurring charges&lt;/STRONG&gt; but could not be deleted using &lt;STRONG&gt;Azure Portal&lt;/STRONG&gt; or &lt;STRONG&gt;Azure CLI&lt;/STRONG&gt;. &lt;BR /&gt;CLI output showed two entries, including one whose &lt;STRONG&gt;database name included a forward slash&lt;/STRONG&gt; (example display: xxxx-xxx-sql/xxx-xxx-db).&lt;/P&gt;
&lt;H3&gt;Symptoms you may see&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;The database appears in listing outputs, but &lt;STRONG&gt;deletion via ARM/CLI fails&lt;/STRONG&gt; with invalid resource ID formatting.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;The name looks like server/db (contains /), making it &lt;STRONG&gt;difficult for portal/CLI to target correctly&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Why does this happen?&lt;/H3&gt;
&lt;P&gt;Creating a database through &lt;STRONG&gt;T‑SQL/SSMS&lt;/STRONG&gt; can sometimes allow &lt;STRONG&gt;characters that ARM-based creation would block&lt;/STRONG&gt;, which can cause subsequent portal/CLI/ARM operations to fail for that database. &lt;BR /&gt;In SQL, identifiers that don’t follow the “regular” naming rules must be used as &lt;STRONG&gt;delimited identifiers&lt;/STRONG&gt; (e.g., wrapped in square brackets).&lt;/P&gt;
&lt;H3&gt;The fix that worked&lt;/H3&gt;
&lt;P&gt;We advised the customer to delete the database using &lt;STRONG&gt;T‑SQL&lt;/STRONG&gt;, enclosing the database name in &lt;STRONG&gt;square brackets&lt;/STRONG&gt; (delimited identifier). &lt;BR /&gt;The customer confirmed the database was successfully dropped using this approach.&lt;/P&gt;
&lt;H3&gt;If you want to prevent this going forward&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Prefer creating databases through &lt;STRONG&gt;portal/ARM/CLI&lt;/STRONG&gt;, which enforces naming rules and avoids “unsupported character” edge cases.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;If you must keep a database that has unsupported characters, Microsoft’s public guidance notes that the long-term workaround is to &lt;STRONG&gt;rename the database using T‑SQL&lt;/STRONG&gt; to a compliant name so it can be managed normally via portal/CLI again.&lt;/LI&gt;
&lt;/UL&gt;
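&lt;P&gt;A minimal sketch of that rename (both names are hypothetical), using the standard T-SQL rename syntax:&lt;/P&gt;
&lt;PRE&gt;-- Rename the non-compliant database to an ARM-compliant name:
ALTER DATABASE [xxxx-xxx-sql/xxx-xxx-db] MODIFY NAME = [compliant-db-name];&lt;/PRE&gt;
&lt;P&gt;Once the name is compliant, the database can again be targeted normally by portal, CLI, and ARM operations.&lt;/P&gt;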
&lt;H3&gt;Key takeaway&lt;/H3&gt;
&lt;P&gt;If an Azure SQL Database becomes “undeletable” through portal/CLI and the name contains unusual characters (like '&amp;lt;,&amp;gt;,*,%,&amp;amp;,:,\,/,?'), it may still be fully manageable from &lt;STRONG&gt;T‑SQL&lt;/STRONG&gt; using &lt;STRONG&gt;delimited identifiers&lt;/STRONG&gt;—and that can be the cleanest way to unblock deletion and stop unexpected costs.&lt;/P&gt;</description>
      <pubDate>Tue, 24 Mar 2026 14:27:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/when-azure-portal-cli-can-t-delete-an-azure-sql-db-check-the/ba-p/4505193</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-03-24T14:27:45Z</dc:date>
    </item>
  </channel>
</rss>

