# Azure SQL (LTR): You Don't Need to Copy LTR Backups Across Regions to Restore Them
## Summary

Customers sometimes attempt to copy Azure SQL Long-Term Retention (LTR) backups across regions using `Copy-AzSqlDatabaseLongTermRetentionBackup`, only to hit the error:

> LongTermRetentionMigrationRequestNotSupported: LTR backup migration copy feature is not supported on subscription

This blog clarifies why this happens, when LTR backup copy is actually supported, and, most importantly, the correct and supported way to restore an LTR backup into a different region without copying it.

## The Common Scenario

A customer has:

- An LTR backup stored in Region A
- A need to restore the database into Region B
- The assumption that the LTR backup must first be copied cross-region

They attempt `Copy-AzSqlDatabaseLongTermRetentionBackup` and immediately receive a platform validation error stating the feature isn't supported on their subscription.

## Why This Error Happens

The key misunderstanding is what the LTR backup copy API is actually for. `Copy-AzSqlDatabaseLongTermRetentionBackup` is NOT a general-purpose feature. This API is:

- Backend-gated
- Allowlist-only
- Intended only for region decommissioning scenarios

In other words, it is not supported for normal customer-driven migrations, and there is no portal toggle or feature registration. Subscriptions are only allowlisted when Microsoft is retiring a region and LTR backups must be preserved elsewhere. Because of this, most subscriptions will receive:

> LongTermRetentionMigrationRequestNotSupported

## The Correct and Supported Solution

Good news: you do NOT need to copy the LTR backup to another region to restore it there. Azure SQL allows you to restore an LTR backup directly to any Azure SQL logical server, in any region.

### Supported Approach: Restore the LTR Backup Directly

Use `Restore-AzSqlDatabase` with the `-FromLongTermRetentionBackup` switch.
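The restore cmdlet needs the backup's resource ID. A minimal sketch of locating the most recent LTR backup with `Get-AzSqlDatabaseLongTermRetentionBackup`; the location, server, and database names are placeholder assumptions:

```powershell
# Hypothetical names for illustration; replace with your own values.
$location     = "westeurope"      # region where the LTR backup was taken (Region A)
$sourceServer = "source-server"   # logical server that owned the original database
$sourceDb     = "SourceDb"

# List the LTR backups for the source database and pick the newest one.
$ltrBackup = Get-AzSqlDatabaseLongTermRetentionBackup `
    -Location $location `
    -ServerName $sourceServer `
    -DatabaseName $sourceDb |
    Sort-Object -Property BackupTime -Descending |
    Select-Object -First 1

# $ltrBackup.ResourceId is what Restore-AzSqlDatabase consumes.
```

Note that the target server passed to the restore can be in any region; only the backup lookup is scoped to the region where the backup was taken.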
#### Example (PowerShell)

```powershell
Restore-AzSqlDatabase `
    -FromLongTermRetentionBackup `
    -ResourceId $ltrBackup.ResourceId `
    -ServerName $serverName `
    -ResourceGroupName $resourceGroup `
    -TargetDatabaseName "Test" `
    -ServiceObjectiveName P1
```

- This works across regions
- No backend enablement is required
- Fully supported and documented

## How This Works (Important Concept)

- LTR backups are stored in geo-redundant storage
- The restore operation does not depend on the original region
- The platform automatically handles data access and restore placement

So, while the backup physically originated in Region A, you are free to restore it to Region B, C, or any supported Azure region without copying it first.

## When Is LTR Backup Copy Actually Used?

Only in one scenario: Microsoft-initiated region decommissioning. In that case:

- LTR backups must be relocated to remain available
- Subscriptions are temporarily allowlisted
- `Copy-AzSqlDatabaseLongTermRetentionBackup` is enabled at the backend

Outside of this scenario, the API is intentionally restricted.

## Key Takeaways

- You can restore an LTR backup to any region directly
- You do not need (and usually cannot use) LTR backup copy
- Backup copy is gated and reserved for region retirement scenarios
- Use `Restore-AzSqlDatabase -FromLongTermRetentionBackup` instead

## Final Recommendation for Customers

If customers encounter this error:

- Reassure them this is not a misconfiguration or permission issue
- Explain that LTR restore is the correct solution
- Avoid escalation for feature enablement unless a region retirement is involved

# Azure Data Sync: Fixing "Cannot find the user 'DataSync_executor'" When Creating a New Sync Group
## Summary

When creating a new Azure SQL Data Sync group, customers may encounter the following error during setup, even when no active sync groups exist:

> "Failed to perform data sync operation: Cannot find the user 'DataSync_executor', because it does not exist or you do not have permission."

This failure typically occurs during certificate and symmetric key creation, as Azure attempts to grant permissions to the DataSync_executor role. In this post, we'll walk through:

- The common scenario where this issue appears
- Why cleanup scripts alone may not fix it
- A supported, reliable resolution approach to restore Data Sync successfully

## The Problem Scenario

A customer attempts to create a brand-new Azure SQL Data Sync group (hub + members), but the operation fails with an error similar to:

> Cannot find the user 'DataSync_executor', because it does not exist or you do not have permission.
> Creating certificate
> Creating symmetric key
> Granting permission to [DataSync_executor] on certificate

Key observations from affected cases:

- No active sync group exists
- Cleanup scripts (including Data Sync complete cleanup.sql) were already executed
- The failure persists even after retrying the setup

## Why This Happens

Azure SQL Data Sync depends on system-managed database roles that must be created and configured only by the Azure Data Sync service itself. If these roles (or related permissions) are:

- Missing
- Partially deleted
- Left in an inconsistent state

then Data Sync may fail while attempting to create certificates or grant required permissions.

Important: manually creating or partially restoring these roles is not supported and often leads to repeated failures.

## How to Detect the Issue

Before troubleshooting further, confirm whether the required Data Sync roles are missing.

### 1. Run the Data Sync Health Checker

Ask the customer to run the Data Sync Health Checker, then review SyncDB_Log.
Common warnings include:

- DataSync_reader IS MISSING
- DataSync_executor IS MISSING
- Missing EXECUTE/SELECT permissions on the dss and TaskHosting schemas

This confirms the root cause is role and permission inconsistency.

## Supported and Effective Resolution

### Step 1: Verify the Roles Are Missing

Run the following query on each affected database (hub and members):

```sql
SELECT name
FROM sys.database_principals
WHERE name IN ('DataSync_executor', 'DataSync_reader');
```

If no rows are returned, the roles are missing and must be recovered by Azure Data Sync itself, not manually.

### Step 2: Fully Clean Up Leftover Data Sync Objects

Do this only if the database is not actively syncing:

```sql
-- Remove roles if partially present
DROP ROLE IF EXISTS DataSync_executor;
DROP ROLE IF EXISTS DataSync_reader;

-- Drop the DataSync schema if it exists
IF EXISTS (SELECT 1 FROM sys.schemas WHERE name = 'DataSync')
BEGIN
    DROP SCHEMA DataSync;
END
```

This ensures there are no partial or orphaned Data Sync objects left behind that could interfere with setup.
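The missing-permission warnings reported by the Health Checker can also be confirmed directly from the catalog views. A minimal sketch, assuming the Data Sync objects live in the dss and TaskHosting schemas as the warnings indicate; an empty result on a database that previously hosted Data Sync suggests the grants were dropped along with the roles:

```sql
-- List explicit object-level permissions granted on the dss and TaskHosting schemas.
SELECT pr.name  AS principal_name,
       pe.permission_name,
       pe.state_desc,
       s.name   AS schema_name,
       o.name   AS object_name
FROM sys.database_permissions AS pe
JOIN sys.database_principals  AS pr ON pe.grantee_principal_id = pr.principal_id
JOIN sys.objects              AS o  ON pe.major_id  = o.object_id
JOIN sys.schemas              AS s  ON o.schema_id  = s.schema_id
WHERE s.name IN ('dss', 'TaskHosting')
ORDER BY s.name, o.name;
```

This only inspects the current state; the actual repair still has to come from the service, as described in the next step.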
### Step 3: Recreate the Sync Group (Critical Step)

Do not manually recreate roles or permissions. Instead:

1. Delete the existing (failed) Sync Group from the Azure Portal
2. Recreate the Sync Group from scratch
3. Re-add the hub and member databases

During this process, Azure will automatically:

- Recreate DataSync_executor and DataSync_reader
- Assign all required permissions
- Deploy the correct schemas, certificates, and procedures

## Key Takeaways

- DataSync_executor and DataSync_reader are service-managed roles
- Cleanup scripts alone may not fully reset a broken state
- Manual role creation is not supported
- Deleting and recreating the Sync Group is the only reliable recovery method once the roles are missing

## Final Recommendation

If you encounter Data Sync setup failures referencing DataSync_executor, always:

1. Validate role existence
2. Fully clean up broken artifacts
3. Let Azure Data Sync recreate everything by rebuilding the Sync Group

This approach consistently resolves the issue and restores a healthy Data Sync deployment.

# Understanding and Monitoring Class 2 Transactions in Azure SQL Database
During a recent customer engagement, we investigated sustained transaction log growth in Azure SQL Database without obvious large user transactions. The customer was familiar with PostgreSQL diagnostics and wanted to understand how similar insights can be obtained in Azure SQL Database, especially around Class 2 (system) transactions. This post summarizes what we discussed, explains why Azure SQL behaves differently, and walks through practical DMV-based monitoring patterns you can use today.

## Azure SQL Database vs. PostgreSQL: Diagnostic Model Differences

One of the first clarifications we made is that Azure SQL Database does not expose diagnostic settings equivalent to PostgreSQL's system-level log diagnostics. Azure SQL Database is a fully managed PaaS service, and many internal operations, such as checkpoints, version store cleanup, and background maintenance, are abstracted from direct control. Instead of low-level engine logs, Azure SQL provides cumulative Dynamic Management Views (DMVs) that expose the effects of system activity rather than the internal implementation.

## What Are Class 2 Transactions?

In Azure SQL Database, Class 2 transactions generally refer to system-generated transactions, not directly initiated by user workloads. These commonly include:

- Checkpoint operations
- Version store cleanup
- Ghost record cleanup
- Background metadata maintenance

Although they are not user-driven, these transactions still generate transaction log activity, which can be surprising when log usage grows steadily without large user transactions.

## Key DMVs to Monitor Class 2 Activity

### 1. Transaction Log Usage

```sql
SELECT * FROM sys.dm_db_log_space_usage;
```

This DMV provides:

- Total log size
- Used log space
- Used log percentage

If log usage grows steadily without large user transactions, it is often a signal that background system activity (Class 2 transactions) is responsible.
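Because this DMV is a point-in-time view, spotting steady growth requires capturing it on a schedule. A minimal sketch of a snapshot pattern, using a hypothetical history table named `dbo.LogSpaceHistory` (the table name and schedule are assumptions, not part of the original case):

```sql
-- One-time setup: a hypothetical table to hold periodic snapshots.
CREATE TABLE dbo.LogSpaceHistory
(
    captured_at        datetime2(0)   NOT NULL DEFAULT (sysutcdatetime()),
    total_log_size_mb  decimal(18, 2) NOT NULL,
    used_log_space_mb  decimal(18, 2) NOT NULL,
    used_log_space_pct decimal(5, 2)  NOT NULL
);
GO

-- Run on a schedule (for example, as an Elastic Job step) to build a trend.
INSERT INTO dbo.LogSpaceHistory (total_log_size_mb, used_log_space_mb, used_log_space_pct)
SELECT total_log_size_in_bytes   / 1048576.0,
       used_log_space_in_bytes   / 1048576.0,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

Comparing rows over hours or days makes it clear whether log usage is climbing while user activity is flat, which is the signature of background (Class 2) work.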
### 2. Checkpoint Activity

```sql
SELECT * FROM sys.dm_exec_requests WHERE command = 'CHECKPOINT';
```

Frequent checkpoints result in:

- More frequent log flushes
- Increased system log writes

In Azure SQL Database, checkpoint frequency is system-managed and cannot be tuned through configuration or diagnostic settings.

### 3. Version Store Usage (a Common Class 2 Contributor)

```sql
SELECT * FROM sys.dm_tran_version_store_space_usage;
```

High version store usage often leads to:

- Background cleanup tasks
- Increased system transactions
- Additional transaction log generation

This is especially common in workloads using:

- Snapshot Isolation
- Read Committed Snapshot Isolation (RCSI)
- Long-running transactions or readers

## Automating Monitoring with Azure Elastic Jobs

Because these DMVs are cumulative, capturing them over time is key. During the call, we discussed automating data collection using Azure Elastic Jobs. Elastic Jobs allow you to:

- Schedule DMV snapshots
- Store historical trends
- Correlate spikes with workload patterns

Microsoft provides full guidance on creating and managing Elastic Jobs using T-SQL here: Create and manage Elastic Jobs using T-SQL

## Index Management and Class 2 Impact

Index maintenance can indirectly increase Class 2 activity by:

- Increasing version store usage
- Triggering additional background cleanup

Instead of manual index tuning, we recommended enabling Query Performance Insight index recommendations in the Azure Portal. This allows Azure SQL Database to automatically suggest index creation and removal based on real workload patterns.

## Why Checkpoints Cannot Be Tuned

A common question is whether checkpoint frequency can be reduced to lower system log activity. In Azure SQL Database:

- Checkpoints are engine-managed
- There is no diagnostic or configuration setting to control their frequency
- This design ensures platform stability and predictable recovery behavior

As a result, monitoring, not tuning, is the correct approach.
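When version store usage stays high, the next question is which sessions are preventing cleanup. A small sketch using `sys.dm_tran_active_snapshot_database_transactions`, which lists transactions that currently hold row versions alive:

```sql
-- Long-running snapshot transactions prevent version store cleanup,
-- which in turn drives background (Class 2) activity and log generation.
SELECT transaction_id,
       session_id,
       elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;
```

Sessions at the top of this list with large elapsed times are the usual suspects behind a growing version store under Snapshot Isolation or RCSI.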
## Practical Takeaways

From this case, the key lessons are:

- Not all transaction log growth is user-driven
- Class 2 transactions are a normal part of Azure SQL Database
- DMVs provide the best visibility into system behavior
- Trend-based monitoring is more valuable than point-in-time checks
- Automation via Elastic Jobs is essential for long-term analysis

## Conclusion

Class 2 transactions are often misunderstood because they operate quietly in the background. By using the right DMVs and collecting data over time, you can clearly distinguish expected system behavior from genuine workload issues. If you're coming from PostgreSQL or on-premises SQL Server, the key mindset shift is this: Azure SQL Database exposes outcomes, not internals, and that's by design.

# Troubleshooting Azure SQL Data Sync Groups Stuck in Progressing State
Azure SQL Data Sync is commonly used to synchronize data across Azure SQL Databases and on-premises SQL Server environments. While the service works well in many scenarios, customers may occasionally encounter a situation where a Sync Group remains stuck in a "Progressing" state and cannot be started, stopped, or refreshed. This blog walks through a real-world troubleshooting scenario, highlights the root cause, and outlines practical remediation steps based on an actual support investigation.

## Problem Overview

In this scenario, the customer reported that:

- The Sync Group was stuck in "Progressing" for multiple days
- Sync operations could not be started or stopped
- Tables could not be refreshed or reconfigured
- Azure Activity Logs showed operations as Succeeded, yet sync never progressed
- Backend telemetry showed the Sync Group as Active, while the hub and member databases were in a Reprovisioning state

The last successful sync occurred on XX day, after which the sync pipeline stopped making progress.

## Initial Investigation Findings

During the investigation, several key observations were made.

### 1. High DATA IO Utilization

Telemetry and backend checks revealed that DATA IO utilization was pegged at 100% on one of the sync member databases starting XX day. Despite no noticeable change in application workload, the database was under sustained IO pressure, which directly impacted Data Sync operations.

### 2. Deadlocks During Sync Processing

Backend telemetry showed repeated deadlock errors:

> Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim.

These deadlocks were observed for multiple Sync Member IDs starting the same day the IO saturation began. This aligned with the hypothesis that resource contention, not a Data Sync service failure, was the underlying issue.

### 3. Metadata Database Was Healthy

The sync metadata database was running on a serverless Azure SQL Database (1 vCore) and showed healthy resource usage, ruling it out as a bottleneck.

## Recommended Troubleshooting Steps

Based on the findings, the following steps were recommended and validated:

### ✅ Step 1: Address Database Resource Constraints First

Before attempting to recreate or reset the Sync Group, the focus was placed on resolving the DATA IO saturation on the affected database. Actions included:

- Scaling up the database (DTUs / vCores)
- Monitoring IO utilization after scaling
- Ensuring sufficient headroom for sync operations

This was identified as the primary remediation step.

### ✅ Step 2: Use the Azure SQL Data Sync Health Checker

The Azure SQL Data Sync Health Checker was recommended to validate:

- Sync metadata integrity
- Table-level configuration issues
- Agent and connectivity status

GitHub tool: AzureSQLDataSyncHealthChecker

### ✅ Step 3: Validate Sync Group and Agent State via PowerShell

PowerShell was used to confirm:

- Sync Group state
- Last successful sync time
- On-premises Sync Agent connectivity

Example commands used (note that the AzureRM module shown here is deprecated; the current Az module equivalents are `Get-AzSqlSyncGroup` and `Get-AzSqlSyncAgent`):

```powershell
Get-AzureRmSqlSyncGroup `
    -ResourceGroupName "ResourceGroup01" `
    -ServerName "Server01" `
    -DatabaseName "Database01" | Format-List

Get-AzureRmSqlSyncAgent `
    -ResourceGroupName "ResourceGroup01" `
    -ServerName "Server01" |
    Select-Object ResourceGroupName, SyncState, LastSyncTime
```

## Resolution

After the customer scaled up the database, DATA IO utilization dropped, sync operations resumed normally, and the customer confirmed that the issue was resolved.

# Why Long-Term Retention (LTR) Backups Don't Attach After a PITR Restore in Azure SQL Database
## Summary

Customers sometimes expect that after performing a Point-in-Time Restore (PITR) and renaming the restored database back to its original name, existing Long-Term Retention (LTR) backups will automatically appear and continue from where they left off. This behavior may have appeared to work in older or legacy environments, but in modern Azure SQL Database deployments, especially across new servers or subscriptions, this expectation can lead to confusion. This article explains why LTR backups do not attach to restored databases, even if the database name is reused, and what customers should expect instead.

## The Scenario

The discussion originated from a common migration pattern:

1. A customer has an Azure SQL Database with LTR policies configured (for example, monthly backups retained for 10 years).
2. The customer performs a Point-in-Time Restore (PITR) of that database.
3. After the restore, the database is renamed to match the original database name.
4. The customer expects the existing LTR backups to appear under the restored database.

In legacy environments, this behavior appeared to work. However, in newer Azure SQL Database deployments, the LTR backups are not visible after the restore and rename process.

## Key Technical Detail: LTR Is Not Based on the Database Name

The most important concept to understand is this: LTR backups are associated with the database's logical database ID, not the database name. Each Azure SQL Database is assigned a unique logical database ID at creation time. When a PITR restore is performed:

- A new database is created
- It receives a new logical database ID
- Even if you rename the database to match the original name, the logical ID remains different

As a result, the restored database is treated as a completely new database from an LTR perspective, and it does not inherit the historical LTR backup chain.

## Why Renaming the Database Does Not Help

Renaming a database only changes its display name.
It does not change:

- The logical database ID
- The internal association used by the LTR system

Because LTR configuration and backup visibility are tied to the logical database ID, renaming alone cannot reattach historical LTR backups.

## Subscription Boundaries Matter

Another important clarification raised in the discussion:

- LTR backups are scoped to the subscription where the database was created
- While you can restore LTR backups to a different server within the same subscription, you cannot carry historical LTR backups across subscriptions

If a customer migrates to a new subscription, the historical LTR chain from the old subscription cannot be reused or reattached. Only new LTR backups created after the move will exist in the new subscription.

## What Customers Will Observe

After a PITR restore and rename:

- ✅ The database is successfully restored
- ✅ LTR policies can be configured again
- ❌ Historical LTR backups from the original database are not visible
- ❌ The restored database does not inherit old LTR backups, even if the name matches

This is expected behavior and aligns with the current Azure SQL Database architecture.

## How to Validate LTR Backups Correctly

To avoid confusion caused by portal caching or UI expectations, customers can list LTR backups programmatically using PowerShell or the Azure CLI, as documented in Microsoft Learn: Azure SQL Database: Manage long-term backup retention. This confirms whether LTR backups exist for a specific logical database ID.

## Best Practices and Recommendations

- Do not rely on database renaming to preserve LTR history.
- Treat any PITR restore as a new database from an LTR perspective.
- If historical LTR backups must remain accessible:
  - Keep the original database intact
  - Restore LTR backups directly from the original database when needed
- Plan migrations carefully, especially when moving across subscriptions, as LTR history cannot be migrated.
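A minimal validation sketch using the Azure CLI's `az sql db ltr-backup list`; the location, server, and database names below are placeholder assumptions:

```shell
# Hypothetical names for illustration; replace with your own values.
# Lists the LTR backups recorded for the original database. Each entry is
# tied to the original logical database ID, so backups from the pre-restore
# database will not appear under a renamed PITR copy.
az sql db ltr-backup list \
    --location westeurope \
    --server original-server \
    --database OriginalDb \
    --output table
```

An empty list for the renamed PITR copy alongside a populated list for the original database name confirms the identity-based association described above.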
## Final Thoughts

LTR backups are a powerful compliance and recovery feature in Azure SQL Database, but they are intentionally designed to be immutable and identity-based, not name-based. Understanding that the logical database ID, not the database name, controls LTR association helps set correct expectations and avoids surprises during restores or migrations.

## Frequently Asked Questions (FAQ)

**Q1: Why don't my existing LTR backups appear after I restore a database using PITR?**

Because a Point-in-Time Restore (PITR) creates a new database with a new logical database ID. Long-Term Retention (LTR) backups are associated with the database's logical ID, not its name, so the restored database does not inherit the historical LTR backup chain.

**Q2: If I rename the restored database to the original name, shouldn't the LTR backups reappear?**

No. Renaming a database only changes its display name. It does not change the logical database ID, which is what LTR uses to associate backups. As a result, renaming does not reattach existing LTR backups.

**Q3: This used to work in our legacy environment. Why is it different now?**

In older environments, the behavior may have appeared to work due to differences in platform implementation. In the current Azure SQL Database architecture, LTR association is strictly identity-based, which ensures immutability, compliance, and predictable backup behavior.

**Q4: Can I attach historical LTR backups to a restored database manually?**

No. LTR backups are immutable and cannot be reattached or reassigned to a different logical database ID. This behavior is by design.

**Q5: What happens if I move my database to a new subscription?**

LTR backups are scoped to the subscription where the database was created. If you migrate to a new subscription:

- Historical LTR backups from the old subscription cannot be carried over
- Only new LTR backups created after the move will exist in the new subscription

**Q6: Can I still restore from my old LTR backups?**

Yes.
As long as the original database (or its logical identity) still exists in the original subscription, you can restore directly from those LTR backups, even if a newer database with the same name exists elsewhere.

**Q7: How can I verify which LTR backups actually exist?**

The most reliable way is to list LTR backups programmatically using Azure PowerShell or the Azure CLI, which query backups by logical database ID rather than relying solely on portal views. Refer to the official documentation: Azure SQL Database – Manage long-term backup retention.

**Q8: What is the recommended approach if we need long-term recoverability after PITR?**

- Treat every PITR restore as a new database from an LTR perspective
- Keep the original database intact if historical LTR backups must remain accessible
- Plan subscription migrations carefully, as LTR history cannot be migrated