Troubleshooting Azure SQL Data Sync Failure: SQL Error 8106 During Bulk Insert
Azure SQL Data Sync is widely used to maintain consistency across distributed databases in hub–member topologies. However, synchronization may occasionally fail due to schema mismatches between participating databases, even when everything appears correctly configured at first glance. In this post, we'll walk through a real-world troubleshooting scenario involving a Data Sync failure caused by a schema inconsistency related to an IDENTITY column, and how it was mitigated.

Sample Error

sync_7726d6cb22124c0f901192c434f49106bd618f8ab16343b2adc03250f8367ff4\3953fb7d-1dba-4656-8150-83153d5d019b.batch. See the inner exception for more details.
Inner exception: Failed to execute the command 'BulkInsertCommand' for table 'schema.table_name'; the transaction was rolled back. Ensure that the command syntax is correct.
Inner exception: SqlException ID: e19b3677-d67e-4c8e-bc49-13d3df61ad0e, Error Code: -2146232060 - SqlError Number: 8106, Message: SQL error with code 8106
For more information, provide tracing ID '92e76130-f80a-4372-9a48-ec0ede8b0288' to customer support.

Scenario Overview

A synchronization operation began failing for a specific table within an Azure SQL Data Sync group. The failure occurred during the sync process while changes were being applied from a batch file. The error surfaced as part of a failed BulkInsertCommand execution on a synced table, causing the transaction to roll back. Further investigation revealed the following SQL exception:

SqlError Number 8106: Table does not have the identity property. Cannot perform SET operation.

Initial Troubleshooting Steps

Before identifying the root cause, the following actions were taken:

1. The affected table was removed from the sync group.
2. A sync operation was triggered.
3. The table was re-added to the sync group.
4. Sync was triggered again.

Despite these steps, the issue persisted with the same error. This indicated that the failure was not related to sync metadata or temporary configuration inconsistencies.
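When a 8106 error points at identity handling, the mismatch can be confirmed directly from the catalog views before digging further into sync metadata. As a sketch (the 'schema.table_name' placeholder stands in for the affected sync table, as in the error message above), the following query, run against both the hub and the member database, lists which columns carry the IDENTITY property so the two result sets can be compared side by side:

```sql
-- Run against BOTH the hub and the member database, then compare results.
-- 'schema.table_name' is a placeholder for the affected sync table.
SELECT
    c.name             AS column_name,
    c.is_identity,     -- 1 = column has the IDENTITY property
    ic.seed_value,
    ic.increment_value
FROM sys.columns AS c
LEFT JOIN sys.identity_columns AS ic
       ON ic.object_id = c.object_id
      AND ic.column_id = c.column_id
WHERE c.object_id = OBJECT_ID(N'schema.table_name');
```

A column whose is_identity value differs between the two result sets pinpoints exactly the kind of schema drift that breaks SET IDENTITY_INSERT during a bulk insert.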
Root Cause Analysis

After reviewing the table definitions across the sync topology, it was discovered that the synchronized table had an IDENTITY column defined on one side of the topology (hub or member) but not on the other. This schema mismatch led the sync service to attempt SET IDENTITY_INSERT operations during the bulk insert phase, which failed on the database where the column lacked the identity property.

Azure SQL Data Sync relies on consistent schema definitions across all participating databases. Any deviation, particularly one involving identity columns, can interrupt data movement operations.

Mitigation Approach

To resolve the issue, the following corrective steps were applied:

1. Remove the affected table from the sync group and save the configuration.
2. Refresh the sync schema.
3. Recreate the table to include the appropriate IDENTITY property.
4. Add the corrected table back to the sync group.
5. Trigger a new sync operation.

These steps ensured that the table definitions were aligned across all sync participants, allowing the synchronization process to proceed successfully.

Best Practices to Avoid Similar Issues

To prevent identity-related sync failures in Azure SQL Data Sync:

✅ Ensure table schemas are identical across all participating databases before onboarding them into a sync group.
✅ Pay special attention to: IDENTITY properties, primary keys, data types, and nullable constraints.
✅ Always validate schema consistency when adding new tables to a sync group or modifying existing table definitions.

Final Thoughts

Schema mismatches, especially those involving identity columns, are a common but often overlooked cause of Data Sync failures. By ensuring consistent table definitions across your hub and member databases, you can significantly reduce the risk of synchronization errors and maintain reliable data movement across regions.

Understanding and Monitoring Class 2 Transactions in Azure SQL Database
During a recent customer engagement, we investigated sustained transaction log growth in Azure SQL Database without obvious large user transactions. The customer was familiar with PostgreSQL diagnostics and wanted to understand how similar insights can be obtained in Azure SQL Database, especially around Class 2 (system) transactions. This post summarizes what we discussed, explains why Azure SQL behaves differently, and walks through practical DMV-based monitoring patterns you can use today.

Azure SQL Database vs. PostgreSQL: Diagnostic Model Differences

One of the first clarifications we made is that Azure SQL Database does not expose diagnostic settings equivalent to PostgreSQL's system-level log diagnostics. Azure SQL Database is a fully managed PaaS service, and many internal operations, such as checkpoints, version store cleanup, and background maintenance, are abstracted from direct control. Instead of low-level engine logs, Azure SQL provides cumulative Dynamic Management Views (DMVs) that expose the effects of system activity rather than the internal implementation.

What Are Class 2 Transactions?

In Azure SQL Database, Class 2 transactions generally refer to system-generated transactions that are not directly initiated by user workloads. These commonly include:

- Checkpoint operations
- Version store cleanup
- Ghost record cleanup
- Background metadata maintenance

Although they are not user-driven, these transactions still generate transaction log activity, which can be surprising when log usage grows steadily without large user transactions.

Key DMVs to Monitor Class 2 Activity

1. Transaction Log Usage

SELECT * FROM sys.dm_db_log_space_usage;

This DMV provides:

- Total log size
- Used log space
- Used log percentage

If log usage grows steadily without large user transactions, it is often a signal that background system activity (Class 2 transactions) is responsible.
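Because sys.dm_db_log_space_usage is a point-in-time view, trending requires persisting snapshots. A minimal sketch follows; the table name LogSpaceHistory is an assumption for illustration, not an existing object:

```sql
-- One-time setup: a small history table for log-space snapshots.
-- dbo.LogSpaceHistory is an illustrative name, not a built-in object.
CREATE TABLE dbo.LogSpaceHistory
(
    captured_at            datetime2(0)   NOT NULL DEFAULT SYSUTCDATETIME(),
    total_log_size_mb      decimal(18, 2) NOT NULL,
    used_log_space_mb      decimal(18, 2) NOT NULL,
    used_log_space_percent decimal(5, 2)  NOT NULL
);
GO

-- Run on a schedule to capture one snapshot per execution.
INSERT INTO dbo.LogSpaceHistory
    (total_log_size_mb, used_log_space_mb, used_log_space_percent)
SELECT
    total_log_size_in_bytes / 1048576.0,   -- bytes to MB
    used_log_space_in_bytes / 1048576.0,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

Plotting used_log_space_mb over time makes it easy to see whether growth is steady (suggesting background activity) or tied to specific workload windows.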
2. Checkpoint Activity

SELECT * FROM sys.dm_exec_requests WHERE command = 'CHECKPOINT';

Frequent checkpoints result in:

- More frequent log flushes
- Increased system log writes

In Azure SQL Database, checkpoint frequency is system-managed and cannot be tuned through configuration or diagnostic settings.

3. Version Store Usage (a Common Class 2 Contributor)

SELECT * FROM sys.dm_tran_version_store_space_usage;

High version store usage often leads to:

- Background cleanup tasks
- Increased system transactions
- Additional transaction log generation

This is especially common in workloads using:

- Snapshot Isolation
- Read Committed Snapshot Isolation (RCSI)
- Long-running transactions or readers

Automating Monitoring with Azure Elastic Jobs

Because these DMVs are cumulative, capturing them over time is key. During the call, we discussed automating data collection using Azure Elastic Jobs. Elastic Jobs allow you to:

- Schedule DMV snapshots
- Store historical trends
- Correlate spikes with workload patterns

Microsoft provides full guidance on creating and managing Elastic Jobs using T-SQL here: Create and manage Elastic Jobs using T-SQL

Index Management and Class 2 Impact

Index maintenance can indirectly increase Class 2 activity by:

- Increasing version store usage
- Triggering additional background cleanup

Instead of manual index tuning, we recommended enabling Query Performance Insight index recommendations in the Azure portal. This allows Azure SQL Database to automatically suggest index creation and index removal based on real workload patterns.

Why Checkpoints Cannot Be Tuned

A common question is whether checkpoint frequency can be reduced to lower system log activity. In Azure SQL Database:

- Checkpoints are engine-managed
- There is no diagnostic or configuration setting to control their frequency
- This design ensures platform stability and predictable recovery behavior

As a result, monitoring, not tuning, is the correct approach.
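Putting these pieces together, a scheduled job step can snapshot version store usage the same way log space is captured. A sketch, assuming a hypothetical history table named VersionStoreHistory:

```sql
-- Hypothetical history table for version store snapshots.
CREATE TABLE dbo.VersionStoreHistory
(
    captured_at       datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME(),
    database_id       int          NOT NULL,
    reserved_space_kb bigint       NOT NULL
);
GO

-- Job step body: capture current version store consumption.
INSERT INTO dbo.VersionStoreHistory (database_id, reserved_space_kb)
SELECT
    database_id,
    reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;
```

Over a few days of snapshots, correlating spikes in reserved_space_kb with long-running readers can help identify the snapshot-isolation workloads driving Class 2 cleanup activity.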
Practical Takeaways

From this case, the key lessons are:

- Not all transaction log growth is user-driven
- Class 2 transactions are a normal part of Azure SQL Database
- DMVs provide the best visibility into system behavior
- Trend-based monitoring is more valuable than point-in-time checks
- Automation via Elastic Jobs is essential for long-term analysis

Conclusion

Class 2 transactions are often misunderstood because they operate quietly in the background. By using the right DMVs and collecting data over time, you can clearly distinguish expected system behavior from genuine workload issues. If you're coming from PostgreSQL or on-prem SQL Server, the key mindset shift is this: Azure SQL Database exposes outcomes, not internals, and that's by design.