Azure Database Support Blog

SqlError Number: 3961 Message: Snapshot isolation transaction failed in database 'DatabaseName'

Mohamed_Baioumy_MSFT
Dec 11, 2025

Error Code: -2146232060 SqlError Number: 3961 Message: Snapshot isolation transaction failed in database 'DatabaseName' because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. Metadata is not versioned, and concurrent updates can lead to inconsistency.

This error indicates a snapshot isolation conflict caused by concurrent schema changes during Azure SQL Data Sync operations. Here’s the analysis:

 

Root Cause

  • The sync process uses snapshot isolation to enumerate changes.
  • While the sync transaction was running, a DDL statement (e.g., ALTER TABLE, CREATE INDEX, or similar) modified the metadata of an object accessed by the sync.
  • Snapshot isolation does not version metadata, so concurrent schema changes break consistency and SQL Server aborts the transaction (see the repro sketch after this list).
  • Error details:
    • SqlError Number: 3961
    • Message: Snapshot isolation transaction failed because metadata changed during the transaction.
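
For context, the failure is easy to reproduce outside of Data Sync. The sketch below uses a hypothetical dbo.Orders table; the second SELECT in session 1 fails with error 3961 once the concurrent ALTER TABLE has committed:

-- Minimal repro sketch (dbo.Orders is a hypothetical table).
-- Prerequisite (run once, outside a transaction; on by default in Azure SQL Database):
-- ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Session 1: start a snapshot transaction and read the table.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders;

-- Session 2 (while session 1 is still open):
-- ALTER TABLE dbo.Orders ADD Note nvarchar(100) NULL;

-- Session 1: the same read now raises error 3961, because the table's
-- metadata changed after this snapshot transaction started.
SELECT COUNT(*) FROM dbo.Orders;
ROLLBACK;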

 

Why It Happens

  • Azure SQL Data Sync relies on a stable schema while it enumerates changes.
  • If schema changes occur (adding columns, altering indexes) while sync is reading changes, it triggers this error.
  • Common scenarios:
    • Schema deployment during sync.
    • Automated scripts altering tables while sync jobs run (the query below can help spot recent DDL).
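
If you are unsure what changed, sys.tables exposes a modify_date that is bumped by ALTER TABLE and by index create/alter operations. A quick sketch to list recently modified tables (the 24-hour window is an arbitrary choice):

-- Tables whose definition changed in the last 24 hours.
SELECT SCHEMA_NAME(schema_id) AS schema_name,
       name AS table_name,
       modify_date
FROM sys.tables
WHERE modify_date > DATEADD(HOUR, -24, SYSUTCDATETIME())
ORDER BY modify_date DESC;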

 

Impact

  • Sync job fails for that cycle.
  • No data loss, but sync latency increases until the next successful run.
  • Repeated failures can cause backlog and performance degradation.

 

Recommended Actions

  1. Avoid Concurrent DDL During Sync
    • Schedule schema changes outside sync windows.
    • Use maintenance windows for deployments.
  2. Monitor Sync Jobs
    • Check Azure Portal → SQL Data Sync → Sync Group → Logs for repeated errors.
    • Enable Diagnostic Settings to send sync errors to Log Analytics for alerting.
  3. Retry Logic
    • Sync automatically retries, but if errors persist, pause sync, apply schema changes, then resume.
  4. Alerting
    • Create an Azure Monitor alert for error code 3961 or failed sync jobs.
    • Use a Log Analytics query such as:

 

AzureDiagnostics
| where OperationName == "SyncGroupJob"
| where ResultDescription contains "3961"
| summarize FailureCount = count()

 

Trigger the alert when FailureCount > 0.

 

Preventive Design

  • Freeze schema during sync cycles.
  • If frequent schema changes are unavoidable, serialize them with sync runs (pause sync → deploy → refresh schema → resume); adjusting the workload's isolation level does not help, as explained below.

A restart (or failover) of the Azure SQL Database is very unlikely to fix this error beyond a brief, accidental reprieve. Error 3961 occurs because Azure SQL Data Sync runs under snapshot isolation, and your sync transaction encountered a concurrent DDL change (e.g., ALTER TABLE, CREATE/DROP INDEX, etc.). Metadata isn’t versioned under snapshot isolation, so SQL Server aborts the enumeration to prevent inconsistent results. Restarting won’t stop the DDL from happening again.

 

Why a restart won’t help

  • The failure is logic/contention-based, not a stuck engine or memory leak.
  • Even if a restart clears in-flight locks, the underlying pattern (DDL overlapping with Data Sync’s snapshot transaction) will reproduce the error on the next run.

 

What actually resolves it

1) Separate DDL from sync windows

  • Pause the sync group (Hub and affected Member(s)) or schedule a maintenance window.
  • Apply schema changes everywhere (Hub + all Members) while sync is paused.
  • Refresh sync schema in the Data Sync portal for the sync group.
  • Resume sync and force a job run to verify.

 

Tip: If you must deploy often, align Data Sync schedules with release windows (e.g., reduce frequency or disable during the deployment).

 

Ensure schema parity across Hub/Member(s)

Data Sync relies on identical schema for tracked tables. After any DDL:

  • Confirm columns, data types, PKs, and indexes match (a comparison query follows this list).
  • If you added/removed tables or columns, re‑select them in the sync group and re‑provision (this will recreate tracking tables/triggers as needed).
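
As a quick parity check, the sketch below dumps a column inventory; run it on the Hub and on each Member, then diff the outputs. Excluding the DataSync schema (where the tracking objects usually live) is an assumption about default provisioning:

-- Column inventory for user tables; run on Hub and each Member, then diff.
SELECT s.name AS schema_name,
       t.name AS table_name,
       c.name AS column_name,
       ty.name AS data_type,
       c.max_length, c.precision, c.scale, c.is_nullable
FROM sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
JOIN sys.schemas s ON t.schema_id = s.schema_id
JOIN sys.types ty ON c.user_type_id = ty.user_type_id
WHERE t.is_ms_shipped = 0
  AND s.name <> 'DataSync'   -- assumption: tracking objects live in this schema
ORDER BY s.name, t.name, c.column_id;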

 

Block or serialize DDL during sync

If frequent DDL is unavoidable:

  • Process discipline: require a change freeze while the sync job is active (a DDL-trigger guardrail sketch follows this list).
  • Automation: wrap deployments with steps that pause sync → deploy → refresh schema → resume.
  • Staging: deploy to a staging database, validate, then switch sync scope during a short window.
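
One way to enforce the freeze inside the database is a DDL trigger. The sketch below assumes a hypothetical dbo.SyncFreeze flag table that your pause/resume automation toggles; deploy the trigger itself outside a freeze, since creating it is also DDL:

-- Guardrail sketch: block table/index DDL while a sync freeze is in effect.
-- dbo.SyncFreeze is a hypothetical flag table toggled by your automation.
CREATE TABLE dbo.SyncFreeze (IsFrozen bit NOT NULL);
INSERT INTO dbo.SyncFreeze (IsFrozen) VALUES (0);
GO
CREATE TRIGGER trg_BlockDdlDuringSync
ON DATABASE
FOR ALTER_TABLE, CREATE_TABLE, DROP_TABLE, CREATE_INDEX, ALTER_INDEX, DROP_INDEX
AS
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.SyncFreeze WHERE IsFrozen = 1)
    BEGIN
        RAISERROR('DDL blocked: a Data Sync cycle is in progress.', 16, 1);
        ROLLBACK;   -- undoes the DDL statement that fired the trigger
    END
END;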

 

Note: Changing isolation level of the workload won’t help Data Sync; Data Sync’s snapshot isolation is by design for change enumeration consistency.

 

Runbook you can use now

Identify active schema changes

 

-- DDL currently executing
SELECT s.session_id, r.command, r.status, r.cpu_time, r.total_elapsed_time,
       t.text AS statement_text
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.command IN ('ALTER TABLE','CREATE INDEX','DROP INDEX','ALTER INDEX','CREATE TABLE','DROP TABLE');

 

Check snapshot transactions / version store pressure

 

-- Active snapshot transactions
SELECT * FROM sys.dm_tran_active_snapshot_database_transactions;

 

-- Version store usage by database
SELECT database_id, reserved_page_count, reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;

 

Stabilize and resume

  • Pause the sync group.
  • Complete/cancel any DDL or long-running schema operations.
  • Apply the same DDL on Hub and Members.
  • In Azure portal: Data Sync → Sync group → Refresh schema (Hub and Members).
  • Resume sync and run a manual job; verify success.

 

Prevention & monitoring

  • Change policy: codify “no DDL during sync windows”.
  • Pipelines: add steps to pause/resume sync around schema deployments.
  • Alerting: create an Azure Monitor log alert for SyncGroupJob failures that contain 3961, so you’re notified immediately.

 

KQL (example) if you send Data Sync diagnostics to Log Analytics:

 

AzureDiagnostics
| where OperationName == "SyncGroupJob"
| where ResultDescription contains "3961"
| project TimeGenerated, ResourceId, ResultDescription

 

“Can we switch to READ COMMITTED SNAPSHOT to avoid this?”
No—Data Sync’s internal enumeration uses snapshot isolation; metadata is still not versioned. The only reliable fix is no concurrent DDL.
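
For reference, you can check both settings on the database; note that neither toggle changes how Data Sync enumerates:

-- Current snapshot-related settings for this database
SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = DB_NAME();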

“Is there data loss?”
No—the job aborts to protect consistency. You may see delay/latency until a clean run completes.

“Do we need to rebuild the sync group?”
Usually not. Only re‑provision if you changed tracked tables/columns or tracking has become invalid.

 

Please let me know if you have any questions or concerns; I am happy to help.
