Enable auditing to identify Change Tracking history table drops using Extended Events (Azure SQL Database)
When Change Tracking is enabled, the system-maintained history table dbo.MSChange_tracking_history is a key place to look for cleanup progress and errors. For example, Microsoft's troubleshooting guidance for Change Tracking auto-cleanup explicitly references querying the history table to validate whether cleanup is running and whether there are cleanup errors. In real-world environments, teams sometimes observe that dbo.MSChange_tracking_history "disappears" unexpectedly (for example, during cleanup troubleshooting), which blocks visibility into cleanup progress and error history. When that happens, the most practical way to answer "who or what dropped the history table?" is to audit the DDL with Extended Events (XEvents), a lightweight monitoring capability designed for troubleshooting. This post shows how to create a database-scoped XEvent session in Azure SQL Database that captures Change Tracking history table drops and related events, using the script below.

Why Extended Events for this scenario?

Extended Events are built to capture internal engine activity with relatively low overhead, and they are supported across SQL Server and Azure SQL. In Azure SQL Database, XEvents are commonly used for investigation and troubleshooting when you need evidence of what happened and who initiated it. In this specific case, we want to capture:

- A history table drop event (Change Tracking-specific)
- A syscommittab truncate event (Change Tracking-specific)
- A generic object deletion event filtered to the history table name

We also want to enrich each event with context (app, host, user, SQL text, stack, timestamps, and so on).

The Extended Events session (copy/paste)

The following script creates a database-scoped session named CTXEvent that audits the Change Tracking history table drop and related events:

CREATE EVENT SESSION [CTXEvent] ON DATABASE
ADD EVENT sqlserver.change_tracking_historytable_drop(
    ACTION(mdmtargetpkg.mdmget_TimeStampUTC,sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.context_info,sqlserver.database_name,sqlserver.external_script_request_id,sqlserver.session_id,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.tsql_stack,sqlserver.username)),
ADD EVENT sqlserver.change_tracking_syscommittab_truncate(
    ACTION(mdmtargetpkg.mdmget_TimeStampUTC,sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.context_info,sqlserver.database_name,sqlserver.external_script_request_id,sqlserver.session_id,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.tsql_stack,sqlserver.username)),
ADD EVENT sqlserver.object_deleted(
    ACTION(mdmtargetpkg.mdmget_TimeStampUTC,sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.context_info,sqlserver.database_name,sqlserver.external_script_request_id,sqlserver.session_id,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.tsql_stack,sqlserver.username)
    WHERE ([object_name]=N'MSchange_tracking_history'))
WITH (MAX_MEMORY=16384 KB,EVENT_RETENTION_MODE=NO_EVENT_LOSS,MAX_DISPATCH_LATENCY=10 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=OFF)
GO
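Because STARTUP_STATE=OFF, the session is created in a stopped state. The snippet below is a minimal sketch of how you would typically start it, verify it, and stop it again; it assumes the session name CTXEvent from the script above and uses the database-scoped views available in Azure SQL Database (sys.database_event_sessions for defined sessions, sys.dm_xe_database_sessions for running sessions).

-- Start the session shortly before the window in which you expect the drop
ALTER EVENT SESSION [CTXEvent] ON DATABASE STATE = START;

-- Confirm the session is defined in this database...
SELECT name, startup_state
FROM sys.database_event_sessions
WHERE name = N'CTXEvent';

-- ...and that it is currently running
SELECT name, create_time
FROM sys.dm_xe_database_sessions
WHERE name = N'CTXEvent';

-- Stop the session once the event has been captured
ALTER EVENT SESSION [CTXEvent] ON DATABASE STATE = STOP;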
Why these three events?

- sqlserver.change_tracking_historytable_drop: directly captures the Change Tracking "history table dropped" operation (purpose-built for this investigation).
- sqlserver.change_tracking_syscommittab_truncate: captures maintenance actions on syscommittab, which is central to Change Tracking's internal state.
- sqlserver.object_deleted with a predicate on MSchange_tracking_history: provides a generic "safety net" if the drop manifests through generic DDL deletion paths.

Why these actions?

The actions attached to each event supply the practical "who/what/where" context you usually need to attribute the operation:

- sqlserver.client_app_name / sqlserver.client_hostname: which application and which machine initiated the DDL.
- sqlserver.username: the identity executing the statement.
- sqlserver.sql_text + sqlserver.tsql_stack: the statement and call stack (helpful if the drop occurs via a job, framework, or tooling).
- sqlserver.session_id / sqlserver.transaction_id: correlate activity across events.
- TRACK_CAUSALITY=ON: enables correlation across related events within the session context.

Starting and stopping the session

The session is created with STARTUP_STATE=OFF, meaning it does not start automatically. That is typically good practice for targeted investigations: enable it only during the window in which you expect the issue. For Azure SQL Database, you can manage the session using SSMS or equivalent T-SQL (a start/stop sketch appears just after the session script above). Microsoft's Extended Events QuickStart walks through creating sessions and working with session data, including the general workflow concepts.

Tip (practical): turn the session on shortly before your expected "drop window," reproduce or observe the issue, then turn it off after you capture the event to minimize overhead.

Where does the data go?

The script defines the events, actions, and session options, but it does not define a target (for example, event_file). That is fine for short, in-the-moment investigations, but for durable evidence that is easy to share, many teams prefer writing to an event file target.

Option: persist captured events to Azure Storage (recommended for multi-hour investigations)

Microsoft provides a dedicated walkthrough for Azure SQL Database to create an event session with an event_file target stored in Azure Storage, including:

- creating a storage account and container
- granting access (RBAC or SAS)
- creating a database credential
- creating and starting the session
- viewing the event data

If you choose this route, you can then read the event files using sys.fn_xe_file_target_read_file (supported in Azure SQL Database and commonly used to consume XEL files from blob storage).

How to interpret the results (what to look for)

When the table drop happens, the captured event payload should let you answer with confidence:

- Who initiated the operation? Look at sqlserver.username plus client_app_name and client_hostname.
- What executed the drop? Look at sql_text and tsql_stack.
- Was this tied to cleanup activity or another workflow? If you see change_tracking_syscommittab_truncate around the same time (correlated via session, transaction, or causality identifiers), that can indicate the drop happened in proximity to Change Tracking maintenance behavior.
- Was the drop recurring or one-off? Repeated events at a steady cadence point to an automated process (job, deployment pipeline, scheduled task, framework) rather than an ad hoc human action.

Why this matters: the history table is part of Change Tracking troubleshooting

Microsoft's Change Tracking cleanup troubleshooting guidance explicitly uses dbo.MSChange_tracking_history as the place to validate whether cleanup is running and whether errors are being logged. If the history table is being dropped, you effectively lose an important "black box recorder" that you rely on to understand cleanup behavior, which is exactly why auditing the drop is so valuable.
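Before (or while) the audit session runs, it helps to record the Change Tracking baseline described above. The sketch below assumes you run it in the user database being investigated; sys.change_tracking_databases is a documented catalog view, but since the column layout of dbo.MSchange_tracking_history is not spelled out in this post, the query simply returns a sample of rows with SELECT TOP rather than naming specific columns.

-- Is Change Tracking enabled for this database, and is auto-cleanup on?
SELECT DB_NAME(database_id)        AS database_name,
       is_auto_cleanup_on,
       retention_period,
       retention_period_units_desc
FROM sys.change_tracking_databases
WHERE database_id = DB_ID();

-- Does the auto-cleanup history table currently exist, and does it contain rows?
IF OBJECT_ID(N'dbo.MSchange_tracking_history', N'U') IS NOT NULL
BEGIN
    SELECT TOP (100) *
    FROM dbo.MSchange_tracking_history;
END
ELSE
BEGIN
    SELECT N'dbo.MSchange_tracking_history does not exist in this database' AS history_table_status;
END

Capturing this output before you enable the session gives you the "what good looks like" snapshot referenced in the operational tips below.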
Operational tips (minimize noise, maximize signal)

These are practical recommendations commonly used in incident investigations (treat them as operational guidance you can adapt to your environment):

- Run the session for a limited time window (enough to catch the drop).
- Keep actions focused (the set above is already purpose-built for attribution).
- Prefer a file target if you need durable evidence (especially when coordinating across teams).
- Document what "good" looks like before you enable the session (for example, confirm whether the history table exists and whether it is being written to), using the troubleshooting query from the Microsoft docs.

Common troubleshooting steps (when the audit session "doesn't capture anything")

When you set up a targeted Extended Events session like CTXEvent, there are usually three places where things can go wrong: (1) Change Tracking state, (2) XEvent session scope/permissions, or (3) the target (storage/credential). The checklist below helps you isolate which layer is failing, quickly and with minimal guesswork.

1) First, validate the Change Tracking baseline (so you're not "tracing a ghost")

- Does the history table exist, and is it expected to exist? Microsoft's Change Tracking guidance uses dbo.MSChange_tracking_history as the first place to look for cleanup status and errors. If the table doesn't exist, that can be a signal that auto-cleanup has never run (or hasn't run since Change Tracking was re-enabled).
- Check whether auto-cleanup is enabled and whether it has been running recently. The Change Tracking troubleshooting doc recommends checking sys.change_tracking_databases (to validate that auto-cleanup is enabled) and querying the history table for recent entries (to validate that cleanup is actually running); the baseline sketch above covers both checks.
- If dbo.MSChange_tracking_history is missing, confirm whether that is due to the Change Tracking lifecycle. A common "gotcha" is assuming the history table is created the moment Change Tracking is enabled. In practice, the table can be missing if Change Tracking was disabled and then re-enabled, or if only manual cleanup was used, because the history table is tied to auto-cleanup behavior.

Why this matters: if the history table never existed (or is missing for expected reasons), your "history table drop" symptom might actually be a different behavior than a true DDL DROP, and you will want to confirm the Change Tracking lifecycle before relying on drop auditing as the primary signal.

2) Confirm the Extended Events session is created in the right place (Azure SQL Database specifics)

- Make sure the session is database-scoped (an Azure SQL Database requirement). In Azure SQL Database, Extended Events sessions are always database-scoped, and the event_file target writes to Azure Storage blobs (not local disk). If you accidentally create or manage the session as if it were server-scoped, you will hit confusing errors or see "no data."
- Confirm you created the session in a user database (not master). Azure SQL Database doesn't support creating database-scoped sessions from master. If you try, you can run into failures that look like target/session problems but are actually scope issues.
- Verify permissions: least privilege that still works. If session creation, alteration, or startup fails silently (or you can't start the session), confirm the principal has the right database permission. A practical minimum that works for managing database-scoped sessions is ALTER ANY DATABASE EVENT SESSION (instead of granting broad CONTROL or db_owner); a grant sketch follows this list.
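If a lower-privileged account will manage the session, a minimal sketch of the least-privilege grant mentioned above looks like the following. The principal name xe_investigator is a placeholder for your own user or role, and the second grant is only needed if that principal also queries the XEvent dynamic management views used to verify the session.

-- Run as a member of db_owner (or another principal allowed to grant database permissions)
GRANT ALTER ANY DATABASE EVENT SESSION TO [xe_investigator];  -- xe_investigator is a placeholder principal

-- Optional: allow the same principal to query DMVs such as sys.dm_xe_database_sessions
GRANT VIEW DATABASE STATE TO [xe_investigator];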
3) If you're using event_file: storage + credential is the most common failure point

If the session starts but no .xel file ever shows up, focus here first, because event_file in Azure SQL Database depends on a correct Azure Storage setup.

Validate the "happy path" prerequisites (in order):

- A storage account exists and a container exists.
- The Database Engine has access to that container, either via RBAC (managed identity) or via a SAS token.
- A credential is created in the same database where the session is created.

Microsoft's event_file walkthrough is explicit about these steps, and missing any one of them can cause "target initialization" or "no output" symptoms.

Quick diagnostic: temporarily switch to an in-memory target to prove events are firing. The Azure SQL Extended Events guidance notes that you can use a ring_buffer target as a quick way to validate that the session is collecting events (without involving storage); a minimal sketch appears at the end of this post. If ring_buffer shows events but event_file does not, the issue is almost certainly storage/credential access.

Common "it looks correct but still fails" causes (practical):

- You created the credential in the wrong database (or the session is in a different database than the credential).
- The container permissions don't actually include write access for the engine identity/token.
- The session was created correctly but never started (especially with STARTUP_STATE = OFF).

4) If you do capture events, but can't attribute "who did it"

Once the session is logging events, attribution usually comes down to having the right actions attached. The docs emphasize that Extended Events are designed to collect "as much or as little as necessary," and in investigations you typically want the "who/what/where" context: app name, hostname, username, session_id, sql_text, and so on.

Practical tip: start with the curated set of actions in the script (client_app_name, client_hostname, username, session_id, and sql_text), and only add more fields if you still can't triangulate the origin. This keeps overhead and noise down while staying evidence driven.

5) If the table drop is periodic (hourly / scheduled), treat it like an automation clue

In real customer scenarios, the symptom can be "the history table disappears repeatedly," which makes it especially important to correlate drops with client_app_name, hostname, and username (and optionally transaction/session details) to determine whether this is a scheduled job, a maintenance task, or application behavior. This is exactly the type of pattern identification where XEvents tends to outperform ad hoc guessing.

Quick "decision tree" summary

- History table missing? Start with Change Tracking lifecycle + auto-cleanup checks.
- Session exists but captures nothing? Confirm database scope + permissions.
- Captures to ring_buffer but not to event_file? Storage/credential access problem.
- Captures events but no "who"? Add/verify the right actions (client_app_name, client_hostname, username, sql_text).

References

- Extended Events QuickStart and overview (applies to Azure SQL Database).
- Change Tracking cleanup troubleshooting and history table usage (dbo.MSChange_tracking_history).
- Event file target for Azure SQL Database stored in Azure Storage, including setup steps (RBAC/SAS + credential).
- Reading XEL output from blob storage using sys.fn_xe_file_target_read_file.
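Appendix: ring_buffer quick diagnostic and attribution query (sketch)

As referenced in the troubleshooting checklist above, here is a minimal sketch of the ring_buffer diagnostic and of shredding the "who/what/where" actions out of the captured events. It assumes the CTXEvent session from this post (stopped while the target is added or removed) and uses the database-scoped views sys.dm_xe_database_sessions and sys.dm_xe_database_session_targets; adapt names to your environment, and treat it as a starting point rather than a finished tool.

-- 1) Temporarily attach an in-memory ring_buffer target (no storage involved),
--    then start the session and reproduce/observe the drop window.
ALTER EVENT SESSION [CTXEvent] ON DATABASE
    ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [CTXEvent] ON DATABASE STATE = START;

-- 2) Shred the captured events and their actions for attribution.
SELECT
    evt.event_node.value('@name', 'nvarchar(128)')                                        AS event_name,
    evt.event_node.value('@timestamp', 'varchar(30)')                                     AS event_time_utc,  -- raw UTC timestamp string from the event XML
    evt.event_node.value('(action[@name="username"]/value)[1]', 'nvarchar(256)')          AS username,
    evt.event_node.value('(action[@name="client_app_name"]/value)[1]', 'nvarchar(256)')   AS client_app_name,
    evt.event_node.value('(action[@name="client_hostname"]/value)[1]', 'nvarchar(256)')   AS client_hostname,
    evt.event_node.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)')          AS sql_text
FROM sys.dm_xe_database_sessions AS s
JOIN sys.dm_xe_database_session_targets AS t
    ON t.event_session_address = s.address
CROSS APPLY (SELECT CAST(t.target_data AS xml) AS target_xml) AS x
CROSS APPLY x.target_xml.nodes('//RingBufferTarget/event') AS evt(event_node)
WHERE s.name = N'CTXEvent'
  AND t.target_name = N'ring_buffer';

-- 3) Stop the session and remove the temporary target once the event_file path is working again.
ALTER EVENT SESSION [CTXEvent] ON DATABASE STATE = STOP;
ALTER EVENT SESSION [CTXEvent] ON DATABASE
    DROP TARGET package0.ring_buffer;

If the query above returns the drop events but your event_file target stays empty, that points squarely at the storage/credential layer described in step 3 of the checklist.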