auditing
Primer: How to Use RBAC for Applications to Control App Use of the Mail.Send Permission
The temptation to use the Mail.Send application permission in scripts can lead PowerShell developers into trouble, because the permission allows access to all mailboxes, including sensitive executive and financial mailboxes. Fortunately, RBAC for Applications allows tenants to control the access that apps have to mailboxes and other Exchange content. All explained here, with an example script to test RBAC for Applications. https://office365itpros.com/2026/02/17/mail-send-rbac-for-applications/
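For context, this kind of scoping is done with the Exchange Online management cmdlets. A minimal, hedged sketch: the app ID, service principal ID, scope name, filter, and mailbox are all hypothetical placeholders, and the New-ServicePrincipal parameter names reflect current ExchangeOnlineManagement module versions.

# Connect to Exchange Online (requires the ExchangeOnlineManagement module)
Connect-ExchangeOnline

# Register the app's service principal in Exchange Online (IDs are placeholders)
New-ServicePrincipal -AppId "<entra-app-id>" -ServiceId "<service-principal-object-id>" -DisplayName "Mail Sender App"

# Create a management scope restricting access to one department's mailboxes
New-ManagementScope -Name "Sales Mailboxes" -RecipientRestrictionFilter "Department -eq 'Sales'"

# Assign the scoped Application Mail.Send role to the service principal
New-ManagementRoleAssignment -App "<service-principal-object-id>" -Role "Application Mail.Send" -CustomResourceScope "Sales Mailboxes"

# Verify what the app can now reach (should fail for out-of-scope mailboxes)
Test-ServicePrincipalAuthorization -Identity "<service-principal-object-id>" -Resource "ceo@contoso.com"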
Enable auditing to identify Change Tracking history table drops using Extended Events (Azure SQL Database)

When Change Tracking is enabled, the system-maintained history table dbo.MSchange_tracking_history is a key place to look for cleanup progress and errors. For example, Microsoft's troubleshooting guidance for Change Tracking auto-cleanup explicitly references querying the history table to validate whether cleanup is running and whether there are cleanup errors.

In real-world environments, teams sometimes observe that dbo.MSchange_tracking_history "disappears" unexpectedly (for example, during cleanup troubleshooting), which then blocks visibility into cleanup progress and error history. When that happens, the most practical way to answer "who or what dropped the history table?" is to audit the DDL using Extended Events (XEvents), a lightweight monitoring capability designed for troubleshooting. This post shows how to create a database-scoped XEvent session in Azure SQL Database that captures Change Tracking history table drops and related events, using the script below.

Why Extended Events for this scenario?

Extended Events are built to capture internal engine activity with relatively low overhead, and are supported across SQL Server and Azure SQL. In Azure SQL Database, XEvents are commonly used for investigation and troubleshooting when you need evidence of what happened and who initiated it. In this specific case, we want to capture:

- A history table drop event (Change Tracking-specific)
- A syscommittab truncate event (Change Tracking-specific)
- A generic object deletion event filtered to the history table name

And we want to enrich each event with context (app, host, user, SQL text, stack, timestamps, and so on).

The Extended Events session (copy/paste)

Below is the script that creates a database-scoped session named CTXEvent to audit the Change Tracking history table drop and related events:

CREATE EVENT SESSION [CTXEvent] ON DATABASE
ADD EVENT sqlserver.change_tracking_historytable_drop (
    ACTION(mdmtargetpkg.mdmget_TimeStampUTC, sqlserver.client_app_name, sqlserver.client_hostname,
           sqlserver.context_info, sqlserver.database_name, sqlserver.external_script_request_id,
           sqlserver.session_id, sqlserver.sql_text, sqlserver.transaction_id,
           sqlserver.tsql_stack, sqlserver.username)),
ADD EVENT sqlserver.change_tracking_syscommittab_truncate (
    ACTION(mdmtargetpkg.mdmget_TimeStampUTC, sqlserver.client_app_name, sqlserver.client_hostname,
           sqlserver.context_info, sqlserver.database_name, sqlserver.external_script_request_id,
           sqlserver.session_id, sqlserver.sql_text, sqlserver.transaction_id,
           sqlserver.tsql_stack, sqlserver.username)),
ADD EVENT sqlserver.object_deleted (
    ACTION(mdmtargetpkg.mdmget_TimeStampUTC, sqlserver.client_app_name, sqlserver.client_hostname,
           sqlserver.context_info, sqlserver.database_name, sqlserver.external_script_request_id,
           sqlserver.session_id, sqlserver.sql_text, sqlserver.transaction_id,
           sqlserver.tsql_stack, sqlserver.username)
    WHERE ([object_name] = N'MSchange_tracking_history'))
WITH (MAX_MEMORY = 16384 KB, EVENT_RETENTION_MODE = NO_EVENT_LOSS, MAX_DISPATCH_LATENCY = 10 SECONDS,
      MAX_EVENT_SIZE = 0 KB, MEMORY_PARTITION_MODE = NONE, TRACK_CAUSALITY = ON, STARTUP_STATE = OFF)
GO
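Because the session is created with STARTUP_STATE = OFF, it must be started explicitly. A minimal sketch for starting, checking, and stopping it; the session name matches the script above, and the views are the standard database-scoped Extended Events catalog view and DMV:

-- Start the session in the current database
ALTER EVENT SESSION [CTXEvent] ON DATABASE STATE = START;

-- Confirm the session definition exists
SELECT s.name AS defined_session
FROM sys.database_event_sessions AS s
WHERE s.name = N'CTXEvent';

-- Confirm the session is currently running
SELECT r.name AS running_session
FROM sys.dm_xe_database_sessions AS r
WHERE r.name = N'CTXEvent';

-- Stop the session when the investigation window ends
ALTER EVENT SESSION [CTXEvent] ON DATABASE STATE = STOP;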
Why these three events?

- sqlserver.change_tracking_historytable_drop: directly captures the Change Tracking "history table dropped" operation (purpose-built for this investigation).
- sqlserver.change_tracking_syscommittab_truncate: captures maintenance actions on syscommittab, which is central to Change Tracking's internal state.
- sqlserver.object_deleted with a predicate on MSchange_tracking_history: provides a generic "safety net" in case the drop manifests via generic DDL deletion paths.

Why these actions?

The actions attached to each event supply the practical "who/what/where" context you usually need to attribute the operation:

- sqlserver.client_app_name / sqlserver.client_hostname: which app and which machine initiated the DDL
- sqlserver.username: the identity executing the statement
- sqlserver.sql_text + sqlserver.tsql_stack: the statement and call stack (helpful if the drop occurs via a job, framework, or tooling)
- sqlserver.session_id / sqlserver.transaction_id: correlate activity across events
- TRACK_CAUSALITY = ON: enables correlation across related events within the session context

Starting and stopping the session

The session is created with STARTUP_STATE = OFF, meaning it does not start automatically. That is typically good practice for targeted investigations: enable it only during the window where you expect the issue, as in the sketch above. For Azure SQL Database, you can manage the session using SSMS or equivalent T-SQL workflows. Microsoft's Extended Events QuickStart walks through creating sessions and working with session data, including the general workflow concepts.

Tip (practical): turn it on shortly before your expected "drop window," reproduce or observe, then turn it off after you capture the event to minimize overhead.

Where does the data go?

The script defines the events, actions, and session options, but it does not define a target (for example, event_file). That is fine for short, in-the-moment investigations, but for durable evidence and easy sharing, many teams prefer writing to an event file target.

Option: Persist captured events to Azure Storage (recommended for multi-hour investigations)

Microsoft provides a dedicated walkthrough for Azure SQL Database to create an event session with an event_file target stored in Azure Storage, including:

- creating a storage account and container
- granting access (RBAC or SAS)
- creating a database credential
- creating and starting the session
- viewing the event data

If you choose this route, you can then read the event files using sys.fn_xe_file_target_read_file (supported in Azure SQL Database and commonly used to consume XEL files from blob storage); a reading sketch follows below.

How to interpret the results (what to look for)

When the table drop happens, the captured event payload should allow you to confidently answer:

- Who initiated the operation? Look at sqlserver.username plus client_app_name and client_hostname.
- What executed the drop? Look at sql_text and tsql_stack.
- Was this tied to cleanup activity or another workflow? If you see change_tracking_syscommittab_truncate around the same time (correlated via session, transaction, and causality identifiers), that can indicate the drop happened in proximity to Change Tracking maintenance behavior.
- Was the drop recurring or one-off? If you capture repeated events at a steady cadence, it points to an automated process (job, deployment pipeline, scheduled task, framework) rather than an ad-hoc human action.

Why this matters: the history table is part of Change Tracking troubleshooting

Microsoft's Change Tracking cleanup troubleshooting guidance explicitly uses dbo.MSchange_tracking_history as the place to validate whether cleanup is running and whether errors are being logged. So if the history table is being dropped, you effectively lose an important "black box recorder" that you rely on to understand cleanup behavior, which is exactly why auditing the drop is so valuable.
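If you adopt the event_file option above, here is a minimal reading sketch. The blob URL is a hypothetical placeholder; the function and its arguments follow the documented sys.fn_xe_file_target_read_file interface:

-- Read .xel files written to blob storage by the event_file target.
-- Replace the URL with your actual container path and file name pattern.
SELECT
    xe.object_name AS event_name,
    CAST(xe.event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file(
        'https://<storageaccount>.blob.core.windows.net/<container>/CTXEvent*.xel',
        NULL, NULL, NULL) AS xe;

-- Example of pulling an action out of the payload with XQuery:
-- CAST(event_data AS xml).value('(event/action[@name="username"]/value)[1]', 'nvarchar(128)')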
Operational tips (minimize noise, maximize signal)

These are practical recommendations commonly used in incident investigations (treat them as operational guidance you can adapt to your environment):

- Run the session for a limited time window (enough to catch the drop).
- Keep actions focused (the list above is already purpose-built for attribution).
- Prefer a file target if you need durable evidence (especially when coordinating across teams).
- Document what "good" looks like before you enable the session (for example, confirm whether the history table exists and whether it is being written to), using the troubleshooting query from the Microsoft docs.

Common troubleshooting steps (when the audit session "doesn't capture anything")

When you set up a targeted Extended Events session like CTXEvent, there are usually three places where things can go wrong: (1) Change Tracking state, (2) XEvent session scope/permissions, or (3) the target (storage/credential). The checklist below is written to help you isolate which layer is failing, quickly and with minimal guesswork.

1) First, validate the Change Tracking baseline (so you're not "tracing a ghost")

- Does the history table exist, and is it expected to exist? Microsoft's Change Tracking guidance uses dbo.MSchange_tracking_history as the first place to look for cleanup status and errors. If the table doesn't exist, that can be a signal that auto-cleanup has never run (or hasn't run since Change Tracking was re-enabled).
- Check whether auto-cleanup is enabled and whether it has been running recently. The Change Tracking troubleshooting doc recommends checking sys.change_tracking_databases (to validate auto-cleanup is enabled) and querying the history table for recent entries (to validate cleanup is actually running); a sketch of both checks follows at the end of this section.
- If dbo.MSchange_tracking_history is missing, confirm whether that is due to the Change Tracking lifecycle. A common "gotcha" is assuming the history table is created the moment Change Tracking is enabled. In practice, the table can be missing if Change Tracking was disabled and then re-enabled, or if only manual cleanup was used, because the history table is tied to auto-cleanup behavior.

Why this matters: if the history table never existed (or is missing for expected reasons), your "history table drop" symptom might actually be a different behavior than a true DDL DROP, and you'll want to confirm the Change Tracking lifecycle first before you rely on drop-auditing as the primary signal.

2) Confirm the Extended Events session is created in the right place (Azure SQL Database specifics)

- Make sure the session is database-scoped (an Azure SQL Database requirement). In Azure SQL Database, Extended Events sessions are always database-scoped, and the event_file target writes to Azure Storage blobs (not local disk). If you accidentally create or manage the session as if it were server-scoped, you'll hit confusing errors or see "no data."
- Confirm you created the session in a user database (not master). Azure SQL Database doesn't support creating database-scoped sessions from master. If you try, you can run into failures that look like target or session problems but are actually scope issues.
- Verify permissions: least privilege that still works. If session creation, alteration, or startup fails silently (or you can't start the session), confirm the principal has the right database permission. A practical minimum that works for managing database-scoped sessions is ALTER ANY DATABASE EVENT SESSION (instead of granting broad CONTROL or db_owner).
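Hedged sketches of the baseline checks from (1) and the least-privilege grant from (2). The view and table names are the ones the troubleshooting guidance references; the principal name is a hypothetical placeholder:

-- (1) Is auto-cleanup enabled for this database?
SELECT DB_NAME(database_id) AS database_name,
       is_auto_cleanup_on,
       retention_period,
       retention_period_units_desc
FROM sys.change_tracking_databases
WHERE database_id = DB_ID();

-- (1) Does the history table exist (NULL means missing), and has cleanup logged anything?
SELECT OBJECT_ID(N'dbo.MSchange_tracking_history') AS history_table_object_id;

SELECT TOP (20) *
FROM dbo.MSchange_tracking_history;

-- (2) Minimal permission for managing database-scoped XEvent sessions (placeholder principal)
GRANT ALTER ANY DATABASE EVENT SESSION TO [investigation_user];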
3) If you're using event_file: storage + credential is the most common failure point

If the session starts but no .xel file ever shows up, focus here first, because event_file in Azure SQL Database depends on a correct storage setup.

Validate the "happy path" prerequisites (in order):

- The storage account exists and a container exists.
- The Database Engine has access to that container, either via RBAC (managed identity) or via a SAS token.
- A credential is created in the same database where the session is created.

Microsoft's event_file walkthrough is explicit about these steps, and missing any one of them can cause "target initialization" or "no output" symptoms.

Quick diagnostic: temporarily switch to an in-memory target to prove events are firing. The Azure SQL Extended Events guidance notes you can use a ring_buffer target as a quick way to validate that the session is collecting events (without involving storage). If ring_buffer shows events but event_file does not, the issue is almost certainly storage or credential access; a sketch follows after the decision tree below.

Common "it looks correct but still fails" causes (practical):

- You created the credential in the wrong database (or the session is in a different database than the credential).
- The container permissions don't actually include write access for the engine identity or token.
- The session was created correctly but never started (especially if STARTUP_STATE = OFF).

4) If you do capture events, but can't attribute "who did it"

Once the session is logging events, attribution usually comes down to having the right actions attached. The docs emphasize that Extended Events are designed to collect "as much or as little as necessary," and in investigations you typically want the "who/what/where" context: app name, hostname, username, session_id, sql_text, and so on.

Practical tip: start with the curated set of actions (client_app_name, client_hostname, username, session_id, and sql_text), and only add more fields if you still can't triangulate the origin. This keeps overhead and noise down while staying evidence-driven.

5) If the table drop is periodic (hourly / scheduled), treat it like an automation clue

In real customer scenarios, the symptom can be "history table disappears repeatedly," which makes it especially important to correlate drops with client_app_name, hostname, and username (and optionally transaction and session details) to determine whether this is a scheduled job, maintenance task, or application behavior. This is exactly the type of pattern identification where XEvents tends to outperform ad-hoc guessing.

Quick "decision tree" summary

- History table missing? Start with Change Tracking lifecycle and auto-cleanup checks.
- Session exists but captures nothing? Confirm database scope and permissions.
- Captures to ring_buffer but not to event_file? Storage or credential access problem.
- Captures events but no "who"? Add or verify the right actions (client_app_name, hostname, username, sql_text).
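For the ring_buffer branch above, a hedged diagnostic sketch: it swaps the session onto an in-memory target and reads the raw XML back through the database-scoped target DMV.

-- Add an in-memory target to the (stopped) session; remove it again once diagnosis is done
ALTER EVENT SESSION [CTXEvent] ON DATABASE
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [CTXEvent] ON DATABASE STATE = START;

-- After reproducing the issue, inspect whatever the ring buffer captured
SELECT CAST(t.target_data AS xml) AS captured_events
FROM sys.dm_xe_database_session_targets AS t
JOIN sys.dm_xe_database_sessions AS s
    ON t.event_session_address = s.address
WHERE s.name = N'CTXEvent'
  AND t.target_name = N'ring_buffer';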
References

- Extended Events QuickStart and overview (applies to Azure SQL Database).
- Change Tracking cleanup troubleshooting and history table usage (dbo.MSchange_tracking_history).
- Event file target for Azure SQL Database stored in Azure Storage, with steps (RBAC/SAS + credential).
- Reading XEL output from blob storage using sys.fn_xe_file_target_read_file.

Audit Log, what is TokenIssuedAtTime?

I used the audit log to search for a user deleting MS Teams files. Searching on the Recycled File and Recycled Folder activities, I got the log records. Why are the TokenIssuedAtTime and the CreationTime so different? Below is one of the log records:

{
  "AppAccessContext": {
    "AADSessionId": "8f382a1d-b233-425c-92f4-3cf9ed395c9e",
    "CorrelationId": "ae68fba0-40db-2000-ce07-a7bde7727c3f",
    "TokenIssuedAtTime": "2023-12-23T00:47:57",
    "UniqueTokenId": "U4m5SFCmckOiN_QLrysqAQ"
  },
  "CreationTime": "2023-12-26T04:24:52",
  "Id": "7a3dc23c-2699-485b-0a87-08dc05ca9b40",
  "Operation": "FolderRecycled",
  "OrganizationId": "7cf9c29c-c6af-4790-b98b-4eff7637f9be",
  "RecordType": 6,
  "UserKey": "i:0h.f|membership|email address removed for privacy reasons",
  "UserType": 0,
  "Version": 1,
  "Workload": "SharePoint",
  "ClientIP": "2001:d08:e2:58d:61cb:e4bc:c451:aef9",
  "UserId": "email address removed for privacy reasons",
  "AuthenticationType": "FormsCookieAuth",
  "BrowserName": "",
  "BrowserVersion": "",
  "CorrelationId": "ae68fba0-40db-2000-ce07-a7bde7727c3f",
  "EventSource": "SharePoint",
  "IsManagedDevice": false,
  "ItemType": "Folder",
  "ListId": "33880cd7-1db1-450f-9cd0-5c437c0ccaee",
  "ListItemUniqueId": "184cd92b-40cf-4fa1-82aa-ad5fa61a2a05",
  "Platform": "WinDesktop",
  "Site": "f1bb631d-8ff4-4411-b49f-066e20be905c",
  "UserAgent": "Microsoft SkyDriveSync 23.246.1127.0002 ship; Windows NT 10.0 (19045)",
  "WebId": "aa607282-8b47-47d1-938b-c0cde8e2d87d",
  "DeviceDisplayName": "2a01:111:2055:202:4701:ee31:fe3f:156",
  "CrossScopeSyncDelete": false,
  "HighPriorityMediaProcessing": false,
  "SharingType": "",
  "SourceFileExtension": "",
  "SiteUrl": "https://mysharepoint.sharepoint.com/sites/mysite/",
  "SourceRelativeUrl": "Shared Documents/test/MyFolder",
  "SourceFileName": "Quotation",
  "ObjectId": "https://mysharepoint.sharepoint.com/sites/mysite/Shared Documents/test/MyFolder/Test1"
}
Language defaults audit for everything M365

We are struggling to find where and how the wrong language is being used in various parts of the M365 platform. We have Swedish set as the default, but English still appears in a number of places, which we often only discover when a user runs into it. For example, in Viva Engage the language is set to Swedish, and for SharePoint as well. But:

- When a new user logs on, Viva Engage is in English.
- While the SharePoint web part is in Swedish, the link text has for some time ended with "- Home" (English) instead of " - Startsida" (Swedish), as it did when we started 2+ years ago.
- When creating a Viva Engage group Event (Teams meeting), the default language is also English.

Tracking down what and where is making the wrong language appear is hard. I would be very grateful to be pointed to a resource that gives as complete an overview as possible of everything in M365 we need to look over, to make sure the correct language is the default everywhere it should be.
Creating alerts for custom errors with auditing and Log Analytics

Errors are an inherent part of any application. As a database professional managing an Azure SQL Managed Instance, you may be interested in understanding when specific errors occur and how to leverage user-generated errors to respond swiftly when particular scenarios arise. In this post we will see how to set up alerts for scenarios like query blocking and long-running open transactions. This extends what is described in How to setup alerts for deadlocks using Log Analytics | Microsoft Community Hub.

Step 1 - Set up Log Analytics and auditing

Follow steps 1, 2, 3 and 4 described in How to setup alerts for deadlocks using Log Analytics | Microsoft Community Hub.

Step 2 - Create a table to save the details of the blocking chain / long open transaction

Getting an alert by itself is not useful if you don't have a way of capturing details for later analysis. Create the tables in a database of your choice.

If you are interested in blocking:

CREATE TABLE [dbo].[blocking_report] (
    [DateTime] [datetime] NULL,
    [HeadBlocker] [varchar](1) NOT NULL,
    [SessionID] [smallint] NOT NULL,
    [Login] [nvarchar](128) NOT NULL,
    [Database] [nvarchar](128) NULL,
    [BlockedBy] [smallint] NULL,
    [OpenTransactions] [int] NULL,
    [Status] [nvarchar](30) NOT NULL,
    [WaitType] [nvarchar](60) NULL,
    [WaitTime_ms] [bigint] NULL,
    [WaitResource] [nvarchar](256) NULL,
    [WaitResourceDesc] [nvarchar](3072) NULL,
    [Command] [nvarchar](32) NULL,
    [Application] [nvarchar](128) NULL,
    [TotalCPU_ms] [int] NOT NULL,
    [TotalPhysicalIO_MB] [bigint] NULL,
    [MemoryUse_KB] [int] NULL,
    [LoginTime] [datetime] NOT NULL,
    [LastRequestStartTime] [datetime] NOT NULL,
    [HostName] [nvarchar](128) NULL,
    [QueryHash] [binary](8) NULL,
    [BlockerQuery_or_MostRecentQuery] [nvarchar](max) NULL
)

If you are interested in open transactions:

CREATE TABLE [dbo].[opentransactions] (
    [CapturedTime] [datetime] NOT NULL,
    [tran_elapsed_time_seconds] [int] NULL,
    [transaction_begin_time] [datetime] NOT NULL,
    [session_id] [int] NOT NULL,
    [database_name] [nvarchar](128) NULL,
    [open_transaction_count] [int] NOT NULL,
    [host_name] [nvarchar](128) NULL,
    [program_name] [nvarchar](128) NULL,
    [login_name] [nvarchar](128) NULL,
    [status] [nvarchar](30) NULL,
    [text] [nvarchar](max) NULL
)

Step 3 - Create a SQL Agent job

Schedule two SQL Agent jobs using the queries below; you will probably want a schedule that runs every X seconds or minutes. Describing very briefly what each query does:

- if a query is being blocked for more than 60 seconds, save the blocking chain to a table and raise an error
- if a transaction has been open for more than 120 seconds, save the query details to a table and raise an error

Make the necessary adjustments according to your goals.
Query for blocking:

IF (
    SELECT count(*)
    FROM sys.dm_exec_requests
    WHERE wait_type LIKE 'LCK%'
        AND wait_time > 60000
    ) > 0 -- checks for queries waiting to obtain a lock for more than 60 seconds
BEGIN
    INSERT INTO database_name.dbo.blocking_report -- make sure that you change the database_name value
    SELECT (SELECT getdate())
        ,[HeadBlocker] = CASE 
            WHEN r2.session_id IS NOT NULL
                AND (r.blocking_session_id = 0 OR r.session_id IS NULL) THEN '1'
            ELSE ''
            END
        ,[SessionID] = s.session_id
        ,[Login] = s.login_name
        ,[Database] = db_name(p.dbid)
        ,[BlockedBy] = w.blocking_session_id
        ,[OpenTransactions] = r.open_transaction_count
        ,[Status] = s.STATUS
        ,[WaitType] = w.wait_type
        ,[WaitTime_ms] = w.wait_duration_ms
        ,[WaitResource] = r.wait_resource
        ,[WaitResourceDesc] = w.resource_description
        ,[Command] = r.command
        ,[Application] = s.program_name
        ,[TotalCPU_ms] = s.cpu_time
        ,[TotalPhysicalIO_MB] = (s.reads + s.writes) * 8 / 1024
        ,[MemoryUse_KB] = s.memory_usage * 8192 / 1024
        ,[LoginTime] = s.login_time
        ,[LastRequestStartTime] = s.last_request_start_time
        ,[HostName] = s.host_name
        ,[QueryHash] = r.query_hash
        ,[BlockerQuery_or_MostRecentQuery] = txt.TEXT
    FROM sys.dm_exec_sessions s
    LEFT OUTER JOIN sys.dm_exec_connections c ON (s.session_id = c.session_id)
    LEFT OUTER JOIN sys.dm_exec_requests r ON (s.session_id = r.session_id)
    LEFT OUTER JOIN sys.dm_os_tasks t ON (r.session_id = t.session_id AND r.request_id = t.request_id)
    LEFT OUTER JOIN (
        SELECT *
            ,ROW_NUMBER() OVER (PARTITION BY waiting_task_address ORDER BY wait_duration_ms DESC) AS row_num
        FROM sys.dm_os_waiting_tasks
        ) w ON (t.task_address = w.waiting_task_address) AND w.row_num = 1
    LEFT OUTER JOIN sys.dm_exec_requests r2 ON (s.session_id = r2.blocking_session_id)
    LEFT OUTER JOIN sys.sysprocesses p ON (s.session_id = p.spid)
    OUTER APPLY sys.dm_exec_sql_text(ISNULL(r.[sql_handle], c.most_recent_sql_handle)) AS txt
    WHERE s.is_user_process = 1
        AND (r2.session_id IS NOT NULL AND (r.blocking_session_id = 0 OR r.session_id IS NULL))
        OR blocked > 0
    ORDER BY [HeadBlocker] DESC, s.session_id;

    THROW 50000, 'There are queries being blocked for more than 60 seconds', 1;
END
Query for open transactions:

DECLARE @count INT = (
        SELECT count(*)
        FROM sys.dm_tran_active_transactions at
        INNER JOIN sys.dm_tran_session_transactions st ON st.transaction_id = at.transaction_id
        LEFT OUTER JOIN sys.dm_exec_sessions sess ON st.session_id = sess.session_id
        LEFT OUTER JOIN sys.dm_exec_connections conn ON conn.session_id = sess.session_id
        OUTER APPLY sys.dm_exec_sql_text(conn.most_recent_sql_handle) AS txt
        WHERE DATEDIFF(SECOND, transaction_begin_time, GETDATE()) > 120 -- 120 seconds
        )

IF @count > 0
BEGIN
    INSERT INTO database_name.dbo.opentransactions -- change database_name to where the table was created
    SELECT GETDATE() AS CapturedTime
        ,DATEDIFF(SECOND, transaction_begin_time, GETDATE()) AS tran_elapsed_time_seconds
        ,at.transaction_begin_time
        ,st.session_id
        ,DB_NAME(sess.database_id) AS database_name
        ,st.open_transaction_count
        ,sess.host_name
        ,sess.program_name
        ,sess.login_name
        ,sess.STATUS
        ,txt.TEXT
    FROM sys.dm_tran_active_transactions at
    INNER JOIN sys.dm_tran_session_transactions st ON st.transaction_id = at.transaction_id
    LEFT OUTER JOIN sys.dm_exec_sessions sess ON st.session_id = sess.session_id
    LEFT OUTER JOIN sys.dm_exec_connections conn ON conn.session_id = sess.session_id
    OUTER APPLY sys.dm_exec_sql_text(conn.most_recent_sql_handle) AS txt
    WHERE DATEDIFF(SECOND, transaction_begin_time, GETDATE()) > 120 -- 120 seconds
    ORDER BY tran_elapsed_time_seconds DESC;

    THROW 50000, 'There are open transactions for more than 120 seconds', 1;
END

Step 4 - Create the alert based on a Log Analytics query

Just as described in steps 6 and 7 of How to setup alerts for deadlocks using Log Analytics | Microsoft Community Hub, you can create the two separate alerts using the queries below.

For blocking:

AzureDiagnostics
| where TimeGenerated > ago(15m) //last 15 minutes
| where LogicalServerName_s == "server_name" //server name
| where Category == "SQLSecurityAuditEvents"
| where additional_information_s contains "There are queries being blocked for more"
| project TimeGenerated, LogicalServerName_s, database_name_s, additional_information_s

For open transactions:

AzureDiagnostics
| where TimeGenerated > ago(15m) //last 15 minutes
| where LogicalServerName_s == "server_name" //server name
| where Category == "SQLSecurityAuditEvents"
| where additional_information_s contains "There are open transactions for more than"
| project TimeGenerated, LogicalServerName_s, database_name_s, additional_information_s
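Going back to Step 3: if you prefer to script the job rather than use the SSMS UI, here is a hedged sketch using the msdb stored procedures available on Azure SQL Managed Instance. The job name, one-minute cadence, and the assumption that the blocking check is wrapped in a stored procedure (dbo.usp_check_blocking) are all placeholders:

USE msdb;
GO

EXEC dbo.sp_add_job @job_name = N'Check blocking';

-- Run the Step 3 blocking check; the procedure name is a hypothetical wrapper for the query above
EXEC dbo.sp_add_jobstep
    @job_name = N'Check blocking',
    @step_name = N'Run blocking check',
    @subsystem = N'TSQL',
    @command = N'EXEC database_name.dbo.usp_check_blocking;';

-- Every minute, all day
EXEC dbo.sp_add_jobschedule
    @job_name = N'Check blocking',
    @name = N'Every minute',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,     -- minute units
    @freq_subday_interval = 1;

EXEC dbo.sp_add_jobserver @job_name = N'Check blocking';
GO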
Configure Auditing for Azure SQL Database on a Specific Table

The Azure SQL Auditing feature tracks database events and writes them to an audit log in your Azure storage account, Log Analytics workspace, or Event Hubs. If you are interested in how to configure auditing on Azure SQL Database at either the server level or the database level, please visit Configure Auditing for Azure SQL Database series - Part 1 and Configure Auditing for Azure SQL Database series - Part 2.

Enabling auditing for Azure SQL Database on a specific table is not currently available in the Azure portal, and there is no estimated date for when it will be. In the meantime, the following PowerShell script enables auditing on a specific table. The example targets a table named MyTable in the dbo schema and audits SELECT and INSERT actions.

# Define variables
$resourceGroup = "YourResourceGroup"
$serverName = "your-sql-server-name"
$databaseName = "your-database-name"
$storageAccountResourceId = "/subscriptions/<subscription-id>/resourceGroups/YourResourceGroup/providers/Microsoft.Storage/storageAccounts/yourstorageaccount"
$tableName = "MyTable"
$schemaName = "dbo"

# Login to Azure
Connect-AzAccount

# Enable auditing at the database level, scoped to the table's actions
# (Set-AzSqlDatabaseAudit is the current Az.Sql cmdlet; older scripts used Set-AzSqlDatabaseAuditing)
Set-AzSqlDatabaseAudit `
    -ResourceGroupName $resourceGroup `
    -ServerName $serverName `
    -DatabaseName $databaseName `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId $storageAccountResourceId `
    -AuditActionGroup "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP", "FAILED_DATABASE_AUTHENTICATION_GROUP" `
    -AuditAction "SELECT ON $schemaName.$tableName BY public", "INSERT ON $schemaName.$tableName BY public"

REST API - Manage Auditing Using APIs - Azure SQL Database & Azure Synapse Analytics | Microsoft Learn

To enable auditing on a specific table in Azure SQL Database using the REST API, use the Create or Update Database Extended Auditing Policy endpoint. This allows you to define fine-grained auditing rules, including actions on specific tables.

URL:

PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/extendedAuditingSettings/default?api-version=2021-11-01-preview

Request body:

{
  "properties": {
    "state": "Enabled",
    "storageEndpoint": "https://.blob.core.windows.net/",
    "storageAccountAccessKey": "",
    "retentionDays": 90,
    "auditActionsAndGroups": [
      "SELECT ON dbo.MyTable BY public",
      "INSERT ON dbo.MyTable BY public"
    ],
    "isStorageSecondaryKeyInUse": false
  }
}

Parameters:

- auditActionsAndGroups: where you specify the exact actions and the table. You can include SELECT, INSERT, UPDATE, DELETE, and so on.
- storageEndpoint: the Azure Blob Storage endpoint where audit logs will be stored.
- retentionDays: the number of days to retain logs.
- state: must be "Enabled" to activate auditing.
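To call that endpoint without hand-building authentication, here is a hedged sketch using Invoke-AzRestMethod from the Az module; the subscription, resource group, server, database, and storage names are placeholders:

# Assumes Connect-AzAccount has already been run
$path = "/subscriptions/<subscription-id>/resourceGroups/YourResourceGroup" +
        "/providers/Microsoft.Sql/servers/your-sql-server-name/databases/your-database-name" +
        "/extendedAuditingSettings/default?api-version=2021-11-01-preview"

# Build the request body shown above
$body = @{
    properties = @{
        state                      = "Enabled"
        storageEndpoint            = "https://yourstorageaccount.blob.core.windows.net/"
        storageAccountAccessKey    = "<storage-account-key>"
        retentionDays              = 90
        auditActionsAndGroups      = @(
            "SELECT ON dbo.MyTable BY public",
            "INSERT ON dbo.MyTable BY public"
        )
        isStorageSecondaryKeyInUse = $false
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body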
AZ CLI - https://learn.microsoft.com/en-us/cli/azure/sql/db/audit-policy?view=azure-cli-latest#az-sql-db-audit-policy-update

az sql db audit-policy update -g ResourceGroupName -s ServerName -n DatabaseName --state Enabled --bsts Enabled --storage-key "" --storage-endpoint https://StorageAccount.blob.core.windows.net/ --actions FAILED_DATABASE_AUTHENTICATION_GROUP 'UPDATE ON dbo.MyTable BY public'

Sample output:

{
  "auditActionsAndGroups": [
    "FAILED_DATABASE_AUTHENTICATION_GROUP",
    "UPDATE ON dbo.MyTable BY public"
  ],
  "id": "/subscriptions/xxxxx-xxxxx-xxxxxx-xxxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.Sql/servers/ServerName/databases/DatabaseName/auditingSettings/Default",
  "isAzureMonitorTargetEnabled": true,
  "isManagedIdentityInUse": false,
  "isStorageSecondaryKeyInUse": false,
  "kind": null,
  "name": "Default",
  "queueDelayMs": null,
  "resourceGroup": "ResourceGroupName",
  "retentionDays": 10,
  "state": "Enabled",
  "storageAccountAccessKey": null,
  "storageAccountSubscriptionId": "xxxx-xxxxx-xxxx-xxxxx-xxxxxxx",
  "storageEndpoint": "https://StorageAccount.blob.core.windows.net/",
  "type": "Microsoft.Sql/servers/databases/auditingSettings"
}
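Whichever method you use, you can read the resulting audit records back from blob storage with sys.fn_get_audit_file. A hedged sketch; the container path is a placeholder (sqldbauditlogs is the default container name):

-- Read audit records written by Azure SQL Auditing to blob storage
SELECT event_time, action_id, statement, database_principal_name, client_ip
FROM sys.fn_get_audit_file(
        'https://yourstorageaccount.blob.core.windows.net/sqldbauditlogs/your-sql-server-name/your-database-name/',
        DEFAULT, DEFAULT)
WHERE statement LIKE '%MyTable%';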
For more information:

- Configure Auditing for Azure SQL Database series - Part 1
- Configure Auditing for Azure SQL Database series - Part 2
- Set-AzSqlDatabaseAudit
- Manage Auditing Using APIs - Azure SQL Database & Azure Synapse Analytics | Microsoft Learn
- az sql db audit-policy update

Microsoft Fixes Copilot Audit Records

After a report to the MSRC about missing file data in Copilot audit records, Microsoft fixed the problem, and audit records now contain details about the SharePoint Online files Copilot reviewed to construct answers to user prompts. Having solid audit and compliance data is a good thing, unless you're a lawyer charged with defending an eDiscovery action who might be asked to produce the files. https://office365itpros.com/2025/08/22/copilot-audit-records-fixed/
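To pull these records yourself, a hedged sketch using Exchange Online PowerShell. Treat the CopilotInteraction record type name and the 7-day window as assumptions to verify against your tenant's audit schema:

# Requires the ExchangeOnlineManagement module and audit permissions
Connect-ExchangeOnline

# Search the unified audit log for Copilot interactions (record type name is an assumption)
$records = Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -RecordType CopilotInteraction -ResultSize 1000

# Each record's AuditData JSON payload carries the event details, including referenced files
$records | Select-Object -First 1 -ExpandProperty AuditData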
Purview and auditing file modifications

I have a full M365 E5 license and use Purview auditing a lot for investigations. I noticed it reports "file modified," which is great, but some of my files get modified constantly. I'm curious whether it can log and report on what exactly was modified. For example:

- If text was added or deleted, can it tell me what was added or deleted, i.e., the actual text and the action (add/delete)?
- If an image was pasted into a Word document, can it tell me that? If possible, down to a copy of the image that was inserted?

If it can't provide this level of detail, does anyone have suggestions for a product that can?
Auditing Resource Bookings

I am once again dealing with overlapping bookings on rooms set to not allow overlapping bookings. I've verified the settings in PowerShell. I've limited who can make these bookings to 2 users. I've trained these users. They are using Outlook on the web, just in case the desktop app doesn't update. STILL, I am getting bookings on top of each other. One person books something, then another person books something on top of it. I need to audit this. I have enabled resource auditing. I have tried:

1) Purview: auditing the booking users and everything they do related to Exchange. This returns hundreds of entries for the time periods specified, and I genuinely do not have time to parse all of it.
2) Using Purview to search only for entries on the room resource. These apparently have no data, ever.
3) Using Purview to search for activities related to booking. Too bad the activity dropdown does not work, and 80% of the entries are repeats. Maybe I can find operation names in the documentation? (No.)
4) Maybe Copilot can help me find operation names? (No, it just hallucinates.)
5) Abandoning Purview, I open Exchange and head to the Collect Logs entries. I run both Resource Bookings and Calendar logs, because that is, essentially, exactly what I want. Unfortunately, neither of these is even remotely useful to a human, except to confirm this shouldn't be possible. I spend an hour parsing through the endless, nigh-identically named fields to find data relevant to me. What I can find isn't accurate, and isn't useful.
6) I open Outlook and try to look at the actual mail interactions on that shared mailbox. Unfortunately, there is nothing for these days: no sent, inbox, or deleted items referencing the bookings at all.
7) I open PowerShell and try the commands Microsoft gives. These are deprecated.

This should really not be this hard. In fact, it should be REALLY EASY to see when bookings came in and out. I am really close to looking for a 3rd-party solution just so I don't have to waste any more time on this.
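For anyone comparing notes: a hedged sketch of the conflict settings worth re-verifying on the room mailbox with Exchange Online PowerShell (the room name is a placeholder):

Connect-ExchangeOnline

# Confirm the room actually rejects conflicts; AllowConflicts should be False
Get-CalendarProcessing -Identity "ConferenceRoom1" | Format-List `
    AutomateProcessing, AllowConflicts, MaximumConflictInstances, `
    ConflictPercentageAllowed, AllowRecurringMeetings, BookInPolicy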