Azure SQL Database

Understanding action_id discrepancies in Azure SQL Database Audit Logs (BCM vs AL / CR / DR)
Overview

While working with Azure SQL Database auditing in enterprise environments, you may encounter an inconsistency in how the action_id field is captured across different PaaS SQL servers. In one such scenario, a customer observed the following action_id values for similar DDL statements:

Server A: AL, CR, DR
Server B: BCM only

This inconsistency impacted downstream compliance pipelines, because the audit data was expected to be captured and interpreted uniformly across all servers. This article explains:

- How Azure SQL Auditing works by default
- What causes BCM to appear instead of AL / CR / DR
- How to standardize audit logs across PaaS servers

How Azure SQL Database Auditing Works

Azure SQL Database auditing uses a managed, fixed audit policy at the service level. When auditing is enabled at the server level, the default auditing policy includes the following action groups:

- BATCH_COMPLETED_GROUP
- SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP
- FAILED_DATABASE_AUTHENTICATION_GROUP

These groups audit all query execution activity, successful authentication attempts, and failed authentication attempts. As a result, SQL batches — including DDL statements like CREATE, ALTER, or DROP on database objects — are captured under BATCH_COMPLETED_GROUP and appear with action_id = BCM.

Why AL, CR, and DR Are Not Captured by Default

Audit action IDs such as AL, CR, and DR are considered security / DDL-level audit events. These events are not included in the default Azure SQL auditing policy. Instead, they are generated only when the corresponding security-related AuditActionGroups are explicitly enabled. For example:

- DATABASE_OBJECT_CHANGE_GROUP: CREATE / ALTER / DROP on database objects
- DATABASE_PRINCIPAL_CHANGE_GROUP: user / role changes
- DATABASE_ROLE_MEMBER_CHANGE_GROUP: role membership updates
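The behavior described above can be summarized as a small decision rule: with only the default policy, every batch (including DDL) surfaces as BCM, while enabling DATABASE_OBJECT_CHANGE_GROUP yields statement-specific action IDs. The helper below is a hypothetical illustration of that mapping, not part of any Azure SDK:

```python
# Hypothetical helper illustrating which action_id to expect for a batch,
# based on whether DATABASE_OBJECT_CHANGE_GROUP is enabled on the server.
DDL_ACTION_IDS = {"CREATE": "CR", "ALTER": "AL", "DROP": "DR"}

def expected_action_id(statement_verb: str, object_change_group_enabled: bool) -> str:
    """Return the audit action_id expected for a SQL batch.

    With only the default policy (BATCH_COMPLETED_GROUP), every batch is
    recorded as BCM. With DATABASE_OBJECT_CHANGE_GROUP enabled, CREATE,
    ALTER, and DROP on database objects get their own action IDs.
    """
    verb = statement_verb.upper()
    if object_change_group_enabled and verb in DDL_ACTION_IDS:
        return DDL_ACTION_IDS[verb]
    return "BCM"  # default policy: captured as a completed batch

print(expected_action_id("CREATE", object_change_group_enabled=False))  # BCM
print(expected_action_id("CREATE", object_change_group_enabled=True))   # CR
```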
Observed Behavior in a Newly Created Test Server

Running the following PowerShell command on a newly provisioned logical server showed only the default audit action groups enabled:

(Get-AzSqlServerAudit -ResourceGroupName "RGName" -ServerName "ServerName").AuditActionGroup

Therefore, DDL statements were audited but recorded as action_id = BCM.

Enabling AL / CR / DR Action IDs

To capture DDL operations under their respective audit action IDs, configure the required security audit action groups at the SQL server level. In this customer scenario, we executed the following command:

Set-AzSqlServerAudit -ResourceGroupName "RGName" -ServerName "ServerName" -AuditActionGroup "DATABASE_PRINCIPAL_CHANGE_GROUP", "DATABASE_ROLE_MEMBER_CHANGE_GROUP", "DATABASE_OBJECT_CHANGE_GROUP"

After applying this configuration, DDL operations were captured in the audit logs as action_id = CR, AL, and DR instead of BCM.

Ensuring Consistent Compliance Across PaaS Servers

To standardize audit logging behavior across environments:

Step 1: Compare AuditActionGroups. Run the following command on all servers:

(Get-AzSqlServerAudit -ResourceGroupName "<RG>" -ServerName "<ServerName>").AuditActionGroup

Step 2: Align AuditActionGroups. Configure all servers with the same AuditActionGroup values. In this case, the values used were:

Set-AzSqlServerAudit -ResourceGroupName "<RG>" -ServerName "<ServerName>" -AuditActionGroup `
    "DATABASE_PRINCIPAL_CHANGE_GROUP", "DATABASE_ROLE_MEMBER_CHANGE_GROUP", "DATABASE_OBJECT_CHANGE_GROUP"

Step 3: Validate. Once aligned, similar SQL statements across all PaaS servers should generate consistent action_id values in audit logs.

Ensure the appropriate groups are enabled based on your organization’s compliance needs; the accepted AuditActionGroup values are listed below.
Accepted values: BATCH_STARTED_GROUP, BATCH_COMPLETED_GROUP, APPLICATION_ROLE_CHANGE_PASSWORD_GROUP, BACKUP_RESTORE_GROUP, DATABASE_LOGOUT_GROUP, DATABASE_OBJECT_CHANGE_GROUP, DATABASE_OBJECT_OWNERSHIP_CHANGE_GROUP, DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, DATABASE_OPERATION_GROUP, DATABASE_PERMISSION_CHANGE_GROUP, DATABASE_PRINCIPAL_CHANGE_GROUP, DATABASE_PRINCIPAL_IMPERSONATION_GROUP, DATABASE_ROLE_MEMBER_CHANGE_GROUP, FAILED_DATABASE_AUTHENTICATION_GROUP, SCHEMA_OBJECT_ACCESS_GROUP, SCHEMA_OBJECT_CHANGE_GROUP, SCHEMA_OBJECT_OWNERSHIP_CHANGE_GROUP, SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP, SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP, USER_CHANGE_PASSWORD_GROUP, LEDGER_OPERATION_GROUP, DBCC_GROUP, DATABASE_OWNERSHIP_CHANGE_GROUP, DATABASE_CHANGE_GROUP

Links:
Get-AzSqlServerAudit (Az.Sql) | Microsoft Learn
Set-AzSqlServerAudit (Az.Sql) | Microsoft Learn

Fixing “There is not enough space on the disk” during Azure Data Sync initial sync (On‑prem ➜ Azure)
When you run an initial (first-time) sync from an on‑premises SQL Server database to Azure SQL Database using SQL Data Sync, the local agent may fail with a disk-space error—even when the disk “looks” like it has free space. The reason is that the initial sync can generate large temporary files in the Windows TEMP location used by the Data Sync Agent. This post explains the symptom, what’s happening under the hood, and the most practical mitigation: move the Data Sync Agent’s TEMP/TMP to a drive with sufficient space and restart the service.

Symptom

During an initial sync (commonly on-premises ➜ Azure), the sync fails while applying a batch file. You may see an error similar to:

Sync failed with the exception: “An unexpected error occurred when applying batch file … .batch. See the inner exception for more details. Inner exception: There is not enough space on the disk …”

Microsoft Learn also calls out “disk insufficient space” scenarios for SQL Data Sync and points to the %TEMP% directory as the key location to check.

What’s actually happening (Root Cause)

1) Initial sync uses temp files on the agent machine. During initialization, the local agent can load data and store it as temp files in the system temp folder. This is explicitly called out in the Azure SQL Data Sync scalability guidance.

2) The agent can generate more than just the batch files. In practice, you’ll often see:

- Batch files (e.g., sync_*.batch)
- Extra temp files under folders like MAT_ / MATS_ that are used for internal processing (commonly described as “sorting” or intermediate work)

Internal field experience shared in the Data Sync support channel highlights that the MAT/MATS files can be much larger than the batch files—sometimes 8–10× larger than the data being synced for that table (especially during initialization).
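Given the 8–10× multiplier noted above, a rough pre-flight check of the TEMP drive can save a failed initialization. The sketch below uses Python's standard library to compare free space against an estimated temp footprint; the multiplier and paths are assumptions for illustration, not guarantees:

```python
import shutil

def temp_space_ok(temp_path: str, table_bytes: int, multiplier: int = 10) -> bool:
    """Rough pre-flight check: does the drive holding temp_path have room
    for the intermediate files the initial sync may generate?

    table_bytes is the size of the data being initialized; the default
    10x multiplier reflects field reports that MAT/MATS files can be
    8-10x larger than the data being synced for a table.
    """
    free = shutil.disk_usage(temp_path).free
    return free >= table_bytes * multiplier

# Example: 5 GB of table data would need roughly 50 GB free on the TEMP drive.
print(temp_space_ok(".", table_bytes=5 * 1024**3))
```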
3) Why “I still have free disk space” can be misleading. If your Data Sync Agent’s TEMP points to a system drive (often C:), it can fill quickly with temp batches plus MAT/MATS files during the first sync—particularly for large tables or many tables being initialized. The Azure SQL Data Sync “large scale” guidance recommends ensuring the temp folder has enough space before starting initialization and notes that you can move TEMP/TMP to another drive.

Mitigation (Recommended)

Option A — Move TEMP/TMP to a larger drive (recommended)

The Microsoft Azure Blog guidance for large-scale initialization is clear: move the temp folder by setting the TEMP and TMP environment variables and restart the sync service.

Key point: change the variables for the account running the Data Sync Agent service. Environment variables exist at user scope and machine scope, and the effective TEMP location depends on which account the agent service runs under. A simple PowerShell approach (run elevated) is to read and set the variables at the appropriate scope using the standard .NET environment APIs:

# Run in Administrator mode

# Get current values
[Environment]::GetEnvironmentVariable("TEMP","User")
[Environment]::GetEnvironmentVariable("TEMP","Machine")

# Set new values (examples)
[Environment]::SetEnvironmentVariable("TEMP","D:\TempUser","User")
[Environment]::SetEnvironmentVariable("TMP" ,"D:\TempUser","User")

# or machine scope:
[Environment]::SetEnvironmentVariable("TEMP","D:\TempMachine","Machine")
[Environment]::SetEnvironmentVariable("TMP" ,"D:\TempMachine","Machine")

Important: after updating TEMP/TMP, restart the SQL Data Sync agent service so it picks up the new environment settings.

Option B — If you can’t log in as the service account: update TEMP/TMP in the registry for that account

If you need to change TEMP/TMP for a specific account without an interactive logon, you can update the user environment variables stored in the registry.
General Windows guidance indicates that user environment variables live under HKEY_CURRENT_USER\Environment (and, for other users, under that user’s SID hive loaded under HKEY_USERS). A common approach is:

1. Identify the service account SID (commands such as WMIC are often used in practice).
2. Open Registry Editor and navigate to HKEY_USERS\<SID>\Environment.
3. Update TEMP and TMP to a path on a drive with sufficient space.
4. Restart the Data Sync service.

Option C — Clean up leftover sync temp files (when sync is NOT running)

In some cases, the “disk out of space” condition is caused by leftover sync files that were not removed (for example, if something had files open during deletion). Microsoft Learn suggests manually deleting sync files from %temp% and cleaning subdirectories only when sync is not in progress.

Validation checklist (after the change)

After moving TEMP/TMP and restarting the service, confirm:

- The new temp path is being used: initiate a sync and check that new sync_*.batch / temp artifacts appear under the new folder.
- Sufficient free space exists for initialization: especially for large tables, ensure the chosen drive can accommodate temp growth during the first sync.
- Rerun the initial sync after making the change.

Classification

Symptom type: agent side / initialization failure
Primary root cause: insufficient disk space on the TEMP location used by the Data Sync Agent during initial-sync temp-file generation
Fix type: configuration / operational (move TEMP/TMP to a larger drive and restart the agent service)

Helpful references

Troubleshoot SQL Data Sync (Microsoft Learn)
Sync SQL data in large scale using Azure SQL Data Sync (Microsoft Azure Blog)
Data Sync Agent for SQL Data Sync (Microsoft Learn)

Connect to Azure SQL Database using a custom domain name with Microsoft Entra ID authentication
Many of us might prefer to connect to Azure SQL Server using a custom domain name (like devsqlserver.mycompany.com) rather than the default fully qualified domain name (devsqlserver.database.windows.net), often for application-specific or compliance reasons. This article details how you can accomplish this when logging in with Microsoft Entra ID (for example, user@mycompany.com) in an Azure SQL Database environment. Users frequently encounter errors similar to the ones described below during this process.

Before you start: if you use SQL authentication (SQL username/password), the steps are different. Refer to the following article for that scenario: How to use different domain name to connect to Azure SQL DB Server | Microsoft Community Hub. With SQL authentication, you can include the server name in the login (for example, username@servername). With Microsoft Entra ID authentication, you don’t do that—so your custom DNS name must follow one important rule.

Key requirement for Microsoft Entra ID authentication

In an Azure SQL Database (PaaS) environment, the platform relies on the server-name portion of the fully qualified domain name (FQDN) to correctly route incoming connection requests to the appropriate logical server. When you use a custom DNS name, it must start with the exact Azure SQL server name (the part before .database.windows.net).

Why this is required: Azure SQL Database is a multi-tenant PaaS service, where multiple logical servers are hosted behind shared infrastructure. During the connection process (especially with Microsoft Entra ID authentication), Azure SQL uses the server name extracted from the FQDN to:

- Identify the correct logical server
- Route the connection internally within the platform
- Validate the authentication context

This behavior aligns with how Azure SQL endpoints are designed and resolved within Microsoft’s managed infrastructure.
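As a quick illustration of this naming rule, the check below validates that a proposed custom DNS name begins with the logical server name taken from the default FQDN. It is a hypothetical helper for illustration, not part of any Azure tooling:

```python
def custom_name_matches_server(custom_fqdn: str, default_fqdn: str) -> bool:
    """Check that the first DNS label of the custom name equals the
    Azure SQL logical server name (the part before .database.windows.net)."""
    server_name = default_fqdn.split(".")[0].lower()
    first_label = custom_fqdn.split(".")[0].lower()
    return first_label == server_name

print(custom_name_matches_server("devsqlserver.mycompany.com",
                                 "devsqlserver.database.windows.net"))  # True
print(custom_name_matches_server("othername.mycompany.com",
                                 "devsqlserver.database.windows.net"))  # False
```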
If your custom DNS name doesn’t start with the Azure SQL server name, Azure can’t route the connection to the correct server. Sign-in may fail and you might see error 40532. To fix this, change the custom DNS name so it starts with your Azure SQL server name. For example, if your server is devsqlserver.database.windows.net, your custom name must start with 'devsqlserver':

devsqlserver.mycompany.com
devsqlserver.contoso.com
devsqlserver.mydomain.com

Step-by-step: set up and connect

1. Pick the custom name. It must start with your server name. Example: use devsqlserver.mycompany.com (not othername.mycompany.com).
2. Create DNS records for the custom name. Create a CNAME or DNS alias to point the custom name to your Azure SQL server endpoint (public) or to the private endpoint IP (private), as described in the article linked above.
3. Check DNS from your computer. Make sure devsqlserver.mycompany.com resolves to the right address before you try to connect.
4. Connect with Microsoft Entra ID. In SSMS/Azure Data Studio, set Server to your custom server name and select a Microsoft Entra ID authentication option (for example, Universal with MFA).
5. Sign in and connect. Use your Entra ID (for example, user@mycompany.com).

Also, when you connect to Azure SQL Database using a custom domain name, you might see the following error: “The target principal name is incorrect”

This happens because Azure SQL’s SSL/TLS certificate is issued for the default server name (for example, servername.database.windows.net), not for your custom DNS name. During the secure-connection handshake, the client validates that the server name you are connecting to matches the name in the certificate. Since the custom domain does not match the certificate, this validation fails, resulting in the error. This is expected behavior and is part of standard security checks to prevent connecting to an untrusted or impersonated server.
To proceed with the connection, you can configure the client to trust the server certificate by:

- Setting Trust Server Certificate = True in the client settings, or
- Adding TrustServerCertificate=True in the connection string

This bypasses the strict name validation and allows the connection to succeed.

Note: use the latest client drivers (ODBC, JDBC, .NET, etc.). In some old driver versions the TrustServerCertificate setting may not work properly, and you may still face connection issues with the same 'target principal name is incorrect' error. It is always better to keep drivers updated for smooth connectivity with Azure SQL.

Applies to both public and private endpoints: this naming requirement and approach work whether you connect over the public endpoint or through a private endpoint for Azure SQL Database, as long as DNS resolution for the custom name is set up correctly for your network.

Lessons Learned #539: Azure SQL DB Scale-Down from S3 to S2 Can Fail When Change Feed Is Enabled
Recently, I worked on a service request where a customer reported that an Azure SQL Database could not be scaled down from Standard S3 to Standard S2. The operation failed with the following message: "An unexpected error occurred while processing the request".

During troubleshooting, we reviewed the database configuration to identify any setting that could prevent the scale-down operation. As part of that review, we executed the query select * from sys.databases and observed that the column is_change_feed_enabled had a value different from 0. This indicated that Change Feed was enabled on the database and, according to the current documentation, this setting is not supported when scaling down to Standard S0, S1, or S2.

After disabling Change Feed by running EXEC sys.sp_change_feed_disable_db; we were able to complete the scale-down operation successfully.

Recovering Missing Rows (“Gaps”) in Azure SQL Data Sync — Supported Approaches (and What to Avoid)
Azure SQL Data Sync is commonly used to keep selected tables synchronized between a hub database and one or more member databases. In some cases, you may discover a data “gap”: a subset of rows that exist in the source but are missing on the destination for a specific time window, even though synchronization continues afterward for new changes. This post explains supported recovery patterns and what not to do, based on a real support scenario where a customer reported missing rows for a single table within a sync group and requested a way to synchronize only the missing records.

The scenario: Data Sync continues, but some rows are missing

In the referenced case, the customer observed that:

- A specific table had a gap on the member side (missing rows for a period), while newer data continued to sync normally afterward.
- They asked for a Microsoft-supported method to sync only the missing rows, without rebuilding or fully reinitializing the table.

This is a reasonable goal—but the recovery method matters, because Data Sync relies on service-managed tracking artifacts.

Temptation: “Can we push missing data by editing tracking tables or calling internal triggers?”

A frequent idea is to “force” Data Sync to pick up missing rows by manipulating internal artifacts:

- Writing directly into Data Sync tracking tables (for example, tables under the DataSync schema such as *_dss_tracking), or altering provisioning markers.
- Manually invoking Data Sync–generated triggers or relying on their internal logic. The case discussion specifically referenced internal triggers such as _dss_insert_trigger, _dss_update_trigger, and _dss_delete_trigger.

Why this is not recommended / not supported as a customer-facing solution

In the case, the guidance from Microsoft engineering was clear: manually invoking internal Data Sync triggers is not supported and can increase the risk of data corruption, because these triggers are service-generated at runtime and are not intended for manual use.
Directly manipulating Data Sync tracking/metadata tables is likewise not recommended. The customer thread also highlights that these tracking tables are part of Data Sync internals, and using them for manual “push” scenarios is not a supported approach. The conversation raises an important conceptual point: tracking tables are part of how the service tracks changes; they are not meant to be treated as a user-managed replication queue.

Supported recovery option #1 (recommended): re-drive change detection via the base table

The most supportable approach is to make Data Sync detect the missing rows through its normal change-tracking path—by operating on the base/source table, not the service-managed internals.

A practical pattern: a “no-op update” to re-fire tracking. In the internal discussion with the product team, the recommended pattern was to update the source/base table (even with a “no-op” assignment) so that Data Sync’s normal tracking logic is triggered, without manually invoking internal triggers. Example pattern (conceptual):

UPDATE t
SET some_column = some_column -- no-op: value unchanged
FROM dbo.YourTable AS t
WHERE <filter identifying the rows that are missing on the destination>;

This approach is called out explicitly in the thread as a way to “re-drive” change detection safely through supported mechanisms.

Operational guidance (practical):

- Apply the update in small batches, especially for large tables, to reduce transaction/log impact and avoid long-running operations.
- Validate the impacted row set first (for example, by comparing keys between hub and member).
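As a sketch of the batching guidance above, the helper below splits a known set of missing key values into small batches and emits one conceptual no-op UPDATE per batch. The table, column, and key names are hypothetical placeholders; adapt them to your schema:

```python
def noop_update_batches(missing_ids, batch_size=1000,
                        table="dbo.YourTable", column="some_column", key="Id"):
    """Yield conceptual no-op UPDATE statements, one per batch of keys,
    so change detection is re-driven in small transactions rather than
    one long-running operation."""
    ids = sorted(missing_ids)
    for i in range(0, len(ids), batch_size):
        batch = ids[i:i + batch_size]
        id_list = ", ".join(str(v) for v in batch)
        yield (f"UPDATE {table} SET {column} = {column} "
               f"WHERE {key} IN ({id_list});")

# Example: four missing rows, batches of two -> two statements.
for stmt in noop_update_batches([101, 102, 103, 104], batch_size=2):
    print(stmt)
```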
Supported recovery option #2: deprovision and re-provision the affected table (safe “reset” path)

If the gap is large, the row set is hard to isolate, or you want a clean realignment of tracking artifacts, the operational approach discussed in the case was:

1. Stop sync.
2. Remove the table from the sync group (so the service deprovisions tracking objects).
3. Fix/clean the destination state as needed.
4. Add the table back and let Data Sync re-provision and sync again.

This option is often the safest when the goal is to avoid touching system-managed artifacts directly.

Note: in production environments, customers may not be able to truncate or empty tables due to operational constraints. In that situation, the sync may take longer because the service might need to do more row-by-row evaluation. This tradeoff was discussed in the case context.

Diagnostics: use the Azure SQL Data Sync Health Checker

When you suspect metadata drift, missing objects, or provisioning inconsistencies, the case recommended using the AzureSQLDataSyncHealthChecker script. This tool:

- Validates hub/member metadata and scopes against the sync metadata database
- Produces logs that can highlight missing artifacts and other inconsistencies
- Is intended to help troubleshoot Data Sync issues faster

A likely contributor to “gaps”: schema changes during Data Sync (snapshot isolation conflict)

In the case discussion, telemetry referenced an error consistent with concurrent DDL/schema changes while the sync process is enumerating changes (snapshot isolation plus metadata changes). A well-known related error is SQL Server error 3961, which occurs when a snapshot isolation transaction fails because metadata was modified by a concurrent DDL statement, since metadata is not versioned. Microsoft documents this behavior and explains why metadata changes conflict with snapshot isolation semantics.

Prevention guidance (practical):

- Avoid running schema deployments (DDL) during active sync windows.
- Use a controlled workflow for schema changes with Data Sync: pause and coordinate changes to prevent mid-sync metadata shifts. (General best practices exist for ongoing Data Sync operations and maintenance.)

Key takeaways

- Do not treat Data Sync tracking tables/triggers as user-managed “replication internals.” Manually invoking internal triggers or editing tracking tables is not a supported customer-facing recovery mechanism.
- Do recover gaps via base-table operations (insert/update) so the service captures changes through its normal path; a “no-op update” is one practical pattern when you already know the missing row set.
- For large or complex gaps, consider the safe reset approach: remove the table from the sync group and re-add it to re-provision artifacts.
- Use the AzureSQLDataSyncHealthChecker to validate metadata consistency and reduce guesswork.
- If you see intermittent failures around deployments, consider the schema-change plus snapshot isolation pattern (e.g., error 3961) as a possible contributor and schedule DDL changes accordingly.

In our experience, when there are row gaps it is usually because of a change in the PK or the source table.

ImportExportJobError, SQL72012, SQL72014, SQL72045
When trying to import a .bacpac file via the Azure portal, it fails with the error below:

'code': 'ImportExportJobError', 'message': 'The ImportExport operation with Request Id '' failed due to 'The ImportExport operation with Request Id '' failed due to 'Could not import package.\\nWarning SQL72012: The object [data_0] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.\\nWarning SQL72012: The object [log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.\\nError SQL72014: Framework Mi'.'.'

The DAC framework hit a hard error and failed due to conflicts with existing system objects in the target database. The generated error does not show much information—only a warning, without enough detail to pinpoint the root cause.

⚠️ What the warnings mean (SQL72012)

The object [data_0] exists in the target, but it will not be dropped…
The object [log] exists in the target, but it will not be dropped…

Retrying the import from the portal doesn’t help—the job fails consistently with the same outcome. To understand the exact cause of the error, you can use the SqlPackage.exe tool and follow the steps below:

1. Download SqlPackage for Windows.
2. Extract the file by right-clicking it in Windows Explorer, selecting 'Extract All...', and choosing the target directory.
3. Open a new terminal window and cd to the location where SqlPackage was extracted.
4. Run the import:

sqlpackage.exe /Action:Import /tsn:ServerName.database.windows.net /tdn:sqlimporttestDB /tu:sql-user /tp:password /sf:"C:\test\DB-file.bacpac" /d:True /df:C:\test\df.txt

Parameter reference:

Tsn: target server name, where the database will be imported.
Tdn: target database name, the name of the new database that will be created.
Tu: user.
Tp: password.
Sf: source file, where the bacpac file is located.
d: diagnostics; this parameter provides detailed information for the import/export process.
Df: where the diagnostic file will be saved and its name; change the folder location to match the one used for the source file.

Note: the .bacpac file needs to be present locally on your machine for this command.

In the diagnostic file generated, we got a clear error indicating that the script "CREATE USER [EntraUser1@contoso.com] FOR EXTERNAL PROVIDER" failed to execute, and that only connections established with Entra accounts can create other Entra users:

Microsoft.Data.Tools.Diagnostics.Tracer Error: 0 : 2026-01-22T06:56:50 : Error encountered during import operation Exception: Microsoft.SqlServer.Dac.DacServicesException: Could not import package. Error SQL72014: Core Microsoft SqlClient Data Provider: Msg 33159, Level 16, State 1, Line 1 Principal 'EntraUser1@contoso.com' could not be created. Only connections established with Active Directory accounts can create other Active Directory users. Error SQL72045: Script execution error. The executed script: CREATE USER [EntraUser1@contoso.com] FOR EXTERNAL PROVIDER;

You’ll also notice the database gets created, but none of the schema or tables are deployed—leaving you with an online but completely blank database. SSMS shows the same behavior: the database appears online after creation, but the import ultimately fails and the resulting database is empty. When explicitly executing this script in the database, you’ll face the exact same error.

If the source database contains only contained Entra users, you typically won’t see this issue. For contained users, the import job uses a different user-creation script—one that can be executed even when the import connection is established using SQL authentication:
CREATE USER [EntraUser2@contoso.com] WITH SID = 'User-SID-Here', TYPE = E;

The issue occurs when the exported database contains Entra server logins and a corresponding user is created in the user database. To mitigate this issue, either:

1. Initiate the import request using a Microsoft Entra account, since a SQL authentication account cannot create a user from an external provider (Microsoft Entra in this case).
2. Make the Entra users contained in the user database before exporting, and then use a SQL authentication account for importing the database.

The second option is especially useful if you prefer importing through the Azure portal but your Entra account has MFA enforced. In that case, using SQL authentication for the import workflow can be a practical path forward.

References:

Import a BACPAC File to Create a Database in Azure SQL Database - Azure SQL Database & Azure SQL Managed Instance | Microsoft Learn
SqlPackage Import - SQL Server | Microsoft Learn

Using ClientConnectionId to Correlate .NET Connection Attempts in Azure SQL
Getting Better Diagnostics with ClientConnectionId in .NET

A few days ago, I was working on a customer case involving intermittent connectivity failures to Azure SQL Database from a .NET application. On the surface, nothing looked unusual; retries were happening. In this post, I want to share a simple yet effective pattern for producing JDBC-style trace logs in .NET—specifically focusing on the ClientConnectionId property exposed by SqlConnection. This gives you a powerful correlation key that aligns with backend diagnostics and significantly speeds up root-cause analysis for connection problems.

Why ClientConnectionId Matters

Azure SQL Database assigns a unique identifier to every connection attempt from the client. In .NET, this identifier is available through the ClientConnectionId property of SqlConnection. According to the official documentation: "The ClientConnectionId property gets the connection ID of the most recent connection attempt, regardless of whether the attempt succeeded or failed."

Source: https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.clientconnectionid?view=netframework-4.8.1

This GUID is the single most useful piece of telemetry for correlating client connection attempts with server logs and support traces.

What .NET Logging Doesn’t Give You by Default

Unlike the JDBC driver, the .NET SQL client does not produce rich internal logs of every connection handshake or retry. There’s no built-in switch to emit gateway and redirect details, attempt counts, or port information. What you do have is:

- Timestamps
- Connection attempt boundaries
- ClientConnectionId values
- Outcome (success or failure)

If you capture and format these consistently, you end up with logs that are as actionable as the JDBC trace output—and, importantly, easy to correlate with backend diagnostics and Azure support tooling.
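Once logs carry the GUID, correlation is a matter of extracting it. As an illustration, a small parser (hypothetical, written for the [FINE] log format used in this post) can collect the distinct ClientConnectionId values from a trace so they can be matched against backend diagnostics:

```python
import re

# Extract the ClientConnectionId GUID from trace lines formatted like the
# [FINE] log output shown in this post. IGNORECASE also covers the
# "ClientConnectionID" spelling that appears in some lines.
GUID_RE = re.compile(r"ClientConnectionId:\s*([0-9a-fA-F-]{36})", re.IGNORECASE)

def connection_ids(log_lines):
    """Return the distinct ClientConnectionId values found, in order."""
    seen = []
    for line in log_lines:
        m = GUID_RE.search(line)
        if m and m.group(1).lower() not in seen:
            seen.append(m.group(1).lower())
    return seen

sample = [
    "[2025-12-31 03:38:10] [FINE] ConnectionID:1 ClientConnectionId: "
    "6387718b-150d-482a-9731-02d06383d38f",
    "[2025-12-31 03:38:11] [FINE] ConnectionID:2 ClientConnectionId: "
    "5fdd311e-a219-45bc-a4f6-7ee1cc2f96bf",
]
print(connection_ids(sample))
```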
Below is a small console application in C# that produces structured logs in the same timestamped [FINE] format you might see from a JDBC trace—but for .NET applications:

using System;
using Microsoft.Data.SqlClient;

class Program
{
    static int Main()
    {
        // SAMPLE connection string (SQL authentication).
        // Replace this with your own connection string; it is provided
        // only for demonstration purposes.
        string connectionString =
            "Server=tcp:<servername>.database.windows.net,1433;" +
            "Database=<database_name>;" +
            "User ID=<sql_username>;" +
            "Password=<sql_password>;" +
            "Encrypt=True;" +
            "TrustServerCertificate=False;" +
            "Connection Timeout=30;";

        int connectionId = 1;

        // Log connection creation
        Log($"ConnectionID:{connectionId} created by (SqlConnection)");

        using SqlConnection connection = new SqlConnection(connectionString);
        try
        {
            // Log connection attempt
            Log($"ConnectionID:{connectionId} This attempt No: 0");

            // Open the connection
            connection.Open();

            // Log ClientConnectionId after the connection attempt
            Log($"ConnectionID:{connectionId} ClientConnectionId: {connection.ClientConnectionId}");

            // Execute a simple test query
            using (SqlCommand cmd = new SqlCommand("SELECT 1", connection))
            {
                Log($"SqlCommand:1 created by (ConnectionID:{connectionId})");
                Log("SqlCommand:1 Executing (not server cursor) SELECT 1");
                cmd.ExecuteScalar();
                Log("SqlDataReader:1 created by (SqlCommand:1)");
            }
        }
        catch (SqlException ex)
        {
            // ClientConnectionId is available even on failure
            Log($"ConnectionID:{connectionId} ClientConnectionId: {connection.ClientConnectionId} (failure)");
            Log($"SqlException Number: {ex.Number}");
            Log($"Message: {ex.Message}");
            return 1;
        }

        return 0;
    }

    // Simple logger to match JDBC-style output format
    static void Log(string message)
    {
        Console.WriteLine($"[{DateTime.Now:yyyy-MM-dd HH:mm:ss}] [FINE] {message}");
    }
}

Run the above application and you’ll get output like:

[2025-12-31 03:38:10] [FINE] ConnectionID:1 This attempt server name: aabeaXXX.trXXXX.northeurope1-a.worker.database.windows.net port: 11002 InstanceName: null useParallel: false
[2025-12-31 03:38:10] [FINE] ConnectionID:1 This attempt endtime: 1767152309272
[2025-12-31 03:38:10] [FINE] ConnectionID:1 This attempt No: 1
[2025-12-31 03:38:10] [FINE] ConnectionID:1 Connecting with server: aabeaXXX.trXXXX.northeurope1-a.worker.database.windows.net port: 11002 Timeout Full: 20
[2025-12-31 03:38:10] [FINE] ConnectionID:1 ClientConnectionID: 6387718b-150d-482a-9731-02d06383d38f Server returned major version: 12
[2025-12-31 03:38:10] [FINE] SqlCommand:1 created by (ConnectionID:1 ClientConnectionID: 6387718b-150d-482a-9731-02d06383d38f)
[2025-12-31 03:38:10] [FINE] SqlCommand:1 Executing (not server cursor) select 1
[2025-12-31 03:38:10] [FINE] SqlDataReader:1 created by (SqlCommand:1)
[2025-12-31 03:38:10] [FINE] ConnectionID:2 created by (SqlConnection)
[2025-12-31 03:38:11] [FINE] ConnectionID:2 ClientConnectionID: 5fdd311e-a219-45bc-a4f6-7ee1cc2f96bf Server returned major version: 12
[2025-12-31 03:38:11] [FINE] sp_executesql SQL: SELECT 1 AS ID, calling sp_executesql
[2025-12-31 03:38:12] [FINE] SqlDataReader:3 created by (sp_executesql SQL: SELECT 1 AS ID)

Notice how each line is tagged with:

- A consistent local timestamp (yyyy-MM-dd HH:mm:ss)
- A [FINE] log level
- A structured identifier that mirrors what you’d see in JDBC logging

If a connection fails, you’ll still get the ClientConnectionId logged, which is exactly what Azure SQL support teams will ask for when troubleshooting connectivity issues.

Lesson Learned #526: How to Identify the REST API Version Used by Your PowerShell Commands?
A few days ago, I wanted to understand which REST API version was being used behind each PowerShell command I was running to create Azure SQL Database servers. To investigate, I picked a simple command:

```powershell
Get-AzSqlServer -ResourceGroupName "ResourceGroupName" -ServerName "servername"
```

Remember that the Get-AzSqlServer cmdlet is a PowerShell wrapper over the Azure REST API for the Microsoft.Sql/servers resource. Internally, it makes a call to the ARM endpoint documented here, passing the required api-version. The actual version used depends on the installed Az.Sql module and may vary from one environment to another.

I found that after setting the variable $DebugPreference = "Continue" in my PowerShell script, PowerShell prints detailed internal debug output, including the exact REST API call sent to Azure Resource Manager (ARM). Checking the output, I found the section called Absolute Uri:

```
https://management.azure.com/subscriptions/xxx-xxxx--613ccd2df306/resourceGroups/ResourceGroupName/providers/Microsoft.Sql/servers/servername?api-version=2023-02-01-preview
```

So, by running this command we can see this information. Even though you don't explicitly define the api-version when using cmdlets like Get-AzSqlServer, the Azure platform requires it under the hood. The version used determines which properties are available or supported, which operations behave differently across versions, whether the call will succeed once older versions are retired, and so on.

For example, by using Azure CLI from the Portal, I was able to see the most recent API versions. It's also important to note that if your organization has .NET code managing Azure SQL Database environments, the underlying REST API calls might still be using an outdated preview version.

Lesson Learned #486: Snapshot Isolation Transaction Failed Due to a Concurrent DDL Statement
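If you capture the Absolute Uri lines from the debug output, extracting the api-version is a one-liner with a standard URL parser. The sketch below (not part of the article; the URI is the redacted example from the debug output above) shows the idea in Python:

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

def arm_api_version(absolute_uri: str) -> Optional[str]:
    """Return the api-version query parameter of an ARM request URI, if any."""
    query = parse_qs(urlparse(absolute_uri).query)
    versions = query.get("api-version")
    return versions[0] if versions else None

# Redacted example URI from the $DebugPreference = "Continue" output.
uri = ("https://management.azure.com/subscriptions/xxx-xxxx--613ccd2df306"
       "/resourceGroups/ResourceGroupName/providers/Microsoft.Sql"
       "/servers/servername?api-version=2023-02-01-preview")
print(arm_api_version(uri))  # 2023-02-01-preview
```

Collecting these values across your automation scripts makes it easy to spot which ones still send an old or preview api-version before it is retired.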
Today, we worked on a service request where our customer faced the following error message:

```
Msg 3961, Level 16, State 1, Line 4
Snapshot isolation transaction failed in database 'DbName' because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation.
```

Lessons Learned #538: Printing One Line Per Field in T-SQL in Azure SQL Database.
Working on a case with a huge number of columns, I needed to dump records as one line per field. I am happy to share this script, which proved useful.

For example, I would like to have the data in this way:

```
Id: 123
Status: Open
Title: Connectivity issue
```

Running the following T-SQL, I was able to obtain the information as expected:

```sql
;WITH src AS
(
    SELECT TOP 10000 *
    FROM dbo.Incidents
    ORDER BY IncidentId
)
SELECT
    s.IncidentId,
    f.ord,
    CONCAT(f.field, ': ', COALESCE(f.value, '')) AS line
FROM src AS s
CROSS APPLY (VALUES
    ( 1, N'IncidentId', CONVERT(nvarchar(50), s.IncidentId)),
    ( 2, N'Status',     REPLACE(REPLACE(CONVERT(nvarchar(max), s.Status),   CHAR(13), ' '), CHAR(10), ' ')),
    ( 3, N'Path',       REPLACE(REPLACE(CONVERT(nvarchar(max), s.Path),     CHAR(13), ' '), CHAR(10), ' ')),
    ( 4, N'Title',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Title),    CHAR(13), ' '), CHAR(10), ' ')),
    ( 5, N'Severity',   REPLACE(REPLACE(CONVERT(nvarchar(max), s.Severity), CHAR(13), ' '), CHAR(10), ' ')),
    ( 6, N'IsResolved', CONVERT(nvarchar(10), s.IsResolved)),
    ( 7, N'ResolvedAt', CONVERT(varchar(19), s.ResolvedAt, 126)), -- ISO 8601
    ( 8, N'Owner',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Owner),    CHAR(13), ' '), CHAR(10), ' ')),
    ( 9, N'Source',     REPLACE(REPLACE(CONVERT(nvarchar(max), s.Source),   CHAR(13), ' '), CHAR(10), ' ')),
    (10, N'Tags',       REPLACE(REPLACE(CONVERT(nvarchar(max), s.Tags),     CHAR(13), ' '), CHAR(10), ' ')),
    (11, N'Notes',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Notes),    CHAR(13), ' '), CHAR(10), ' '))
) AS f(ord, field, value)
ORDER BY s.IncidentId, f.ord;
```
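The same "one line per field" transform is handy when post-processing exported rows outside the database. Below is a minimal Python sketch of the idea (field names and sample values are illustrative, not from a real table): each record becomes a list of "Field: value" lines, with carriage returns and line feeds in values collapsed to spaces, mirroring the nested REPLACE(..., CHAR(13)/CHAR(10)) calls in the T-SQL above.

```python
def record_to_lines(record: dict) -> list:
    """Flatten one record into 'Field: value' lines, one per field."""
    lines = []
    for field, value in record.items():
        text = "" if value is None else str(value)          # NULL -> empty, like COALESCE
        text = text.replace("\r", " ").replace("\n", " ")   # collapse CR/LF to spaces
        lines.append(f"{field}: {text}")
    return lines

# Illustrative record; the embedded newline is flattened in the output.
incident = {"IncidentId": 123, "Status": "Open",
            "Title": "Connectivity\nissue", "ResolvedAt": None}
for line in record_to_lines(incident):
    print(line)
```

Unlike the T-SQL version, this does not need the column list spelled out per field, since the dictionary already carries the field names.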