Azure SQL Database
ImportExportJobError, SQL72012, SQL72014, SQL72045
When trying to import a .bacpac file via the Azure portal, it fails with the error below:

'code': 'ImportExportJobError', 'message': 'The ImportExport operation with Request Id '' failed due to 'The ImportExport operation with Request Id '' failed due to 'Could not import package.\nWarning SQL72012: The object [data_0] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.\nWarning SQL72012: The object [log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.\nError SQL72014: Framework Mi'.'.'

The DAC framework hit a hard error and the import failed, seemingly due to conflicts with existing system objects in the target database. The generated error does not show much information, only warnings without enough detail to pinpoint the root cause.

⚠️ What the warnings mean (SQL72012)

The object [data_0] exists in the target, but it will not be dropped…
The object [log] exists in the target, but it will not be dropped…

Retrying the import from the portal does not help; the job fails consistently with the same outcome.

To understand the exact cause of the error, you can use the SqlPackage.exe tool and follow the steps below:

1. Download SqlPackage for Windows.
2. Extract the file by right-clicking it in Windows Explorer, selecting 'Extract All...', and choosing a target directory.
3. Open a new terminal window and cd to the location where SqlPackage was extracted.
4. Run the import:

sqlpackage.exe /Action:Import /tsn:ServerName.database.windows.net /tdn:sqlimporttestDB /tu:sql-user /tp:password /sf:"C:\test\DB-file.bacpac" /d:True /df:C:\test\df.txt

/tsn: target server name, where the database will be imported.
/tdn: target database name, the name of the new database that will be created.
/tu: SQL user name.
/tp: password.
/sf: source file, the location of the .bacpac file.
/d: diagnostics; this parameter captures detailed information about the import/export process.
/df: the path and name of the diagnostics file; change the folder location to match the one used for the source file.

Note: the .bacpac file needs to be present locally on your machine for this command.

In the generated diagnostics file, we got a clear error indicating that the script "CREATE USER [EntraUser1@contoso.com] FOR EXTERNAL PROVIDER" failed to execute, because only connections established with Entra accounts can create other Entra users:

Microsoft.Data.Tools.Diagnostics.Tracer Error: 0 : 2026-01-22T06:56:50 : Error encountered during import operation
Exception: Microsoft.SqlServer.Dac.DacServicesException: Could not import package.
Error SQL72014: Core Microsoft SqlClient Data Provider: Msg 33159, Level 16, State 1, Line 1 Principal 'EntraUser1@contoso.com' could not be created. Only connections established with Active Directory accounts can create other Active Directory users.
Error SQL72045: Script execution error. The executed script:
CREATE USER [EntraUser1@contoso.com] FOR EXTERNAL PROVIDER;

You'll also notice that the database gets created, but none of the schema or tables are deployed, leaving you with an online but completely blank database. SSMS shows the same behavior: the database appears online after creation, but the import ultimately fails and the resulting database is empty.
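Before re-running the import, it can help to confirm whether the source database actually contains Microsoft Entra (external provider) principals, since those are what the generated script tries to recreate. A minimal sketch against the source database, using sys.database_principals (type E covers external users, X external groups; adjust the filter to your needs):

-- Sketch: list Microsoft Entra (external provider) principals in the source database
SELECT name, type_desc, authentication_type_desc, create_date
FROM sys.database_principals
WHERE type IN ('E', 'X')
ORDER BY name;

If this returns rows, the exported .bacpac will contain user-creation scripts for those principals, and the behavior described below applies.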
When explicitly executing the CREATE USER ... FOR EXTERNAL PROVIDER script in the database over a SQL-authenticated connection, you'll face the exact same error.

If the source database contains only contained Entra users, you typically won't see this issue. For contained users, the import job uses a different user-creation script, one that can be executed even when the import connection is established using SQL authentication:

CREATE USER [EntraUser2@contoso.com] WITH SID = 'User-SID-Here', TYPE = E;

The issue occurs when the exported database contains Entra server logins and a corresponding user is created in the user database. To mitigate this issue, you can either:

1. Initiate the import request using a Microsoft Entra account, since a SQL authentication account cannot create a user from an external provider (Microsoft Entra in this case).
2. Make the Entra users contained in the user database before exporting, and then use a SQL authentication account for importing the database.

The second option is especially useful if you prefer importing through the Azure portal but your Entra account has MFA enforced. In that case, using SQL authentication for the import workflow can be a practical path forward.

REFERENCES:
Import a BACPAC File to Create a Database in Azure SQL Database - Azure SQL Database & Azure SQL Managed Instance | Microsoft Learn
SqlPackage Import - SQL Server | Microsoft Learn
Using ClientConnectionId to Correlate .NET Connection Attempts in Azure SQL

Getting Better Diagnostics with ClientConnectionId in .NET

A few days ago, I was working on a customer case involving intermittent connectivity failures to Azure SQL Database from a .NET application. On the surface, nothing looked unusual. Retries were happening. In this post, I want to share a simple yet effective pattern for producing JDBC-style trace logs in .NET, specifically focusing on the ClientConnectionId property exposed by SqlConnection. This gives you a powerful correlation key that aligns with backend diagnostics and significantly speeds up root cause analysis for connection problems.

Why ClientConnectionId Matters

Azure SQL Database assigns a unique identifier to every connection attempt from the client. In .NET, this identifier is available through the ClientConnectionId property of SqlConnection. According to the official documentation:

"The ClientConnectionId property gets the connection ID of the most recent connection attempt, regardless of whether the attempt succeeded or failed."

Source: https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.clientconnectionid?view=netframework-4.8.1

This GUID is the single most useful piece of telemetry for correlating client connection attempts with server logs and support traces.

What .NET Logging Doesn't Give You by Default

Unlike the JDBC driver, the .NET SQL client does not produce rich internal logs of every connection handshake or retry. There is no built-in switch to emit gateway and redirect details, attempt counts, or port information. What you do have is:

- Timestamps
- Connection attempt boundaries
- ClientConnectionId values
- Outcome (success or failure)

If you capture and format these consistently, you end up with logs that are as actionable as the JDBC trace output and, importantly, easy to correlate with backend diagnostics and Azure support tooling.

Below is a small console application in C# that produces structured logs in the same timestamped, [FINE] format you might see from a JDBC trace, but for .NET applications:

using System;
using Microsoft.Data.SqlClient;

class Program
{
    static int Main()
    {
        // SAMPLE connection string (SQL authentication)
        // Replace this with your own connection string.
        // This is provided only for demonstration purposes.
        string connectionString =
            "Server=tcp:<servername>.database.windows.net,1433;" +
            "Database=<database_name>;" +
            "User ID=<sql_username>;" +
            "Password=<sql_password>;" +
            "Encrypt=True;" +
            "TrustServerCertificate=False;" +
            "Connection Timeout=30;";

        int connectionId = 1;

        // Log connection creation
        Log($"ConnectionID:{connectionId} created by (SqlConnection)");

        using SqlConnection connection = new SqlConnection(connectionString);

        try
        {
            // Log connection attempt
            Log($"ConnectionID:{connectionId} This attempt No: 0");

            // Open the connection
            connection.Open();

            // Log ClientConnectionId after the connection attempt
            Log($"ConnectionID:{connectionId} ClientConnectionId: {connection.ClientConnectionId}");

            // Execute a simple test query
            using (SqlCommand cmd = new SqlCommand("SELECT 1", connection))
            {
                Log($"SqlCommand:1 created by (ConnectionID:{connectionId})");
                Log("SqlCommand:1 Executing (not server cursor) SELECT 1");
                cmd.ExecuteScalar();
                Log("SqlDataReader:1 created by (SqlCommand:1)");
            }
        }
        catch (SqlException ex)
        {
            // ClientConnectionId is available even on failure
            Log($"ConnectionID:{connectionId} ClientConnectionId: {connection.ClientConnectionId} (failure)");
            Log($"SqlException Number: {ex.Number}");
            Log($"Message: {ex.Message}");
            return 1;
        }

        return 0;
    }

    // Simple logger to match JDBC-style output format
    static void Log(string message)
    {
        Console.WriteLine(
            $"[{DateTime.Now:yyyy-MM-dd HH:mm:ss}] [FINE] {message}"
        );
    }
}

Run the above application and you'll get output like:

[2025-12-31 03:38:10] [FINE] ConnectionID:1 This attempt server name: aabeaXXX.trXXXX.northeurope1-a.worker.database.windows.net port: 11002 InstanceName: null useParallel: false
[2025-12-31 03:38:10] [FINE] ConnectionID:1 This attempt endtime: 1767152309272
[2025-12-31 03:38:10] [FINE] ConnectionID:1 This attempt No: 1
[2025-12-31 03:38:10] [FINE] ConnectionID:1 Connecting with server: aabeaXXX.trXXXX.northeurope1-a.worker.database.windows.net port: 11002 Timeout Full: 20
[2025-12-31 03:38:10] [FINE] ConnectionID:1 ClientConnectionID: 6387718b-150d-482a-9731-02d06383d38f Server returned major version: 12
[2025-12-31 03:38:10] [FINE] SqlCommand:1 created by (ConnectionID:1 ClientConnectionID: 6387718b-150d-482a-9731-02d06383d38f)
[2025-12-31 03:38:10] [FINE] SqlCommand:1 Executing (not server cursor) select 1
[2025-12-31 03:38:10] [FINE] SqlDataReader:1 created by (SqlCommand:1)
[2025-12-31 03:38:10] [FINE] ConnectionID:2 created by (SqlConnection)
[2025-12-31 03:38:11] [FINE] ConnectionID:2 ClientConnectionID: 5fdd311e-a219-45bc-a4f6-7ee1cc2f96bf Server returned major version: 12
[2025-12-31 03:38:11] [FINE] sp_executesql SQL: SELECT 1 AS ID, calling sp_executesql
[2025-12-31 03:38:12] [FINE] SqlDataReader:3 created by (sp_executesql SQL: SELECT 1 AS ID)

Notice how each line is tagged with:

- A consistent local timestamp (yyyy-MM-dd HH:mm:ss)
- A [FINE] log level
- A structured identifier that mirrors what you'd see in JDBC logging

If a connection fails, you'll still get the ClientConnectionId logged, which is exactly what Azure SQL support teams will ask for when troubleshooting connectivity issues.
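If you also want to capture this GUID on the server side, a database-scoped Extended Events session can record it alongside reported errors. This is a sketch rather than part of the original walkthrough; it assumes the sqlserver.client_connection_id action is available in your environment and uses a simple ring_buffer target for illustration:

-- Sketch: database-scoped XEvent session that records the client's ClientConnectionId with errors
CREATE EVENT SESSION [TrackClientConnectionId] ON DATABASE
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.client_connection_id, sqlserver.client_app_name, sqlserver.sql_text)
    WHERE severity >= 16
)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [TrackClientConnectionId] ON DATABASE STATE = START;
GO

With the session running, an error raised by the application can be matched to the exact connection attempt by comparing the GUID logged by the .NET app with the one captured by the event session.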
Lesson Learned #526: How to Identify the REST API Version Used by Your PowerShell Commands?

A few days ago, I wanted to understand which REST API version was being used behind each PowerShell command I was running to create Azure SQL Database servers. To investigate, I picked a simple command:

Get-AzSqlServer -ResourceGroupName "ResourceGroupName" -ServerName "servername"

Remember that the Get-AzSqlServer cmdlet is a PowerShell wrapper over the Azure REST API for the Microsoft.Sql/servers resource. Internally, it makes a call to the ARM endpoint documented here, passing the required api-version. The actual version used depends on the installed Az.Sql module and may vary from one environment to another.

I found that by setting the variable $DebugPreference = "Continue" in my PowerShell script, PowerShell prints detailed internal debug output, including the exact REST API call sent to Azure Resource Manager (ARM). Checking the output, I found the section called:

Absolute Uri: https://management.azure.com/subscriptions/xxx-xxxx--613ccd2df306/resourceGroups/ResourceGroupName/providers/Microsoft.Sql/servers/servername?api-version=2023-02-01-preview

So, by running this command we can see this information. Even though you don't explicitly define the api-version when using cmdlets like Get-AzSqlServer, the Azure platform requires it under the hood. The version used determines which properties are available or supported, which operations behave differently across versions, whether the call will succeed once older versions are retired, and so on. For example, by using Azure CLI from the portal, I was able to see the most recent API versions. It's also important to note that if your organization has .NET code managing Azure SQL Database environments, the underlying REST API calls might still be using an outdated preview version.
Lesson Learned #486: Snapshot Isolation Transaction Failed Due to a Concurrent DDL Statement

Today, we worked on a service request where our customer faced the following error message:

Msg 3961, Level 16, State 1, Line 4
Snapshot isolation transaction failed in database 'DbName' because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation.
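Error 3961 follows a simple pattern: a SNAPSHOT transaction starts, another session changes the metadata of an object, and the snapshot transaction then touches that object. A minimal sketch of the scenario (the dbo.Orders table and the two-session split are illustrative, not from the original case):

-- Check whether snapshot isolation is enabled for the database
SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = DB_NAME();

-- Session 1: start a snapshot transaction and read the table
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders;   -- hypothetical table

-- Session 2 (meanwhile): commit a DDL change to the same object
-- ALTER TABLE dbo.Orders ADD Notes NVARCHAR(100) NULL;

-- Session 1: touching the altered object again can now fail with Msg 3961,
-- because object metadata is not versioned under snapshot isolation
SELECT COUNT(*) FROM dbo.Orders;
COMMIT;

Avoiding schema changes while long-running snapshot transactions are open, or scheduling DDL outside busy windows, is the usual way to keep from hitting this error.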
Lessons Learned #538: Printing One Line Per Field in T-SQL in Azure SQL Database.

Working on a case with a huge number of columns, I needed to dump records as one line per field. I'm happy to share this script, which proved useful. For example, I would like to have the data in this way:

Id: 123
Status: Open
Title: Connectivity issue

Running the following T-SQL, I was able to obtain the info as I expected:

;WITH src AS
(
    SELECT TOP 10000 *
    FROM dbo.Incidents
    ORDER BY IncidentId
)
SELECT s.IncidentId,
       f.ord,
       CONCAT(f.field, ': ', COALESCE(f.value, '')) AS line
FROM src AS s
CROSS APPLY (VALUES
    ( 1, N'IncidentId', CONVERT(nvarchar(50), s.IncidentId)),
    ( 2, N'Status',     REPLACE(REPLACE(CONVERT(nvarchar(max), s.Status),   CHAR(13), ' '), CHAR(10), ' ')),
    ( 3, N'Path',       REPLACE(REPLACE(CONVERT(nvarchar(max), s.Path),     CHAR(13), ' '), CHAR(10), ' ')),
    ( 4, N'Title',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Title),    CHAR(13), ' '), CHAR(10), ' ')),
    ( 5, N'Severity',   REPLACE(REPLACE(CONVERT(nvarchar(max), s.Severity), CHAR(13), ' '), CHAR(10), ' ')),
    ( 6, N'IsResolved', CONVERT(nvarchar(10), s.IsResolved)),
    ( 7, N'ResolvedAt', CONVERT(varchar(19), s.ResolvedAt, 126)), -- ISO 8601
    ( 8, N'Owner',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Owner),    CHAR(13), ' '), CHAR(10), ' ')),
    ( 9, N'Source',     REPLACE(REPLACE(CONVERT(nvarchar(max), s.Source),   CHAR(13), ' '), CHAR(10), ' ')),
    (10, N'Tags',       REPLACE(REPLACE(CONVERT(nvarchar(max), s.Tags),     CHAR(13), ' '), CHAR(10), ' ')),
    (11, N'Notes',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Notes),    CHAR(13), ' '), CHAR(10), ' '))
) AS f(ord, field, value)
ORDER BY s.IncidentId, f.ord;
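If you prefer a single row per record instead of one row per field, the same CROSS APPLY pattern can feed STRING_AGG, which is available in Azure SQL Database. A shortened sketch against the same dbo.Incidents table, trimmed to three fields for brevity:

-- Sketch: collapse the per-field lines into one text block per incident
DECLARE @newline nvarchar(2) = CHAR(13) + CHAR(10);

WITH src AS
(
    SELECT TOP 10000 *
    FROM dbo.Incidents
    ORDER BY IncidentId
)
SELECT s.IncidentId,
       STRING_AGG(CONCAT(f.field, ': ', COALESCE(f.value, '')), @newline)
           WITHIN GROUP (ORDER BY f.ord) AS record_text
FROM src AS s
CROSS APPLY (VALUES
    (1, N'IncidentId', CONVERT(nvarchar(50),  s.IncidentId)),
    (2, N'Status',     CONVERT(nvarchar(max), s.Status)),
    (3, N'Title',      CONVERT(nvarchar(max), s.Title))
) AS f(ord, field, value)
GROUP BY s.IncidentId
ORDER BY s.IncidentId;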
Lessons Learned #537: Copilot Prompts for Troubleshooting on Azure SQL Database

We had the opportunity to share our experience in several community sessions on how SSMS Copilot can help across multiple phases of troubleshooting. In this article, I would like to share a set of prompts we used in those sessions and show how to apply them to an example query.

During a performance incident, we captured the following query, generated by Power BI:

SELECT TOP (1000001) *
FROM (
    SELECT [t2].[Fiscal Month Label] AS [c38],
           SUM([t5].[Total Excluding Tax]) AS [a0],
           SUM([t5].[Total Including Tax]) AS [a1]
    FROM (
        SELECT [$Table].[Sale Key] as [Sale Key],
               [$Table].[City Key] as [City Key],
               [$Table].[Customer Key] as [Customer Key],
               [$Table].[Bill To Customer Key] as [Bill To Customer Key],
               [$Table].[Stock Item Key] as [Stock Item Key],
               [$Table].[Invoice Date Key] as [Invoice Date Key],
               [$Table].[Delivery Date Key] as [Delivery Date Key],
               [$Table].[Salesperson Key] as [Salesperson Key],
               [$Table].[WWI Invoice ID] as [WWI Invoice ID],
               [$Table].[Description] as [Description],
               [$Table].[Package] as [Package],
               [$Table].[Quantity] as [Quantity],
               [$Table].[Unit Price] as [Unit Price],
               [$Table].[Tax Rate] as [Tax Rate],
               [$Table].[Total Excluding Tax] as [Total Excluding Tax],
               [$Table].[Tax Amount] as [Tax Amount],
               [$Table].[Profit] as [Profit],
               [$Table].[Total Including Tax] as [Total Including Tax],
               [$Table].[Total Dry Items] as [Total Dry Items],
               [$Table].[Total Chiller Items] as [Total Chiller Items],
               [$Table].[Lineage Key] as [Lineage Key]
        FROM [Fact].[Sale] as [$Table]
    ) AS [t5]
    INNER JOIN (
        SELECT [$Table].[Date] as [Date],
               [$Table].[Day Number] as [Day Number],
               [$Table].[Day] as [Day],
               [$Table].[Month] as [Month],
               [$Table].[Short Month] as [Short Month],
               [$Table].[Calendar Month Number] as [Calendar Month Number],
               [$Table].[Calendar Month Label] as [Calendar Month Label],
               [$Table].[Calendar Year] as [Calendar Year],
               [$Table].[Calendar Year Label] as [Calendar Year Label],
               [$Table].[Fiscal Month Number] as [Fiscal Month Number],
               [$Table].[Fiscal Month Label] as [Fiscal Month Label],
               [$Table].[Fiscal Year] as [Fiscal Year],
               [$Table].[Fiscal Year Label] as [Fiscal Year Label],
               [$Table].[ISO Week Number] as [ISO Week Number]
        FROM [Dimension].[Date] as [$Table]
    ) AS [t2]
    ON [t5].[Delivery Date Key] = [t2].[Date]
    GROUP BY [t2].[Fiscal Month Label]
) AS [MainTable]
WHERE ( NOT([a0] IS NULL) OR NOT([a1] IS NULL) )

I structure the investigation in three areas.

Analysis – understand the data model, sizes, and relationships. Example prompts:

- List all tables in the 'Fact' and 'Dimension' schemas with space usage in MB and number of rows. The name of the tables and their relations among them. Please, provide a textual representation for all relations.
- List all foreign key relationships between tables in the 'Fact' and 'Dimension' schemas, showing the cardinality and referenced columns.
- Could you please let me know what is the meaning of every table?
- Describe all schemas in this database, listing the number of tables and views per schema.
- Create a textual data model (ER-style) representation showing how all Fact and Dimension tables are connected.

Maintenance Plan Check – verify statistics freshness, index health/fragmentation, partition layout, and data quality. Example prompts:

- List all statistics in the database that have not been updated in the last 7 days, showing table name, number of rows, and last update date.
- List all indexes in the database with fragmentation higher than 30%, including table name, index name, and page count.
- Please, provide the T-SQL to rebuild all indexes in ONLINE mode and UPDATE STATISTICS for all tables that use automatic statistics.
- Check for fact table rows that reference dimension keys which no longer exist (broken foreign key integrity).
- Find queries that perform table scans on large tables where no indexes are used, based on recent execution plans.

Performance Improvements – simplify/reshape the query and consider indexed views, columnstore, partitioning, and missing indexes. In this part, I would like to spend more time on the prompts. For example, the following ones help me understand the performance issue, simplify the query text, and explain what the query is doing:

- Identify the longest-running query in the last 24 hours and provide the full text of the query.
- Please simplify the query.
- Explain the query to me.
- Explain in plain language what the following SQL query does, including the purpose of each subquery and the final WHERE clause.
- Show a histogram of data distribution for key columns used in joins or filters, such as SaleDate, ProductCategory, or Region.

Finally, using this prompt I could find a lot of useful information on how to improve the execution of this query:

Analyze the following SQL query and provide a detailed performance review tailored for Azure SQL Database Hyperscale and Power BI DirectQuery scenarios. For each recommendation, estimate the potential performance improvement as a percentage (e.g. query runtime reduction, I/O savings, etc.).
1. Could this query benefit from a schemabound indexed view or a materialized view? Estimate the performance gain if implemented.
2. Is there any missing index on the involved tables that would improve join or filter efficiency? Include the suggested index definition and expected benefit.
3. Would using a clustered or nonclustered columnstore index on the main fact table improve performance? Estimate the potential gain in query time or storage.
4. Could partitioning the fact table improve performance by enabling partition elimination? If so, suggest the partition key and scheme, and estimate improvement.
5. Are current statistics sufficient for optimal execution plans? Recommend updates if needed and estimate impact.
6. Does this query preserve query folding when used with Power BI DirectQuery? If not, identify what breaks folding and suggest how to fix it.
7. Recommend any query rewrites or schema redesigns, along with estimated performance improvements for each.

I got a lot of improvement suggestions from it:

- Evaluated a schemabound indexed view that pre-aggregates by month (see Reference Implementations), then pointed Power BI to the view (a sketch follows below).
- Ensured clustered columnstore on Fact.Sale; considered a targeted rowstore NCI on [Delivery Date Key] INCLUDE ([Total Excluding Tax], [Total Including Tax]) when columnstore alone wasn't sufficient.
- Verified statistics freshness on join/aggregate columns and enabled incremental stats for partitions.
- Checked partitioning by date to leverage elimination for common slicers.
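To make the first two suggestions concrete, here is a minimal sketch assuming the WideWorldImportersDW-style schema used by the captured query. It is illustrative only: indexed views carry restrictions (SCHEMABINDING, COUNT_BIG, non-nullable aggregated expressions), and the interaction with an existing clustered columnstore index on Fact.Sale should be validated before relying on it:

-- Sketch: schemabound indexed view that pre-aggregates sales by fiscal month
CREATE VIEW [Fact].[v_SaleByFiscalMonth]
WITH SCHEMABINDING
AS
SELECT
    d.[Fiscal Month Label],
    SUM(ISNULL(s.[Total Excluding Tax], 0)) AS [Total Excluding Tax],
    SUM(ISNULL(s.[Total Including Tax], 0)) AS [Total Including Tax],
    COUNT_BIG(*) AS [Row Count]   -- required when an indexed view uses GROUP BY
FROM [Fact].[Sale] AS s
INNER JOIN [Dimension].[Date] AS d
    ON s.[Delivery Date Key] = d.[Date]
GROUP BY d.[Fiscal Month Label];
GO

CREATE UNIQUE CLUSTERED INDEX [IX_v_SaleByFiscalMonth]
    ON [Fact].[v_SaleByFiscalMonth] ([Fiscal Month Label]);
GO

-- Sketch: the targeted rowstore index mentioned above, if columnstore alone is not sufficient
CREATE NONCLUSTERED INDEX [IX_Sale_DeliveryDateKey]
    ON [Fact].[Sale] ([Delivery Date Key])
    INCLUDE ([Total Excluding Tax], [Total Including Tax]);
GO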
Lessons Learned #536: Error the database type is not understood by the data reader.

This week we have been working on a support case where our customer received the following message when exporting to a bacpac with SQL Server Management Studio: 50006: Error the database type is not understood by the data reader. We also observed that other tools showed the same message.

In SQL Server Management Studio, we faced the following error screen:

Could not export schema and data from database. (Microsoft.SqlServer.Dac)
ADDITIONAL INFORMATION:
50006: Error the database type '' is not understood by the data reader. (Microsoft.Data.Tools.Schema.Sql)

As part of our investigation, we validated that the native export tool SqlPackage did not produce the same error. This indicated that the issue was not directly related to the database itself, but rather to the version of SSMS (or the specific client tool being used).

After further analysis, we identified the root cause: this error occurs due to a limitation in SSMS v20 (and some third-party export/backup tools), which do not support the JSON data type in SQL database columns. As a result, any attempt to export or back up a database containing JSON columns with these versions will fail.

In our case, the issue was resolved after the customer upgraded to SSMS v21, which includes support for JSON data types during bacpac exports. After the upgrade, the export process worked as expected.
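Before exporting with an older client, you can check whether the database actually contains columns that use the native json data type. A small sketch (it assumes the native type surfaces as 'json' in sys.types):

-- Sketch: list columns that use the native json data type
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name                   AS table_name,
       c.name                   AS column_name,
       t.name                   AS type_name
FROM sys.columns AS c
JOIN sys.objects AS o ON o.object_id = c.object_id
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE t.name = 'json'
ORDER BY schema_name, table_name, column_name;

If this returns rows and you cannot upgrade the client, exporting with SqlPackage directly is the safer path.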
Configure Auditing for Azure SQL Database on a Specific Table

The Azure SQL Auditing feature tracks database events and writes them to an audit log in your Azure storage account, Log Analytics workspace, or Event Hubs. If you are interested in how to configure auditing for Azure SQL Database at the server or database level, please visit Configure Auditing for Azure SQL Database series - Part 1 and Configure Auditing for Azure SQL Database series - Part 2.

Enabling auditing for Azure SQL Database on a specific table is not available in the Azure portal as of now, and we don't have an estimated date for when this option will be available. Instead, you can use the following PowerShell script to enable auditing on a specific table. In this example, the script targets a table named MyTable in the dbo schema and audits SELECT and INSERT actions.

# Define variables
$resourceGroup = "YourResourceGroup"
$serverName = "your-sql-server-name"
$databaseName = "your-database-name"
$storageAccount = "yourstorageaccount"
$tableName = "MyTable"
$schemaName = "dbo"

# Login to Azure
Connect-AzAccount

# Enable auditing at the database level
Set-AzSqlDatabaseAuditing `
    -ResourceGroupName $resourceGroup `
    -ServerName $serverName `
    -DatabaseName $databaseName `
    -StorageAccountName $storageAccount `
    -AuditActionGroup "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP", "FAILED_DATABASE_AUTHENTICATION_GROUP" `
    -AuditAction "SELECT ON $schemaName.$tableName BY public", "INSERT ON $schemaName.$tableName BY public" `
    -State Enabled

REST API – Manage Auditing Using APIs - Azure SQL Database & Azure Synapse Analytics | Microsoft Learn

To enable auditing on a specific table in Azure SQL Database using the REST API, you can use the Create or Update Database Extended Auditing Policy endpoint. This allows you to define fine-grained auditing rules, including actions on specific tables.

URL:
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/extendedAuditingSettings/default?api-version=2021-11-01-preview (Reference)

Request body:

{
  "properties": {
    "state": "Enabled",
    "storageEndpoint": "https://<storageaccount>.blob.core.windows.net/",
    "storageAccountAccessKey": "<storage-access-key>",
    "retentionDays": 90,
    "auditActionsAndGroups": [
      "SELECT ON dbo.MyTable BY public",
      "INSERT ON dbo.MyTable BY public"
    ],
    "isStorageSecondaryKeyInUse": false
  }
}

Parameters:

- auditActionsAndGroups: where you specify the exact actions and the table. You can include SELECT, INSERT, UPDATE, DELETE, etc.
- storageEndpoint: the Azure Blob Storage endpoint where audit logs will be stored.
- retentionDays: number of days to retain logs.
- state: must be "Enabled" to activate auditing.
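Whichever method you use to enable the policy (PowerShell, REST, or the Azure CLI shown below), you can verify from T-SQL that operations against the table are being captured once logs start landing in the storage account. A hedged sketch using sys.fn_get_audit_file; the storage URL is a placeholder for your own audit log container and folder:

-- Sketch: read audit records from blob storage and filter to the audited table
SELECT event_time, action_id, succeeded, session_server_principal_name, statement
FROM sys.fn_get_audit_file(
        'https://yourstorageaccount.blob.core.windows.net/sqldbauditlogs/your-sql-server-name/your-database-name/',
        DEFAULT, DEFAULT)
WHERE object_name = 'MyTable'
ORDER BY event_time DESC;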
Azure CLI – https://learn.microsoft.com/en-us/cli/azure/sql/db/audit-policy?view=azure-cli-latest#az-sql-db-audit-policy-update

az sql db audit-policy update -g ResourceGroupName -s ServerName -n DatabaseName --state Enabled --bsts Enabled --storage-key "" --storage-endpoint https://StorageAccount.blob.core.windows.net/ --actions FAILED_DATABASE_AUTHENTICATION_GROUP 'UPDATE ON dbo.MyTable BY public'

Sample output:

{
  "auditActionsAndGroups": [
    "FAILED_DATABASE_AUTHENTICATION_GROUP",
    "UPDATE ON dbo.MyTable BY public"
  ],
  "id": "/subscriptions/xxxxx-xxxxx-xxxxxx-xxxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.Sql/servers/ServerName/databases/DatabaseName/auditingSettings/Default",
  "isAzureMonitorTargetEnabled": true,
  "isManagedIdentityInUse": false,
  "isStorageSecondaryKeyInUse": false,
  "kind": null,
  "name": "Default",
  "queueDelayMs": null,
  "resourceGroup": "ResourceGroupName",
  "retentionDays": 10,
  "state": "Enabled",
  "storageAccountAccessKey": null,
  "storageAccountSubscriptionId": "xxxx-xxxxx-xxxx-xxxxx-xxxxxxx",
  "storageEndpoint": "https://StorageAccount.blob.core.windows.net/",
  "type": "Microsoft.Sql/servers/databases/auditingSettings"
}

For more information:

- Configure Auditing for Azure SQL Database series - Part 1
- Configure Auditing for Azure SQL Database series - Part 2
- Set-AzSqlDatabaseAudit
- Manage Auditing Using APIs - Azure SQL Database & Azure Synapse Analytics | Microsoft Learn
- az sql db audit-policy update
Data Migration - Collation mismatch error during an offline migration

This post explores a collation mismatch error during an offline migration from SQL Server Express to Azure SQL. It identifies the root cause (AUTO_CLOSE being enabled) and offers a simple fix to ensure a smooth migration using Azure DMS.
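As a quick pre-migration check on the source instance, you can confirm which databases have AUTO_CLOSE enabled and turn it off before running Azure DMS. A minimal sketch; the database name is a placeholder:

-- Sketch: find databases with AUTO_CLOSE enabled on the source SQL Server Express instance
SELECT name, collation_name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;

-- Disable AUTO_CLOSE before starting the offline migration
ALTER DATABASE [YourSourceDb] SET AUTO_CLOSE OFF;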
Lesson Learned #529: Troubleshooting Application Slowness Using SSMS Copilot

Some days ago, I worked on a support case where a customer reported application slowness affecting multiple users. Instead of jumping into traces or manually writing diagnostic queries, we used SSMS Copilot to investigate the issue. I would like to share with you how we diagnosed and understood the root cause.

To illustrate the case, let's walk through a simplified example: we create a new table and, right after that, add a new column to it inside a transaction that is left open:

CREATE TABLE Ejemplo2 (ID INT)

BEGIN TRANSACTION
ALTER TABLE dbo.Ejemplo2 ADD NuevoCampo INT NULL

Using SQL Server Management Studio and Copilot, we executed the following prompt:

"Please, provide all currently running or suspended sessions. Include session ID, status, command, wait type (if any), application_name, wait time, and current SQL text."

I executed the same prompt multiple times, and session ID 67 was always in suspended mode with wait type LCK_M_SCH_S. For this reason, I ran a new prompt:

"Please, provide all sessions that are currently blocked by another session. Include session ID, the blocking session ID, wait type, and the blocked SQL text."

In the end, I found that session 51 was blocking session ID 67, so I ran another prompt:

"Do we have any active transaction pending commit for session ID 51?"

This confirmed that session ID 51 had an open transaction, so let's ask for the details of session 51 with a new prompt:

"Please, show the most recent SQL statement executed by session ID 51, even if the session is currently sleeping or not running any active request. Include the session status and login name as well. Use sys.dm_exec_connections and most_recent_sql_handle to retrieve the query text if necessary."

With that, we identified the problem: session ID 67 is running SELECT * FROM dbo.Ejemplo2, but it is being blocked by session 51. Session ID 51 hasn't finished its transaction, and now we either need to commit, rollback, or kill that session, especially if we don't have access to the application that owns it.

Before resolving the issue, I asked Copilot a few additional prompts:

"Please, explain why session ID 67 is currently waiting. Include the wait type and an explanation of it, the resource being waited on, how long it has been waiting (seconds), and the SQL text. Also identify if another session is blocking it, and the name of the object and schema."

"Please, provide recommendations to prevent or reduce this kind of blocking situation in the future, based on the current wait type and blocking scenario observed with session ID 67."

"Please, summarize all current blocking chains in the system. Include blocking session IDs, blocked session IDs, wait types, wait durations, login names, and SQL statements involved."
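Behind the scenes, these prompts map to the standard blocking DMVs. If you want to run the equivalent check yourself, or validate what Copilot returns, a minimal sketch could look like this (session 51 in the final comment is simply the blocker from this example):

-- Sketch: current blocking chains, with the blocked statement text
SELECT r.session_id          AS blocked_session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time            AS wait_time_ms,
       s.login_name,
       t.text                 AS blocked_sql_text
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

-- Once the blocker is identified and cannot be committed or rolled back from the
-- application, the session can be terminated as a last resort:
-- KILL 51;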