Azure SQL Database
Lesson Learned #486: Snapshot Isolation Transaction Failed Due to a Concurrent DDL Statement
Today, we worked on a service request in which our customer faced the following error message:

Msg 3961, Level 16, State 1, Line 4
Snapshot isolation transaction failed in database 'DbName' because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation.
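To see the error in a controlled way, here is a minimal two-session repro sketch (dbo.Orders is a hypothetical table used only for illustration; in Azure SQL Database snapshot isolation is enabled by default, while on SQL Server you may first need ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON):

-- Connection 1: start a snapshot transaction and read a table.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders;

-- Connection 2 (a separate session): change the object's metadata.
-- ALTER TABLE dbo.Orders ADD ExtraColumn int NULL;

-- Connection 1: any further access to the altered object inside the same
-- snapshot transaction now fails with Msg 3961, because object metadata
-- is not versioned.
SELECT COUNT(*) FROM dbo.Orders;
ROLLBACK;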
Lessons Learned #538: Printing One Line Per Field in T-SQL in Azure SQL Database

Working on a case that involved a table with a huge number of columns, I needed to dump records as one line per field. Happy to share the script that proved useful. For example, I wanted the data laid out like this:

Id: 123
Status: Open
Title: Connectivity issue

Running the following T-SQL, I was able to obtain the output I expected:

;WITH src AS
(
    SELECT TOP 10000 *
    FROM dbo.Incidents
    ORDER BY IncidentId
)
SELECT s.IncidentId,
       f.ord,
       CONCAT(f.field, ': ', COALESCE(f.value, '')) AS line
FROM src AS s
CROSS APPLY (VALUES
    ( 1, N'IncidentId', CONVERT(nvarchar(50), s.IncidentId)),
    ( 2, N'Status',     REPLACE(REPLACE(CONVERT(nvarchar(max), s.Status), CHAR(13), ' '), CHAR(10), ' ')),
    ( 3, N'Path',       REPLACE(REPLACE(CONVERT(nvarchar(max), s.Path), CHAR(13), ' '), CHAR(10), ' ')),
    ( 4, N'Title',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Title), CHAR(13), ' '), CHAR(10), ' ')),
    ( 5, N'Severity',   REPLACE(REPLACE(CONVERT(nvarchar(max), s.Severity), CHAR(13), ' '), CHAR(10), ' ')),
    ( 6, N'IsResolved', CONVERT(nvarchar(10), s.IsResolved)),
    ( 7, N'ResolvedAt', CONVERT(varchar(19), s.ResolvedAt, 126)), -- ISO 8601
    ( 8, N'Owner',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Owner), CHAR(13), ' '), CHAR(10), ' ')),
    ( 9, N'Source',     REPLACE(REPLACE(CONVERT(nvarchar(max), s.Source), CHAR(13), ' '), CHAR(10), ' ')),
    (10, N'Tags',       REPLACE(REPLACE(CONVERT(nvarchar(max), s.Tags), CHAR(13), ' '), CHAR(10), ' ')),
    (11, N'Notes',      REPLACE(REPLACE(CONVERT(nvarchar(max), s.Notes), CHAR(13), ' '), CHAR(10), ' '))
) AS f(ord, field, value)
ORDER BY s.IncidentId, f.ord;
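When there are too many columns to list by hand, the VALUES rows can be generated from sys.columns instead. A minimal dynamic-SQL sketch under the assumption that the table (dbo.Incidents) has a single key column (IncidentId) and that every column type converts to nvarchar; adapt the names to your schema:

DECLARE @table sysname = N'dbo.Incidents';   -- assumed table name
DECLARE @key   sysname = N'IncidentId';      -- assumed key column
DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Build one VALUES row per column:
-- (ordinal, N'ColumnName', CONVERT(nvarchar(max), s.[ColumnName]))
SELECT @cols = STRING_AGG(CONVERT(nvarchar(max),
                   CONCAT('(', column_id, ', N''', name, ''', CONVERT(nvarchar(max), s.',
                          QUOTENAME(name), '))')), N', ')
               WITHIN GROUP (ORDER BY column_id)
FROM sys.columns
WHERE object_id = OBJECT_ID(@table);

-- Assemble and run the same one-line-per-field query for every column.
SET @sql = CONCAT(N'SELECT s.', QUOTENAME(@key), N' AS RowId, f.ord, ',
                  N'CONCAT(f.field, N'': '', COALESCE(f.value, N'''')) AS line ',
                  N'FROM ', @table, N' AS s CROSS APPLY (VALUES ', @cols,
                  N') AS f(ord, field, value) ORDER BY RowId, f.ord;');
EXEC sys.sp_executesql @sql;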
Lessons Learned #537: Copilot Prompts for Troubleshooting on Azure SQL Database

In several community sessions we had the opportunity to share how SSMS Copilot can help across multiple phases of troubleshooting. In this article, I would like to share a set of prompts we found useful in those sessions and show how to apply them to an example query. During a performance incident, we captured the following query, generated by Power BI:

SELECT TOP (1000001) *
FROM
(
    SELECT [t2].[Fiscal Month Label] AS [c38],
           SUM([t5].[Total Excluding Tax]) AS [a0],
           SUM([t5].[Total Including Tax]) AS [a1]
    FROM
    (
        SELECT [$Table].[Sale Key] as [Sale Key],
               [$Table].[City Key] as [City Key],
               [$Table].[Customer Key] as [Customer Key],
               [$Table].[Bill To Customer Key] as [Bill To Customer Key],
               [$Table].[Stock Item Key] as [Stock Item Key],
               [$Table].[Invoice Date Key] as [Invoice Date Key],
               [$Table].[Delivery Date Key] as [Delivery Date Key],
               [$Table].[Salesperson Key] as [Salesperson Key],
               [$Table].[WWI Invoice ID] as [WWI Invoice ID],
               [$Table].[Description] as [Description],
               [$Table].[Package] as [Package],
               [$Table].[Quantity] as [Quantity],
               [$Table].[Unit Price] as [Unit Price],
               [$Table].[Tax Rate] as [Tax Rate],
               [$Table].[Total Excluding Tax] as [Total Excluding Tax],
               [$Table].[Tax Amount] as [Tax Amount],
               [$Table].[Profit] as [Profit],
               [$Table].[Total Including Tax] as [Total Including Tax],
               [$Table].[Total Dry Items] as [Total Dry Items],
               [$Table].[Total Chiller Items] as [Total Chiller Items],
               [$Table].[Lineage Key] as [Lineage Key]
        FROM [Fact].[Sale] as [$Table]
    ) AS [t5]
    INNER JOIN
    (
        SELECT [$Table].[Date] as [Date],
               [$Table].[Day Number] as [Day Number],
               [$Table].[Day] as [Day],
               [$Table].[Month] as [Month],
               [$Table].[Short Month] as [Short Month],
               [$Table].[Calendar Month Number] as [Calendar Month Number],
               [$Table].[Calendar Month Label] as [Calendar Month Label],
               [$Table].[Calendar Year] as [Calendar Year],
               [$Table].[Calendar Year Label] as [Calendar Year Label],
               [$Table].[Fiscal Month Number] as [Fiscal Month Number],
               [$Table].[Fiscal Month Label] as [Fiscal Month Label],
               [$Table].[Fiscal Year] as [Fiscal Year],
               [$Table].[Fiscal Year Label] as [Fiscal Year Label],
               [$Table].[ISO Week Number] as [ISO Week Number]
        FROM [Dimension].[Date] as [$Table]
    ) AS [t2]
        ON [t5].[Delivery Date Key] = [t2].[Date]
    GROUP BY [t2].[Fiscal Month Label]
) AS [MainTable]
WHERE ( NOT([a0] IS NULL) OR NOT([a1] IS NULL) )

I structure the investigation in three areas:

Analysis – understand the data model, sizes, and relationships.

- List all tables in the 'Fact' and 'Dimension' schemas with space usage in MB and number of rows. The name of the tables and their relations among them. Please, provide a textual representation for all relations. (A T-SQL equivalent of the space-usage part is sketched after this list.)
- List all foreign key relationships between tables in the 'Fact' and 'Dimension' schemas, showing the cardinality and referenced columns.
- Could you please let me know what is the meaning of every table?
- Describe all schemas in this database, listing the number of tables and views per schema.
- Create a textual data model (ER-style) representation showing how all Fact and Dimension tables are connected.
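For reference, the space-usage prompt corresponds roughly to a DMV query like the following (a sketch; Copilot generates its own equivalent):

SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,  -- pages are 8 KB
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE s.name IN (N'Fact', N'Dimension')
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;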
Maintenance Plan Check – verify statistics freshness, index health/fragmentation, partition layout, and data quality.

- List all statistics in the database that have not been updated in the last 7 days, showing table name, number of rows, and last update date.
- List all indexes in the database with fragmentation higher than 30%, including table name, index name, and page count.
- Please, provide the T-SQL to rebuild all indexes in ONLINE mode and UPDATE STATISTICS for all automatically created statistics.
- Check for fact table rows that reference dimension keys which no longer exist (broken foreign key integrity).
- Find queries that perform table scans on large tables where no indexes are used, based on recent execution plans.

Performance Improvements – simplify/reshape the query and consider indexed views, columnstore, partitioning, and missing indexes. This is the area where I would like to spend more time. The following prompts, for example, help me understand the performance issue, simplify the query text, and explain what the query is doing:

- Identify the longest-running query in the last 24 hours
- Provide the full text of the query
- Please simplify the query
- Explain me the query
- Explain in plain language what the following SQL query does, including the purpose of each subquery and the final WHERE clause.
- Show a histogram of data distribution for key columns used in joins or filters, such as SaleDate, ProductCategory, or Region.

Finally, using this prompt I found a lot of useful information on how to improve the execution of this query:

Analyze the following SQL query and provide a detailed performance review tailored for Azure SQL Database Hyperscale and Power BI DirectQuery scenarios. For each recommendation, estimate the potential performance improvement as a percentage (e.g. query runtime reduction, I/O savings, etc.).
1. Could this query benefit from a schemabound indexed view or a materialized view? Estimate the performance gain if implemented.
2. Is there any missing index on the involved tables that would improve join or filter efficiency? Include the suggested index definition and expected benefit.
3. Would using a clustered or nonclustered columnstore index on the main fact table improve performance? Estimate the potential gain in query time or storage.
4. Could partitioning the fact table improve performance by enabling partition elimination? If so, suggest the partition key and scheme, and estimate improvement.
5. Are current statistics sufficient for optimal execution plans? Recommend updates if needed and estimate impact.
6. Does this query preserve query folding when used with Power BI DirectQuery? If not, identify what breaks folding and suggest how to fix it.
7. Recommend any query rewrites or schema redesigns, along with estimated performance improvements for each.

I got a lot of improvement suggestions from it:

- Evaluated a schemabound indexed view that pre-aggregates by month (a sketch follows below), then pointed Power BI to the view.
- Ensured clustered columnstore on Fact.Sale; considered a targeted rowstore NCI on [Delivery Date Key] INCLUDE ([Total Excluding Tax], [Total Including Tax]) when columnstore alone wasn't sufficient.
- Verified statistics freshness on join/aggregate columns and enabled incremental stats for partitions.
- Checked partitioning by date to leverage elimination for common slicers.
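A minimal sketch of that indexed-view direction, assuming the WideWorldImportersDW-style schema from the captured query (the view and index names are hypothetical; indexed views require SCHEMABINDING, two-part table names, and COUNT_BIG(*) when grouping):

CREATE VIEW dbo.SalesByFiscalMonth
WITH SCHEMABINDING
AS
SELECT d.[Fiscal Month Label]       AS FiscalMonthLabel,
       SUM(f.[Total Excluding Tax]) AS TotalExcludingTax,  -- measures must be non-nullable
       SUM(f.[Total Including Tax]) AS TotalIncludingTax,  -- (or wrapped in ISNULL(..., 0))
       COUNT_BIG(*)                 AS RowCnt              -- required with GROUP BY
FROM Fact.Sale AS f
INNER JOIN Dimension.[Date] AS d
        ON d.[Date] = f.[Delivery Date Key]
GROUP BY d.[Fiscal Month Label];
GO

-- The unique clustered index is what materializes the aggregate.
CREATE UNIQUE CLUSTERED INDEX CIX_SalesByFiscalMonth
    ON dbo.SalesByFiscalMonth (FiscalMonthLabel);

With the view in place, the monthly totals can be read from a handful of pre-aggregated rows instead of scanning Fact.Sale on every Power BI refresh.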
Lessons Learned #536: Error the database type is not understood by the data reader

This week we have been working on a support case where our customer hit the following message when exporting to a bacpac using SQL Server Management Studio: 50006: Error the database type is not understood by the data reader. We also observed that other tools showed the same message. In SQL Server Management Studio, we faced the following error screen:

Could not export schema and data from database. (Microsoft.SqlServer.Dac)
ADDITIONAL INFORMATION: 50006: Error the database type '' is not understood by the data reader. (Microsoft.Data.Tools.Schema.Sql)

As part of our investigation, we validated that the native export tool SqlPackage did not produce the same error. This indicated that the issue was not directly related to the database itself, but rather to the version of SSMS (or the specific client tool being used).

After further analysis, we identified the root cause: a limitation in SSMS v20 (and some third-party export/backup tools), which do not support the JSON data type in SQL database columns. As a result, any attempt to export or back up a database containing JSON columns with these versions will fail.

In our case, the issue was resolved after the customer upgraded to SSMS v21, which includes support for the JSON data type during bacpac exports. After the upgrade, the export process worked as expected.
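A quick way to confirm whether a database will hit this limitation is to look for columns declared with the native json type before exporting. A hedged sketch, assuming the json type is registered in sys.types (which is the case in databases where the feature is available):

-- List every column declared with the native json data type.
SELECT s.name AS schema_name,
       o.name AS table_name,
       c.name AS column_name
FROM sys.columns AS c
JOIN sys.objects AS o ON o.object_id = c.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE t.name = N'json'
ORDER BY schema_name, table_name, column_name;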
Configure Auditing for Azure SQL Database on a Specific Table

The Azure SQL Auditing feature tracks database events and writes them to an audit log in your Azure storage account, Log Analytics workspace, or Event Hubs. If you are interested in how to configure auditing for Azure SQL Database at the server level or the database level, please visit Configure Auditing for Azure SQL Database series - Part 1 and Configure Auditing for Azure SQL Database series - Part 2.

Enabling auditing on a specific table is not available in the Azure portal as of now, and we don't have an estimated date for when this option will arrive there. In the meantime, the following PowerShell script enables auditing on a specific table in an Azure SQL Database. The example targets a table named MyTable in the dbo schema and audits SELECT and INSERT actions.

# Define variables
$resourceGroup = "YourResourceGroup"
$serverName = "your-sql-server-name"
$databaseName = "your-database-name"
$storageAccount = "yourstorageaccount"
$tableName = "MyTable"
$schemaName = "dbo"

# Login to Azure
Connect-AzAccount

# Enable auditing at the database level
Set-AzSqlDatabaseAuditing `
    -ResourceGroupName $resourceGroup `
    -ServerName $serverName `
    -DatabaseName $databaseName `
    -StorageAccountName $storageAccount `
    -AuditActionGroup "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP", "FAILED_DATABASE_AUTHENTICATION_GROUP" `
    -AuditAction "SELECT ON $schemaName.$tableName BY public", "INSERT ON $schemaName.$tableName BY public" `
    -State Enabled

REST API – Manage Auditing Using APIs - Azure SQL Database & Azure Synapse Analytics | Microsoft Learn

To enable auditing on a specific table in Azure SQL Database using the REST API, you can use the Create or Update Database Extended Auditing Policy endpoint. This allows you to define fine-grained auditing rules, including actions on specific tables.

URL:

PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/extendedAuditingSettings/default?api-version=2021-11-01-preview

Request body:

{
  "properties": {
    "state": "Enabled",
    "storageEndpoint": "https://<storageAccountName>.blob.core.windows.net/",
    "storageAccountAccessKey": "<storageAccountAccessKey>",
    "retentionDays": 90,
    "auditActionsAndGroups": [
      "SELECT ON dbo.MyTable BY public",
      "INSERT ON dbo.MyTable BY public"
    ],
    "isStorageSecondaryKeyInUse": false
  }
}

Parameters:

- auditActionsAndGroups: where you specify the exact actions and the table. You can include SELECT, INSERT, UPDATE, DELETE, etc.
- storageEndpoint: the Azure Blob Storage endpoint where audit logs will be stored.
- retentionDays: number of days to retain logs.
- state: must be "Enabled" to activate auditing.
Az CLI – https://learn.microsoft.com/en-us/cli/azure/sql/db/audit-policy?view=azure-cli-latest#az-sql-db-audit-policy-update

az sql db audit-policy update -g ResourceGroupName -s ServerName -n DatabaseName --state Enabled --bsts Enabled --storage-key "<storageAccountAccessKey>" --storage-endpoint https://StorageAccount.blob.core.windows.net/ --actions FAILED_DATABASE_AUTHENTICATION_GROUP 'UPDATE ON dbo.MyTable BY public'

Sample output:

{
  "auditActionsAndGroups": [
    "FAILED_DATABASE_AUTHENTICATION_GROUP",
    "UPDATE ON dbo.MyTable BY public"
  ],
  "id": "/subscriptions/xxxxx-xxxxx-xxxxxx-xxxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.Sql/servers/ServerName/databases/DatabaseName/auditingSettings/Default",
  "isAzureMonitorTargetEnabled": true,
  "isManagedIdentityInUse": false,
  "isStorageSecondaryKeyInUse": false,
  "kind": null,
  "name": "Default",
  "queueDelayMs": null,
  "resourceGroup": "ResourceGroupName",
  "retentionDays": 10,
  "state": "Enabled",
  "storageAccountAccessKey": null,
  "storageAccountSubscriptionId": "xxxx-xxxxx-xxxx-xxxxx-xxxxxxx",
  "storageEndpoint": "https://StorageAccount.blob.core.windows.net/",
  "type": "Microsoft.Sql/servers/databases/auditingSettings"
}

For more information:

- Configure Auditing for Azure SQL Database series - Part 1
- Configure Auditing for Azure SQL Database series - Part 2
- Set-AzSqlDatabaseAudit
- Manage Auditing Using APIs - Azure SQL Database & Azure Synapse Analytics | Microsoft Learn
- az sql db audit-policy update
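Once the policy is in place, the captured events can be read back from the storage target with sys.fn_get_audit_file. A sketch, assuming a blob storage target (the container URL below is hypothetical; point it at your server's audit log path):

SELECT event_time,
       action_id,                 -- e.g. SL = SELECT, IN = INSERT
       database_principal_name,
       object_name,
       statement
FROM sys.fn_get_audit_file(
         'https://yourstorageaccount.blob.core.windows.net/sqldbauditlogs/',
         DEFAULT, DEFAULT)
WHERE object_name = 'MyTable'
ORDER BY event_time DESC;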
Data Migration - Collation mismatch error during an offline migration

This post explores a collation mismatch error encountered during an offline migration from SQL Server Express to Azure SQL. It identifies the root cause, the AUTO_CLOSE option being enabled on the source database, and offers a simple fix to ensure a smooth migration using Azure DMS.
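Both the check and the fix are one-liners on the source instance. A minimal sketch (the database name is hypothetical):

-- Find databases with AUTO_CLOSE enabled (run on the source SQL Server instance).
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;

-- Disable AUTO_CLOSE before starting the migration.
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;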
Lesson Learned #529: Troubleshooting Application Slowness Using SSMS Copilot

Some days ago, I worked on a support case where a customer reported application slowness affecting multiple users. Instead of jumping into traces or manually writing diagnostic queries, we used SSMS Copilot to investigate the issue. I would like to share with you how we diagnosed and understood the root cause. To illustrate the case, let's walk through a simplified example: we create a new table and, right after that, add a new column to it inside a transaction that is never committed:

CREATE TABLE Ejemplo2 (ID INT);

BEGIN TRANSACTION;
ALTER TABLE dbo.Ejemplo2 ADD NuevoCampo INT NULL;

Using SQL Server Management Studio and Copilot, we executed the following prompt: Please, provide all currently running or suspended sessions. Include session ID, status, command, wait type (if any), application_name, wait time, and current SQL text.

I executed the same prompt multiple times, and session ID 67 always appeared in suspended mode with wait type LCK_M_SCH_S. For this reason, I ran a new prompt: Please, provide all sessions that are currently blocked by another session. Include session ID, the blocking session ID, wait type, and the blocked SQL text.

This showed that session 51 is blocking session ID 67, so I ran another prompt: do we have any active transaction pending for commit for the session ID 51. It confirmed that session ID 51 has an open transaction, so let's ask for the details of session 51 with a new prompt: Please, show the most recent SQL statement executed by session ID 51, even if the session is currently sleeping or not running any active request. Include the session status and login name as well. Use sys.dm_exec_connections and most_recent_sql_handle to retrieve the query text if necessary.

With that, we identified the problem: session ID 67 is running SELECT * FROM dbo.Ejemplo2 but is being blocked by session 51. Session ID 51 hasn't finished its transaction, and now we either need to commit, rollback, or kill that session, especially if we don't have access to the application that owns it. Before resolving the issue, I asked Copilot some additional prompts:

- Please, explain why session ID 67 is currently waiting. Include the wait type and an explanation of it, the resource being waited on, how long it has been waiting (seconds), and the SQL text. Also identify if another session is blocking it, and the name of the object and schema.
- Please, provide recommendations to prevent or reduce this kind of blocking situation in the future, based on the current wait type and blocking scenario observed with session ID 67.
- Please, summarize all current blocking chains in the system. Include blocking session IDs, blocked session IDs, wait types, wait durations, login names, and SQL statements involved.
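For readers who prefer the non-Copilot route, these blocking prompts map to DMV queries roughly like the following sketch (an equivalent manual check, not necessarily what Copilot generates):

-- Requests that are blocked or waiting, with their blocker and current SQL.
SELECT r.session_id,
       r.blocking_session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       s.program_name,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
  AND (r.blocking_session_id <> 0 OR r.wait_type IS NOT NULL);

-- The blocker is often sleeping with an open transaction, so inspect the
-- connection's most recent SQL handle instead of an active request.
SELECT s.session_id, s.status, s.login_name, t.text AS last_sql
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
  ON c.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.open_transaction_count > 0;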
Lesson Learned #526: How to Identify the REST API Version Used by Your PowerShell Commands?

A few days ago, I wanted to understand which REST API version was being used behind each PowerShell command I was running to create Azure SQL Database servers. To investigate, I picked a simple command:

Get-AzSqlServer -ResourceGroupName "ResourceGroupName" -ServerName "servername"

Remember that the Get-AzSqlServer cmdlet is a PowerShell wrapper over the Azure REST API for the Microsoft.Sql/servers resource. Internally, it makes a call to the ARM endpoint documented here, passing the required api-version. The actual version used depends on the installed Az.Sql module and may vary from one environment to another.

I found that setting the variable $DebugPreference = "Continue" in my PowerShell script makes PowerShell print detailed internal debug output, including the exact REST API call sent to Azure Resource Manager (ARM). Checking the output, I found the section called Absolute Uri:

https://management.azure.com/subscriptions/xxx-xxxx--613ccd2df306/resourceGroups/ResourceGroupName/providers/Microsoft.Sql/servers/servername?api-version=2023-02-01-preview

Even though you don't explicitly define the api-version when using cmdlets like Get-AzSqlServer, the Azure platform requires it under the hood. The version used determines which properties are available or supported, what operations behave differently across versions, whether the call will succeed once older versions are retired, and so on. For example, by using Azure CLI from the portal, I was able to see the most recent API versions. It's also important to note that if your organization has .NET code managing Azure SQL Database environments, the underlying REST API calls might still be using an outdated preview version.
Lesson Learned #522: Troubleshooting TLS and Cipher Suites with Python connecting to Azure SQL DB

A few days ago, we were working on a service request where our customer was experiencing several issues connecting to Azure SQL Database due to TLS version and cipher suite mismatches when using Python and ODBC Driver 18. Although we were able to get that information through a network trace, I would like to share a few things I learned.

The ssl library in Python allows you to establish a TLS/SSL context where you can control the TLS version and specify or inspect the cipher suite. Here's a small script that demonstrates how to connect to the Azure SQL gateway over port 1433 and inspect the TLS configuration:

import ssl
import socket

# Server name to connect to (gateway only)
host = 'servername.database.windows.net'
port = 1433

# TLS context
context = ssl.create_default_context()
print("Python uses:", ssl.OPENSSL_VERSION)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.maximum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
context.load_default_certs()

# Testing the connection.
with socket.create_connection((host, port)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as ssock:
        print("TLS connection established.")
        print("TLS version:", ssock.version())
        print("Cipher suite:", ssock.cipher())

        # CN (Common Name) of the server certificate
        cert = ssock.getpeercert()
        try:
            cn = dict(x[0] for x in cert['subject'])['commonName']
            print(f"\nCertificate CN: {cn}")
        except Exception as e:
            print("Error extracting CN:", e)
        print("Valid from :", cert.get('notBefore'))
        print("Valid until:", cert.get('notAfter'))

Using this script, I was able to:

- Enforce a specific TLS version by setting minimum_version and maximum_version (for example, 1.2 or 1.3).
- Retrieve the negotiated cipher suite.
- Inspect the details of the certificate.

Enjoy!
Lesson Learned #520: Troubleshooting Azure SQL Database Redirect Connection over Private Endpoint

A few days ago, we handled an interesting support case where a customer encountered the following connection error when using sqlcmd to connect to their Azure SQL Database:

Sqlcmd: Error: Microsoft ODBC Driver 18 for SQL Server : Login timeout expired.
Sqlcmd: Error: Microsoft ODBC Driver 18 for SQL Server : TCP Provider: Error code 0x102.
Sqlcmd: Error: Microsoft ODBC Driver 18 for SQL Server : A network-related or instance-specific error has occurred while establishing a connection to servername.database.windows.net (Redirected: servername.database.windows.net\xxxx8165ccxxx,6188). Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.

At first glance, what immediately caught our attention was the port number mentioned in the error: 6188. This is not the typical port 1433 that SQL Server usually listens on. Additionally, the message referenced a "Redirected" connection, which gave us the first strong clue.

We asked the customer to run the following diagnostic commands:

- ping servername.database.windows.net to identify the IP address resolved for the Azure SQL Database server; it returned a private IP, 10.1.0.200.
- nslookup servername.database.windows.net to confirm whether the resolution was happening through a custom DNS or public DNS.
- ifconfig -a to determine the local IP address of the client, which was 10.1.0.10 (the customer is using a Linux environment, Red Hat).

With all this information in hand, we asked the customer to open a terminal on their Linux machine and run sudo tcpdump -i eth0 host 10.1.0.200 while attempting to connect with sqlcmd from another terminal. We observed that sqlcmd was:

- First making a request to port 1433, which is expected.
- Then immediately attempting a second connection to port 6188 on the same private IP.

It was during this second connection attempt that the timeout occurred. Based on the port and the message, we asked the customer which connection policy the server used, and they confirmed Redirect.

We explained that in Azure SQL, when Redirect mode is enabled, the client:

1. Connects to the gateway on port 1433.
2. Receives a redirection response with a new target IP and dynamic port (e.g., 6188).
3. Attempts a new connection to the private endpoint using this port.

We reviewed the connection configuration and confirmed that Redirect mode was enabled. After speaking with the customer's networking and security team, we discovered that their firewall rules were blocking outbound connections to dynamic ports like 6188. We proposed two alternative solutions:

Option 1: Adjust firewall rules. Allow outbound traffic from the client's IP (10.1.0.10) to the private endpoint IP (10.1.0.200) for the required range of ports used by Azure SQL in Redirect mode. This keeps the benefits of Redirect mode: lower latency, direct database access via Private Link, and reduced dependence on the Azure gateway.

Option 2: Switch to Proxy mode. Change the Azure SQL server's connection policy to Proxy, which forces all traffic through port 1433 only. This is simpler for environments where security rules restrict dynamic port ranges, but it may introduce slightly higher latency.

In this case, the customer opted to update the VNet's NSG and outbound firewall rules to allow the necessary range of ports for the private endpoint.
This allowed them to retain the benefits of Redirect mode while maintaining secure, high-performance connectivity to their Azure SQL Database.