Lessons Learned #537: Copilot Prompts for Troubleshooting on Azure SQL Database
We have had the opportunity to share in several community sessions how SSMS Copilot can help across multiple phases of troubleshooting. In this article, I would like to share a set of prompts we used in those sessions and show how to apply them to an example query. During a performance incident, we captured the following query, generated by Power BI:

SELECT TOP (1000001) *
FROM (
    SELECT [t2].[Fiscal Month Label] AS [c38],
           SUM([t5].[Total Excluding Tax]) AS [a0],
           SUM([t5].[Total Including Tax]) AS [a1]
    FROM (
        SELECT [$Table].[Sale Key] AS [Sale Key], [$Table].[City Key] AS [City Key],
               [$Table].[Customer Key] AS [Customer Key], [$Table].[Bill To Customer Key] AS [Bill To Customer Key],
               [$Table].[Stock Item Key] AS [Stock Item Key], [$Table].[Invoice Date Key] AS [Invoice Date Key],
               [$Table].[Delivery Date Key] AS [Delivery Date Key], [$Table].[Salesperson Key] AS [Salesperson Key],
               [$Table].[WWI Invoice ID] AS [WWI Invoice ID], [$Table].[Description] AS [Description],
               [$Table].[Package] AS [Package], [$Table].[Quantity] AS [Quantity],
               [$Table].[Unit Price] AS [Unit Price], [$Table].[Tax Rate] AS [Tax Rate],
               [$Table].[Total Excluding Tax] AS [Total Excluding Tax], [$Table].[Tax Amount] AS [Tax Amount],
               [$Table].[Profit] AS [Profit], [$Table].[Total Including Tax] AS [Total Including Tax],
               [$Table].[Total Dry Items] AS [Total Dry Items], [$Table].[Total Chiller Items] AS [Total Chiller Items],
               [$Table].[Lineage Key] AS [Lineage Key]
        FROM [Fact].[Sale] AS [$Table]
    ) AS [t5]
    INNER JOIN (
        SELECT [$Table].[Date] AS [Date], [$Table].[Day Number] AS [Day Number], [$Table].[Day] AS [Day],
               [$Table].[Month] AS [Month], [$Table].[Short Month] AS [Short Month],
               [$Table].[Calendar Month Number] AS [Calendar Month Number], [$Table].[Calendar Month Label] AS [Calendar Month Label],
               [$Table].[Calendar Year] AS [Calendar Year], [$Table].[Calendar Year Label] AS [Calendar Year Label],
               [$Table].[Fiscal Month Number] AS [Fiscal Month Number], [$Table].[Fiscal Month Label] AS [Fiscal Month Label],
               [$Table].[Fiscal Year] AS [Fiscal Year], [$Table].[Fiscal Year Label] AS [Fiscal Year Label],
               [$Table].[ISO Week Number] AS [ISO Week Number]
        FROM [Dimension].[Date] AS [$Table]
    ) AS [t2] ON [t5].[Delivery Date Key] = [t2].[Date]
    GROUP BY [t2].[Fiscal Month Label]
) AS [MainTable]
WHERE (NOT ([a0] IS NULL) OR NOT ([a1] IS NULL))

I structured the investigation into three areas.

Analysis – understand the data model, sizes, and relationships. Example prompts (a hand-written T-SQL equivalent of the first one follows this list):

- List all tables in the 'Fact' and 'Dimension' schemas with space usage in MB and number of rows.
- Show the names of the tables and the relationships among them. Please provide a textual representation of all relationships.
- List all foreign key relationships between tables in the 'Fact' and 'Dimension' schemas, showing the cardinality and referenced columns.
- Could you please explain the meaning of every table?
- Describe all schemas in this database, listing the number of tables and views per schema.
- Create a textual data model (ER-style) representation showing how all Fact and Dimension tables are connected.
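For reference, here is a minimal hand-written T-SQL sketch of what the first Analysis prompt asks for — table sizes and row counts from the system catalog. This is just one reasonable implementation, not necessarily the query Copilot generates:

-- Space usage (MB) and row counts for tables in the Fact and Dimension schemas
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [rows],  -- heap or clustered only, to avoid double-counting
       CAST(SUM(ps.reserved_page_count) * 8 / 1024.0 AS decimal(12, 2)) AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE s.name IN (N'Fact', N'Dimension')
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;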
Maintenance Plan Check – verify statistics freshness, index health/fragmentation, partition layout, and data quality:

- List all statistics in the database that have not been updated in the last 7 days, showing table name, number of rows, and last update date.
- List all indexes in the database with fragmentation higher than 30%, including table name, index name, and page count.
- Please provide the T-SQL to rebuild all indexes in ONLINE mode and to UPDATE STATISTICS for all tables that use automatic statistics.
- Check for fact table rows that reference dimension keys which no longer exist (broken foreign key integrity).
- Find queries that perform table scans on large tables where no indexes are used, based on recent execution plans.

Performance Improvements – simplify/reshape the query and consider indexed views, columnstore, partitioning, and missing indexes. In this area I would like to spend more time. The following prompts, for example, helped me understand the performance issue, simplify the query text, and explain what the query is doing:

- Identify the longest-running query in the last 24 hours and provide the full text of the query.
- Please simplify the query.
- Explain the query.
- Explain in plain language what the following SQL query does, including the purpose of each subquery and the final WHERE clause.
- Show a histogram of data distribution for key columns used in joins or filters, such as SaleDate, ProductCategory, or Region.

Finally, using this prompt I was able to find a lot of useful information on how to improve the execution of this query:

Analyze the following SQL query and provide a detailed performance review tailored for Azure SQL Database Hyperscale and Power BI DirectQuery scenarios. For each recommendation, estimate the potential performance improvement as a percentage (e.g. query runtime reduction, I/O savings, etc.).
1. Could this query benefit from a schemabound indexed view or a materialized view? Estimate the performance gain if implemented.
2. Is there any missing index on the involved tables that would improve join or filter efficiency? Include the suggested index definition and expected benefit.
3. Would using a clustered or nonclustered columnstore index on the main fact table improve performance? Estimate the potential gain in query time or storage.
4. Could partitioning the fact table improve performance by enabling partition elimination? If so, suggest the partition key and scheme, and estimate improvement.
5. Are current statistics sufficient for optimal execution plans? Recommend updates if needed and estimate impact.
6. Does this query preserve query folding when used with Power BI DirectQuery? If not, identify what breaks folding and suggest how to fix it.
7. Recommend any query rewrites or schema redesigns, along with estimated performance improvements for each.

I got a lot of improvement suggestions from it:

- Evaluated a schemabound indexed view that pre-aggregates by month (see Reference Implementations, and the sketch after this list), then pointed Power BI to the view.
- Ensured clustered columnstore on Fact.Sale; considered a targeted rowstore NCI on [Delivery Date Key] INCLUDE ([Total Excluding Tax], [Total Including Tax]) when columnstore alone wasn't sufficient.
- Verified statistics freshness on join/aggregate columns and enabled incremental stats for partitions.
- Checked partitioning by date to leverage elimination for common slicers.
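As an illustration, a minimal sketch of such a pre-aggregating indexed view. The view and index names are hypothetical, and it assumes the two total columns are declared NOT NULL (SUM over a nullable expression is not allowed in an indexed view):

CREATE VIEW [Fact].[SaleByFiscalMonth]
WITH SCHEMABINDING
AS
SELECT d.[Fiscal Month Label],
       SUM(s.[Total Excluding Tax]) AS [Total Excluding Tax],
       SUM(s.[Total Including Tax]) AS [Total Including Tax],
       COUNT_BIG(*) AS [Row Count]  -- COUNT_BIG(*) is required in an indexed view with GROUP BY
FROM [Fact].[Sale] AS s
INNER JOIN [Dimension].[Date] AS d
    ON s.[Delivery Date Key] = d.[Date]
GROUP BY d.[Fiscal Month Label];
GO

-- The unique clustered index is what actually materializes the view
CREATE UNIQUE CLUSTERED INDEX [IX_SaleByFiscalMonth]
ON [Fact].[SaleByFiscalMonth] ([Fiscal Month Label]);

Once materialized, Power BI can query the view directly, or the optimizer can match the aggregate against it automatically.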
September 2025 Recap: Azure Database for MySQL

Join us live on our YouTube channel on October 15, 2025, for an exclusive webinar where we'll dive deeper into the latest Azure Database for MySQL updates and answer your questions! Watch it live here.

Support for In-Place Major Version Upgrade from 8.0 to 8.4

We previously announced that Azure Database for MySQL version 8.4 is now generally available. We're now happy to share that in-place major version upgrade is supported for MySQL servers from 8.0 to 8.4. If you're currently on MySQL 5.7, you'll first need to perform an in-place upgrade from 5.7 to 8.0, and then upgrade from 8.0 to 8.4. Learn more.

Near-Zero-Downtime Maintenance - General Availability

You can now take advantage of near-zero-downtime maintenance for Azure Database for MySQL with high availability (HA) enabled. This capability is now generally available and is supported by our new HA architecture based on a dedicated Azure Standard Load Balancer design. Thanks to this design, maintenance is now faster and no longer impacted by client-side DNS caching, which previously caused brief connection drops in some scenarios. This enhancement ensures that your mission-critical workloads remain continuously available, even during infrastructure updates, helping you meet strict uptime requirements and maintain operational continuity. Learn more.

We would love your feedback

We look forward to your feedback as you explore these enhancements and continue building with Azure Database for MySQL. If you have any suggestions or queries about our service, please let us know by emailing us at AskAzureDBforMySQL@service.microsoft.com. To learn more about what's new with Flexible Server, see What's new in Azure Database for MySQL - Flexible Server. Stay tuned for more updates and announcements by following us on social media: YouTube | LinkedIn | X. Thanks for being part of our community!
August 2025 Recap: Azure Database for MySQL

We're excited to share a summary of the Azure Database for MySQL updates for the month of August 2025. Join us live on our YouTube channel on September 11, 2025, for an exclusive webinar where we'll dive deeper into these updates and answer your questions! Watch it live here.

Azure Database for MySQL 8.4 - General Availability

We're excited to announce that Azure Database for MySQL now supports MySQL 8.4 in General Availability (GA). This means you can create new MySQL 8.4 servers on Azure fully supported for production workloads. MySQL 8.4 is a long-term supported release from the MySQL community, bringing the latest features and improvements while emphasizing stability. With Azure's managed service, you get these new capabilities backed by Azure's enterprise-grade reliability and support. In short, MySQL 8.4 GA opens the door for you to upgrade your databases and future-proof your MySQL environment on Azure. Learn more.

Cross-subscription and cross-resource-group placement in the restore/replica provisioning workflow

You can now restore a server or create a read replica in a different subscription and resource group in Azure Database for MySQL – Flexible Server. This enhancement offers greater flexibility for cross-environment restores, resource organization, and subscription-level separation, helping meet governance and operational requirements. Learn more.

Ability to delete on-demand backups

You can now delete on-demand backups in Azure Database for MySQL – Flexible Server, giving you greater control over backup management and storage costs. This feature allows you to remove on-demand backups that are no longer needed, helping maintain a cleaner backup inventory and optimize resource usage. Learn more.

Unlocking Regional Insights with the Location-Based Capabilities REST API

Choosing the right Azure region for your MySQL Flexible Server deployment is critical. The new Location-Based Capability Set – List API helps you:

- Retrieve real-time, region-specific capabilities.
- Compare SKUs, storage options, backup retention, and HA configurations.
- Integrate insights into automation pipelines for smarter deployments.

This API empowers architects and developers to make informed decisions, reduce misconfigurations, and accelerate deployment cycles. Learn more.

Stay Connected

We look forward to your feedback as you explore these enhancements and continue building with Azure Database for MySQL. If you have any suggestions or queries about our service, please let us know by emailing us at AskAzureDBforMySQL@service.microsoft.com. You can also submit product ideas and feedback at the Azure Database for MySQL Community forum. To learn more about what's new with Flexible Server, see What's new in Azure Database for MySQL - Flexible Server. Stay tuned for more updates and announcements by following us on social media: YouTube | LinkedIn | X. Take care, and thanks for being part of our community!
Lesson Learned #531: Scalar UDF vs Parallelism

Last week I worked on a support case where our customer reported that the exact same query, executed against two identical databases with the same resources, was taking significantly longer on one of them. Both databases had the same number of rows, up-to-date statistics, and identical indexes. We started by collecting the execution plans, and I'd like to share what we found.

Comparing both execution plans, in the XML of the slower plan we found the following line:

<QueryPlan DegreeOfParallelism="0" NonParallelPlanReason="TSQLUserDefinedFunctionsNotParallelizable">

However, in the XML of the faster plan we found:

<QueryPlan DegreeOfParallelism="1" ContainsInlineScalarTsqlUdfs="true">

Based on this difference, it is clear that the query uses a scalar UDF, and that in one of the databases the definition of that scalar UDF makes it impossible to run the query in parallel, while in the other it is possible even though a scalar UDF is still used. As both databases use compatibility level 160, we started to analyze what is different between them that leads to this behavior. Here is an example that reproduces it:

DROP TABLE IF EXISTS dbo.TestData;
GO
CREATE TABLE dbo.TestData
(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Value1 INT,
    Value2 INT
);

INSERT INTO dbo.TestData (Value1, Value2)
SELECT ABS(CHECKSUM(NEWID()) % 10000),
       ABS(CHECKSUM(NEWID()) % 10000)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
WHERE a.object_id < 150 AND b.object_id < 150;

Let's create the scalar function that blocks parallel execution:

CREATE OR ALTER FUNCTION dbo.fn_BlockParallel (@v1 INT)
RETURNS INT
AS
BEGIN
    DECLARE @x INT;
    SELECT @x = DATEDIFF(MILLISECOND, GETDATE(), SYSDATETIME());
    RETURN ISNULL(@x, 0);
END;

When I executed the following query, I saw this in the plan XML:

<QueryPlan DegreeOfParallelism="0" NonParallelPlanReason="TSQLUserDefinedFunctionsNotParallelizable" CachedPlanSize="16" CompileTime="1" CompileCPU="1" CompileMemory="216">

SELECT ID, dbo.fn_BlockParallel(Value1)
FROM dbo.TestData
WHERE Value1 > 100
OPTION (MAXDOP 4);
GO

If I modify the code to the following scalar UDF, I see:

<QueryPlan DegreeOfParallelism="1" CachedPlanSize="16" CompileTime="1" CompileCPU="1" CompileMemory="272" ContainsInlineScalarTsqlUdfs="true">

CREATE OR ALTER FUNCTION dbo.fn_BlockParallel (@v1 INT)
RETURNS INT
AS
BEGIN
    DECLARE @x INT;
    SELECT @x = @v1 * 2;
    RETURN @x;
END;

So, even when using compatibility level 160, certain constructs inside scalar UDFs can prevent inlining, which in turn blocks query parallelism. When performance varies between environments, one of the things to check is whether scalar UDFs are involved, and whether they are eligible for inlining. To detect the issue quickly, look at the execution plan XML and check the attributes DegreeOfParallelism, ContainsInlineScalarTsqlUdfs, and NonParallelPlanReason.
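You can also check eligibility ahead of time: sys.sql_modules exposes an is_inlineable column for this purpose. A small sketch of that check:

-- Which T-SQL scalar UDFs in the database are eligible for scalar UDF inlining?
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS function_name,
       m.is_inlineable
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'FN';  -- T-SQL scalar functions

With the first definition above, fn_BlockParallel should report is_inlineable = 0; with the second, 1.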
Lesson Learned #530: Comparing Execution Plans to Expose a Hidden Performance Anti-Pattern

One of the most powerful features of SSMS Copilot is how it lets you compare execution plans and immediately surface performance issues. In this case, I would like to share my lesson learned from comparing two queries that behave very differently inside the engine.

We have the following queries, both using a table _x_y_z_MS_HighCPU that contains 4 million rows. The column TextToSearch is of the varchar(200) data type.

-- Query 1
SELECT COUNT(*)
FROM [MSxyzTest].[_x_y_z_MS_HighCPU]
WHERE TextToSearch = N'Value: 9';

-- Query 2
SELECT COUNT(*)
FROM [MSxyzTest].[_x_y_z_MS_HighCPU]
WHERE TextToSearch = 'Value: 9';

Since the query texts are different, each will have a different query ID in Query Store. By running the following T-SQL, for example, I can identify the query IDs:

SELECT qsqt.query_sql_text,
       qsq.query_id,
       qsp.plan_id,
       qsp.query_plan_hash,
       qsp.last_execution_time
FROM sys.query_store_query_text qsqt
JOIN sys.query_store_query qsq ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_plan qsp ON qsq.query_id = qsp.query_id
WHERE qsqt.query_sql_text LIKE '%SELECT COUNT(*)%' -- FROM [[MSxyzTest]].[[_x_y_z_MS_HighCPU]]%'
ORDER BY qsp.last_execution_time DESC;

Queries 1 and 2 can then be compared directly. Using Copilot, I ran the following prompt:

Compare the execution plans for the two queries (query id 1 and query id 2) using Query Store. Highlight any differences in operators, estimated vs actual row counts, or implicit conversions.

Running the following prompt, we could see the impact of using an anti-pattern:

CPU Usage: Please show the top resource-consuming queries in the current database using Query Store data. Include query text, execution count, duration, CPU time, and logical reads.
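The anti-pattern here is the N'...' Unicode literal compared against a varchar column: data type precedence forces an implicit conversion of the column to nvarchar, which can turn an index seek into a scan and burn CPU. Beyond Copilot, a common way to hunt for this across the plan cache is to search plans for CONVERT_IMPLICIT — a generic technique, not from the original post, and potentially expensive on busy systems:

-- Cached plans that contain an implicit conversion, most reused first
SELECT TOP (20) cp.usecounts,
       st.text AS query_text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE N'%CONVERT_IMPLICIT%'
ORDER BY cp.usecounts DESC;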
Lesson Learned #525: Tracking Command Timeouts in Azure SQL: Beyond Query Store with Extended Events

A few days ago, we were working on a support case where our customer was intermittently experiencing command timeouts. What made the case interesting was that queries which usually completed in under one second suddenly started taking more than 10 seconds to execute. Since the application — developed in Python using the ODBC Driver 18 for SQL Server — had a command timeout set to 5 seconds, the following error was triggered every time the threshold was exceeded:

Error executing command, retrying in 5 seconds. Attempt 1 of 3 with new timeout 5. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')

The application had built-in retry logic, dynamically increasing the timeout on each of the three retry attempts, to allow time for the query to complete and to log enough data for post-error analysis. Example logs from the retry logic:

(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 5 seconds. Attempt 1 of 3 with new timeout 5. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
INFO:root:Connecting to the DB jmjuradotestdb1 - Thread id 39808 - (Attempt 1/3)
INFO:root:Connected to the Database in jmjuradotestdb1 - Thread id 39808 - 0.0445 seconds
---
(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 9 seconds. Attempt 2 of 3 with new timeout 9. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
INFO:root:Connecting to the DB jmjuradotestdb1 - Thread id 39808 - (Attempt 1/3)
INFO:root:Connected to the Database in jmjuradotestdb1 - Thread id 39808 - 0.0532 seconds
---
(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 13 seconds. Attempt 3 of 3 with new timeout 13. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
(RunCommandTimeout) - Thread: 39808 - Loop:2/5 Execution Time: 9.7537 seconds

My first prompt using SSMS Copilot was: "Review the queries that experienced a command timeout or were aborted in the last 30 minutes. Include query text, query id, duration, and the reason and code for the abort if available." The results all pointed to query 216 as the one hitting command timeouts.

My next prompt was: for query ID 216, show the total number of executions. It reported 28 executions, but this number didn't match the number of aborted and non-aborted executions observed in the application logs. Why the difference? Checking the table sys.query_store_runtime_stats, I found 10 rows, all having execution_type = 3, and 28 total executions. Query Store aggregates query execution data over a fixed interval, and execution_type is an indicator that at least one execution during that runtime interval was aborted. So at least several of those executions were aborted, and others were not.
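For reference, a quick way to see how those executions were bucketed per interval (a generic Query Store query; query_id 216 is the one from this case):

SELECT rs.runtime_stats_interval_id,
       rs.execution_type,
       rs.execution_type_desc,
       rs.count_executions,
       rs.last_execution_time
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p ON p.plan_id = rs.plan_id
WHERE p.query_id = 216
ORDER BY rs.last_execution_time DESC;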
To obtain a more granular and accurate picture, I created an Extended Events session to capture attention events, using a ring_buffer target:

CREATE EVENT SESSION [CommandAborted] ON DATABASE
ADD EVENT sqlserver.attention(
    ACTION (
        sqlserver.client_app_name,
        sqlserver.client_hostname,
        sqlserver.username,
        sqlserver.database_name,
        sqlserver.sql_text
    )
)
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 4096KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);
GO

ALTER EVENT SESSION [CommandAborted] ON DATABASE STATE = START;

After reproducing the command timeout scenario again, I was able to see only the aborted executions: in this case, 28 executions ran in total and 7 of them were aborted.

WITH RingBufferXML AS (
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM sys.dm_xe_database_session_targets t
    JOIN sys.dm_xe_database_sessions s ON t.event_session_address = s.address
    WHERE t.target_name = 'ring_buffer'
      AND s.name = 'CommandAborted'
)
SELECT x.value('@name', 'varchar(50)') AS event_name,
       x.value('@timestamp', 'datetime2') AS event_time,
       x.value('(action[@name="client_app_name"]/value)[1]', 'nvarchar(256)') AS client_app_name,
       x.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text,
       x.value('(data[@name="duration"]/value)[1]', 'bigint') AS duration_microseconds,
       CAST(x.value('(data[@name="duration"]/value)[1]', 'bigint') / 1000000.0 AS decimal(10,3)) AS duration_seconds
FROM RingBufferXML
CROSS APPLY target_data.nodes('//event') AS tab(x)
WHERE x.value('@name', 'varchar(50)') = 'attention'
  AND x.value('(action[@name="client_app_name"]/value)[1]', 'nvarchar(256)') = 'TEST-DataCon'
ORDER BY event_time DESC;
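One practical note: before reproducing the issue, you can confirm the database-scoped session is actually running with a trivial check against the same DMVs the shredding query uses:

-- Verify the CommandAborted session is started and has its ring_buffer target attached
SELECT s.name AS session_name,
       t.target_name
FROM sys.dm_xe_database_sessions AS s
JOIN sys.dm_xe_database_session_targets AS t
  ON t.event_session_address = s.address
WHERE s.name = 'CommandAborted';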
Lesson Learned #524: Optimizing Power BI with DirectQuery

In some situations, customers using Power BI with DirectQuery have reported performance issues depending on how the query has been defined by Power BI. At DataCon 2025 this June in Seattle, I had the great opportunity to present some performance recommendations in this area, based on the following articles that we published on our blog some time ago:

- Lesson Learned #247: All started with the phrase: In PowerBI Direct Query is slow - Indexed views | Microsoft Community Hub
- Lesson Learned #249: All started with the phrase: In PowerBI Direct Query is slow - Partitioned table | Microsoft Community Hub
- Lesson Learned #250: All started with the phrase: In PowerBI Direct Query is slow - ColumnStore Index | Microsoft Community Hub

In this folder you can find all the materials that we used to deliver this session. The lab helps us better understand where performance gains can be achieved in our database — making it easier to identify what to optimize and how. Also, using the Copilot feature added in SQL Server Management Studio v21, I would like to share some prompts that were very useful during the troubleshooting scenario, which we divided into three areas.

Analysis Phase:

- List all tables in the 'Fact' and 'Dimension' schemas with space usage in MB and number of rows.
- List all tables in the 'Fact' and 'Dimension' schemas with their structure, including data types, primary keys, foreign keys and indexes. Then provide optimization suggestions for DirectQuery scenarios in Power BI.
- Show the names of the tables and the relationships among them.
- List all foreign key relationships between tables in the 'Fact' and 'Dimension' schemas, showing the cardinality and referenced columns.
- Could you please explain the meaning of every table?
- Describe all schemas in this database, listing the number of tables and views per schema.
- Create a textual data model (ER-style) representation showing how all Fact and Dimension tables are connected.

Maintenance Plans:

- List all statistics in the database that have not been updated in the last 7 days, showing table name, number of rows, and last update date.
- List all indexes in the database with fragmentation higher than 30%. Provide the T-SQL to rebuild each table in the 'Dimension' and 'Fact' schemas in ONLINE mode, and another T-SQL statement for updating automatic statistics.
- List all tables with allocated space but zero rows, or with excessive reserved space not used by actual data.

Performance Troubleshooting Phase:

- I have this query; what improvements could we apply for better performance?
- Please simplify the query and explain it.
- Explain in plain language what the following SQL query does, including the purpose of each subquery and the final WHERE clause.
- Show a histogram of data distribution for key columns used in joins or filters, such as SaleDate, ProductCategory, or Region.
- Can this query be transformed into a schemabound indexed view that pre-aggregates the sales by [Fiscal Month Label] to improve DirectQuery performance?
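Since the three referenced labs revolve around indexed views, partitioning, and columnstore, here is a minimal hedged sketch of the columnstore option on the fact table. The index names are hypothetical; on a table that already has a rowstore clustered index you would either rebuild it with DROP_EXISTING or use the nonclustered variant:

-- Option A: clustered columnstore — compresses the whole fact table, ideal for large scans/aggregations
CREATE CLUSTERED COLUMNSTORE INDEX [CCI_Fact_Sale]
ON [Fact].[Sale];

-- Option B: nonclustered columnstore limited to the columns the report aggregates
CREATE NONCLUSTERED COLUMNSTORE INDEX [NCCI_Fact_Sale_Totals]
ON [Fact].[Sale] ([Delivery Date Key], [Total Excluding Tax], [Total Including Tax]);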
Lesson Learned #523: Measuring Import Time - Parsing SqlPackage Logs with PowerShell

This week I worked on a service request where a customer was experiencing long import times when restoring a large BACPAC into Azure SQL Database, and I needed to understand where time was being spent inside SqlPackage.exe. I relied on the diagnostics log and PowerShell to analyze this time; the file contains valuable information that we can extract and summarize.

I developed a small PowerShell script with the following idea:

- Classify every entry (Information, Verbose-25, Verbose-19, …).
- Track cumulative time for each class.
- Flag any operation whose delta exceeds 10 seconds with a warning.
- Produce two tables at the end: a summary per level (counts + total seconds) and the Verbose-25 operations sorted by elapsed time.

I focused on Verbose-25 (the Verbose level plus event id 25) because I identified that those lines contain the elapsed time of each finished operation — usually the slowest parts.

How the script works:

- It reads the content 5,000 lines at a time.
- It parses every line with the Process-Line function to obtain four values: Level, Id, Timestamp, and Message.
- If the entry is not a Verbose-25 "operation ended" message, the time is measured against the previous timestamp; otherwise, for "Perf: ... Operation ended" entries, it uses the elapsed milliseconds reported in the message.
- When the delta exceeds 10 seconds, a Write-Warning is triggered.

$logPath = "C:\temp\Exampledf.txt"
$prevStamp = $null
$Salida = $null
[int]$Lines = 0
$stats = @{}
$Verbose25 = @{}

function Process-Line {
    param (
        [string]$line,
        [ref]$prevStamp
    )

    if ($line -notmatch 'Microsoft\.Data\.Tools\.Diagnostics\.Tracer') { return "" }

    $tail = $line.Substring($line.IndexOf('Tracer') + 6).Trim()
    $c1 = $tail.IndexOf(':')
    if ($c1 -lt 0) { return "" }

    $level = $tail.Substring(0, $c1).Trim()
    $rest = $tail.Substring($c1 + 1).Trim()
    $c2 = $rest.IndexOf(':')
    if ($c2 -lt 0) { return "" }

    $id = $rest.Substring(0, $c2).Trim()
    $rest = $rest.Substring($c2 + 1).Trim()
    if ($rest.Length -lt 19) { return "" }

    $stamp = $rest.Substring(0, 19)
    $msg = $rest.Substring(19).Trim()
    if ($msg.StartsWith(':')) { $msg = $msg.Substring(1).Trim() }

    if ($level -eq "Verbose") {
        $levelKey = "$level-$id"   # Verbose-25, Verbose-19, ...
    }
    else {
        $levelKey = $level
    }

    $delta = 0
    if ($msg -like 'Perf: Operation ended*' -and $level -eq "Verbose") {
        # e.g.: "...elapsed in ms): StartImportTable,[schema].[table],58"
        $elapsedMs = ($msg.Split(',')[-1]).Trim()
        if ($elapsedMs -match '^\d+$') {
            $delta = [double]$elapsedMs / 1000
            $Verbose25[$msg] = @{ ElapsedTime = [double]$elapsedMs / 1000 }
        }
        $prevStamp.Value = [datetime]$stamp
    }
    else {
        $curr = [datetime]$stamp
        if ($prevStamp.Value) { $delta = ($curr - $prevStamp.Value).TotalSeconds }
        $prevStamp.Value = $curr
    }

    # ---- Update the summary -----------------------------------------------
    if (-not $stats.ContainsKey($levelKey)) { $stats[$levelKey] = @{ Count = 0; Total = 0 } }
    $stats[$levelKey].Count++
    $stats[$levelKey].Total += $delta

    return "$levelKey $delta $($msg.Trim())"
}

# Read the log in 5,000-line batches, warn on deltas > 10 s, and echo every 5,000th parsed line
Get-Content -Path $logPath -ReadCount 5000 | ForEach-Object {
    foreach ($line in $_) {
        $Lines++
        $Salida = Process-Line -line $line -prevStamp ([ref]$prevStamp)
        if ($Salida) {
            $deltaToken = [double]($Salida.Split()[1])
            if ($deltaToken -gt 10) { Write-Warning "$Lines $Salida" }
            if ($Lines % 5000 -eq 0 -and $Salida) { Write-Output "$Lines Text: $Salida" }
        }
    }
}

Write-Output "`n--- Summary per Level -----------------------------------------"
Write-Output "Lines Read: $Lines"
$stats.GetEnumerator() | Sort-Object Name | ForEach-Object {
    [pscustomobject]@{
        Level = $_.Name
        Operations = $_.Value.Count
        TotalTimeSec = [math]::Round($_.Value.Total, 3)
    }
} | Format-Table -AutoSize

Write-Output "`n--- Verbose-25 Operations -------------------------------------"
$Verbose25.GetEnumerator() |
    Sort-Object @{ Expression = { [double]$_.Value.ElapsedTime }; Descending = $true } |
    ForEach-Object {
        [pscustomobject]@{
            Operation = $_.Name
            ElapsedTimeSec = [double]$_.Value.ElapsedTime
        }
    } | Format-Table -AutoSize
Model Context Protocol (MCP) Server for Azure Database for MySQL

We are excited to introduce a new MCP Server for integrating your AI models with data hosted in Azure Database for MySQL. By utilizing this server, you can effortlessly connect any AI application that supports MCP to your MySQL flexible server (using either MySQL password-based authentication or Microsoft Entra authentication methods), enabling you to provide your business data as meaningful context in a standardized and secure manner.