Unlocking AI-Driven Data Access: Azure Database for MySQL Support via the Azure MCP Server
Step into a new era of data-driven intelligence with the fusion of Azure MCP Server and Azure Database for MySQL, where your MySQL data is no longer just stored, but instantly conversational, intelligent and action-ready. By harnessing the open-standard Model Context Protocol (MCP), your AI agents can now query, analyze and automate in natural language, accessing tables, surfacing insights and acting on your MySQL-driven business logic as easily as chatting with a colleague. It's like giving your data a voice and your applications a brain, all within Azure's trusted cloud platform.

We are excited to announce that we have added support for Azure Database for MySQL in the Azure MCP Server. The Azure MCP Server leverages the Model Context Protocol (MCP) to allow AI agents to seamlessly interact with various Azure services and perform context-aware operations such as querying databases and managing cloud resources. Building on this foundation, the Azure MCP Server now offers a set of tools that AI agents and apps can invoke to interact with Azure Database for MySQL - enabling them to list and query databases, retrieve schema details of tables, and access server configurations and parameters. These capabilities are delivered through the same standardized interface used for other Azure services, making it easier to adopt the MCP standard for leveraging AI to work with your business data and operations across the Azure ecosystem.

Before we delve into these new tools and explore how to get started with them, let's take a moment to refresh our understanding of MCP and the Azure MCP Server - what they are, how they work, and why they matter.

MCP architecture and key components

The Model Context Protocol (MCP) is an emerging open protocol designed to integrate AI models with external data sources and services in a scalable, standardized, and secure manner. MCP defines a client-server architecture with four key components: the MCP Host, the MCP Client, the MCP Server, and the external data sources, services and APIs that provide the data context required to enhance AI models. To explain briefly, an MCP Host (an AI app or agent) includes an MCP client component that connects to one or more MCP Servers. These servers are lightweight programs that securely interface with external data sources, services and APIs and expose them to MCP clients in the form of standardized capabilities called tools, resources and prompts. Learn more: MCP Documentation

What is Azure MCP Server?

Azure offers a multitude of cloud services that help developers build robust applications and AI solutions to address business needs. The Azure MCP Server aims to expose these powerful services for agentic usage, allowing AI systems to perform operations that are context-aware of your Azure resources and your business data within them, while ensuring adherence to the Model Context Protocol. It supports a wide range of Azure services and tools including Azure AI Search, Azure Cosmos DB, Azure Storage, Azure Monitor, Azure CLI and Azure Developer CLI extensions. This means that you can empower AI agents, apps and tools to:
- Explore your Azure resources, such as listing and retrieving details on your Azure subscriptions, resource groups, services, databases, and tables.
- Search, query and analyze your data and logs.
- Execute Azure CLI and Azure Developer CLI commands directly, and more!
Learn more: Azure MCP Server GitHub Repository

Introducing new Azure MCP Server tools to interact with Azure Database for MySQL

The Azure MCP Server now includes the following tools that allow AI agents to interact with Azure Database for MySQL and the valuable business data residing in these servers, in accordance with the MCP standard:

- azmcp_mysql_server_list: List all MySQL servers in a subscription and resource group. Example prompts: "List MySQL servers in resource group 'prod-rg'." "Show MySQL servers in region 'eastus'."
- azmcp_mysql_server_config_get: Retrieve the configuration of a MySQL server. Example prompts: "What is the backup retention period for server 'my-mysql-server'?" "Show storage allocation for server 'my-mysql-server'."
- azmcp_mysql_server_param_get: Retrieve a specific parameter of a MySQL server. Example prompts: "Is slow_query_log enabled on server my-mysql-server?" "Get innodb_buffer_pool_size for server my-mysql-server."
- azmcp_mysql_server_param_set: Set a specific parameter of a MySQL server to a specific value. Example prompts: "Set max_connections to 500 on server my-mysql-server." "Set wait_timeout to 300 on server my-mysql-server."
- azmcp_mysql_table_list: List all tables in a MySQL database. Example prompts: "List tables starting with 'tmp_' in database 'appdb'." "How many tables are in database 'analytics'?"
- azmcp_mysql_table_schema_get: Get the schema of a specific table in a MySQL database. Example prompts: "Show indexes for table 'transactions' in database 'billing'." "What is the primary key for table 'users' in database 'auth'?"
- azmcp_mysql_database_query: Execute a SELECT query on a MySQL database. The query must start with SELECT and cannot contain any destructive SQL operations, for security reasons. Example prompts: "How many orders were placed in the last 30 days in the salesdb.orders table?" "Show the number of new users signed up in the last week in appdb.users grouped by day."

These interactions are secured using Microsoft Entra authentication, which enables seamless, identity-based access to Azure Database for MySQL - eliminating the need for password storage and enhancing overall security.

How are these new tools in the Azure MCP Server different from the standalone MCP Server for Azure Database for MySQL?

We have integrated the key capabilities of the Azure Database for MySQL MCP server into the Azure MCP Server, making it easier to connect your agentic apps not only to Azure Database for MySQL but also to other Azure services through one unified and secure interface!

How to get started

Installing and running the Azure MCP Server is quick and easy! Use GitHub Copilot in Visual Studio Code to gain meaningful insights from your business data in Azure Database for MySQL.

Pre-requisites
- Install Visual Studio Code.
- Install the GitHub Copilot and GitHub Copilot Chat extensions.
- An Azure Database for MySQL server with Microsoft Entra authentication enabled.
- Ensure that the MCP Server is installed on a system with network connectivity and credentials to connect to Azure Database for MySQL.

Installation and Testing

Please use this guide for installation: Azure MCP Server Installation Guide. Try the following prompts with your Azure Database for MySQL: Azure Database for MySQL tools for Azure MCP Server.

Try it out and share your feedback!

Start using the Azure MCP Server with the MySQL tools today and let our cloud services become your AI agent's most powerful ally. We're counting on your feedback - every comment, suggestion, or bug report helps us build better tools together. Stay tuned: more features and capabilities are on the horizon!
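As a concrete illustration, the kind of read-only statement the azmcp_mysql_database_query tool might run for the "orders placed in the last 30 days" prompt shown earlier could look like the following sketch; the salesdb.orders table and its order_date column are hypothetical names used only for illustration.

-- Hedged sketch: the kind of SELECT-only query the agent is allowed to run.
SELECT COUNT(*) AS orders_last_30_days
FROM salesdb.orders
WHERE order_date >= CURDATE() - INTERVAL 30 DAY;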
Feel free to comment below or write to us with your feedback and queries at AskAzureDBforMySQL@service.microsoft.com.
Build Smarter with Azure HorizonDB

By: Maxim Lukiyanov, PhD, Principal PM Manager; Abe Omorogbe, Senior Product Manager; Shreya R. Aithal, Product Manager II; Swarathmika Kakivaya, Product Manager II

Today, at Microsoft Ignite, we are announcing a new PostgreSQL database service - Azure HorizonDB. You can read the announcement here, and in this blog you can learn more about HorizonDB's AI features and development tools. Azure HorizonDB is designed for the full spectrum of modern database needs - from quickly building new AI applications, to scaling enterprise workloads to unprecedented levels of performance and availability, to managing your databases efficiently and securely. To help with building new AI applications we are introducing three features: DiskANN Advanced Filtering, built-in AI model management, and integration with Microsoft Foundry. To help with database management we are introducing a set of new capabilities in the PostgreSQL extension for Visual Studio Code, as well as announcing General Availability of the extension. Let's dive into the AI features first.

DiskANN Advanced Filtering

We are excited to announce a new enhancement in Microsoft's state-of-the-art vector indexing algorithm, DiskANN: DiskANN Advanced Filtering. Advanced Filtering addresses a common problem in vector search: combining vector search with filtering. In real-world applications, where queries often include constraints like price ranges, ratings, or categories, traditional vector search approaches, such as pgvector's HNSW, rely on multi-step retrieval and post-filtering, which can make search extremely slow. DiskANN Advanced Filtering solves this by combining filter and search into one operation: while the graph of vectors is traversed during the vector search, each vector is also checked against the filter predicate, ensuring that only the correct vectors are retrieved. Under the hood, it works in a three-step process: first creating a bitmap of relevant rows using indexes on attributes such as price or rating, then performing a filter-aware graph traversal against the bitmap, and finally validating and ordering the results for accuracy. This integrated approach delivers dramatically faster and more efficient filtered vector searches. Initial benchmarks show that enabling Advanced Filtering on DiskANN reduces query latency by up to 3x, depending on filter selectivity.

AI Model Management

Another exciting feature of HorizonDB is AI Model Management. This feature automates Microsoft Foundry model provisioning during database deployment and instantly activates database semantic operators. This eliminates tens of setup and configuration steps and simplifies the development of new AI apps and agents. AI Model Management elevates the experience of using semantic operators within PostgreSQL. When activated, it provisions key models for embedding, semantic ranking and generation via Foundry; installs and configures the azure_ai extension to enable the operators; establishes secure connections; and integrates model management, monitoring and cost management within HorizonDB. What would otherwise require significant manual effort and context-switching between Foundry and PostgreSQL for configuration, management, and monitoring is now possible with just a few clicks, all without leaving the PostgreSQL environment. You can also continue to bring your own Foundry models, with a simplified and enhanced process for registering your custom model endpoints in the azure_ai extension.
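To make the filtered vector search idea above concrete, here is a minimal sketch in PostgreSQL syntax. It uses the vector and pg_diskann extensions as they exist on Azure Database for PostgreSQL today; the table, its columns, and the exact way HorizonDB exposes Advanced Filtering are assumptions for illustration only.

-- Hedged sketch: a DiskANN vector index plus a filtered similarity search.
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_diskann;

CREATE TABLE products (
    id        bigint PRIMARY KEY,
    price     numeric,
    rating    int,
    embedding vector(3)   -- tiny dimension just for the sketch; real embeddings are much larger
);

CREATE INDEX products_embedding_diskann
    ON products USING diskann (embedding vector_cosine_ops);

-- Filter and vector search in one statement: Advanced Filtering aims to apply the
-- price/rating predicates while traversing the graph instead of post-filtering.
SELECT id, price, rating
FROM products
WHERE price < 50 AND rating >= 4
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'   -- replace with a real query embedding
LIMIT 10;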
Microsoft Foundry Integration

Microsoft Foundry offers a comprehensive technology stack for building AI apps and agents. But building modern agents capable of reasoning, acting, and collaborating is impossible without a connection to data. To facilitate that connection, we are excited to announce a new PostgreSQL connector in Microsoft Foundry. The connector is designed around a new standard in data connectivity, the Model Context Protocol (MCP). It enables Foundry agents to interact with HorizonDB securely and intelligently, using natural language instead of SQL, and leveraging Microsoft Entra ID to ensure a secure connection. In addition to HorizonDB, this connector also supports Azure Database for PostgreSQL (ADP). This integration allows Foundry agents to perform tasks like:
- Exploring database schemas
- Retrieving records and insights
- Performing analytical queries
- Executing vector similarity searches for semantic search use cases

All through natural language, without compromising enterprise security or compliance. To get started with the Foundry integration, follow these setup steps to deploy your own HorizonDB (requires participation in the Private Preview) or ADP and connect it to Foundry in just a few steps.

PostgreSQL extension for VS Code is Generally Available

We're excited to announce that the PostgreSQL extension for Visual Studio Code is now Generally Available. This extension has garnered significant popularity within the PostgreSQL community since its preview in May 2025, reaching more than 200K installs. It is the easiest way to connect to a PostgreSQL database from your favorite editor, manage your databases, and take advantage of built-in AI capabilities without ever leaving VS Code. The extension works with any PostgreSQL instance, whether it's on-premises or in the cloud, and also supports unique features of Azure HorizonDB and Azure Database for PostgreSQL (ADP).

One of the key new capabilities is Metrics Intelligence, which uses Copilot and real-time telemetry from HorizonDB or ADP to help you diagnose and fix performance issues in seconds. Instead of digging through logs and query plans, you can open the Performance Dashboard, see a CPU spike, and ask Copilot to investigate. The extension sends a rich prompt that tells Copilot to analyze live metrics, identify the root cause, and propose an actionable fix. For example, Copilot might find a full table scan on a large table, recommend a composite index on the filter columns, create that index, and confirm the query plan now uses it. The result is dramatic: you can investigate and resolve the CPU spike in seconds, with no manual scripting or guesswork, and with no prior PostgreSQL expertise required.

The extension also makes it easier to work with graph data. HorizonDB and ADP support the open-source graph extension Apache AGE, which turns these services into fully managed graph databases. You can run graph queries against HorizonDB and immediately visualize the results as an interactive graph inside VS Code. This helps you understand relationships in your data faster, whether you're exploring customer journeys, network topologies, or knowledge graphs - all without switching tools.

In Conclusion

Azure HorizonDB brings together everything teams need to build, run, and manage modern, AI-powered applications on PostgreSQL. With DiskANN Advanced Filtering, you can deliver low-latency, filtered vector search at scale.
With built-in AI Model Management and Microsoft Foundry integration, you can provision models, wire up semantic operators, and connect agents to your data with far fewer steps and far less complexity. And with the PostgreSQL extension for Visual Studio Code, you get an intuitive, AI-assisted experience for performance tuning and graph visualization, right inside the tools you already use. HorizonDB is now available in private preview. If you're interested in building AI apps and agents on a fully managed, PostgreSQL-compatible service with built-in AI and rich developer tooling, sign up for the Private Preview: https://aka.ms/PreviewHorizonDB.
Lessons Learned #537: Copilot Prompts for Troubleshooting on Azure SQL Database

We had the opportunity to share in several community sessions our experience of how SSMS Copilot can help across multiple phases of troubleshooting. In this article, I would like to share a set of prompts we used in those sessions and show how to apply them to an example query. During a performance incident, we captured the following query, generated by Power BI.

SELECT TOP (1000001) *
FROM (
    SELECT [t2].[Fiscal Month Label] AS [c38],
           SUM([t5].[Total Excluding Tax]) AS [a0],
           SUM([t5].[Total Including Tax]) AS [a1]
    FROM (
        SELECT [$Table].[Sale Key] as [Sale Key], [$Table].[City Key] as [City Key],
               [$Table].[Customer Key] as [Customer Key], [$Table].[Bill To Customer Key] as [Bill To Customer Key],
               [$Table].[Stock Item Key] as [Stock Item Key], [$Table].[Invoice Date Key] as [Invoice Date Key],
               [$Table].[Delivery Date Key] as [Delivery Date Key], [$Table].[Salesperson Key] as [Salesperson Key],
               [$Table].[WWI Invoice ID] as [WWI Invoice ID], [$Table].[Description] as [Description],
               [$Table].[Package] as [Package], [$Table].[Quantity] as [Quantity],
               [$Table].[Unit Price] as [Unit Price], [$Table].[Tax Rate] as [Tax Rate],
               [$Table].[Total Excluding Tax] as [Total Excluding Tax], [$Table].[Tax Amount] as [Tax Amount],
               [$Table].[Profit] as [Profit], [$Table].[Total Including Tax] as [Total Including Tax],
               [$Table].[Total Dry Items] as [Total Dry Items], [$Table].[Total Chiller Items] as [Total Chiller Items],
               [$Table].[Lineage Key] as [Lineage Key]
        FROM [Fact].[Sale] as [$Table]
    ) AS [t5]
    INNER JOIN (
        SELECT [$Table].[Date] as [Date], [$Table].[Day Number] as [Day Number], [$Table].[Day] as [Day],
               [$Table].[Month] as [Month], [$Table].[Short Month] as [Short Month],
               [$Table].[Calendar Month Number] as [Calendar Month Number], [$Table].[Calendar Month Label] as [Calendar Month Label],
               [$Table].[Calendar Year] as [Calendar Year], [$Table].[Calendar Year Label] as [Calendar Year Label],
               [$Table].[Fiscal Month Number] as [Fiscal Month Number], [$Table].[Fiscal Month Label] as [Fiscal Month Label],
               [$Table].[Fiscal Year] as [Fiscal Year], [$Table].[Fiscal Year Label] as [Fiscal Year Label],
               [$Table].[ISO Week Number] as [ISO Week Number]
        FROM [Dimension].[Date] as [$Table]
    ) AS [t2]
        ON [t5].[Delivery Date Key] = [t2].[Date]
    GROUP BY [t2].[Fiscal Month Label]
) AS [MainTable]
WHERE ( NOT([a0] IS NULL) OR NOT([a1] IS NULL) )

I structure the investigation in three areas.

Analysis: understand the data model, sizes, and relationships.
- List all tables in the 'Fact' and 'Dimension' schemas with space usage in MB and number of rows.
- Show the names of the tables and their relations among them. Please, provide a textual representation for all relations.
- List all foreign key relationships between tables in the 'Fact' and 'Dimension' schemas, showing the cardinality and referenced columns.
- Could you please let me know what is the meaning of every table?
- Describe all schemas in this database, listing the number of tables and views per schema.
- Create a textual data model (ER-style) representation showing how all Fact and Dimension tables are connected.

Maintenance Plan Check: verify statistics freshness, index health/fragmentation, partition layout, and data quality.
- List all statistics in the database that have not been updated in the last 7 days, showing table name, number of rows, and last update date.
- List all indexes in the database with fragmentation higher than 30%, including table name, index name, and page count.
- Please, provide the T-SQL to rebuild all indexes in ONLINE mode and UPDATE STATISTICS for all tables that use automatic statistics.
- Check for fact table rows that reference dimension keys which no longer exist (broken foreign key integrity).
- Find queries that perform table scans on large tables where no indexes are used, based on recent execution plans.

Performance Improvements: simplify/reshape the query and consider indexed views, columnstore, partitioning, and missing indexes. In this part, I would like to spend more time on these prompts; for example, the following ones help me understand the performance issue, simplify the query text, and explain what the query is doing.
- Identify the longest-running query in the last 24 hours and provide the full text of the query.
- Please simplify the query.
- Explain the query to me.
- Explain in plain language what the following SQL query does, including the purpose of each subquery and the final WHERE clause.
- Show a histogram of data distribution for key columns used in joins or filters, such as SaleDate, ProductCategory, or Region.

Finally, using this prompt I could find a lot of useful information on how to improve the execution of this query:

Analyze the following SQL query and provide a detailed performance review tailored for Azure SQL Database Hyperscale and Power BI DirectQuery scenarios. For each recommendation, estimate the potential performance improvement as a percentage (e.g. query runtime reduction, I/O savings, etc.).
1. Could this query benefit from a schemabound indexed view or a materialized view? Estimate the performance gain if implemented.
2. Is there any missing index on the involved tables that would improve join or filter efficiency? Include the suggested index definition and expected benefit.
3. Would using a clustered or nonclustered columnstore index on the main fact table improve performance? Estimate the potential gain in query time or storage.
4. Could partitioning the fact table improve performance by enabling partition elimination? If so, suggest the partition key and scheme, and estimate improvement.
5. Are current statistics sufficient for optimal execution plans? Recommend updates if needed and estimate impact.
6. Does this query preserve query folding when used with Power BI DirectQuery? If not, identify what breaks folding and suggest how to fix it.
7. Recommend any query rewrites or schema redesigns, along with estimated performance improvements for each.

I got a lot of improvement suggestions from it:
- Evaluated a schemabound indexed view that pre-aggregates by month (see Reference Implementations; a sketch follows this list), then pointed Power BI to the view.
- Ensured clustered columnstore on Fact.Sale; considered a targeted rowstore NCI on [Delivery Date Key] INCLUDE ([Total Excluding Tax], [Total Including Tax]) when columnstore alone wasn't sufficient.
- Verified statistics freshness on join/aggregate columns and enabled incremental stats for partitions.
- Checked partitioning by date to leverage elimination for common slicers.
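A minimal sketch of that schemabound indexed view, assuming the [Fact].[Sale] and [Dimension].[Date] tables from the query above (column nullability, SET options, and your actual reporting needs may require adjustments):

-- Hedged sketch: pre-aggregate totals by fiscal month so DirectQuery hits a small indexed view.
-- SUM in an indexed view requires the summed columns to be non-nullable (or wrapped in ISNULL).
CREATE VIEW Fact.v_SalesByFiscalMonth
WITH SCHEMABINDING
AS
SELECT d.[Fiscal Month Label],
       SUM(s.[Total Excluding Tax]) AS TotalExcludingTax,
       SUM(s.[Total Including Tax]) AS TotalIncludingTax,
       COUNT_BIG(*)                 AS RowCnt   -- required when an indexed view aggregates
FROM Fact.Sale AS s
INNER JOIN Dimension.[Date] AS d
    ON s.[Delivery Date Key] = d.[Date]
GROUP BY d.[Fiscal Month Label];
GO

-- The unique clustered index is what materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_v_SalesByFiscalMonth
    ON Fact.v_SalesByFiscalMonth ([Fiscal Month Label]);

Pointing the Power BI dataset at the view (or letting the optimizer match it automatically, on tiers that support indexed-view matching) avoids re-aggregating millions of fact rows on every visual refresh.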
Lesson Learned #529: Troubleshooting Application Slowness Using SSMS Copilot

Some days ago, I worked on a support case where a customer reported application slowness affecting multiple users. Instead of jumping into traces or manually writing diagnostic queries, we used SSMS Copilot to investigate the issue. I would like to share with you how we diagnosed and understood the root cause. To illustrate the case, let's walk through a simplified example: we create a new table, and right after that, we add a new column to it inside a transaction that is left open.

CREATE TABLE Ejemplo2 (ID INT)

BEGIN TRANSACTION
ALTER TABLE dbo.Ejemplo2 ADD NuevoCampo INT NULL

Using SQL Server Management Studio and Copilot, we executed the following prompt: "Please, provide all currently running or suspended sessions. Include session ID, status, command, wait type (if any), application_name, wait time, and current SQL text." We got the following results.

I executed the same prompt multiple times, and session ID 67 always appeared as suspended with the wait type LCK_M_SCH_S. For this reason, I ran a new prompt: "Please, provide all sessions that are currently blocked by another session. Include session ID, the blocking session ID, wait type, and the blocked SQL text."

In the end, I found that session 51 was blocking session ID 67, so I ran a new prompt: "Do we have any active transaction pending commit for session ID 51?"

From this I understood that session ID 51 has a transaction open, so let's ask for the details of session 51 with a new prompt: "Please, show the most recent SQL statement executed by session ID 51, even if the session is currently sleeping or not running any active request. Include the session status and login name as well. Use sys.dm_exec_connections and most_recent_sql_handle to retrieve the query text if necessary."

Well, we identified the problem: session ID 67 is running SELECT * FROM dbo.Ejemplo2, but it is being blocked by session 51. Session ID 51 hasn't finished its transaction, and now we either need to commit, rollback, or kill that session, especially if we don't have access to the application that owns it. Before resolving the issue, I asked Copilot some additional prompts:
- Please, explain why session ID 67 is currently waiting. Include the wait type and an explanation of it, the resource being waited on, how long it has been waiting (in seconds), and the SQL text. Also identify if another session is blocking it, and the name of the object and schema involved.
- Please, provide recommendations to prevent or reduce this kind of blocking situation in the future, based on the current wait type and blocking scenario observed with session ID 67.
- Please, summarize all current blocking chains in the system. Include blocking session IDs, blocked session IDs, wait types, wait durations, login names, and SQL statements involved.
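If you prefer to run the underlying T-SQL yourself, alongside the Copilot prompts, a minimal sketch of the kind of blocking-chain query that sits behind those answers looks like this; the session IDs from the example above (51 and 67) are only illustrative.

-- Hedged sketch: list blocked sessions, who blocks them, and both statements.
SELECT r.session_id           AS blocked_session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time            AS wait_time_ms,
       blocked_text.text      AS blocked_sql,
       blocker_text.text      AS blocker_most_recent_sql
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_connections AS c
    ON c.session_id = r.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS blocked_text
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS blocker_text
WHERE r.blocking_session_id <> 0;

-- Check whether the blocker still has an open transaction (as session 51 did).
SELECT session_id, transaction_id
FROM sys.dm_tran_session_transactions
WHERE session_id = 51;   -- replace with the blocking session id you found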
Lesson Learned #530: Comparing Execution Plans to Expose a Hidden Performance Anti-Pattern

One of the most powerful features of SSMS Copilot is how it lets you compare execution plans and immediately shows you performance issues. In this case, I would like to share with you my lesson learned comparing two queries and how they behave very differently inside the engine. We have the following queries, which use a table _x_y_z_MS_HighCPU that contains 4 million rows. The column TextToSearch is of the varchar(200) data type.

-- Query 1
SELECT COUNT(*)
FROM [MSxyzTest].[_x_y_z_MS_HighCPU]
WHERE TextToSearch = N'Value: 9';

-- Query 2
SELECT COUNT(*)
FROM [MSxyzTest].[_x_y_z_MS_HighCPU]
WHERE TextToSearch = 'Value: 9';

Since the query texts are different, each will have a different query ID in Query Store. By running the following T-SQL, for example, I can identify the query IDs.

SELECT qsqt.query_sql_text,
       qsq.query_id,
       qsp.plan_id,
       qsp.query_plan_hash,
       qsp.last_execution_time
FROM sys.query_store_query_text qsqt
JOIN sys.query_store_query qsq
    ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_plan qsp
    ON qsq.query_id = qsp.query_id
WHERE qsqt.query_sql_text LIKE '%SELECT COUNT(*)%' -- FROM [[MSxyzTest]].[[_x_y_z_MS_HighCPU]]%'
ORDER BY qsp.last_execution_time DESC;

Queries 1 and 2 can be compared directly. Using Copilot, I ran the following prompt: "Compare the execution plans for the two queries (query id 1 and query id 2) using Query Store. Highlight any differences in operators, estimated vs actual row counts, or implicit conversions." Then, running a prompt about CPU usage - "Please, show the top resource-consuming queries in the current database using Query Store data. Include query text, execution count, duration, CPU time, and logical reads." - we could see the impact of using an anti-pattern.
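The hidden anti-pattern here is the Unicode literal N'Value: 9' compared against a varchar(200) column, which forces an implicit conversion of the column and prevents an efficient index seek. A quick, hedged way to hunt for the same pattern across the workload is to search Query Store plans for CONVERT_IMPLICIT:

-- Hedged sketch: find Query Store plans whose plan XML contains an implicit conversion.
SELECT qsq.query_id,
       qsp.plan_id,
       qsqt.query_sql_text
FROM sys.query_store_plan AS qsp
JOIN sys.query_store_query AS qsq
    ON qsp.query_id = qsq.query_id
JOIN sys.query_store_query_text AS qsqt
    ON qsq.query_text_id = qsqt.query_text_id
WHERE qsp.query_plan LIKE '%CONVERT_IMPLICIT%';

Matching the literal's type to the column (a plain 'Value: 9' instead of N'Value: 9'), or changing the column to nvarchar if Unicode is genuinely required, removes the conversion and restores the seek.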
Lesson Learned #525: Tracking Command Timeouts in Azure SQL: Beyond Query Store with Extended Events

A few days ago, we were working on a support case where our customer was intermittently experiencing command timeouts. What made the case interesting was that queries which usually completed in under one second suddenly started taking more than 10 seconds to execute. Since the application - developed in Python using the ODBC Driver 18 for SQL Server - had a command timeout set to 5 seconds, the following error was triggered every time the threshold was exceeded:

Error executing command, retrying in 5 seconds. Attempt 1 of 3 with new timeout 5. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')

The application had built-in retry logic, dynamically increasing the timeout on each of the three retry attempts, to allow time for the query to complete and to log enough data for post-error analysis. Example logs from the retry logic:

(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 5 seconds. Attempt 1 of 3 with new timeout 5. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
INFO:root:Connecting to the DB jmjuradotestdb1 - Thread id 39808 - (Attempt 1/3)
INFO:root:Connected to the Database in jmjuradotestdb1 - Thread id 39808 - 0.0445 seconds
---
(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 9 seconds. Attempt 2 of 3 with new timeout 9. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
INFO:root:Connecting to the DB jmjuradotestdb1 - Thread id 39808 - (Attempt 1/3)
INFO:root:Connected to the Database in jmjuradotestdb1 - Thread id 39808 - 0.0532 seconds
---
(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 13 seconds. Attempt 3 of 3 with new timeout 13. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
(RunCommandTimeout) - Thread: 39808 - Loop:2/5 Execution Time: 9.7537 seconds

My first prompt using SSMS Copilot was this: "Review the queries that experienced a command timeout or were aborted in the last 30 minutes. Include query text, query id, duration, and the reason and code for the abort if available." and I got the following results. Everything pointed to query 216 having experienced command timeouts. My next question was, for query ID 216, show the total number of executions, which reported 28 executions. This number, however, didn't match the number of aborted and non-aborted executions observed in the application logs. Why the difference? Checking the table sys.query_store_runtime_stats, I found 10 rows, all with execution_type = 3, and 28 total executions. This means that Query Store aggregates query execution data over a fixed interval, and execution_type is an indicator that at least one execution during that runtime interval was aborted. So at least several of them were aborted and others were not. To obtain a more granular and accurate picture, I created an Extended Events session to capture these events using the ring_buffer target.
CREATE EVENT SESSION [CommandAborted] ON DATABASE
ADD EVENT sqlserver.attention(
    ACTION (
        sqlserver.client_app_name,
        sqlserver.client_hostname,
        sqlserver.username,
        sqlserver.database_name,
        sqlserver.sql_text
    )
)
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 4096KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);
GO

ALTER EVENT SESSION [CommandAborted] ON DATABASE STATE = START;

After reproducing the command timeout scenario again, I was able to see only the aborted executions. So, in this case, 28 executions were run and 7 of them were aborted.

WITH RingBufferXML AS (
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM sys.dm_xe_database_session_targets t
    JOIN sys.dm_xe_database_sessions s
        ON t.event_session_address = s.address
    WHERE t.target_name = 'ring_buffer'
      AND s.name = 'CommandAborted'
)
SELECT x.value('@name', 'varchar(50)') AS event_name,
       x.value('@timestamp', 'datetime2') AS event_time,
       x.value('(action[@name="client_app_name"]/value)[1]', 'nvarchar(256)') AS client_app_name,
       x.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text,
       x.value('(data[@name="duration"]/value)[1]', 'bigint') AS duration_microseconds,
       CAST(x.value('(data[@name="duration"]/value)[1]', 'bigint') / 1000000.0 AS decimal(10,3)) AS duration_seconds
FROM RingBufferXML
CROSS APPLY target_data.nodes('//event') AS tab(x)
WHERE x.value('@name', 'varchar(50)') = 'attention'
  AND x.value('(action[@name="client_app_name"]/value)[1]', 'nvarchar(256)') = 'TEST-DataCon'
ORDER BY event_time DESC;
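If you also want the Query Store view of the same incident, a minimal sketch that separates aborted executions (execution_type = 3, which is what a client-side command timeout produces) from the rest for a single query looks like this; query_id 216 is the one identified above.

-- Hedged sketch: aborted vs. completed executions per runtime interval for one query.
SELECT rs.runtime_stats_interval_id,
       rs.execution_type_desc,
       rs.count_executions,
       rs.avg_duration / 1000.0 AS avg_duration_ms   -- avg_duration is stored in microseconds
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p
    ON rs.plan_id = p.plan_id
WHERE p.query_id = 216            -- the query id found earlier
ORDER BY rs.runtime_stats_interval_id, rs.execution_type_desc;

Unlike the attention event captured by the Extended Events session, these rows remain aggregated per interval, which is why the two techniques complement each other.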
Build AI-Ready Apps and Agents with PostgreSQL on Azure

As developers, we're constantly looking for ways to build smarter, faster, and more scalable applications. The Microsoft Reactor series, Build AI apps with Azure Database for PostgreSQL, is a four-part livestream experience designed to help you do just that by combining the power of PostgreSQL with Azure's AI capabilities. Dive into the world of AI apps and agents with Azure Database for PostgreSQL in this engaging video series, your ideal starting point for building intelligent solutions and improving your workflow. Get ready to explore the fundamentals of AI and discover how vector support in databases can elevate your applications. Uncover how innovative tools like the Visual Studio Code extension for PostgreSQL and GitHub Copilot can make your database work faster and more efficient. You'll also see how to create intelligent apps and AI agents using frameworks such as LangChain and Semantic Kernel.

Why This Series Matters

PostgreSQL is already a favorite among developers for its flexibility and open-source strength. But when paired with Azure's AI services, it becomes a launchpad for intelligent applications. This series walks you through how to:
- Orchestrate AI agents using PostgreSQL as a foundation.
- Enhance semantic search with vector support and indexes like DiskANN.
- Integrate Azure AI services to enrich your data and user experiences.
- Boost productivity with tools like the Visual Studio Code PostgreSQL extension and GitHub Copilot.

What You'll Learn

Each session is packed with practical insights:

Episode 1: Laying the foundation: AI-powered apps and agents with Azure Database for PostgreSQL
We introduce key AI concepts, setting the stage for a deeper understanding of Large Language Models (LLMs) and their applications, and we explore the capabilities of Azure Database for PostgreSQL, focusing on how its vector support enables advanced semantic search through technologies like DiskANN indexes. We'll also discuss the Azure AI extension, which brings powerful AI features to your data projects, helping you enrich your applications with enhanced search relevance and intelligent insights, and provide a solid foundation for leveraging these tools in your own solutions. Register here.

Episode 2: Accelerate your data and AI tasks with the VS Code extension for PostgreSQL and GitHub Copilot
This talk will delve into how the Visual Studio Code extension for PostgreSQL can streamline your database management, while GitHub Copilot's AI-powered assistance can boost your productivity. Learn how to seamlessly integrate these tools to enhance your workflow, automate repetitive tasks, and write efficient code faster. Whether you're a developer, data scientist, or database administrator, this session will provide you with practical insights and techniques to elevate your data and AI projects. Join us to learn how to effectively use these advanced tools and take your data skills to the next level. Register here.

Episode 3: Build your own AI copilot for financial apps with PostgreSQL
Join us to discover how to transform traditional financial applications into intelligent, AI-powered solutions with Azure Database for PostgreSQL.
In this hands-on session, you'll learn to integrate generative AI for high-quality responses to financial queries using PDF-based Statements of Work and invoices, perform AI-driven data validation, apply the Azure AI extension, implement vector search with DiskANN indexes, enhance results with semantic re-ranking, use the LangChain framework, and leverage GraphRAG on Azure Database for PostgreSQL. By the end, you'll have gained practical skills to build end-to-end AI-driven applications using your own data and projects. Register here.

Episode 4: Build advanced AI Agents with PostgreSQL
Using a sample dataset of legal cases, we'll show how AI technologies empower intelligent agents to provide high-quality answers to legal queries. In this session, you'll learn to build an advanced AI agent with Azure Database for PostgreSQL, integrating generative AI for enhanced data validation, retrieval-augmented generation (RAG), semantic re-ranking, Semantic Kernel, and GraphRAG via the Apache AGE graph extension. This practical demonstration offers insights into developing robust, intelligent solutions using your own data. Register here.

Join us for an inspiring and hands-on experience, and don't miss out! Get the full series details and register now: https://aka.ms/postgres-ai-reactor-series
Lesson Learned #524: Optimizing Power BI with DirectQuery

In some situations, customers using Power BI and DirectQuery have reported performance issues depending on how the query has been defined by Power BI. At DataCon 2025 this June in Seattle, I had the great opportunity to present some performance recommendations in this area, based on the following articles that we published on our blog some time ago:
- Lesson Learned #247: All started with the phrase: In PowerBI Direct Query is slow - Indexed views | Microsoft Community Hub
- Lesson Learned #249: All started with the phrase: In PowerBI Direct Query is slow - Partitioned table | Microsoft Community Hub
- Lesson Learned #250: All started with the phrase: In PowerBI Direct Query is slow - ColumnStore Index | Microsoft Community Hub

In this folder you can find all the materials that we used to deliver this session. This lab helps us better understand where performance gains can be achieved in our database, making it easier to identify what to optimize and how. Also, using the Copilot feature added in SQL Server Management Studio v21, I would like to share some prompts that we used during the lab. They were very useful during the troubleshooting scenario, which we divided into three areas.

Analysis Phase:
- List all tables in the 'Fact' and 'Dimension' schemas with space usage in MB and number of rows.
- List all tables in the 'Fact' and 'Dimension' schemas with their structure, including data types, primary keys, foreign keys and indexes. Then provide optimization suggestions for DirectQuery scenarios in Power BI.
- Show the name of the tables and their relation among them.
- List all foreign key relationships between tables in the 'Fact' and 'Dimension' schemas, showing the cardinality and referenced columns.
- Could you please let me know what is the meaning of every table?
- Describe all schemas in this database, listing the number of tables and views per schema.
- Create a textual data model (ER-style) representation showing how all Fact and Dimension tables are connected.

Maintenance Plans:
- List all statistics in the database that have not been updated in the last 7 days, showing table name, number of rows, and last update date.
- List all indexes in the database with fragmentation higher than 30%.
- Provide the T-SQL to rebuild each table in the 'Dimension' and 'Fact' schemas in ONLINE mode, and another T-SQL statement for updating automatic statistics (a sketch of the kind of script this produces appears at the end of this post).
- List all tables with allocated space but zero rows, or with excessive reserved space not used by actual data.

Performance Troubleshooting Phase:
- I have this query, what are the improvements for better performance that we could apply? Please simplify the query and explain it.
- Explain in plain language what the following SQL query does, including the purpose of each subquery and the final WHERE clause.
- Show a histogram of data distribution for key columns used in joins or filters, such as SaleDate, ProductCategory, or Region.
- Can this query be transformed into a schemabound indexed view that pre-aggregates the sales by [Fiscal Month Label] to improve DirectQuery performance?
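As an illustration of what the maintenance prompt above typically produces, here is a minimal sketch that generates ONLINE rebuild and UPDATE STATISTICS commands for every table in the Fact and Dimension schemas; ONLINE = ON assumes an edition or service tier that supports online rebuilds.

-- Hedged sketch: generate maintenance statements for the Fact and Dimension schemas.
SELECT 'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
       + ' REBUILD WITH (ONLINE = ON);'                                        AS rebuild_cmd,
       'UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';' AS stats_cmd
FROM sys.tables AS t
JOIN sys.schemas AS s
    ON t.schema_id = s.schema_id
WHERE s.name IN ('Fact', 'Dimension')
ORDER BY s.name, t.name;
-- Copy the generated commands into a new query window (or wrap them in dynamic SQL) to execute them.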
Leveraging GitHub Copilot for T-SQL Code Conversion: A Deep Dive into Code Explainability

Converting database code between different relational database management systems (RDBMS) is an essential part of database migration, especially when moving from Oracle's PL/SQL to SQL Server's T-SQL. Both Oracle and SQL Server have unique syntax, functions, and programming conventions that require significant adjustments when converting complex SQL queries, stored procedures, or functions. While manual conversion can be a painstaking process, GitHub Copilot, an AI-powered code assistant, can ease this burden by offering real-time suggestions and automating many aspects of the conversion.

In the first part of this two-part blog, we explore how GitHub Copilot can be a powerful tool for code conversion, helping developers quickly adapt code from one language or framework to another. By providing context-aware suggestions and completions, Copilot simplifies the process of rewriting and refactoring code. In the second part, we dive deeper into Copilot's explainability feature, showcasing how it enhances the code conversion process from Oracle to Azure SQL. Code explainability in GitHub Copilot refers to the tool's ability to offer suggestions and comments that help developers understand what a given piece of code is doing. When dealing with database migrations, such as converting Oracle SQL code to Azure SQL (SQL Server), GitHub Copilot's ability to explain, suggest, and refactor code can ease the transition. It can assist in making the conversion process smoother, more efficient, and less error-prone by explaining the logic behind Oracle-specific queries and suggesting corresponding changes in Azure SQL's syntax. We'll go through multiple examples, analyze the differences between the two languages, and show how GitHub Copilot handles these challenges.

Understanding the Key Differences: PL/SQL vs T-SQL

Before jumping into examples, it's important to understand a few of the fundamental differences between PL/SQL (Oracle) and T-SQL (SQL Server):
- Syntax: While both are procedural SQL dialects, there are key differences in the way they handle variables, control structures, and flow control.
- Functions: Each platform has its own set of built-in functions (e.g., string manipulation, date handling, etc.), and these need to be mapped correctly during conversion.
- Error Handling: Error handling and exception management differ between the two, with PL/SQL using EXCEPTION blocks and T-SQL using TRY...CATCH.
- Cursor Handling: While both support cursors for iterating through results, their syntax differs.
Example 1: Complex Stored Procedure Conversion

Let's start with a complex example that involves handling parameters, cursors, and error handling. We'll look at a PL/SQL stored procedure in Oracle that processes employee records and outputs their details.

Oracle PL/SQL Example

In Oracle, we might have a procedure that uses a cursor to fetch employee records and print them. The procedure also includes error handling for potential issues during execution.

CREATE OR REPLACE PROCEDURE GetEmployeeDetails (emp_id IN NUMBER)
IS
    CURSOR emp_cursor IS
        SELECT employee_id, first_name, last_name, hire_date
        FROM employees
        WHERE employee_id = emp_id;
    emp_record emp_cursor%ROWTYPE;
    emp_full_name VARCHAR2(100);
BEGIN
    OPEN emp_cursor;
    LOOP
        FETCH emp_cursor INTO emp_record;
        EXIT WHEN emp_cursor%NOTFOUND;

        -- Concatenate first and last name
        emp_full_name := emp_record.first_name || ' ' || emp_record.last_name;

        -- Output employee details
        DBMS_OUTPUT.PUT_LINE('Employee ID: ' || emp_record.employee_id);
        DBMS_OUTPUT.PUT_LINE('Full Name: ' || emp_full_name);
        DBMS_OUTPUT.PUT_LINE('Hire Date: ' || TO_CHAR(emp_record.hire_date, 'YYYY-MM-DD'));
    END LOOP;
    CLOSE emp_cursor;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('No employee found with the given ID.');
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('An error occurred.');
END;

Key Points in Oracle PL/SQL:
- Cursor: PL/SQL uses CURSOR for iterating through rows.
- Concatenation: Oracle uses || for string concatenation.
- Error Handling: PL/SQL uses EXCEPTION blocks for handling errors such as NO_DATA_FOUND.

Converting to SQL Server T-SQL with GitHub Copilot

When you type the PL/SQL code into an IDE with GitHub Copilot enabled (e.g., Visual Studio Code), Copilot will start suggesting the equivalent SQL Server T-SQL code. Here's how Copilot converts the procedure.
Copilot's Suggested SQL Server T-SQL Code

CREATE PROCEDURE GetEmployeeDetails (@emp_id INT)
AS
BEGIN
    DECLARE @first_name NVARCHAR(50), @last_name NVARCHAR(50), @hire_date DATE;
    DECLARE @emp_full_name NVARCHAR(100);

    DECLARE emp_cursor CURSOR FOR
        SELECT employee_id, first_name, last_name, hire_date
        FROM employees
        WHERE employee_id = @emp_id;

    OPEN emp_cursor;
    FETCH NEXT FROM emp_cursor INTO @emp_id, @first_name, @last_name, @hire_date;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Concatenate first and last name
        SET @emp_full_name = @first_name + ' ' + @last_name;

        -- Output employee details
        PRINT 'Employee ID: ' + CAST(@emp_id AS NVARCHAR);
        PRINT 'Full Name: ' + @emp_full_name;
        PRINT 'Hire Date: ' + CONVERT(NVARCHAR, @hire_date, 23);

        FETCH NEXT FROM emp_cursor INTO @emp_id, @first_name, @last_name, @hire_date;
    END

    CLOSE emp_cursor;
    DEALLOCATE emp_cursor;

    BEGIN TRY
        -- Error handling code (example)
        PRINT 'Processing completed.';
    END TRY
    BEGIN CATCH
        PRINT 'An error occurred.';
    END CATCH
END;

Note that two small corrections are needed to make the suggestion compile: the parameter @emp_id must not be re-declared as a local variable, and @hire_date must be declared before it is used in the FETCH.

Key Points in T-SQL:
- Cursor: SQL Server uses DECLARE CURSOR to declare a cursor, similar to PL/SQL but with different syntax for opening and fetching.
- Concatenation: SQL Server uses + for string concatenation.
- Error Handling: SQL Server uses TRY...CATCH for error handling, unlike PL/SQL's EXCEPTION block.

GitHub Copilot provides a T-SQL equivalent for an Oracle PL/SQL stored procedure.

Explanation of the Conversion Process
- Cursor Declaration and Fetching: PL/SQL uses CURSOR with OPEN, FETCH, and CLOSE to process rows. T-SQL uses DECLARE CURSOR to define the cursor and fetches rows with FETCH NEXT. Both approaches are similar, but T-SQL requires an explicit DEALLOCATE to release the cursor.
- Error Handling: PL/SQL uses EXCEPTION blocks to handle errors like NO_DATA_FOUND. T-SQL uses TRY...CATCH blocks to capture and handle errors, a major difference in error handling between the two systems.
- String Concatenation: PL/SQL uses || for concatenating strings, while T-SQL uses +. This is a simple yet crucial change when converting between the two dialects.

Example 2: Complex Query with Date Handling

Let's now look at a query that involves date handling and conditional logic. These are often tricky when converting between PL/SQL and T-SQL due to differences in how each platform handles dates.

Oracle PL/SQL Query Example

SELECT employee_id,
       first_name,
       hire_date,
       CASE
           WHEN hire_date < TO_DATE('2000-01-01', 'YYYY-MM-DD') THEN 'Veteran'
           ELSE 'New Hire'
       END AS employee_status
FROM employees
WHERE hire_date BETWEEN TO_DATE('1990-01-01', 'YYYY-MM-DD') AND SYSDATE;

Key Points in PL/SQL:
- TO_DATE: Used to convert strings into dates.
- SYSDATE: Returns the current system date.
- CASE: Used for conditional logic to categorize employees as either "Veteran" or "New Hire".

Copilot's Suggested SQL Server T-SQL Code

SELECT employee_id,
       first_name,
       hire_date,
       CASE
           WHEN hire_date < CONVERT(DATE, '2000-01-01', 23) THEN 'Veteran'
           ELSE 'New Hire'
       END AS employee_status
FROM employees
WHERE hire_date BETWEEN CONVERT(DATE, '1990-01-01', 23) AND GETDATE();

Key Points in T-SQL:
- CONVERT: SQL Server uses CONVERT for date formatting, similar to Oracle's TO_DATE, but with different syntax and style codes.
- GETDATE(): Equivalent to Oracle's SYSDATE, returning the current system date.
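One follow-up worth asking Copilot for, shown here only as an illustrative sketch rather than output from the original conversion, is a set-based rewrite that drops the cursor entirely; in T-SQL this is usually both simpler and faster than row-by-row processing.

-- Hedged sketch: set-based alternative to the cursor-based conversion of GetEmployeeDetails.
CREATE OR ALTER PROCEDURE GetEmployeeDetailsSetBased (@emp_id INT)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT employee_id,
           first_name + ' ' + last_name             AS full_name,
           CONVERT(NVARCHAR(10), hire_date, 23)     AS hire_date   -- yyyy-mm-dd, like the PL/SQL TO_CHAR
    FROM employees
    WHERE employee_id = @emp_id;

    IF @@ROWCOUNT = 0
        PRINT 'No employee found with the given ID.';
END;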
In conclusion, GitHub Copilot streamlines code conversion by offering both quick suggestions and detailed explanations. The first part highlighted its ability to assist with code transitions, while the second part focused on how its explainability feature enhances the Oracle to Azure SQL migration, providing guided, efficient, and error-free conversion. This combination accelerates development and reduces potential issues during migration.
GitHub Copilot and SSMA: Strap a GenAI conversion booster to your Oracle to SQL Migrations

In this blog, we'll explore how GitHub Copilot and SQL Server Migration Assistant (SSMA) for Oracle can supercharge your code conversion journey from PL/SQL to T-SQL. Learn how both these tools bring a comprehensive conversion rule set and Generative AI capabilities together to simplify and expedite your migration journey from Oracle to Azure SQL.