copilot
Lesson Learned #529: Troubleshooting Application Slowness Using SSMS Copilot
Some days ago, I worked on a support case where a customer reported application slowness affecting multiple users. Instead of jumping into traces or manually writing diagnostic queries, we used SSMS Copilot to investigate the issue. I would like to share how we diagnosed and understood the root cause. To illustrate the case, let's walk through a simplified example: we create a new table and, right after that, add a new column to it inside an open transaction:

CREATE TABLE Ejemplo2 (ID INT)

BEGIN TRANSACTION
ALTER TABLE dbo.Ejemplo2 ADD NuevoCampo INT NULL

Using SQL Server Management Studio and Copilot, we executed the following prompt: "Please provide all currently running or suspended sessions. Include session ID, status, command, wait type (if any), application name, wait time, and current SQL text." I executed the same prompt multiple times, and session ID 67 always appeared in suspended mode with the wait type LCK_M_SCH_S, so I ran a new prompt: "Please provide all sessions that are currently blocked by another session. Include session ID, the blocking session ID, wait type, and the blocked SQL text." This showed that session 51 was blocking session ID 67, so I ran another prompt: "Do we have any active transaction pending commit for session ID 51?" This confirmed that session ID 51 has an open transaction, so let's ask for the details of session 51 with a new prompt: "Please show the most recent SQL statement executed by session ID 51, even if the session is currently sleeping or not running any active request. Include the session status and login name as well. Use sys.dm_exec_connections and most_recent_sql_handle to retrieve the query text if necessary."

With that, we identified the problem: session ID 67 is running SELECT * FROM dbo.Ejemplo2, but it is being blocked by session 51. Session ID 51 hasn't finished its transaction, and now we need to either commit, roll back, or kill that session, especially if we don't have access to the application that owns it. Before resolving the issue, I asked Copilot a few additional prompts:

- "Please explain why session ID 67 is currently waiting. Include the wait type and an explanation of it, the resource being waited on, how long it has been waiting (in seconds), and the SQL text. Also identify whether another session is blocking it, and the name of the object and its schema."
- "Please provide recommendations to prevent or reduce this kind of blocking situation in the future, based on the current wait type and blocking scenario observed with session ID 67."
- "Please summarize all current blocking chains in the system. Include blocking session IDs, blocked session IDs, wait types, wait durations, login names, and SQL statements involved."
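Behind the scenes, a blocked-sessions prompt like the second one above typically resolves to the blocking DMVs. Here is a minimal sketch of equivalent T-SQL for reference; the exact query Copilot generates may differ:

-- Sessions currently blocked by another session, with the blocked statement text
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS blocked_sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;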
Lesson Learned #530: Comparing Execution Plans to Expose a Hidden Performance Anti-Pattern

One of the most powerful features of SSMS Copilot is how it lets you compare execution plans and immediately surface performance issues. In this case, I would like to share my lesson learned from comparing two queries and how they behave very differently inside the engine. We have the following queries against a table _x_y_z_MS_HighCPU that contains 4 million rows; the column TextToSearch is a varchar(200):

-- Query 1
SELECT COUNT(*) FROM [MSxyzTest].[_x_y_z_MS_HighCPU] WHERE TextToSearch = N'Value: 9';

-- Query 2
SELECT COUNT(*) FROM [MSxyzTest].[_x_y_z_MS_HighCPU] WHERE TextToSearch = 'Value: 9';

Since the query texts are different, each will have a different query ID in Query Store. By running the following T-SQL, for example, I can identify the query IDs:

SELECT qsqt.query_sql_text, qsq.query_id, qsp.plan_id, qsp.query_plan_hash, qsp.last_execution_time
FROM sys.query_store_query_text qsqt
JOIN sys.query_store_query qsq ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_plan qsp ON qsq.query_id = qsp.query_id
WHERE qsqt.query_sql_text LIKE '%SELECT COUNT(*)%' -- FROM [[MSxyzTest]].[[_x_y_z_MS_HighCPU]]%'
ORDER BY qsp.last_execution_time DESC;

Queries 1 and 2 can then be compared directly. Using Copilot, I ran the following prompt: "Compare the execution plans for the two queries (query ID 1 and query ID 2) using Query Store. Highlight any differences in operators, estimated vs. actual row counts, or implicit conversions." Running a further prompt, "CPU usage: please show the top resource-consuming queries in the current database using Query Store data. Include query text, execution count, duration, CPU time, and logical reads.", we could see the impact of using an anti-pattern.
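For context, the anti-pattern is the nvarchar literal (N'Value: 9') compared against a varchar(200) column: the column side is implicitly converted, which can prevent an efficient index seek and drive CPU up. A minimal, hedged repro sketch follows; the object names are illustrative, not the ones from the case:

-- Hypothetical repro of the implicit-conversion anti-pattern
CREATE TABLE dbo.DemoHighCPU (ID INT IDENTITY PRIMARY KEY, TextToSearch VARCHAR(200));
CREATE INDEX IX_DemoHighCPU_Text ON dbo.DemoHighCPU (TextToSearch);

-- Anti-pattern: the nvarchar literal adds CONVERT_IMPLICIT on the varchar column,
-- which can turn an index seek into a scan (look for a PlanAffectingConvert warning in the plan)
SELECT COUNT(*) FROM dbo.DemoHighCPU WHERE TextToSearch = N'Value: 9';

-- Matching types: a straightforward index seek
SELECT COUNT(*) FROM dbo.DemoHighCPU WHERE TextToSearch = 'Value: 9';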
Lesson Learned #525: Tracking Command Timeouts in Azure SQL: Beyond Query Store with Extended Events

A few days ago, we were working on a support case where our customer was intermittently experiencing command timeouts. What made the case interesting was that queries which usually completed in under one second suddenly started taking more than 10 seconds to execute. Since the application, developed in Python using the ODBC Driver 18 for SQL Server, had a command timeout set to 5 seconds, the following error was triggered every time the threshold was exceeded:

Error executing command, retrying in 5 seconds. Attempt 1 of 3 with new timeout 5. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')

The application had built-in retry logic, dynamically increasing the timeout on each of the three retry attempts, to allow time for the query to complete and to log enough data for post-error analysis. Example logs from the retry logic:

(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 5 seconds. Attempt 1 of 3 with new timeout 5. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
INFO:root:Connecting to the DB jmjuradotestdb1 - Thread id 39808 - (Attempt 1/3)
INFO:root:Connected to the Database in jmjuradotestdb1 - Thread id 39808 - 0.0445 seconds
---
(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 9 seconds. Attempt 2 of 3 with new timeout 9. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
INFO:root:Connecting to the DB jmjuradotestdb1 - Thread id 39808 - (Attempt 1/3)
INFO:root:Connected to the Database in jmjuradotestdb1 - Thread id 39808 - 0.0532 seconds
---
(RunCommandTimeout) - Thread: 39808 - Error executing command, retrying in 13 seconds. Attempt 3 of 3 with new timeout 13. Error: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Query timeout expired (0) (SQLExecDirectW)')
(RunCommandTimeout) - Thread: 39808 - Loop:2/5 Execution Time: 9.7537 seconds

My first prompt using SSMS Copilot was: "Review the queries that experienced a command timeout or were aborted in the last 30 minutes. Include query text, query ID, duration, and the reason and code for the abort if available." The results all pointed to query 216 as the one hitting command timeouts. My next question was, for query ID 216, to show the total number of executions; it reported 28. That number didn't match the counts of aborted and non-aborted executions observed in the application logs. Why the difference? Checking the table sys.query_store_runtime_stats, I found 10 rows, all with execution_type = 3, totaling 28 executions. Query Store aggregates query execution data over a fixed interval, and execution_type is an indicator that at least one execution during that runtime interval was aborted; so some of those executions were aborted and others were not. To obtain a more granular and accurate picture, I created an Extended Events session to capture attention events using the ring_buffer target.
CREATE EVENT SESSION [CommandAborted] ON DATABASE
ADD EVENT sqlserver.attention(
    ACTION (
        sqlserver.client_app_name,
        sqlserver.client_hostname,
        sqlserver.username,
        sqlserver.database_name,
        sqlserver.sql_text
    )
)
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 4096KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);
GO

ALTER EVENT SESSION [CommandAborted] ON DATABASE STATE = START;

After reproducing the command timeout scenario again, I was able to see only the aborted executions: of the 28 executions, 7 were aborted.

WITH RingBufferXML AS (
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM sys.dm_xe_database_session_targets t
    JOIN sys.dm_xe_database_sessions s ON t.event_session_address = s.address
    WHERE t.target_name = 'ring_buffer'
      AND s.name = 'CommandAborted'
)
SELECT
    x.value('@name', 'varchar(50)') AS event_name,
    x.value('@timestamp', 'datetime2') AS event_time,
    x.value('(action[@name="client_app_name"]/value)[1]', 'nvarchar(256)') AS client_app_name,
    x.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text,
    x.value('(data[@name="duration"]/value)[1]', 'bigint') AS duration_microseconds,
    CAST(x.value('(data[@name="duration"]/value)[1]', 'bigint') / 1000000.0 AS decimal(10,3)) AS duration_seconds
FROM RingBufferXML
CROSS APPLY target_data.nodes('//event') AS tab(x)
WHERE x.value('@name', 'varchar(50)') = 'attention'
  AND x.value('(action[@name="client_app_name"]/value)[1]', 'nvarchar(256)') = 'TEST-DataCon'
ORDER BY event_time DESC;
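For completeness, the Query Store side of the same investigation can also be checked directly. A small sketch, assuming the query_id 216 found earlier; the rows are per aggregation interval, and execution_type_desc identifies the intervals that contained aborted executions:

-- Interval-level runtime stats for the query, including the execution type flag
SELECT rs.runtime_stats_interval_id,
       rs.execution_type,
       rs.execution_type_desc,
       rs.count_executions
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p ON rs.plan_id = p.plan_id
WHERE p.query_id = 216
ORDER BY rs.runtime_stats_interval_id;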
As developers, we’re constantly looking for ways to build smarter, faster, and more scalable applications. The Microsoft Reactor series, Build AI apps with Azure Database for PostgreSQL, is a four-part livestream experience designed to help you do just that—by combining the power of PostgreSQL with Azure’s AI capabilities. Dive into the world of AI apps and agents with Azure Database for PostgreSQL in this engaging video series—your ideal starting point for building intelligent solutions and improving your workflow. Get ready to explore the fundamentals of AI and discover how vector support in databases can elevate your applications. Uncover how innovative tools like the Visual Studio Code extension for PostgreSQL and GitHub Copilot can make your database work faster and more efficient. You'll also see how to create intelligent apps and AI agents using frameworks such as LangChain and Semantic Kernel. Why This Series Matters PostgreSQL is already a favorite among developers for its flexibility and open-source strength. But when paired with Azure’s AI services, it becomes a launchpad for intelligent applications. This series walks you through how to: Orchestrate AI agents using PostgreSQL as a foundation. Enhance semantic search with vector support and indexes like DiskANN. Integrate Azure AI services to enrich your data and user experiences. Boost productivity with tools like the Visual Studio Code PostgreSQL extension and GitHub Copilot. What You'll Learn Each session is packed with practical insights: Episode 1: Laying the foundation: AI-powered apps and agents with Azure Database for PostgreSQL We introduce key AI concepts, setting the stage for a deeper understanding of Large Language Models (LLMs) and its applications, we will explore the capabilities of Azure Database for PostgreSQL, focusing on how its vector support enables advanced semantic search through technologies like DiskANN indexes. We'll also discuss the Azure AI extension, which brings powerful AI features to your data projects, helping you enrich your applications with enhanced search relevance and intelligent insights, and provide a solid foundation for leveraging these tools in your own solutions. Register here Episode 2: Accelerate your data and AI tasks with the VS Code extension for PostgreSQL and GitHub Copilot This talk will delve into how the Visual Studio Code extension for PostgreSQL can streamline your database management, while GitHub Copilot's AI-powered assistance can boost your productivity. Learn how to seamlessly integrate these tools to enhance your workflow, automate repetitive tasks, and write efficient code faster. Whether you're a developer, data scientist, or database administrator, this session will provide you with practical insights and techniques to elevate your data and AI projects. Join us to learn how to effectively use these advanced tools and take your data skills to the next level. Register here Episode 3: Build your own AI copilot for financial apps with PostgreSQL Join us to discover how to transform traditional financial applications into intelligent, AI-powered solutions with Azure Database for PostgreSQL. 
In this hands-on session, you'll learn to integrate generative AI for high-quality responses to financial queries using PDF-based Statements of Work and invoices, perform AI-driven data validation, apply the Azure AI extension, implement vector search with DiskANN indexes, enhance results with semantic re-ranking, use the LangChain framework, and leverage GraphRAG on Azure Database for PostgreSQL. By the end, you'll have gained practical skills to build end-to-end AI-driven applications using your own data and projects. Register here

Episode 4: Build advanced AI Agents with PostgreSQL
Using a sample dataset of legal cases, we'll show how AI technologies empower intelligent agents to provide high-quality answers to legal queries. In this session, you'll learn to build an advanced AI agent with Azure Database for PostgreSQL, integrating generative AI for enhanced data validation, retrieval-augmented generation (RAG), semantic re-ranking, Semantic Kernel, and GraphRAG via the Apache AGE graph extension. This practical demonstration offers insights into developing robust, intelligent solutions using your own data. Register here

Join us for an inspiring, hands-on experience; don't miss out! Get the full series details and register now: https://aka.ms/postgres-ai-reactor-series
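To make the vector-search theme from episode 1 concrete, here is a minimal sketch of the pattern on Azure Database for PostgreSQL. It assumes the vector (pgvector) and pg_diskann extensions are allow-listed and enabled on your server; the table, its contents, and the tiny embedding dimension are purely illustrative (real embeddings typically have hundreds or thousands of dimensions):

-- Enable vector support and DiskANN indexing (assumes both extensions are available)
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_diskann;

-- Illustrative documents table; dimension 3 keeps the example readable
CREATE TABLE docs (
    id        bigserial PRIMARY KEY,
    content   text,
    embedding vector(3)
);

INSERT INTO docs (content, embedding) VALUES
    ('shipping policy', '[0.9, 0.1, 0.0]'),
    ('return policy',   '[0.8, 0.2, 0.1]');

-- DiskANN index for fast approximate nearest-neighbor search on cosine distance
CREATE INDEX docs_embedding_diskann ON docs USING diskann (embedding vector_cosine_ops);

-- Closest match to a query embedding supplied by the application
SELECT content
FROM docs
ORDER BY embedding <=> '[0.85, 0.15, 0.05]'
LIMIT 1;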
Lesson Learned #524: Optimizing Power BI with DirectQuery

In some situations, customers using Power BI with DirectQuery report performance issues depending on how the query has been defined by Power BI. At DataCon 2025 this June in Seattle, I had the great opportunity to present some performance recommendations in this area, based on the following articles that we published on our blog some time ago:

Lesson Learned #247: All started with the phrase: In PowerBI Direct Query is slow - Indexed views | Microsoft Community Hub
Lesson Learned #249: All started with the phrase: In PowerBI Direct Query is slow - Partitioned table | Microsoft Community Hub
Lesson Learned #250: All started with the phrase: In PowerBI Direct Query is slow - ColumnStore Index | Microsoft Community Hub

In this folder you can find all the materials that we used to deliver the session. This lab helps us better understand where performance gains can be achieved in our database, making it easier to identify what to optimize and how. Also, using the Copilot feature added in SQL Server Management Studio 21, I would like to share some prompts that we used during the lab; they proved very useful during the troubleshooting scenario, which we divided into three areas:

Analysis phase:
- List all tables in the 'Fact' and 'Dimension' schemas with space usage in MB and number of rows.
- List all tables in the 'Fact' and 'Dimension' schemas with their structure, including data types, primary keys, foreign keys, and indexes. Then provide optimization suggestions for DirectQuery scenarios in Power BI.
- Show the names of the tables and the relationships among them.
- List all foreign key relationships between tables in the 'Fact' and 'Dimension' schemas, showing the cardinality and referenced columns.
- Could you please let me know the meaning of every table?
- Describe all schemas in this database, listing the number of tables and views per schema.
- Create a textual data model (ER-style) representation showing how all Fact and Dimension tables are connected.

Maintenance plans:
- List all statistics in the database that have not been updated in the last 7 days, showing table name, number of rows, and last update date.
- List all indexes in the database with fragmentation higher than 30%. Provide the T-SQL to rebuild each table in the 'Dimension' and 'Fact' schemas in ONLINE mode, and another T-SQL statement for updating automatic statistics.
- List all tables with allocated space but zero rows, or with excessive reserved space not used by actual data.

Performance troubleshooting phase:
- I have this query; what improvements could we apply for better performance? Please simplify the query and explain it.
- Explain in plain language what the following SQL query does, including the purpose of each subquery and the final WHERE clause.
- Show a histogram of data distribution for key columns used in joins or filters, such as SaleDate, ProductCategory, or Region.
- Can this query be transformed into a schemabound indexed view that pre-aggregates the sales by [Fiscal Month Label] to improve DirectQuery performance? (A sketch of what such a view might look like follows.)
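As a hedged illustration of what that last prompt asks for, this is the general shape of a schemabound indexed view that pre-aggregates sales by month. The table and column names are assumptions in the style of the WideWorldImportersDW sample, not the lab's exact schema; note that an indexed view with GROUP BY must include COUNT_BIG(*):

-- Hypothetical pre-aggregated view for DirectQuery
CREATE VIEW Fact.SalesByFiscalMonth
WITH SCHEMABINDING
AS
SELECT d.[Fiscal Month Label]       AS FiscalMonth,
       SUM(s.[Total Including Tax]) AS TotalSales,
       COUNT_BIG(*)                 AS RowCnt
FROM Fact.Sale AS s
JOIN Dimension.[Date] AS d ON s.[Invoice Date Key] = d.[Date]
GROUP BY d.[Fiscal Month Label];
GO

-- The unique clustered index is what materializes the aggregate on disk
CREATE UNIQUE CLUSTERED INDEX IX_SalesByFiscalMonth ON Fact.SalesByFiscalMonth (FiscalMonth);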
Leveraging GitHub Copilot for T-SQL Code Conversion: A Deep Dive into Code Explainability

Converting database code between different relational database management systems (RDBMS) is an essential part of database migration, especially when moving from Oracle's PL/SQL to SQL Server's T-SQL. Both Oracle and SQL Server have unique syntax, functions, and programming conventions that require significant adjustments when converting complex SQL queries, stored procedures, or functions. While manual conversion can be a painstaking process, GitHub Copilot, an AI-powered code assistant, can ease this burden by offering real-time suggestions and automating many aspects of the conversion. In the first part of this two-part blog, we explore how GitHub Copilot can be a powerful tool for code conversion, helping developers quickly adapt code from one language or framework to another. By providing context-aware suggestions and completions, Copilot simplifies the process of rewriting and refactoring code. In the second part, we dive deeper into Copilot's explainability feature, showcasing how it enhances the code conversion process from Oracle to Azure SQL.

Code explainability in GitHub Copilot refers to the tool's ability to offer suggestions and comments that help developers understand what a given piece of code is doing. When dealing with database migrations, such as converting Oracle SQL code to Azure SQL (SQL Server), GitHub Copilot's ability to explain, suggest, and refactor code can ease the transition. It can assist in making the conversion process smoother, more efficient, and less error-prone by explaining the logic behind Oracle-specific queries and suggesting corresponding changes in Azure SQL's syntax. We'll go through multiple examples, analyze the differences between the two languages, and show how GitHub Copilot handles these challenges.

Understanding the Key Differences: PL/SQL vs T-SQL

Before jumping into examples, it's important to understand a few of the fundamental differences between PL/SQL (Oracle) and T-SQL (SQL Server):

- Syntax: While both are procedural SQL dialects, there are key differences in the way they handle variables, control structures, and flow control.
- Functions: Each platform has its own set of built-in functions (e.g., string manipulation, date handling, etc.), and these need to be mapped correctly during conversion.
- Error Handling: Error handling and exception management differ between the two, with PL/SQL using EXCEPTION blocks and T-SQL using TRY...CATCH.
- Cursor Handling: While both support cursors for iterating through results, their syntax differs.

Example 1: Complex Stored Procedure Conversion

Let's start with a complex example that involves handling parameters, cursors, and error handling. We'll look at a PL/SQL stored procedure in Oracle that processes employee records and outputs their details.

Oracle PL/SQL Example

In Oracle, we might have a procedure that uses a cursor to fetch employee records and print them. The procedure also includes error handling for potential issues during execution.

CREATE OR REPLACE PROCEDURE GetEmployeeDetails (emp_id IN NUMBER) IS
    CURSOR emp_cursor IS
        SELECT employee_id, first_name, last_name, hire_date
        FROM employees
        WHERE employee_id = emp_id;
    emp_record emp_cursor%ROWTYPE;
    emp_full_name VARCHAR2(100);
BEGIN
    OPEN emp_cursor;
    LOOP
        FETCH emp_cursor INTO emp_record;
        EXIT WHEN emp_cursor%NOTFOUND;

        -- Concatenate first and last name
        emp_full_name := emp_record.first_name || ' ' || emp_record.last_name;

        -- Output employee details
        DBMS_OUTPUT.PUT_LINE('Employee ID: ' || emp_record.employee_id);
        DBMS_OUTPUT.PUT_LINE('Full Name: ' || emp_full_name);
        DBMS_OUTPUT.PUT_LINE('Hire Date: ' || TO_CHAR(emp_record.hire_date, 'YYYY-MM-DD'));
    END LOOP;
    CLOSE emp_cursor;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('No employee found with the given ID.');
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('An error occurred.');
END;

Key Points in Oracle PL/SQL:
- Cursor: PL/SQL uses CURSOR for iterating through rows.
- Concatenation: Oracle uses || for string concatenation.
- Error Handling: PL/SQL uses EXCEPTION blocks for handling errors such as NO_DATA_FOUND.

Converting to SQL Server T-SQL with GitHub Copilot

When you type the PL/SQL code into an IDE with GitHub Copilot enabled (e.g., Visual Studio Code), Copilot will start suggesting the equivalent SQL Server T-SQL code. Here's how Copilot converts the procedure.
Copilot's Suggested SQL Server T-SQL Code

CREATE PROCEDURE GetEmployeeDetails (@emp_id INT)
AS
BEGIN
    DECLARE @first_name NVARCHAR(50), @last_name NVARCHAR(50);
    DECLARE @hire_date DATE;
    DECLARE @emp_full_name NVARCHAR(100);

    BEGIN TRY
        DECLARE emp_cursor CURSOR FOR
            SELECT employee_id, first_name, last_name, hire_date
            FROM employees
            WHERE employee_id = @emp_id;

        OPEN emp_cursor;
        FETCH NEXT FROM emp_cursor INTO @emp_id, @first_name, @last_name, @hire_date;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Concatenate first and last name
            SET @emp_full_name = @first_name + ' ' + @last_name;

            -- Output employee details
            PRINT 'Employee ID: ' + CAST(@emp_id AS NVARCHAR);
            PRINT 'Full Name: ' + @emp_full_name;
            PRINT 'Hire Date: ' + CONVERT(NVARCHAR, @hire_date, 23);

            FETCH NEXT FROM emp_cursor INTO @emp_id, @first_name, @last_name, @hire_date;
        END

        CLOSE emp_cursor;
        DEALLOCATE emp_cursor;
    END TRY
    BEGIN CATCH
        PRINT 'An error occurred.';
    END CATCH
END;

Key Points in T-SQL:
- Cursor: SQL Server uses DECLARE CURSOR to declare a cursor, similar to PL/SQL but with different syntax for opening and fetching.
- Concatenation: SQL Server uses + for string concatenation.
- Error Handling: SQL Server uses TRY...CATCH for error handling, unlike PL/SQL's EXCEPTION block.

GitHub Copilot provides a T-SQL equivalent for an Oracle PL/SQL stored procedure.

Explanation of the Conversion Process

Cursor Declaration and Fetching: PL/SQL uses CURSOR with OPEN, FETCH, and CLOSE to process rows. T-SQL uses DECLARE CURSOR to define the cursor and fetches rows with FETCH NEXT. Both approaches are similar, but T-SQL requires an explicit DEALLOCATE to release the cursor.

Error Handling: PL/SQL uses EXCEPTION blocks to handle errors like NO_DATA_FOUND. T-SQL uses TRY...CATCH blocks to capture and handle errors, a major difference in error handling between the two systems.

String Concatenation: PL/SQL uses || for concatenating strings, while T-SQL uses +. This is a simple yet crucial change when converting between the two dialects.

Example 2: Complex Query with Date Handling

Let's now look at a query that involves date handling and conditional logic. These are often tricky when converting between PL/SQL and T-SQL due to differences in how each platform handles dates.

Oracle PL/SQL Query Example

SELECT employee_id, first_name, hire_date,
       CASE
           WHEN hire_date < TO_DATE('2000-01-01', 'YYYY-MM-DD') THEN 'Veteran'
           ELSE 'New Hire'
       END AS employee_status
FROM employees
WHERE hire_date BETWEEN TO_DATE('1990-01-01', 'YYYY-MM-DD') AND SYSDATE;

Key Points in PL/SQL:
- TO_DATE: Used to convert strings into dates.
- SYSDATE: Returns the current system date.
- CASE: Used for conditional logic to categorize employees as either "Veteran" or "New Hire".

Copilot's Suggested SQL Server T-SQL Code

SELECT employee_id, first_name, hire_date,
       CASE
           WHEN hire_date < CONVERT(DATE, '2000-01-01', 23) THEN 'Veteran'
           ELSE 'New Hire'
       END AS employee_status
FROM employees
WHERE hire_date BETWEEN CONVERT(DATE, '1990-01-01', 23) AND GETDATE();

Key Points in T-SQL:
- CONVERT: SQL Server uses CONVERT for date formatting, similar to Oracle's TO_DATE, but with different syntax and style codes.
- GETDATE(): Equivalent to Oracle's SYSDATE, returning the current system date and time.
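As a quick sanity check that the converted CASE logic behaves as intended, the predicate can be exercised against a couple of sample hire dates; the values below are illustrative:

-- Sample dates run through the converted categorization logic
SELECT hire_date,
       CASE
           WHEN hire_date < CONVERT(DATE, '2000-01-01', 23) THEN 'Veteran'
           ELSE 'New Hire'
       END AS employee_status
FROM (VALUES (CONVERT(DATE, '1995-06-15', 23)),
             (CONVERT(DATE, '2010-03-02', 23))) AS t(hire_date);
-- Expected: 1995-06-15 -> Veteran, 2010-03-02 -> New Hire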
In conclusion, GitHub Copilot streamlines code conversion by offering both quick suggestions and detailed explanations. The first part highlighted its ability to assist with code transitions, while the second part focused on how its explainability feature enhances the Oracle to Azure SQL migration, providing guided, efficient, and error-free conversion. This combination accelerates development and reduces potential issues during migration.

GitHub Copilot and SSMA: Strap a GenAI conversion booster to your Oracle to SQL Migrations
In this blog, we'll explore how GitHub Copilot and SQL Server Migration Assistant (SSMA) for Oracle can supercharge your code conversion journey from PL/SQL to T-SQL. Learn how these two tools bring a comprehensive conversion rule set and generative AI capabilities together to simplify and expedite your migration journey from Oracle to Azure SQL.