SQL Spotlight: Ignite 2025
Ignite 2025: Your SQL Guide

Microsoft Ignite 2025 is less than 30 days away! Whether you're a data professional, DBA, or developer, this event is your chance to dive in, expand your knowledge, and connect with fellow SQL experts. Join Microsoft leaders and customers as they share best practices and practical insights from across the SQL portfolio. Be the first to hear about the latest releases, dive into hands-on labs, and explore new learning paths to grow your skill set. Grab your coffee and add the following sessions to your schedule! Be sure to check back for more updates:

5 SQL Sessions You Don't Want to Miss (in person and online)
1. BRK124: SQL Server 2025: The AI-ready enterprise database
Tue, Nov 18 | 3:45 PM – 4:30 PM PST | Learn how SQL Server 2025 is redefining what's possible for enterprise data with AI integration, native JSON, REST APIs, and vector search.
2. BRK156: How Levi's is transforming their IT estate with Azure
Wed, Nov 19 | 2:45 PM – 3:30 PM PST | Hear how American icon Levi's transformed its 150-year business by migrating Windows Server, SQL Server, SAP, and Oracle workloads to Azure.
3. BRK126: Build scalable AI apps with Azure SQL Database Hyperscale
Tue, Nov 18 | 5:00 PM – 5:45 PM PST | Learn how Azure SQL Hyperscale, together with AI Foundry, delivers a more modern, secure, scalable solution for building AI apps. Industrial IT company Hexagon and investment company BlackRock will join us onstage to share their experiences, demos, and more.
4. BRK125: Meet the performance-enhanced next-gen Azure SQL Managed Instance
Wed, Nov 19 | 10:15 AM – 11:00 AM PST | This session is all about getting the most from your SQL Server workloads with Azure SQL Managed Instance.
5. BRK220: SQL database in Fabric: The unified database for AI apps and analytics
Wed, Nov 19 | 11:30 AM – 12:15 PM PST | See how the latest from SQL in Fabric brings transactional and analytical workloads together in one end-to-end AI-driven solution.

Theater sessions (in person only)
1. THR707: Elevate SQL development with VSCode, GitHub Copilot and new drivers
Tue, Nov 18 | 2:00 PM – 2:30 PM PST | A demo-heavy theater session showing how AI-driven tools streamline SQL development. Love coding efficiency? This one's for you.
2. THR711: Smarter SQL: GitHub Copilot + SSMS 22
Thu, Nov 20 | 3:30 PM – 4:00 PM PST | Hands-on exploration of AI-powered Copilots in SQL Server Management Studio and beyond.
3. THR704: Accelerate SQL Migrations with AI-assisted experience in Azure Arc
Thu, Nov 20 | 9:30 AM – 10:00 AM PST | Learn how Azure Arc simplifies SQL migrations with near-zero downtime.

For the builders: Labs (in person only)
LAB533: Build scalable AI & Data Solutions in SQL Database in Microsoft Fabric
LAB530: Build new AI Applications with Azure SQL Databases

Connect with us at the Expert Meet Up stations! (in person only)

Come back and learn more about the onsite experiences we're bringing to San Francisco!
Announcing SQLCon 2026: Better Together with FabCon!

We're thrilled to unveil SQLCon 2026, the premier Microsoft SQL Community Conference, co-located with the Microsoft Fabric Community Conference (FabCon) from March 16–20, 2026! This year, we're bringing the best of both worlds under one roof—uniting the vibrant SQL and Fabric communities for a truly next-level experience. Whether you're passionate about SQL Server, Azure SQL, SQL in Fabric, SQL Tools, migration and modernization, database security, or building AI-powered apps with SQL, SQLCon 2026 has you covered. Dive into 50+ breakout sessions and 4 expert-led workshops designed to help you optimize, innovate, and connect.

Why are SQLCon + FabCon better together?
One registration, double the value: Register for either conference and get full access to both—mix and match sessions, keynotes, and community events to fit your interests.
Shared spaces, shared energy: Enjoy the same expo hall, registration desk, conference app, and community lounge. Network with peers across the data platform spectrum.
Unforgettable experiences: Join us for both keynotes at the State Farm Arena and celebrate at the legendary attendee party at the Georgia Aquarium.

Our goal is to reignite the SQL Community spirit—restoring the robust networks, friendships, and career-building opportunities that make this ecosystem so special. SQLCon is just the beginning of a renewed commitment to connect at conferences, user groups, online, and at regional events.

Early Access Pricing Extended! Register by November 14th and save $200 with code SQLCMTY200. Register Now!

Want to share your expertise? The Call for Content is open until November 20th for both conferences!

Let's build the future of data—together. See you at SQLCon + FabCon!
Microsoft at PASS Data Community Summit 2025

Microsoft is excited to be back as the Sapphire Sponsor for this year's PASS Data Community Summit, together with AMD. It'll be a week packed with keynotes, learning pathways, sessions, breakfast, giveaways, and more! Our experts and engineers will be there, ready to share all things SQL Server 2025 and cover the latest from Azure SQL, Microsoft Fabric, and more—delivered on a foundation of AMD-powered innovation. Whether your goals are modernizing for performance and AI readiness or building intelligent apps and agents, we'll have you covered. Join us to "Connect, Share and Learn" alongside your peers in the PASS community.

CVP of Azure Databases, Shireesh Thota, is back as the keynote speaker, and will be joined by leaders across Microsoft Data including Priya Sathy and Bob Ward—highlighting how Microsoft and AMD are shaping the future of data together. Join us on Day 2, Thursday, November 20th.

Come back and check as new sessions are added:

Learning Pathways:
Becoming Azure SQL DBA – High Availability and BCDR
Dr. Dani Ljepava, Bob Ward, John Morehouse
Learn how to evolve your Azure SQL DBA skills in the domain of High Availability (HA), Business Continuity and Disaster Recovery (BCDR) from the perspective of an on-premises DBA.

Becoming Azure SQL DBA – Security, Compliance, Threats, Connectivity
Pam Lahoud, Joey Dantoni
Evolve your Azure SQL DBA skills in the domain of security, compliance, authentication, and connectivity, from the perspective of an on-premises DBA now supporting databases in Azure.

Becoming Azure SQL DBA – Performance Monitoring, Tuning, and Alerting
Erin Stellato, Pam Lahoud, Monica Morehouse (Rathbun)
Extend your Azure SQL DBA skills in the domain of performance monitoring, tuning, and alerting from the perspective of an on-premises DBA.

Becoming Azure SQL DBA – Copilot and AI
Davide Mauri, Erin Stellato
Unlock the future of data productivity with a hands-on exploration of AI-powered Copilots! We'll dive into Copilot in SQL Server Management Studio (SSMS), Microsoft Copilot in Azure with NL2SQL and SQL, and Copilot for SQL Databases in Microsoft Fabric.

Becoming an Azure SQL DBA – New Opportunities for DBAs in Azure
Bob Ward, Dr. Dani Ljepava, Erin Stellato, Pam Lahoud
Explore new skill development opportunities as an Azure SQL DBA. Many traditional SQL Server DBA tasks have been partially or fully delegated to Microsoft, and this shift in responsibilities provides a great opportunity to invest in developing new cloud skills that will help you excel as an Azure SQL DBA champion.

General Sessions
Breakfast with the Microsoft Data Leadership Team – JUST ADDED!
Shireesh Thota, Priya Sathy, Bob Ward, Erin Stellato
Get your day started early at PASS Data Community Summit with a free breakfast and a Q&A session with a panel of leaders across Microsoft, hosted by Bob Ward. Tell us what is top of mind for you across SQL Server, Azure SQL, Microsoft Fabric, and topics like AI. This is always one of the most popular sessions at the PASS Data Community Summit, so you won't want to miss it!

Lunch and Learn with Bob Ward – JUST ADDED!
Join us for a lunch and learn with Bob Ward as he covers all things SQL and dives into his latest release: SQL Server 2025 Unveiled: The AI-Ready Enterprise Database with Microsoft Fabric Integration. Space will be extremely limited; bring your lunch and join this exclusive session!

Inside SQL Server 2025
Bob Ward
Join Bob Ward and friends to go deep into the next major release of SQL Server, SQL Server 2025, the Enterprise AI-ready database.
You will learn the fundamentals and get a deep dive into all the new capabilities in the release so you can plan and make key decisions on when and how to upgrade.

SQL Database in Fabric: The Unified Database for AI Apps and Analytics – JUST ADDED!
Amar Digamber Patil, Yo-Lei Chen, Basudep Sarkar
Discover how SQL in Fabric brings transactional and analytical workloads together in one cloud-native database. In this session, we'll show how developers and data teams can simplify AI-driven application development with near real-time insights and built-in AI, seamless OneLake integration, and end-to-end analytics—all in a single, unified experience.

Building Scalable, Secure AI Apps with Azure SQL Hyperscale
Build AI apps that run securely and scale with your needs with Azure SQL Database Hyperscale. We'll cover native vector indexes for semantic search, read scale-out for low-latency RAG, and using the model of your choice, all from T-SQL.

Ecosystem sessions
Smarter GitHub Copilot + SSMS 22
Erin Stellato
Discover how GitHub Copilot is transforming the way you write T-SQL and optimize your SQL databases inside SQL Server Management Studio (SSMS) 22. In this session, we'll showcase the newest SSMS 22 features alongside real-world demos of GitHub Copilot, highlighting how AI assistance can speed up query writing, reduce errors, and boost productivity.

Booth Experiences
What: Book signing of "SQL Server 2025 Unveiled" with Bob Ward
When: Wednesday, November 19th evening reception
Where: Microsoft and AMD booth (#404)
Join us as we celebrate the release of "SQL Server 2025 Unveiled" at the PASS Summit evening reception! Limited to the first 200 attendees.

Now on demand!
Get a jump start on your PASS experience. "The AI Opportunity: From Ground to Cloud" webinar is now available on demand! Get an early preview of the sessions at PASS Summit and a closer look into SQL Server 2025. Hosted by Redgate's Kellyn Gorman and Bob Ward. Sign up and watch today!

--Exclusive Offer for PASS Attendees--
As a special offer from Microsoft, use the code AZURE150 to receive $150 off your 3-day conference pass. Don't miss this opportunity to connect, grow, and learn with the community. Register today!
Smarter Parallelism: Degree of parallelism feedback in SQL Server 2025

🚀 Introduction
With SQL Server 2025, we have made degree of parallelism (DOP) feedback an on-by-default feature. Originally introduced in SQL Server 2022, DOP feedback is now a core part of the platform's self-tuning capabilities, helping workloads scale more efficiently without manual tuning. The feature works with database compatibility level 160 or higher.

⚙️ What Is DOP feedback?
DOP feedback is part of the Intelligent Query Processing (IQP) family of features. It dynamically adjusts the number of threads (DOP) used by a query based on runtime performance metrics like CPU time and elapsed time. If a query that has generated a parallel plan consistently underperforms due to excessive parallelism, the DOP feedback feature will reduce the DOP for future executions without requiring recompilation. Currently, DOP feedback will only recommend reductions to the degree of parallelism setting on a per-query-plan basis. The Query Store must be enabled, and in a "read write" state, for every database where DOP feedback is used.

This feedback loop is:
Persistent: Stored in Query Store. Persistence is not currently available for Query Store on readable secondaries. This is subject to change in the near future, and we'll provide an update to its status after that occurs.
Adaptive: Adjusts a query's DOP, monitors those adjustments, and reverts any changes to a previous DOP if performance regresses. This part of the system relies on Query Store being enabled, as it uses the runtime statistics captured within the Query Store.
Scoped: Controlled via the DOP_FEEDBACK database-scoped configuration or at the individual query level with the DISABLE_DOP_FEEDBACK query hint (a short sketch of these controls follows the What's New list below).

🧪 How It Works
Initial Execution: SQL Server compiles and executes a query with a default or manually set DOP.
Monitoring: Runtime stats are collected and compared across executions.
Adjustment: If inefficiencies are detected, DOP is lowered (minimum of 2).
Validation: If performance improves and is stable, the new DOP is persisted. If not, the DOP recommendation is reverted to the previously known good DOP setting, which is typically the original setting that the feature used as a baseline.

At the end of the validation period, any feedback that has been persisted, regardless of its state (stabilized, reverted, no recommendation, and so on), can be viewed by querying the sys.query_store_plan_feedback system catalog view:

SELECT
    qspf.feature_desc,
    qsq.query_id,
    qsp.plan_id,
    qspf.plan_feedback_id,
    qsqt.query_sql_text,
    qsp.query_plan,
    qspf.state_desc,
    qspf.feedback_data,
    qspf.create_time,
    qspf.last_updated_time
FROM sys.query_store_query AS qsq
INNER JOIN sys.query_store_plan AS qsp
    ON qsp.query_id = qsq.query_id
INNER JOIN sys.query_store_query_text AS qsqt
    ON qsqt.query_text_id = qsq.query_text_id
INNER JOIN sys.query_store_plan_feedback AS qspf
    ON qspf.plan_id = qsp.plan_id
WHERE qspf.feature_id = 3;

🆕 What's New in SQL Server 2025?
Enabled by Default: No need to toggle the database-scoped configuration on; DOP feedback is active out of the box.
Improved Stability: Enhanced validation logic ensures fewer regressions.
Better Integration: Works seamlessly with other IQP features like memory grant feedback, cardinality estimation feedback, and Parameter Sensitive Plan (PSP) optimization.
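To make the control points mentioned above concrete, here is a minimal, hedged sketch of toggling DOP feedback at the database and query level and checking the current setting. The system objects are standard; the table and column names in the sample query are illustrative only, and the USE HINT spelling of the query-level hint should be verified against the documentation for your build.

-- Turn DOP feedback off (or back on) for the whole database
ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = ON;

-- Opt a single query out of DOP feedback with the query-level hint
-- (dbo.Orders is a hypothetical table used only for illustration)
SELECT o.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
GROUP BY o.CustomerID
OPTION (USE HINT ('DISABLE_DOP_FEEDBACK'));

-- Confirm the current database-scoped setting
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'DOP_FEEDBACK';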
📊 Visualizing the Feedback Loop

🧩 How can I see if DOP feedback is something that would be beneficial for me?
Without setting up an Extended Events session for deeper analysis, looking over some of the data in the Query Store can be useful in determining whether DOP feedback would find queries interesting enough for it to engage. At a minimum, DOP feedback is worth a look if your SQL Server instance is operating with parallelism enabled and has:
- a MAXDOP value of 0 (not generally recommended) or a MAXDOP value greater than 2
- multiple queries with execution runtimes of 10 seconds or more along with a degree of parallelism of 4 or greater
- an execution count of 15 or more for those queries, according to the output from the query below

SELECT TOP 20
    qsq.query_id,
    qsrs.plan_id,
    [replica_type] = CASE
        WHEN replica_group_id = '1' THEN 'PRIMARY'
        WHEN replica_group_id = '2' THEN 'SECONDARY'
        WHEN replica_group_id = '3' THEN 'GEO SECONDARY'
        WHEN replica_group_id = '4' THEN 'GEO HA SECONDARY'
        ELSE TRY_CONVERT(NVARCHAR (200), qsrs.replica_group_id)
    END,
    AVG(qsrs.avg_dop) AS dop,
    SUM(qsrs.count_executions) AS execution_count,
    AVG(qsrs.avg_duration)/1000000.0 AS duration_in_seconds,
    MIN(qsrs.min_duration)/1000000.0 AS min_duration_in_seconds
FROM sys.query_store_runtime_stats qsrs
INNER JOIN sys.query_store_plan qsp
    ON qsp.plan_id = qsrs.plan_id
INNER JOIN sys.query_store_query qsq
    ON qsq.query_id = qsp.query_id
GROUP BY qsrs.plan_id, qsq.query_id, qsrs.replica_group_id
HAVING MIN(qsrs.min_duration)/1000000.0 >= 10
ORDER BY dop DESC, execution_count DESC;

🧠 Behind the Scenes: How Feedback Is Evaluated
DOP feedback uses a rolling window of recent executions (typically 15) to evaluate:
- Average CPU time
- Standard deviation of CPU time
- Adjusted elapsed time*
- Stability of performance across executions

If the adjusted DOP consistently improves efficiency without regressing performance, it is persisted. Otherwise, the system reverts to the last known good configuration (also known as the default DOP for the system). As an example, suppose the DOP for a query started out at 8 and DOP feedback determined that a DOP of 4 was optimal. If query performance varies more than expected over the rolling window while the query is in the validation phase, DOP feedback will undo its change and set the query back to a DOP of 8.

🧠 Note: The adjusted elapsed time intentionally excludes wait statistics that are not relevant to parallelism efficiency. This includes ignoring buffer latch, buffer I/O, and network I/O waits, which are external to parallel query execution. This ensures that feedback decisions are based solely on CPU and execution efficiency, not external factors like I/O or network latency.

🧭 Best Practices
Enable Query Store: This is required for DOP feedback to function.
Monitor DOP feedback extended events: SQL Server provides a set of extended events to help you monitor and troubleshoot the DOP feedback lifecycle. Below is a sample script to create a session that captures key events, followed by a breakdown of what each event means.
IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'dop_xevents')
    DROP EVENT SESSION [dop_xevents] ON SERVER;
GO

CREATE EVENT SESSION [dop_xevents] ON SERVER
ADD EVENT sqlserver.dop_feedback_analysis_stopped,
ADD EVENT sqlserver.dop_feedback_eligible_query,
ADD EVENT sqlserver.dop_feedback_provided,
ADD EVENT sqlserver.dop_feedback_reassessment_failed,
ADD EVENT sqlserver.dop_feedback_reverted,
ADD EVENT sqlserver.dop_feedback_stabilized
-- ADD EVENT sqlserver.dop_feedback_validation
WITH (
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_EVENT_SIZE = 0 KB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = OFF
);

⚠️ Note: The extended event that has been commented out (dop_feedback_validation) is part of the debug channel. Enabling it may introduce additional overhead and should be used with caution in production environments.

📋 DOP Feedback Extended Events Reference
- dop_feedback_eligible_query: Fired when a query plan becomes eligible for DOP feedback. Captures initial runtime stats like CPU time and adjusted elapsed time.
- dop_feedback_analysis_stopped: Indicates that SQL Server has stopped analyzing a query for DOP feedback. Reasons include high variance in stats or the optimal DOP having already been achieved.
- dop_feedback_provided: Fired when SQL Server provides a new DOP recommendation for a query. Includes baseline and feedback stats.
- dop_feedback_reassessment_failed: Indicates that a previously persisted feedback DOP was reassessed and found to be invalid, restarting the feedback cycle.
- dop_feedback_reverted: Fired when feedback is rolled back due to performance regression. Includes baseline and feedback stats.
- dop_feedback_stabilized: Indicates that feedback has been validated and stabilized. After stabilization, additional adjustments to the feedback can be made when the system reassesses the feedback on a periodic basis.

🔍 Understanding the feedback_data JSON in DOP feedback
In the "How it works" section of this article, we provided a sample script that showed some of the data that can be persisted within the sys.query_store_plan_feedback catalog view. Figuring out how to interpret that data can sometimes be challenging. Structurally, the feedback_data field contains a JSON object with two main sections: LastGoodFeedback and BaselineStats. As an example:

{
  "LastGoodFeedback": {
    "dop": "2",
    "avg_cpu_time_ms": "12401",
    "avg_adj_elapsed_time_ms": "12056",
    "std_cpu_time_ms": "380",
    "std_adj_elapsed_time_ms": "342"
  },
  "BaselineStats": {
    "dop": "4",
    "avg_cpu_time_ms": "17843",
    "avg_adj_elapsed_time_ms": "13468",
    "std_cpu_time_ms": "333",
    "std_adj_elapsed_time_ms": "328"
  }
}

LastGoodFeedback fields:
- dop: The DOP value that was validated and stabilized for future executions.
- avg_cpu_time_ms: Average CPU time (in milliseconds) for executions using the feedback DOP.
- avg_adj_elapsed_time_ms: Adjusted elapsed time (in milliseconds), excluding irrelevant waits.
- std_cpu_time_ms: Standard deviation of CPU time across executions.
- std_adj_elapsed_time_ms: Standard deviation of adjusted elapsed time.

BaselineStats fields:
- dop: The original DOP used before feedback was applied.
- avg_cpu_time_ms: Average CPU time for the baseline executions.
- avg_adj_elapsed_time_ms: Adjusted elapsed time for the baseline executions.
- std_cpu_time_ms: Standard deviation of CPU time for the baseline.
- std_adj_elapsed_time_ms: Standard deviation of adjusted elapsed time for the baseline.

One way to extract this data is to use the JSON_VALUE function:

SELECT
    qspf.plan_id,
    qs.query_id,
    qt.query_sql_text,
    qsp.query_plan_hash,
    qspf.feature_desc,
    -- LastGoodFeedback metrics
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.dop') AS last_good_dop,
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.avg_cpu_time_ms') AS last_good_avg_cpu_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.avg_adj_elapsed_time_ms') AS last_good_avg_adj_elapsed_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.std_cpu_time_ms') AS last_good_std_cpu_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.std_adj_elapsed_time_ms') AS last_good_std_adj_elapsed_time_ms,
    -- BaselineStats metrics
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.dop') AS baseline_dop,
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.avg_cpu_time_ms') AS baseline_avg_cpu_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.avg_adj_elapsed_time_ms') AS baseline_avg_adj_elapsed_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.std_cpu_time_ms') AS baseline_std_cpu_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.std_adj_elapsed_time_ms') AS baseline_std_adj_elapsed_time_ms
FROM sys.query_store_plan_feedback AS qspf
JOIN sys.query_store_plan AS qsp
    ON qspf.plan_id = qsp.plan_id
JOIN sys.query_store_query AS qs
    ON qsp.query_id = qs.query_id
JOIN sys.query_store_query_text AS qt
    ON qs.query_text_id = qt.query_text_id
WHERE qspf.feature_desc = 'DOP Feedback'
    AND ISJSON(qspf.feedback_data) = 1;

🧪 Why This Matters
This JSON structure is critical for:
- Debugging regressions: You can compare baseline and feedback statistics to understand whether a change in DOP helped or hurt a set of queries.
- Telemetry and tuning: Tools can parse this JSON payload to surface insights in performance dashboards.
- Transparency: It gives the people who care about the database visibility into how SQL Server is adapting to their workload.

📚 Learn More
Intelligent Query Processing: degree of parallelism feedback
Degree of parallelism (DOP) feedback
Intelligent query processing in SQL databases
SQL Server Management Studio (SSMS) 21 is now generally available (GA)

The SQL Tools team is thrilled to announce the general availability of SQL Server Management Studio (SSMS) 21. SSMS 21 GA brings a modern installation and update experience through the Visual Studio installer, 64-bit support, Git integration, and initial dark theme support.
Data Virtualization with PolyBase for SQL Server 2025

Building upon the innovations introduced in SQL Server 2022, SQL Server 2025 enhances Data Virtualization by prioritizing usability, strengthening security with expanded authentication options, and providing improved support for Linux environments.

Key Features
- Native Support for Parquet, CSV, and Delta: The SQL engine now offers built-in capabilities to query data files such as CSV, Parquet, and Delta, eliminating the need for the optional "PolyBase Query Service for External Data" installation. PolyBase services are required solely when querying external databases.
- Support for TDS 8: PolyBase in SQL Server 2025 includes support for TDS 8 for connections between SQL Server instances.
- Managed Identity Support for Arc-enabled SQL Server: With Arc-enabled SQL Server 2025, Managed Identity is now available for use with PolyBase, providing secure connections to Azure Storage Accounts (see the sketch at the end of this post).
- ODBC Support for Linux: PolyBase on SQL Server 2025 for Linux now supports ODBC data sources, matching Windows functionality.

Use Cases
- Business Insights: Access and analyze distributed data across diverse sources without moving it, enabling faster business intelligence and reporting.
- Azure Integration: Seamlessly connect to Azure Storage Accounts, leveraging managed identities for secure cloud access and hybrid scenarios.
- Data Tiering: Offload cold or infrequently accessed data to external storage like Azure, reducing storage costs while keeping the data available for queries.
- Simplified ETL and Data Exploration: Streamline Extract, Transform, Load (ETL) and reverse-ETL processes by natively supporting common data formats such as CSV, Parquet, and Delta.
- Cross-platform Data Access: Achieve parity and flexibility by connecting to ODBC data sources on both Windows and Linux environments.
- Enhanced Security and Compliance: Strengthen data protection using TDS 8 and managed identities, ensuring secure access to external data.

Getting Started
SQL Server 2025 makes data virtualization easier and safer than ever. Get started today by downloading the SQL Server 2025 Preview Trial. To learn more about the Data Virtualization improvements, check out Introducing Data Virtualization with PolyBase - SQL Server | Microsoft Learn.
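To make the native file-query and managed identity scenarios above more tangible, here is a hedged sketch of querying a Parquet file in Azure Blob Storage. The storage account, container, and file path are placeholders, and the exact credential and location syntax should be confirmed against the PolyBase documentation for your environment.

-- Credential based on the instance's managed identity (Arc-enabled SQL Server scenario)
CREATE DATABASE SCOPED CREDENTIAL AzureStorageMI
WITH IDENTITY = 'Managed Identity';

-- External data source pointing at a blob container (contosolake and sales are placeholders)
CREATE EXTERNAL DATA SOURCE ContosoLake
WITH (
    LOCATION = 'abs://sales@contosolake.blob.core.windows.net',
    CREDENTIAL = AzureStorageMI
);

-- Query a Parquet file directly; no PolyBase service installation is needed for file formats
SELECT TOP (10) *
FROM OPENROWSET(
    BULK '/2025/invoices.parquet',
    DATA_SOURCE = 'ContosoLake',
    FORMAT = 'PARQUET'
) AS invoices;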
SQL Server 2025 Preview RC1: Now Supporting Red Hat Enterprise Linux (RHEL) 10

We're happy to announce that SQL Server 2025 Release Candidate 1 (RC1) now includes preview support for Red Hat Enterprise Linux (RHEL) 10, expanding our commitment to modern, secure, and flexible Linux-based deployments.

RHEL 10 Support in SQL Server 2025 RC1
You can now deploy SQL Server 2025 Preview on RHEL 10 for your Dev/Test environments using the Enterprise Evaluation Edition, which is valid for 180 days. For your production workloads you could use SQL Server 2022 on RHEL 9 or Ubuntu 22.04.

Deploying SQL Server 2025 RC1 on RHEL 10
You can follow the Quickstart: Install SQL Server and create a database on RHEL 10 to install SQL Server and create a database on RHEL 10. It walks you through everything—from preparing your system to installing and configuring SQL Server. To explore the latest improvements in SQL Server 2025 RC1, check out What's New in SQL Server 2025 - SQL Server | Microsoft Learn.

I was particularly interested in testing the new half-precision float support in the vector data type. To do this, I deployed the SQL Server 2025 RHEL 10 container (the tag is 2025-RC1-rhel-10) on WSL2; I already have Docker Desktop installed on my local machine to manage containers. I launched the SQL Server 2025 RC1 container, connected to it using SQL Server Management Studio (SSMS), and successfully tested the vector data type enhancement.

docker pull mcr.microsoft.com/mssql/rhel/server:2025-RC1-rhel-10

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=passwordshouldbestrong" \
    -e "MSSQL_AGENT_ENABLED=true" \
    -p 14337:1433 --name sql2025RC1RHEL10 --hostname sql2025RC1RHEL10 \
    -d mcr.microsoft.com/mssql/rhel/server:2025-RC1-rhel-10

SELECT @@VERSION
GO

CREATE DATABASE SQL2025onRHEL10
GO

USE SQL2025onRHEL10
GO

-- Step 0: Enable Preview Features
ALTER DATABASE SCOPED CONFIGURATION SET PREVIEW_FEATURES = ON;
GO

-- Step 1: Create a Table with a VECTOR(5, float16) Column
CREATE TABLE dbo.Articles (
    id INT PRIMARY KEY,
    title NVARCHAR(100),
    content NVARCHAR(MAX),
    embedding VECTOR(5, float16)
);

-- Step 2: Insert Sample Data
INSERT INTO Articles (id, title, content, embedding)
VALUES
    (1, 'Intro to AI', 'This article introduces AI concepts.', '[0.1, 0.2, 0.3, 0.4, 0.5]'),
    (2, 'Deep Learning', 'Deep learning is a subset of ML.', '[0.2, 0.1, 0.4, 0.3, 0.6]'),
    (3, 'Neural Networks', 'Neural networks are powerful models.', '[0.3, 0.3, 0.2, 0.5, 0.1]'),
    (4, 'Machine Learning Basics', 'ML basics for beginners.', '[0.4, 0.5, 0.1, 0.2, 0.3]'),
    (5, 'Advanced AI', 'Exploring advanced AI techniques.', '[0.5, 0.4, 0.6, 0.1, 0.2]');

-- Step 3: Perform a Vector Similarity Search Using the VECTOR_DISTANCE function
DECLARE @v VECTOR(5, float16) = '[0.3, 0.3, 0.3, 0.3, 0.3]';
SELECT TOP (3) id, title, VECTOR_DISTANCE('cosine', @v, embedding) AS distance
FROM dbo.Articles
ORDER BY distance;

-- Step 4: Optionally Create a Vector Index
CREATE VECTOR INDEX vec_idx ON Articles(embedding)
WITH (
    metric = 'cosine',
    type = 'diskANN'
);

-- Step 5: Perform a Vector Similarity Search
DECLARE @qv VECTOR(5, float16) = '[0.3, 0.3, 0.3, 0.3, 0.3]';
SELECT t.id, t.title, t.content, s.distance
FROM VECTOR_SEARCH(
    table = Articles AS t,
    column = embedding,
    similar_to = @qv,
    metric = 'cosine',
    top_n = 3
) AS s
ORDER BY s.distance, t.title;

Conclusion
The addition of RHEL 10 support in SQL Server 2025 Preview is a major milestone in delivering a modern, secure, and flexible data platform for Linux users. We encourage you to explore these new capabilities and share your feedback to help us continue enhancing SQL Server for the Linux ecosystem.
You can share your feedback using any of the following methods:
- Email us at sqlpreviewpackage@microsoft.com with your thoughts and suggestions.
- Submit your ideas on Azure Ideas (use the SQL Server on Linux group on the left side of the page).
- Alternatively, open issues related to the preview packages on GitHub: Issues · microsoft/mssql-docker (github.com).

We hope you give the SQL Server 2025 preview on RHEL 10 a try - and we look forward to hearing what you think!
Microsoft and PASS Summit are going on tour!

Microsoft and PASS Summit are going on tour! Join us for this two-day event Sep 15 - 16 in Dallas, TX, together with our partners Pure Storage. With keynotes, pre-cons, general sessions, and more, this event will cover all things SQL Server, Microsoft Fabric, Azure SQL, and more. Connect with Microsoft and other community experts. Register today and take a peek into the agenda below:

Pre-Con: Deep-Dive Workshop: Accelerating AI, Performance and Resilience with SQL Server 2025
SQL Server 2025 delivers powerful new capabilities for AI, developer productivity, and performance, all ready to run at scale on modern infrastructure. In this half-day deep dive workshop, you'll get a first look at what's new and how to put it to work immediately. Learn directly from Microsoft and Pure Storage experts how to harness SQL Server's vector capabilities and walk through real-world demos covering semantic search, change event streaming, and using external tables to manage vector embeddings. You'll also see how new REST APIs securely expose SQL Server's internals for automation and observability, including snapshots, performance monitoring, and backup management. The workshop wraps up with insights into core engine enhancements: optimized locking and faster backups using ZSTD compression, all running on a modern Pure Storage foundation that brings scale and resilience to your data platform. Whether you're a DBA, developer, or architect, this session will equip you with practical strategies for harnessing Microsoft SQL Server 2025 and Pure Storage to accelerate your organization's AI and data goals.

Day 2: Building an LLM Chatbot on Your Laptop - No Cloud Required!
Want to build a chatbot that can answer questions using your own data? This session shows you how, and no cloud is required! In this session, you will learn how to build a Retrieval-Augmented Generation (RAG) chatbot using open-source tools combined with SQL Server for vector storage - all from your laptop. We will cover topics such as LLM fundamentals, embeddings, similarity search, and how to integrate LLMs with your own data. By the end of the session, you will have a working chatbot, practical knowledge of how AI can enhance your data platform, and a new way to elevate your SQL Server skills with AI.

Day 2: An Inside Look at Building SQL Server/Azure and Fabric Data Warehouse
Have you ever wondered how Microsoft builds and releases its database engines (SQL Server, SQL Azure, Fabric Data Warehouse)? This talk goes through some of the behind-the-scenes details about how the engineering team works, how features come together, and how the engineers learn how to build the software you use to manage your data. Having the Regional Summit in Dallas gives us a unique opportunity to talk about our engineering office in Austin and how it contributed features and enhancements to recent releases. Come and learn about which SQL features are built right here in Texas!

Day 2: AI Ready Apps with SQL Database in Microsoft Fabric
Explore how to build enterprise-grade Retrieval-Augmented Generation (RAG) systems by harnessing the power of SQL AI features, vector-based search, and Microsoft Fabric. This session delves into modern architectures that integrate structured data with large language model (LLM) capabilities to enable real-time, intelligent, and secure applications.

And much more!

Learn more: PASS Summit On Tour Dallas - PASS Data Community Summit
Register today: PASS Summit On Tour
SQL Server 2025: introducing optimized Halloween protection

Executive summary
Optimized Halloween protection, available in the public preview of SQL Server 2025 starting with the CTP 2.0 release, reduces tempdb space consumption and improves query performance by redesigning the way the database engine solves the Halloween problem. An example in the appendix shows CPU and elapsed time of a query reduced by about 50% while eliminating all tempdb space consumption.

Update 2025-09-02
During public preview of SQL Server 2025, we identified a potential data integrity issue that might occur if optimized Halloween protection is enabled. While the probability of encountering this issue is low, we take data integrity seriously. Therefore, we temporarily removed optimized Halloween protection from SQL Server 2025, starting with the RC 0 release. The fix for this issue is in progress. In the coming months, we plan to make optimized Halloween protection available in Azure SQL Database and Azure SQL Managed Instance with the always-up-to-date update policy. Enabling optimized Halloween protection in a future SQL Server 2025 update is under consideration as well.

The Halloween problem
The Halloween problem, named so because it was discovered on Halloween in 1976, occurs when a data modification language (DML) statement changes data in such a way that the same statement unexpectedly processes the same row more than once. Traditionally, the SQL Server database engine protects DML statements from the Halloween problem by introducing a spool operator in the query plan, or by taking advantage of another blocking operator already present in the plan, such as a sort or a hash match. If a spool operator is used, it creates a temporary copy of the data to be modified before any modifications are made to the data in the table. While the protection spool avoids the Halloween problem, it comes with downsides:
- The spool requires extra resources: space in tempdb, disk I/O, memory, and CPU.
- Statement processing by the downstream query operators is blocked until the data is fully written into the spool.
- The spool adds query plan complexity that can cause the query optimizer to generate a less optimal plan.

Optimized Halloween protection removes these downsides by making the spool operator unnecessary.

How it works
When accelerated database recovery (ADR) is enabled, each statement in a transaction obtains a unique statement identifier, known as the nest ID. As each row is modified by a DML statement, it is stamped with the nest ID of the statement. This is required to provide the ACID transaction semantics with ADR. During DML statement processing, when the storage engine reads the data, it skips any row that has the same nest ID as the current DML statement. This means that the query processor doesn't see the rows already processed by the statement, therefore avoiding the Halloween problem.

How to use optimized Halloween protection
To enable optimized Halloween protection for a database, the following prerequisites are required:
- ADR must be enabled on the database.
- The database must use compatibility level 170.
- The OPTIMIZED_HALLOWEEN_PROTECTION database-scoped configuration must be enabled.

The OPTIMIZED_HALLOWEEN_PROTECTION database-scoped configuration is enabled by default. This means that when you enable ADR for a database using compatibility level 170, it will use optimized Halloween protection.
You can ensure that a database uses optimized Halloween protection by executing the following statements:

ALTER DATABASE [<database-name-placeholder>]
SET ACCELERATED_DATABASE_RECOVERY = ON WITH ROLLBACK IMMEDIATE;

ALTER DATABASE [<database-name-placeholder>]
SET COMPATIBILITY_LEVEL = 170;

ALTER DATABASE SCOPED CONFIGURATION
SET OPTIMIZED_HALLOWEEN_PROTECTION = ON;

You can also enable and disable optimized Halloween protection at the query level by using the ENABLE_OPTIMIZED_HALLOWEEN_PROTECTION and DISABLE_OPTIMIZED_HALLOWEEN_PROTECTION query hints, either directly in the query, or via Query Store hints (a short sketch appears after the appendix). These hints work under any compatibility level and take precedence over the OPTIMIZED_HALLOWEEN_PROTECTION database-scoped configuration.

When optimized Halloween protection is used for an operator in the query plan, the OptimizedHalloweenProtectionUsed property of the operator in the XML query plan is set to True. For more details, see optimized Halloween protection in documentation.

Conclusion
Optimized Halloween protection is another Intelligent Query Processing feature that improves query performance and reduces resource consumption when you upgrade to SQL Server 2025, without having to make any changes to your query workloads. We are looking forward to your feedback about this and other features during the public preview of SQL Server 2025 and beyond. You can leave comments on this blog post, email us at intelligentqp@microsoft.com, or leave feedback at https://aka.ms/sqlfeedback.

Appendix
The following script shows how optimized Halloween protection removes the protection spool in the query plan, and reduces tempdb usage, CPU time, and duration when enabled.

/*
Requires the WideWorldImporters sample database.
SQL Server backup: https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak
Bacpac: https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Standard.bacpac
*/

/* Ensure that optimized Halloween protection prerequisites are in place */
ALTER DATABASE WideWorldImporters SET ACCELERATED_DATABASE_RECOVERY = ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 170;
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_HALLOWEEN_PROTECTION = ON;
GO

/* Validate configuration */
SELECT d.compatibility_level,
       d.is_accelerated_database_recovery_on,
       dsc.name,
       dsc.value
FROM sys.database_scoped_configurations AS dsc
CROSS JOIN sys.databases AS d
WHERE dsc.name = 'OPTIMIZED_HALLOWEEN_PROTECTION'
      AND d.name = DB_NAME();
GO

/* Create the test table and add data */
DROP TABLE IF EXISTS dbo.OptimizedHPDemo;

BEGIN TRANSACTION;

SELECT *
INTO dbo.OptimizedHPDemo
FROM Sales.Invoices;

ALTER TABLE dbo.OptimizedHPDemo
ADD CONSTRAINT PK_OptimizedHPDemo PRIMARY KEY CLUSTERED (InvoiceID)
ON USERDATA;

COMMIT;
GO

/* Ensure that Query Store is enabled and is capturing all queries */
ALTER DATABASE WideWorldImporters SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = ALL);

/* Empty Query Store to start with a clean slate */
ALTER DATABASE WideWorldImporters SET QUERY_STORE CLEAR;
GO

/* Disable optimized Halloween protection as the baseline */
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_HALLOWEEN_PROTECTION = OFF;
GO

/*
Insert data selecting from the same table.
This requires Halloween protection so that the same row cannot be selected and inserted repeatedly.
*/
BEGIN TRANSACTION;

INSERT INTO dbo.OptimizedHPDemo
(
    InvoiceID, CustomerID, BillToCustomerID, OrderID, DeliveryMethodID,
    ContactPersonID, AccountsPersonID, SalespersonPersonID, PackedByPersonID,
    InvoiceDate, CustomerPurchaseOrderNumber, IsCreditNote, CreditNoteReason,
    Comments, DeliveryInstructions, InternalComments, TotalDryItems, TotalChillerItems,
    DeliveryRun, RunPosition, ReturnedDeliveryData, ConfirmedDeliveryTime,
    ConfirmedReceivedBy, LastEditedBy, LastEditedWhen
)
SELECT InvoiceID + 1000000 AS InvoiceID, CustomerID, BillToCustomerID, OrderID, DeliveryMethodID,
       ContactPersonID, AccountsPersonID, SalespersonPersonID, PackedByPersonID,
       InvoiceDate, CustomerPurchaseOrderNumber, IsCreditNote, CreditNoteReason,
       Comments, DeliveryInstructions, InternalComments, TotalDryItems, TotalChillerItems,
       DeliveryRun, RunPosition, ReturnedDeliveryData, ConfirmedDeliveryTime,
       ConfirmedReceivedBy, LastEditedBy, LastEditedWhen
FROM dbo.OptimizedHPDemo;

ROLLBACK;
GO

/*
Enable optimized Halloween protection.
Execute the following statement in its own batch.
*/
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_HALLOWEEN_PROTECTION = ON;
GO

/* Execute the same query again */
BEGIN TRANSACTION;

INSERT INTO dbo.OptimizedHPDemo
(
    InvoiceID, CustomerID, BillToCustomerID, OrderID, DeliveryMethodID,
    ContactPersonID, AccountsPersonID, SalespersonPersonID, PackedByPersonID,
    InvoiceDate, CustomerPurchaseOrderNumber, IsCreditNote, CreditNoteReason,
    Comments, DeliveryInstructions, InternalComments, TotalDryItems, TotalChillerItems,
    DeliveryRun, RunPosition, ReturnedDeliveryData, ConfirmedDeliveryTime,
    ConfirmedReceivedBy, LastEditedBy, LastEditedWhen
)
SELECT InvoiceID + 1000000 AS InvoiceID, CustomerID, BillToCustomerID, OrderID, DeliveryMethodID,
       ContactPersonID, AccountsPersonID, SalespersonPersonID, PackedByPersonID,
       InvoiceDate, CustomerPurchaseOrderNumber, IsCreditNote, CreditNoteReason,
       Comments, DeliveryInstructions, InternalComments, TotalDryItems, TotalChillerItems,
       DeliveryRun, RunPosition, ReturnedDeliveryData, ConfirmedDeliveryTime,
       ConfirmedReceivedBy, LastEditedBy, LastEditedWhen
FROM dbo.OptimizedHPDemo;

ROLLBACK;
GO

/*
Examine query runtime statistics and plans for the two executions of the same query.
*/
SELECT q.query_id,
       q.query_hash,
       qt.query_sql_text,
       p.plan_id,
       rs.count_executions,
       rs.avg_tempdb_space_used * 8 / 1024. AS tempdb_space_mb,
       FORMAT(rs.avg_cpu_time / 1000., 'N0') AS avg_cpu_time_ms,
       FORMAT(rs.avg_duration / 1000., 'N0') AS avg_duration_ms,
       TRY_CAST(p.query_plan AS xml) AS xml_query_plan
FROM sys.query_store_runtime_stats AS rs
INNER JOIN sys.query_store_plan AS p
    ON rs.plan_id = p.plan_id
INNER JOIN sys.query_store_query AS q
    ON p.query_id = q.query_id
INNER JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
WHERE q.query_hash = 0xC6ADB023512BBCCC;

/*
For the second execution with optimized Halloween protection:
1. tempdb space usage is zero
2. CPU time and duration are reduced by about 50%
3. The Clustered Index Insert operator in the query plan has the OptimizedHalloweenProtection property set to True
*/
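As a follow-on to the query-level controls mentioned in the "How to use" section, the sketch below shows how the hints might be applied, both directly in a statement and through Query Store hints. The USE HINT spelling and the query_id value are assumptions for illustration; verify the hint names against the documentation for your build.

-- Opt a single DML statement out of optimized Halloween protection
UPDATE dbo.OptimizedHPDemo
SET TotalDryItems = TotalDryItems + 1
WHERE InvoiceID <= 1000
OPTION (USE HINT ('DISABLE_OPTIMIZED_HALLOWEEN_PROTECTION'));

-- Apply the same hint to an existing Query Store query without changing its text
-- (the query_id value 42 is illustrative only)
EXEC sys.sp_query_store_set_hints
    @query_id = 42,
    @query_hints = N'OPTION (USE HINT (''DISABLE_OPTIMIZED_HALLOWEEN_PROTECTION''))';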
Announcing SQL Server 2025 Release Candidate 0 (RC0)

We are excited to announce the availability of SQL Server 2025 Release Candidate 0 (RC0), building on the momentum of the first public preview released on May 15, 2025. This release marks a significant step forward in database innovation, introducing new development features and critical enhancements to performance, security, and analytics.

Focus on Preview Features: Opt-In Innovation
With RC0, SQL Server 2025 introduces a new way to experiment with and adopt cutting-edge capabilities through the PREVIEW_FEATURES database scoped configuration. This opt-in mechanism empowers developers and database administrators to explore new features in a controlled, flexible manner. To learn more, please review the release notes for detailed guidance on activating and leveraging these preview features.

What's New in RC0?

AI Enhancements
- Vector Search and Index: Easily create vector indexes and execute vector searches using the PREVIEW_FEATURES configuration.
- Better intra-query parallelism with hyperthreading.
- Enabled usage of vectors on macOS via the Rosetta 2 translation layer.
- Text Chunking: The AI_GENERATE_CHUNKS feature enables efficient text chunking, available via PREVIEW_FEATURES.
- ONNX Model Integration: CREATE EXTERNAL MODEL now supports local ONNX models hosted directly in SQL Server, enabled through PREVIEW_FEATURES.
- Improved Metadata: The sys.vector_indexes view now follows the established naming convention and inherits columns from sys.indexes.

Linux Support
- Broaden your Linux choices with Ubuntu 24.04 support: SQL Server 2025 RC0 now runs on Ubuntu 24.04 LTS, giving you more flexibility to standardize on the latest long-term support Linux distribution.
- Starting in SQL Server 2025 (17.x) Preview, TLS 1.3 is enabled by default.

Database Engine Improvements
- Time-bound extended event sessions stop automatically after the specified time elapses.
- Cardinality Estimation (CE) Feedback for Expressions—cache persistence: Learns from repeated query patterns to pick better row estimates and plans. With persistence, that learning will survive restarts and failovers for steadier performance and less retuning.

Analytics
- PolyBase now supports managed identity communications to Microsoft Azure Blob Storage and Microsoft Azure Data Lake Storage.

Language and Query Features
- JARO_WINKLER_DISTANCE returns float (previously real).
- JARO_WINKLER_SIMILARITY returns int (previously real).
- REGEXP_REPLACE and REGEXP_SUBSTR now support large object (LOB) types (varchar(max) and nvarchar(max)).
- REGEXP_LIKE is now SARGable, allowing index usage for faster query execution (see the sketch below).

Availability and Security
- SQL Server Agent and Linked Servers default to mandatory encryption, using the latest security standards: TDS 8.0 and TLS 1.3.
- Configure TLS 1.3 encryption for Always On availability groups, failover cluster instances (FCI), replication, and log shipping with TDS 8.0 support.
- Replication defaults to OLEDB version 19 for inter-instance communication and enforces TLS 1.3 with Encrypt=Strict by default.

Fabric mirroring
- Resource governance can now be used for Fabric Mirroring to isolate and manage resource consumption.
- Dynamic maxtrans configuration option to optimize Fabric mirroring performance.

Getting Started
SQL Server 2025 RC0 offers a unique opportunity to explore the latest features via the PREVIEW_FEATURES configuration. These opt-in innovations are designed to foster experimentation and rapid adoption of upcoming capabilities. We encourage all users to review the release notes and take advantage of these powerful new tools.
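As a small, hedged illustration of the SARGable REGEXP_LIKE improvement listed above, the sketch below uses a hypothetical table and index; with the RC0 change, a filter like this one may allow the optimizer to use the index on the Email column rather than scanning every row.

-- Hypothetical schema, for illustration only
CREATE TABLE dbo.Contacts
(
    ContactID int IDENTITY PRIMARY KEY,
    Email     varchar(320) NOT NULL
);
CREATE INDEX IX_Contacts_Email ON dbo.Contacts (Email);

-- REGEXP_LIKE in a WHERE clause; now eligible for index usage instead of forcing a full scan
SELECT ContactID, Email
FROM dbo.Contacts
WHERE REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@contoso\.com$');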
Accelerating SQL Server 2025 momentum: Announcing the first release candidate

Thank you for being a part of the SQL Server community. Your feedback continues to shape the future of our platform. Download SQL Server 2025 RC0 today and unlock the next era of database excellence!