SQL Server 2025 is Now Generally Available
Today at Ignite, we announce the general availability of SQL Server 2025. This marks the latest milestone in the more than 30-year history of SQL Server. It is also a key part of our commitment to the one consistent SQL promise: delivering a consistent experience across on-premises, cloud, and SaaS environments, with one engine and one unified platform. Built on SQL Server's foundation of best-in-class security, performance, and availability, SQL Server 2025 is the AI-ready enterprise database, and it redefines what's possible for enterprise data. With built-in AI and developer-first enhancements, SQL Server 2025 empowers customers to accelerate AI innovation using the data they already have, securely and at scale, all within SQL Server using the familiar T-SQL language. SQL Server 2025 is designed to meet customers where they are, whether on-premises, in the cloud, or in hybrid environments, helping you build intelligent, secure, scalable, and consistent solutions that drive real business outcomes.

SQL Server 2025 is experiencing significant momentum, as evidenced by 10,000 organizations participating in the public preview and 100,000 active SQL Server 2025 databases. Leading customers like Mediterranean Shipping Company (MSC), Infios, and Buhler are already advancing with SQL Server 2025, supported by a robust ecosystem of technology partners including AMD, Canonical, HPE, Lenovo, NVIDIA, Pure Storage, and Red Hat.

Key Innovations in SQL Server 2025

AI built-in

AI is now integrated directly into the SQL Server engine, enabling advanced semantic search for deeper insights and natural language experiences across enterprise data. Model management is built into T-SQL, supporting seamless integration with Microsoft Foundry, Azure OpenAI Service, OpenAI, Ollama, and more, deployable securely anywhere, from on-premises to the cloud.
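The building blocks named here, vector embeddings and semantic search, can be illustrated with a few lines of plain Python. This is a toy sketch, not the SQL Server implementation: the hash-based embedder stands in for a real model, and the brute-force distance ranking stands in for what a vector index accelerates inside the engine.

```python
import math

def embed(text, dim=32):
    # Toy stand-in for a real embedding model: hash character trigrams
    # into a fixed-size unit vector. A real deployment would call a
    # hosted model (Azure OpenAI, Ollama, etc.); this keeps the sketch local.
    vec = [0.0] * dim
    for i in range(max(len(text) - 2, 0)):
        trigram = text[i:i + 3]
        vec[sum(ord(c) for c in trigram) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine_distance(a, b):
    # For unit-length vectors, cosine distance is 1 minus the dot product.
    return 1.0 - sum(x * y for x, y in zip(a, b))

# A miniature "table" of documents and their embeddings.
docs = [
    "backup and restore a database",
    "configure availability groups for failover",
    "restore a database from a full backup",
]
index = [(doc, embed(doc)) for doc in docs]

# Semantic search: rank documents by distance to the query embedding.
query_vec = embed("how do I restore my database backup")
ranked = sorted(index, key=lambda pair: cosine_distance(query_vec, pair[1]))
print(ranked[0][0])
```

In the engine itself, the same flow runs in T-SQL against vector columns, with an index structure such as DiskANN replacing the brute-force sort.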
Developers can easily switch between models without changing code, and essential AI building blocks like vector embeddings, text chunking, and DiskANN indexing are natively supported. Integration with frameworks such as LangChain and Semantic Kernel accelerates AI-powered app development.

"At Ivanti, our mission is to elevate human potential by managing, protecting, and automating technology to drive continuous innovation. SQL Server 2025 plays a crucial role in helping us achieve this goal. By harnessing the advanced capabilities of SQL Server 2025 and Azure OpenAI, we are building intelligent, agentic tools that empower customers to access knowledge and resolve incidents faster." - Sirjad Parakkat, Vice President, AI Engineering | Ivanti

Made for developers

This release is the most significant for SQL developers in a decade, streamlining development and boosting productivity. Native JSON support, REST APIs, RegEx, and fuzzy string matching enable richer data enrichment and validation. Change event streaming enables real-time, event-driven applications by streaming changes directly from the transaction log to Azure Event Hubs, reducing resource overhead compared to CDC.

SQL Tooling

SQL Server 2025 delivers major updates across the data platform. SQL Server Management Studio (SSMS) 22 is now generally available, offering official support for SQL Server 2025, enhanced AI assistance, and ARM64 support. SSMS 22 includes AI assistance when you install the GitHub Copilot workload, which leverages the same GitHub subscription you use with GitHub Copilot in Visual Studio or VS Code. The Microsoft Python Driver for SQL Server (mssql-python) is generally available, providing a modern, high-performance connector with Entra ID authentication.

"SQL Server 2025 offers two major functionalities which are very important to us and will bring SQL Server into the future: native API calls and RAG.
In the past we've had to use custom assemblies for making API calls, which can be a huge problem when you have to make hundreds of thousands of API calls and the remote systems are slow to respond, creating large queues and high CPU load in SQL. With RAG and vector search, we can now implement countless AI possibilities, making data searchable in ways previously impossible." - Alex Ivanov, CTO, eDynamix

Best-in-class security, performance, and availability

SQL Server 2025 builds on its foundation as the most secure database of the last decade, introducing modern identity and encryption practices, including Microsoft Entra managed identities for improved credential management. Optimized locking reduces lock memory consumption, minimizes blocking, and boosts concurrency. Tempdb space resource governance improves server reliability. Optional parameter plan optimization makes query performance more stable.

SQL Server 2025 continues to strengthen its mission-critical capabilities with enhancements to Always On availability groups (AGs) and disaster recovery options. The focus is on faster failover, improved diagnostics, and hybrid flexibility. Preliminary benchmarks show SQL Server 2025 running on AMD EPYC processors with HPE hardware delivers measurable gains in performance and value. For performance, the 10 TB workload sets a new record for SQL Server. In price-performance, SQL Server 2025 achieves a 4% improvement in the 3 TB category compared to previous results.

"At Infios, we are very excited about several new features in SQL Server 2025 and the vast amount of opportunities for performance improvements. We are most excited about the optimized locking feature and how that can drastically help reduce locking across our customers and all their unique workloads. Optional Parameter Plan Optimization (OPPO) could also be huge for us, with SQL Server being able to reduce parameter sniffing issues.
Persisted statistics on secondary replicas will also be beneficial for the rare occurrence that we have a failover event. While we've been pleased with all the improvements to tempdb in previous versions, resource governance to prevent runaway queries from consuming large amounts of disk space in SQL Server 2025 is a big improvement for us." - Tim Radney, SaaS Operations Manager, Infios

Cloud agility through Azure and Fabric

SQL Server 2025 enhances cloud agility with support for database mirroring in Fabric, enabling near real-time analytics with zero ETL and offloading analytical workloads. Azure Arc integration continues to provide unified management, security, and governance for SQL estates across on-premises and cloud environments, empowering organizations to scale and modernize with confidence.

"With Fabric Mirroring in SQL Server 2025, ExponentHR can effortlessly mirror numerous datasets to Fabric, enabling near real-time analytics. This technology has alleviated the need for expensive and complex ETL operations and enables more productivity for our customers. Thanks to SQL Server 2025's built-in cloud connectivity, we can directly process large amounts of data efficiently and overcome traditional bottlenecks." - Brent Carlson, IT Manager, ExponentHR

SQL Server 2025 on Linux

SQL Server 2025 on Linux introduces several important enhancements. Security is strengthened with TLS 1.3 support, custom password policies, and signed container images. Platform support expands to include RHEL 10 and Ubuntu 24.04, while performance is improved through tmpfs support for tempdb and container-based deployments. Advanced analytics are enabled with generic ODBC data source support via PolyBase. The developer experience is streamlined with Visual Studio Code integration for local container deployment using the mssql extension, and with deployment patterns validated in partnership with Red Hat, supporting modern workloads and AI scenarios across hybrid environments.
"The work we're doing with Microsoft to optimize SQL Server on Red Hat Enterprise Linux is a powerful testament to the strength of our collaboration. With the new features in SQL Server, including support for Red Hat Enterprise Linux 10 and streamlined deployment via Red Hat Ansible Automation Platform, we are making it easier than ever for customers to deploy and manage this critical workload across the hybrid cloud. This collaboration extends beyond enabling core performance to delivering innovative, validated patterns, such as leveraging Red Hat Enterprise Linux AI with SQL Server for retrieval-augmented generation (RAG) and generative AI scenarios, and providing a more consistent experience for customers, whether they are deploying via the Azure Marketplace or on-premises. Our mutual goal is to minimize complexity, increase confidence, and help enterprises harness the full potential of their data and AI investments on a trusted, open foundation." - Gunnar Hellekson, Vice President and General Manager, Red Hat Enterprise Linux, Red Hat

SQL Server 2025 on Azure Virtual Machines

Run SQL Server 2025 in any edition (Standard, Enterprise, Enterprise Developer, or the new Standard Developer edition) on Azure Virtual Machines, using optimized VM families like Mbdsv3, Ebdsv5/6, and FXmdsv2 for high performance. Pair with Premium SSD v2 or Ultra Disk storage to achieve fast throughput, low latency, and excellent scalability. Deploy quickly from the Azure portal with features including configurable settings, flexible licensing, storage setup for data, logs, and tempdb, automated patching, and Best Practice Assessment (BPA). Get started today to leverage SQL Server 2025 and Azure's high performance and flexibility.

Preview Features & Flexibility

In SQL Server 2025, customers can explore new database features using an opt-in mechanism through database-scoped configurations.
Certain features, such as vector indexes, are introduced this way, allowing customers to try them in preview even while SQL Server 2025 is generally available. These features will become fully available in a future SQL Server 2025 update, at which point the database-scoped configuration will no longer be required. Our goal is to make preview features generally available within approximately 12 months, guided by customer feedback and our commitment to delivering high-quality experiences. Learn more.

Product Changes

SQL Server 2025 brings important changes to the product lineup.

Standard edition changes: Resource limits have increased to support up to 32 cores and 256 GB of memory. Resource Governor is now available in Standard edition. The newly launched Standard Developer edition offers full feature parity with Standard edition, enabling development and testing that mirrors production environment capabilities. Power BI Report Server entitlement is now included in all editions except Express, adding value for customers.

Express edition changes: The maximum database size has increased to 50 GB per database. Express Advanced has been consolidated into a single, unified Express edition that includes all capabilities previously available in Express Advanced.

Discontinuing Web edition: SQL Server 2022 is the final version of the Web edition, with SQL Server 2022 Web edition remaining supported until January 2033 in line with Microsoft's fixed lifecycle policy. If you've been using the Web edition for cost-effective web applications, now is a great time to consider migrating to Azure SQL. Azure SQL offers an affordable, scalable solution that is well suited for modern web workloads. For multi-tenant apps, Azure SQL Database elastic pools provide flexible pricing and easy management, making the move to Azure SQL a smart choice for future growth.
If you remain on-premises or use SQL Server on Azure Virtual Machines, upgrade to the Standard edition.

Modern Reporting and Analytics

On-premises SQL Server Reporting Services (SSRS) has been consolidated into Power BI Report Server, which is now the default reporting solution, unifying paginated and interactive reports for all paid SQL Server licenses. Learn more.

SQL Server Analysis Services 2025 introduces major performance enhancements, including improved MDX query efficiency, parallel DirectQuery execution, and visual DAX calculations for simplified modeling. It also adds new DAX functions and client library updates, deprecates Power Pivot for SharePoint, and discontinues HTTP access via msmdpump.dll by default. Learn more.

SQL Server Integration Services (SSIS) introduces support for the Microsoft SqlClient Data Provider in the ADO.NET connection manager, enhancing connectivity and modernizing data integration workflows. Learn more.

Partner Momentum

Partners such as AMD, Intel, and HPE are collaborating on advanced performance and high availability solutions, including benchmark testing on AMD EPYC and Intel Xeon processors, with HPE achieving world-record results for performance and price/performance. NVIDIA is working with SQL Server 2025 to enable streamlined deployment of GPU-optimized AI models using built-in REST APIs, supporting flexible AI workloads across environments. Pure Storage is delivering high availability and fast backup solutions through deep integration with SQL Server 2025, including metadata-aware snapshots and automation for simplified operations. Additionally, Microsoft works closely with partners like Canonical and Red Hat to ensure SQL Server is integrated seamlessly and operates effectively within the Linux ecosystem, providing customers with robust and reliable database solutions across a broader range of environments.

Get Started Today

SQL Server 2025 reaffirms Microsoft's commitment to innovation, performance, and developer empowerment.
We thank our customers, partners, and community for your ongoing support and feedback. We look forward to seeing what you build next with the AI-ready enterprise database.

- Download SQL Server 2025 today
- One consistent SQL: the launchpad from legacy to innovation
- Learn more through documentation and our Mechanics video
- Master SQL Server 2025 with a full learning path and claim your badge
- Get started with Azure SQL
- Share your feedback at SQL Community

Unlocking Enterprise AI: SQL Server 2025 and NVIDIA Nemotron RAG Accelerate AI
Today, most of the world's data still remains untapped, sitting in databases, documents, and systems across organizations. Enterprises are racing to unlock this data's value by building the next wave of generative AI applications: solutions that can answer questions, summarize documents, and drive smarter decisions. At the heart of these innovations are retrieval-augmented generation (RAG) pipelines, which enable users to interactively engage with large amounts of data that continuously evolve. Yet, as promising as RAG pipelines are, enterprises face real challenges in making them work at scale. Handling both structured and unstructured data, processing massive volumes efficiently, and ensuring privacy and security are just a few of the hurdles. This is where the integration between SQL Server 2025 and NVIDIA Nemotron RAG models, deployed as NVIDIA NIM microservices, comes in, offering a new approach that streamlines AI deployment and delivers enterprise-grade performance, whether you're running workloads in the cloud or on-premises.

"As AI becomes core to every enterprise, organizations need efficient and compliant ways to bring intelligence to their data," said Joey Conway, Senior Director of Generative AI Software at NVIDIA. "With SQL Server 2025's built-in AI and NVIDIA Nemotron RAG, deployed as NIM microservices, enterprises can deploy and run AI models close to their data on-premises or in the cloud without complex integration, accelerating innovation while maintaining data sovereignty and control."

Overcoming the complexity of generating embeddings at scale

Customer challenge

Building responsive AI applications using RAG requires converting SQL data into vector embeddings, a process that feeds huge amounts of text through complex neural networks. This work is inherently parallel and compute-intensive, often creating performance bottlenecks that prevent real-time data indexing. The result? Slow applications and poor user experiences. Moreover, enterprises need flexibility.
Different embedding models excel at different tasks (semantic search, recommendations, classification), and each comes with its own tradeoffs in accuracy, speed, and cost. Businesses want to mix and match models, balance premium performance with budget constraints, and stay resilient against model deprecation or API changes. Furthermore, rapid experimentation and adaptation are key to staying ahead, so developers want models that offer flexible customization and full transparency.

The Solution: SQL Server 2025 + NVIDIA Nemotron RAG

SQL Server 2025 brings AI closer to your data, allowing you to natively and securely connect to any model hosted anywhere. You can generate embeddings directly in SQL using extensions to T-SQL, with no need for new languages, frameworks, or third-party tools. By connecting SQL Server 2025 to the llama-nemotron-embed-1b-v2 embedding model from NVIDIA, you eliminate bottlenecks and deliver the massive throughput needed for real-time embedding generation. llama-nemotron-embed-1b-v2 is a best-in-class embedding model that offers multilingual and cross-lingual text question-answering retrieval with long-context support and optimized data storage. The model is part of NVIDIA Nemotron RAG, a collection of extraction, embedding, and reranking models fine-tuned with the Nemotron RAG datasets and scripts to achieve the best accuracy. These models offer flexible customization, enabling easy fine-tuning and rapid experimentation, and they offer full transparency with open access to models, datasets, and scripts. llama-nemotron-embed-1b-v2 is the model of choice for embedding workflows, but this high-speed inference pipeline is not limited to one model and can call any optimized AI model deployed as an NVIDIA NIM microservice, seamlessly powering every stage of the RAG pipeline. From multimodal data ingestion and advanced retrieval to reranking, all operations run directly on your data within SQL Server.
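The "swap models without changing calling code" idea described above can be sketched in a few lines of Python: each model is registered behind a common embedding signature, so callers never change when the model does. The model names and toy embedders below are placeholders for illustration, not real endpoints.

```python
from typing import Callable, Dict, List

EmbedFn = Callable[[str], List[float]]

def toy_embedder(dim: int) -> EmbedFn:
    # Deterministic toy embedder standing in for a real model endpoint:
    # accumulate character codes into `dim` buckets, then normalize so
    # the components sum to 1.
    def embed(text: str) -> List[float]:
        vec = [0.0] * dim
        for i, ch in enumerate(text):
            vec[i % dim] += ord(ch)
        total = sum(vec) or 1.0
        return [v / total for v in vec]
    return embed

# Hypothetical model registry: swapping models is a configuration change.
MODELS: Dict[str, EmbedFn] = {
    "small-local-model": toy_embedder(dim=4),
    "large-hosted-model": toy_embedder(dim=16),
}

def generate_embedding(text: str, model: str = "small-local-model") -> List[float]:
    # Calling code depends only on this function, mirroring how a T-SQL
    # statement can reference a different registered model without being rewritten.
    return MODELS[model](text)

print(len(generate_embedding("hello world")))                           # → 4
print(len(generate_embedding("hello world", model="large-hosted-model")))  # → 16
```

The same indirection is what lets an application move between a budget model and a premium one, or replace a deprecated model, without touching query logic.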
Such RAG systems can be applied across a wide range of use cases, enabling intelligent, context-aware applications across industries.

Customer Benefits

With GPU acceleration and the built-in AI of SQL Server 2025, you can achieve optimal inference performance that meets the demands of modern applications. Our flexible approach lets you mix and match models to suit different use cases, striking the right balance between accuracy and cost. And with open models that enable vendor flexibility and rapid adaptation, you gain the resilience to stay ahead of the curve in an ever-changing AI landscape.

Streamlining AI Model Deployment with Enterprise-Grade Confidence

Customer Challenge

Integrating advanced AI models into enterprise workflows has historically been slow and complex. Specialized teams must manage intricate software dependencies, configure infrastructure, and handle ongoing maintenance, all while navigating the risks of deploying unsupported models in mission-critical environments. This complexity slows innovation, drains engineering resources, and increases risk.

The Solution: Simplified, Secure Model Deployment with NVIDIA NIM

This collaboration simplifies and de-risks AI deployment. The llama-nemotron-embed-1b-v2 model is available as an NVIDIA NIM microservice for secure, reliable deployment across multiple Azure compute platforms. Prebuilt NIM containers are available for a broad spectrum of AI models and can be deployed with a single command, then integrated into enterprise-grade AI applications using the built-in REST APIs of SQL Server 2025 and just a few lines of code, regardless of where you run SQL Server workloads and NVIDIA NIM, on-premises or in the cloud. NIM containers package the latest AI models together with the best inference technology from NVIDIA and the community, plus all dependencies, into a ready-to-run container, abstracting away the complexity of environment setup so customers can spin up AI services quickly.
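Once a NIM container is running, embedding requests are plain JSON over HTTP, whether they come from application code or from SQL Server's REST integration. A minimal Python sketch, in which the local URL, `/v1/embeddings` route, and model identifier are illustrative assumptions for a typical OpenAI-style endpoint rather than documented values:

```python
import json
from urllib import request

# Assumed address of a locally deployed NIM container; adjust to your deployment.
NIM_EMBEDDINGS_URL = "http://localhost:8000/v1/embeddings"

def build_embed_request(texts, model="nvidia/llama-nemotron-embed-1b-v2"):
    # OpenAI-style embeddings payload; the model identifier is illustrative.
    return {"model": model, "input": list(texts)}

def embed_via_nim(texts):
    # Sends the payload to the container; requires a running NIM service,
    # so this function is defined but not called in the sketch.
    payload = json.dumps(build_embed_request(texts)).encode("utf-8")
    req = request.Request(
        NIM_EMBEDDINGS_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return [item["embedding"] for item in body["data"]]

print(build_embed_request(["SQL Server 2025 is generally available"]))
```

Inside the database, the equivalent call can be issued through SQL Server 2025's built-in REST capabilities, so the text never has to leave the data tier before being embedded.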
Furthermore, NVIDIA NIM is enterprise-grade, continuously managed by NVIDIA with dedicated software branches, rigorous validation processes, and support. As a result, developers can confidently integrate state-of-the-art AI into their data applications. This streamlined approach significantly reduces development overhead and provides the reliability needed for mission-critical enterprise systems. NVIDIA NIM containers are discoverable and deployable via the Microsoft Azure AI Foundry model catalog.

Customer Benefits

Rapid deployment with minimal setup means you can start leveraging AI without specialized engineering, and SQL Server 2025 makes it even easier with built-in support for AI workloads and native REST APIs. Enterprise-grade security and monitoring ensure safe, reliable operations, while SQL Server's integration with Entra ID and advanced compliance features provide added protection. Direct integration into SQL workflows reduces complexity and risk, and with SQL Server's hybrid flexibility, you can run seamlessly across on-premises and cloud environments, simplifying modernization while maintaining control.

Innovating Without Compromise on Security or Flexibility

Customer Challenge

Organizations in regulated industries often face a tough choice: adopt powerful AI or maintain strict data residency and compliance. Moving sensitive data to external services is often not an option, and many companies run AI inference workloads both in the cloud and on-premises to balance scalability, privacy, regulatory compliance, and low-latency requirements.

The Solution: Flexible, Secure Integration On-Premises and in the Cloud

SQL Server 2025 enables organizations in regulated environments to securely integrate locally hosted AI models, ensuring data residency and compliance while minimizing network overhead. This architecture boosts throughput by keeping sensitive data on-premises and leveraging SQL Server's native extensibility for direct model invocation.
With SQL Server 2025 and Nemotron RAG, deployed as NVIDIA NIM microservices, you get the best of both worlds. The solution can be seamlessly deployed in the cloud with serverless NVIDIA GPUs on Azure Container Apps (ACA) or on-premises with NVIDIA GPUs on Azure Local. Sensitive data never leaves your secure environment, allowing you to harness the full power of Nemotron models while maintaining complete data sovereignty and meeting the strictest compliance mandates.

Customer Benefits

SQL Server 2025 helps you maintain compliance by supporting data residency and meeting regulatory requirements across regions. Sensitive data stays protected on-premises with enterprise-grade security, including consistent access controls, ledger support, and advanced encryption to minimize risk. At the same time, SQL Server's hybrid flexibility lets you deploy AI workloads wherever they're needed (on-premises, in the cloud, or across a hybrid environment) while leveraging built-in AI features like vector search and secure integration with locally hosted models for performance and control.

Conclusion: Powering the Next Wave of Enterprise AI

The collaboration between Microsoft and NVIDIA is more than a technical integration; it's designed to help enterprises overcome the toughest challenges in AI deployment. By streamlining vector embedding and vector search, delivering enterprise-grade performance, and enabling secure, flexible integration across cloud and on-premises environments, this joint solution empowers organizations to unlock the full value of their data. Whether you're building conversational AI, automating document analysis, or driving predictive insights, SQL Server 2025 and NVIDIA Nemotron RAG models, deployed as NIM microservices, provide the tools you need to innovate with confidence. The future of enterprise AI is here, and it's flexible, secure, and built for real business impact.
Get started today:

- Learn more about SQL Server 2025 and download it today
- Learn more about our joint solution from NVIDIA's Technical Blog
- GitHub: Microsoft SQL Server 2025 and NVIDIA Nemotron RAG

Announcing General Availability of the mssql-python Driver
We're thrilled to announce that the mssql-python driver is now generally available! 🎉 This milestone marks a major step forward in delivering a modern, high-performance, and developer-friendly experience for Python developers working with SQL Server, Azure SQL, and SQL databases in Fabric.

Why mssql-python?

The mssql-python driver is a DB-API 2.0 compliant driver designed from the ground up to provide speed, simplicity, and security for Python applications connecting to SQL Server. We think data and DevOps engineers, analysts, and developers who use Python for configuration, analysis, or AI will all find something they like. Here's what makes it stand out:

🚀 Fast and Cross-platform

The mssql-python driver is built on an optimized C++ core that delivers high performance and a consistent API across operating systems. Internal benchmarks show that the driver performs well across common SQL operations such as SELECT, INSERT, UPDATE, and DELETE. Complex queries, joins, and stored procedures also perform well, making it ideal for both transactional and analytical workloads.

📦 One-Line Installation

Whether you are just getting started or sharing scripts you have written, the mssql-python driver makes it easier than ever. Install the driver with a single command:

pip install mssql-python

No extra dependencies on Windows and no complicated configuration: just install and start coding.

🔐 Simplified Entra ID Authentication

Security shouldn't be hard. The mssql-python driver offers built-in support for Microsoft Entra ID (formerly Azure AD) authentication, reducing complexity and eliminating the need for custom token handling. This makes it easier to build secure, enterprise-grade applications without extra overhead. The driver supports Active Directory Default authentication across all operating systems, making it easier to remove credentials from your code while still allowing you to share a script that just works.
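Because the driver is DB-API 2.0 compliant, it follows the familiar connect/cursor/execute shape. A minimal sketch, assuming the package is installed and a server is reachable; the server and database names in the connection string are placeholders, and the import is deferred into the function so the file itself runs anywhere:

```python
# Placeholder connection string illustrating Entra ID "Active Directory
# Default" authentication; server and database names are examples only.
CONN_STR = (
    "Server=myserver.database.windows.net;Database=mydb;"
    "Authentication=ActiveDirectoryDefault;Encrypt=yes"
)

def parse_conn_str(conn_str):
    # Tiny helper for the sketch: split "Key=Value;..." pairs into a dict.
    return dict(part.split("=", 1) for part in conn_str.rstrip(";").split(";"))

def fetch_server_version(conn_str=CONN_STR):
    # Standard DB-API 2.0 flow; requires `pip install mssql-python` and a
    # reachable server, so the import is deferred and nothing runs on load.
    import mssql_python
    conn = mssql_python.connect(conn_str)
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT @@VERSION")
        return cursor.fetchone()[0]
    finally:
        conn.close()

print(parse_conn_str(CONN_STR)["Authentication"])  # → ActiveDirectoryDefault
```

With Active Directory Default authentication, no password appears in the script, so the same file can be shared and run by teammates whose own Azure credentials are picked up automatically.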
Key Highlights

- Cross-Platform Support: Works seamlessly on Windows, Linux, and macOS (including Apple Silicon).
- Connection Pooling: Efficient, configurable pooling for high-performance applications.
- Modern Python Experience: Clean APIs, better diagnostics, and improved error handling.

Get Started Today

Upgrade your Python data workflows with the new mssql-python driver:

pip install mssql-python

Check out the GitHub repository for documentation and examples, and to join the community. You can also go directly to our PyPI page to download the latest release. Visit aka.ms/mssql-python#get-started to view more quickstarts.

What's Next?

We're committed to continuous improvement. Upcoming releases will focus on bulk insert optimizations, ORM integrations, and community-driven enhancements. Stay tuned for updates and share your feedback on GitHub!

Reimagining Data Excellence: SQL Server 2025 Accelerated by Pure Storage
SQL Server 2025 is a leap forward as an AI-ready enterprise database, unifying analytics, modern AI application development, and mission-critical engine capabilities like security, high availability, and performance from ground to cloud. Pure Storage's all-flash solutions are engineered to optimize SQL Server workloads, offering faster query performance, reduced latency, and simplified management. Together, they help customers accelerate the modernization of their data estate.

Introducing optimized locking v2
Executive summary

New improvements in optimized locking further reduce the overhead of locks in the MSSQL database engine. For many transactions, row and page locks aren't taken at all, while still retaining ACID transaction semantics. The improvements require no changes in customer workloads or configuration and are a part of optimized locking in SQL Server 2025, Azure SQL Database, SQL database in Fabric, and Azure SQL Managed Instance.

Optimized locking recap

Optimized locking was originally released in Azure SQL Database in 2023. The feature has two components:

- Transaction ID (TID) locking
- Lock After Qualification (LAQ)

With TID locking, individual row and page locks are still taken, but they are not held until the end of the transaction, because the transaction remains protected by a single TID lock. Concurrency is improved because lock escalation is much less likely to occur. Memory usage is also improved because locks are released soon after they are acquired. With LAQ, locks are taken just before a qualified row is modified. This optimistic locking strategy improves concurrency because locks aren't held while qualifying rows for modification.

"At Infios, we are very excited about several new features in SQL Server 2025 and the vast amount of opportunities for performance improvements. We are most excited about the optimized locking feature and how that can drastically help reduce locking across our customers and all their unique workloads." - Tim Radney, SaaS Operations Manager at Infios and a Data Platform MVP

Improvements

In recent months, we have made two improvements that make a database more lock-free and make the benefits of optimized locking available to more workloads:

- Skip index locks (SIL)
- Query plan LAQ feedback persistence

These improvements are in effect if read committed snapshot isolation (RCSI) is enabled for the database, which is the default in Azure SQL Database and SQL database in Fabric.
To gain the most benefit from optimized locking, enable RCSI.

Skip index locks

With TID locking, lock escalation typically doesn't occur because locks are released fast enough to avoid crossing the escalation threshold. A large transaction that modifies many rows without lock escalation must acquire and release many locks. These locks don't consume much memory because they are released quickly. However, each lock acquisition and release requires a call into the Lock Manager component within the engine. Cumulatively, for a large transaction, the overhead of these calls might be significant. The SIL improvement reduces this overhead by skipping row and page locks when they are not required to guarantee ACID semantics.

When transactions are protected by TID locks, and when RCSI and LAQ are used, an exclusive lock on a modified row is only necessary if there might be other queries accessing the row and expecting it to be stable. Examples of such queries are those that use the Repeatable Read or Serializable isolation levels, or the corresponding locking hints. We call such queries Row Locking Queries (RLQ). In the SIL improvement, the engine was modified to maintain a special No-RLQ bit for each page in the buffer pool. This bit indicates the absence of RLQ queries accessing any row on the page. When the bit is set, a DML transaction can skip taking row and page locks and instead modify a row with just an exclusive page latch.

Based on Azure SQL Database telemetry, RLQ queries are uncommon in workloads using RCSI or snapshot isolation transactions. Telemetry also tells us that at least half of all DML queries, on average, use LAQ. This indicates that many workloads can benefit from the SIL improvement.

Query plan LAQ feedback persistence

LAQ uses an optimistic locking strategy. Locks are taken only after a row is qualified for modification, not while qualifying the row.
This means that statement processing might need to be internally restarted if another transaction modified the row after it was read but before it was modified by the current transaction. If statement restarts happen frequently, the use of LAQ might introduce noticeable overhead because the work to process the DML statement needs to be redone. The engine tracks the potentially wasted work and statement restarts. In the original version of optimized locking, this data is aggregated across all statements and tracked in memory at the database level. If the potentially wasted work or statement restarts exceed thresholds, LAQ is disabled for the entire database, and re-enabled if the potentially wasted work and restarts go back below the thresholds.

With the new improvement, in addition to tracking at the database level, the engine also tracks LAQ feedback at the query plan level and persists it in Query Store. If thresholds are exceeded for a particular query plan, LAQ is disabled only for that plan rather than for the entire database. The rest of the workload still benefits from LAQ.

Skipping locks: a demo

To demonstrate the SIL improvement, we used SQL Server 2025 to run a workload that executes an UPDATE statement 100 times in a loop. Each statement updates a random number of rows in a table with a clustered index. We used an extended events session with a histogram target to collect the lock_acquired event. Each histogram bucket counts the number of acquired locks for a specific resource type, including KEY and PAGE. For this execution of the demo, 99.6% of key locks and 79% of page locks were skipped. For details, see the Appendix later in this post.

Improvement at the Azure scale

To zoom out and demonstrate the SIL improvement broadly, the next chart summarizes telemetry data for about 25,000 databases in Azure SQL Database. The chart shows the ratio of total skipped to total acquired locks by index type.
Broadly, the improvement is most pronounced for nonclustered indexes, where 81% of locks are skipped.

Conclusion

Improvements in optimized locking show our continuing investment in this core database engine feature. We fine-tuned optimized locking beyond an already successful v1 release and further reduced the overhead of locks. For more information about these two improvements, see Skip index locks and LAQ heuristics. As of this writing, the improvements are enabled in SQL Server 2025 and for most databases in Azure SQL Database and SQL database in Fabric. The rollout to the remainder of the Azure SQL fleet including Azure SQL Managed Instance (with the always-up-to-date and SQL Server 2025 update policy) is in progress.

Appendix

Here are the T-SQL scripts to demonstrate SIL in action.

Demo setup:

/*
/* Reset the demo */
USE tempdb;
ALTER DATABASE olv2 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE IF EXISTS olv2;
*/
CREATE DATABASE olv2;
GO

USE olv2;

/* Enable RCSI, ADR, and optimized locking */
ALTER DATABASE olv2 SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE olv2 SET ACCELERATED_DATABASE_RECOVERY = ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE olv2 SET OPTIMIZED_LOCKING = ON WITH ROLLBACK IMMEDIATE;

/* Create a sequence to generate clustered index keys */
CREATE SEQUENCE dbo.s_id AS int START WITH 1 INCREMENT BY 1;

/* Create a test table */
CREATE TABLE dbo.t
(
    id int NOT NULL CONSTRAINT df_t_id DEFAULT (NEXT VALUE FOR dbo.s_id),
    dt datetime2 NOT NULL CONSTRAINT df_t_dt DEFAULT (SYSDATETIME()),
    u uniqueidentifier NOT NULL CONSTRAINT df_t_uid DEFAULT (NEWID()),
    s nchar(40) NOT NULL CONSTRAINT df_t_s DEFAULT (REPLICATE('c', 1 + 39 * RAND())),
    CONSTRAINT pk_t PRIMARY KEY (id)
);

/* Insert 50,000 rows */
INSERT INTO dbo.t (s)
SELECT REPLICATE('olv2', 10) AS s
FROM GENERATE_SERIES(1, 50000);
GO

/* Create a stored procedure that updates a random number of rows */
CREATE OR ALTER PROCEDURE dbo.update_rows
AS
SET NOCOUNT, XACT_ABORT ON;

/* Get the maximum key value for the clustered index */
DECLARE @MaxKey int =
(
    SELECT CAST(current_value AS int)
    FROM sys.sequences
    WHERE name = 's_id'
          AND
          SCHEMA_NAME(schema_id) = 'dbo'
);

/* Get a random key value within the key range */
DECLARE @StartKey int = 1 + RAND(CAST(CAST(NEWID() AS varbinary(3)) AS int)) * @MaxKey;

/* Get a random number of rows between 1 and 500 */
DECLARE @RowCount int = 1 + RAND() * 499;

/* Update rows */
UPDATE TOP (@RowCount) dbo.t
SET dt = DEFAULT,
    u = DEFAULT,
    s = DEFAULT
WHERE id >= @StartKey;
GO

/* Create an event session to collect lock statistics */
IF EXISTS (
          SELECT 1
          FROM sys.server_event_sessions
          WHERE name = N'olv2_locks'
          )
    DROP EVENT SESSION olv2_locks ON SERVER;

CREATE EVENT SESSION olv2_locks ON SERVER
ADD EVENT sqlserver.lock_acquired
(
    SET collect_database_name = 1
    WHERE database_name = N'olv2'
)
ADD TARGET package0.histogram
(
    SET filtering_event_name = N'sqlserver.lock_acquired',
        source = N'resource_type',
        source_type = 0
);
GO

Execute the workload and collect a histogram of locks by resource type:

USE olv2;

ALTER EVENT SESSION olv2_locks ON SERVER STATE = START;
GO

EXEC dbo.update_rows;
GO 100

WITH histogram_target AS
(
    SELECT TRY_CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
    ON s.address = st.event_session_address
    WHERE s.name = 'olv2_locks'
),
lock_type_histogram AS
(
    SELECT hb.slot.value('(@count)[1]', 'bigint') AS slot_count,
           hb.slot.value('(value/text())[1]', 'int') AS lock_type
    FROM histogram_target AS ht
    CROSS APPLY ht.target_data.nodes('/HistogramTarget/Slot') AS hb(slot)
)
SELECT mv.map_value AS lock_type_desc,
       lth.slot_count AS lock_acquired_count
FROM lock_type_histogram AS lth
INNER JOIN sys.dm_xe_map_values AS mv
ON lth.lock_type = mv.map_key
WHERE mv.name = 'lock_resource_type'
ORDER BY lock_type_desc;

ALTER EVENT SESSION olv2_locks ON SERVER STATE = STOP;
GO

Each workload run can produce slightly different results because other
activities (query compilation, file growth, page splits, etc.) also take locks if they occur during the run. However, in this demo, the number of key locks with SIL enabled is always reduced.

For comparison purposes, we used an undocumented and unsupported trace flag 7194 to disable SIL. To re-enable SIL, disable this trace flag and then rerun the ALTER DATABASE … SET OPTIMIZED_LOCKING = ON statement, or restart the database.

Resource governor - a new beginning
Executive summary

In addition to increased CPU and memory limits, we are making another change to benefit customers using the SQL Server Standard edition. Starting with SQL Server 2025, resource governor, previously an Enterprise edition feature, is also available in the Standard edition.

A powerful tool

Resource governor has been a part of SQL Server for more than 15 years. It’s a powerful, full-featured tool that improves reliability when multiple applications or users share resources on the same SQL Server instance.

We rely on Resource Governor to isolate workloads on our SQL Server instances by controlling CPU and memory usage. It helps us ensure that the core of our trading systems remains stable and runs with predictable performance, even when other parts of the systems share the same servers.
- Ola Hallengren, Chief Data Platforms Engineer at Saxo Bank and a Data Platform MVP

Yet when we ask customers if they use resource governor, a frequent answer is "We can't. We are on Standard edition." Indeed, ever since its initial release, resource governor has been an Enterprise-only feature. That has made it less well known and less widely used than it deserves to be.

We heard and acted on your feedback. Starting with SQL Server 2025, resource governor is available in both Standard and Enterprise editions, with identical capabilities. This includes the respective Developer editions as well. In Azure SQL Managed Instance, resource governor is already available in all service tiers.

What can it do?

Here are just some of the things you can accomplish using resource governor:

Limit or reserve CPU bandwidth and IOPS for a workload.
Limit the size of query memory grants or reserve memory for query processing.
Limit the total number of running requests (queries) for a workload.
Abort query batches that exceed a specified amount of CPU time.
Set a hard cap on the maximum degree of parallelism (MAXDOP) for queries.
(New in SQL Server 2025) Limit the amount of tempdb space a workload can use.

Whether you are on Standard or Enterprise edition, if you haven't used resource governor yet, see what it can do for you. To help you learn, we have a new step-by-step guide with best practices and examples, from simple to more advanced. There is also a separate guide focusing on the new tempdb space governance feature.

Conclusion

With this change, customers on Standard edition get a powerful enterprise-grade tool to control resource contention and improve reliability of their SQL Server environments. We encourage you to learn more about resource governor to understand the full range of its capabilities.

Have questions or feedback? Post a comment on this blog post, an idea at https://aka.ms/sqlfeedback, or email us at sql-rg-feedback@microsoft.com. Let’s make resource governor a standard (😊) tool in your toolset!

Modern SQL Server Features That Make Life Better
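As a hedged sketch of how the resource governor capabilities listed above fit together, the following configuration caps a reporting workload's CPU share and MAXDOP. All pool, group, and login names here are invented for illustration; the classifier function must be created in the master database:

```sql
/* Illustrative only: reporting_pool, reporting_group, and
   reporting_login are hypothetical names. */
USE master;
GO

/* Cap the pool at 20% of CPU bandwidth */
CREATE RESOURCE POOL reporting_pool WITH (MAX_CPU_PERCENT = 20);

/* Hard-cap MAXDOP for queries in this workload group */
CREATE WORKLOAD GROUP reporting_group WITH (MAX_DOP = 2)
USING reporting_pool;
GO

/* Classifier: route sessions from the hypothetical reporting login */
CREATE FUNCTION dbo.rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @group sysname = N'default';
    IF SUSER_SNAME() = N'reporting_login'
        SET @group = N'reporting_group';
    RETURN @group;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```

After the RECONFIGURE, new sessions from that login are classified into reporting_group; you can verify the assignment through the group_id column in sys.dm_exec_sessions.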
🚀 Excited to share an upcoming session you won’t want to miss!

📌 Modern SQL Server Features That Make Life Better

As data platforms evolve, staying ahead of the curve is essential for every database developer and administrator. This session dives into the latest advancements in SQL Server, including powerful capabilities introduced in SQL Server 2022 that transform the way we manage, optimise, and troubleshoot data workloads.

🔍 What you’ll learn:
• How Intelligent Query Processing and Query Store simplify performance tuning and troubleshooting
• The impact of Memory Grant Feedback and DOP Feedback on real-world workload performance
• New T-SQL enhancements that help developers write cleaner, more efficient code
• How temporal tables enable trending over time, point-in-time recovery, and fixing accidental data changes
• Key modern features that make database operations more scalable, predictable, and efficient

Whether you're a DBA or a developer, this session will equip you with practical insights to make your day-to-day work easier — and your SQL Server environments smarter.

💡 Join us and elevate your SQL Server expertise!

🗓️ Date: 22 November 2025
⏰ Time: 18:00 (CET)
🎙️ Speaker: Lee Markum
📌 Topic: Modern SQL Server Features That Make Life Better

Database Innovations: Your Guide to Microsoft Ignite 2025
In two weeks, we will share the latest news and features for SQL, NoSQL, and open-source databases on Azure, Fabric, and SQL Server at Microsoft Ignite. Held November 18 – 21 in San Francisco and online, Ignite is Microsoft’s largest event of the year, and we look forward to sharing product innovations, hearing best practices and insights from customers, and building new connections. Read on for all the Databases breakout sessions, demos, and labs taking place at Microsoft Ignite – and don’t forget to add them to your schedule. All times listed are in PST (Pacific Standard Time).

Databases: End-to-end

Want to see the top news about databases all at once? Starting after the opening keynote on November 18th, these sessions will provide an overview of the latest innovations and announcements:

BRK134 Modern Data, Modern Apps: Innovation with Microsoft Databases (Nov 18th, 1pm)
BRK1702 Innovation Session: Microsoft Fabric and Azure Databases - the data estate for AI (Nov 18th, 2:30pm)

Explore new features and capabilities

Join the product teams from Azure SQL, SQL Server, Azure Cosmos DB, Azure Database for PostgreSQL, and Fabric Databases as they show off the innovative new features that will be announced at Microsoft Ignite.
NoSQL (Azure Cosmos DB, DocumentDB, and Cosmos DB in Fabric)

BRK132 Move fast, save more with MongoDB-compatible workloads on DocumentDB (Nov 19th, 11:30am)
BRK133 How Sitecore built a scalable, isolated SaaS platform on Azure (Nov 19th, 1:30pm)
BRK155 Sam’s Club transforms retail mission-critical apps with Azure (Nov 20th, 8:30am)
BRK228 Real-time analytics and AI apps with Cosmos DB in Fabric (Nov 20th, 9:45am)
THR708 Vibe coding: Ship it faster with GitHub Copilot and Azure Cosmos DB (Nov 20th, 10:30am)
BRK135 From DEV to PROD: How to build agentic memory with Azure Cosmos DB (Nov 20th, 11:00am)
BRK131 How Veeam delivers planet-scale semantic search with Azure Cosmos DB (Nov 20th, 1:00pm)

PostgreSQL (Azure Database for PostgreSQL)

THR705 PostgreSQL on Azure: Your launchpad for intelligent apps and agents (Nov 18th, 3:45pm)
BRK127 Mission-critical PostgreSQL on Azure (Nov 19th, 9am)
BRK130 The blueprint for intelligent AI agents backed by PostgreSQL (Nov 19th, 11:30am)
BRK137 Nasdaq Boardvantage: AI-driven governance on PostgreSQL and AI Foundry (Nov 19th, 2:45pm)
BRK123 AI-assisted migration: The path to powerful performance on PostgreSQL (Nov 20th, 8:30am)
THR706 Simplifying scale-out of PostgreSQL for performant multi-tenant apps (Nov 20th, 10:30am)

SQL (SQL Server, Azure SQL, and SQL in Fabric)

THR707 Elevate SQL development with VSCode, GitHub Copilot and new drivers (Nov 18th, 2:00pm PT)
BRK139 Use Azure Migrate for AI assisted insights and cloud transformation (Nov 18th, 3:45pm PT)
BRK126 Build scalable AI apps with Azure SQL Database Hyperscale (Nov 18th, 5pm)
BRK125 Meet the performance-enhanced next gen Azure SQL Managed Instance (Nov 19th, 10:15am)
BRK220 SQL database in Fabric: The unified database for AI apps and analytics (Nov 19th, 11:30am)
BRK156 How Levi’s is transforming their IT estate with Azure (Nov 19th, 2:45pm)
BRK124 SQL Server 2025: The AI-ready enterprise database (Nov 19th, 3:45pm PT)
THR704 Accelerate SQL Migrations with AI-assisted experience in Azure Arc (Nov 20th, 9:30am)
THR711 Smarter SQL: GitHub Copilot + SSMS 22 (Nov 20th, 3:30pm)

How they did it - real-world lessons from customers

Hear directly from your peers, as developers and IT professionals from a diverse range of organizations share their insights and best practices about using Microsoft databases to support some of their most critical and innovative applications.

BRK126 Build scalable AI apps with Azure SQL Database Hyperscale, feat. Blackrock and Hexagon (Nov 18th, 5pm)
BRK125 Meet the performance-enhanced next gen Azure SQL Managed Instance, feat. Hexure (Nov 19th, 10:15am)
BRK130 The blueprint for intelligent AI agents backed by PostgreSQL, feat. AlphaLife Sciences (Nov 19th, 11:30am)
BRK133 How Sitecore built a scalable, isolated SaaS platform on Azure (Nov 19th, 1:30pm)
BRK137 Nasdaq Boardvantage: AI-driven governance on PostgreSQL and AI Foundry (Nov 19th, 2:45pm)
BRK156 How Levi’s is transforming their IT estate with Azure (Nov 19th, 2:45pm)
BRK124 SQL Server 2025: The AI-ready enterprise database, feat. Ivanti (Nov 19th, 3:45pm PT)
BRK155 Sam’s Club transforms retail mission-critical apps with Azure (Nov 20th, 8:30am)
BRK123 AI-assisted migration: The path to powerful performance on PostgreSQL, feat. Apollo Hospitals and DBTune (Nov 20th, 8:30am)
BRK135 From DEV to PROD: How to build agentic memory with Azure Cosmos DB, feat. Walmart Chile and IntelePeer (Nov 20th, 11:00am)
BRK131 How Veeam delivers planet-scale semantic search with Azure Cosmos DB (Nov 20th, 1:00pm)

Get hands on keyboard with your fave database

In-person attendees can dive deep into Azure SQL Database, Azure Cosmos DB, and Azure Database for PostgreSQL in technical workshops that run throughout Microsoft Ignite. Space is limited – so make sure you register in advance.

LAB515 Build advanced AI Agents with PostgreSQL
Tues, Nov. 18: 6:45 - 8:00 pm
Wed, Nov. 19: 2:00 - 3:15 pm
Thurs, Nov. 20: 1:00 - 2:15 pm
Fri, Nov. 21: 10:45 am - 12:00 pm

LAB518 Multi-Agent Apps with Semantic Kernel, LangChain & Azure Cosmos DB
Wed, Nov. 19: 11:45 am - 1:00 pm
Thurs, Nov. 20: 2:45 - 4:00 pm

LAB530 Build new AI Applications with Azure SQL Databases
Tues, Nov. 18: 1:00 - 2:15 pm PT
Wed, Nov. 19: 10:00 - 11:15 am PT
Thurs, Nov. 20: 9:00 - 10:15 am PT

LAB534 Build real-time analytics with Cosmos DB in Microsoft Fabric
Tues, Nov. 18: 2:45 - 4:00 pm
Wed, Nov. 19: 10:00 - 11:15 am
Thurs, Nov. 20: 10:45 am - 12:00 pm
Fri, Nov. 21: 10:45 am - 12:00 pm

Say hi to the Databases team

Members of the product engineering and marketing teams will be on-site in the Expert Meet-Up Zone and Community Hub. We look forward to meeting you.

SQL Spotlight: Ignite 2025
Ignite 2025: Your SQL Guide

Microsoft Ignite 2025 is less than 30 days away! Whether you’re a data professional, DBA, or developer, this event is your chance to dive in, expand your knowledge, and connect with fellow SQL experts. Join Microsoft leaders and customers as they share best practices and practical insights from across the SQL portfolio. Be the first to hear about the latest releases, dive in with hands-on labs, and explore new learning paths to grow your skill set. Grab your coffee and add the following sessions to your schedule! Be sure to check back for more updates:

5 SQL Sessions You Don’t Want to Miss (in person and online)

1. BRK124: SQL Server 2025: The AI-ready enterprise database
Wed, Nov 19 | 3:45 PM – 4:30 PM PST
Learn how SQL Server 2025 is redefining what’s possible for enterprise data with AI integration, native JSON, REST APIs, and vector search.

2. BRK156: How Levi’s is transforming their IT estate with Azure
Wed, Nov 19 | 2:45 PM - 3:30 PM PST
Hear how American icon Levi’s transformed its 150-year business by migrating Windows Server, SQL Server, SAP and Oracle workloads to Azure.

3. BRK126: Build scalable AI apps with Azure SQL Database Hyperscale
Tue, Nov 18 | 5:00 PM – 5:45 PM PST
Learn how Azure SQL Hyperscale together with AI Foundry delivers a more modern, secure, scalable solution for building AI apps. Industrial IT company Hexagon and investment company BlackRock will join us onstage to share their experience, demos and more.

4. BRK125: Meet the performance-enhanced next-gen Azure SQL Managed Instance
Wed, Nov 19 | 10:15 AM – 11:00 AM PST
This session is all about how to get the most from your SQL Server workloads with Azure SQL Managed Instance.

5. BRK220: SQL database in Fabric: The unified database for AI apps and analytics
Wed, Nov 19 | 11:30 AM – 12:15 PM PST
See how the latest from SQL in Fabric brings transactional and analytical workloads together in one end-to-end AI-driven solution.

Theater sessions (in person only)

1. THR707: Elevate SQL development with VSCode, GitHub Copilot and new drivers
Tue, Nov 18 | 2:00 PM – 2:30 PM PST
A demo-heavy theater session showing how AI-driven tools streamline SQL development. Love coding efficiency? This one’s for you.

2. THR711: Smarter SQL: GitHub Copilot + SSMS 22
Thu, Nov 20 | 3:30 PM – 4:00 PM PST
Hands-on exploration of AI-powered Copilots in SQL Server Management Studio and beyond.

3. THR704: Accelerate SQL Migrations with AI-assisted experience in Azure Arc
Thu, Nov 20 | 9:30 AM – 10:00 AM PST
Learn how Azure Arc simplifies SQL migrations with near-zero downtime.

For the builders: Labs (in person only)

LAB533: Build scalable AI & Data Solutions in SQL Database in Microsoft Fabric
LAB530: Build new AI Applications with Azure SQL Databases

Connect with us at the Expert Meet Up stations! (in person only)

Come back and learn more about the onsite experiences we’re bringing to San Francisco!