Executive summary
New improvements in optimized locking further reduce the overhead of locks in the MSSQL database engine. For many transactions, row and page locks aren’t taken at all while still retaining ACID transaction semantics.
The improvements require no changes to customer workloads or configuration and are part of optimized locking in SQL Server 2025, Azure SQL Database, SQL database in Fabric, and Azure SQL Managed Instance.
Optimized locking recap
Optimized locking was originally released in Azure SQL Database in 2023. The feature has two components:
- Transaction ID (TID) locking
- Lock After Qualification (LAQ)
With TID locking, while individual row and page locks are still taken, they are not held until the end of transaction because the transaction remains protected by a single TID lock. Concurrency is improved because lock escalation is much less likely to occur. Memory usage is also improved because locks are released soon after they are acquired.
With LAQ, locks are taken just before a qualified row is modified. This optimistic locking strategy improves concurrency because locks aren’t held while qualifying rows for modification.
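To see whether optimized locking is active in a given database, a quick check is available via `DATABASEPROPERTYEX` (a minimal sketch; the property name below is the documented one for optimized locking):

```sql
/* Returns 1 if optimized locking is enabled for the current database */
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsOptimizedLockingOn') AS is_optimized_locking_on;
```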
"At Infios, we are very excited about several new features in SQL Server 2025 and the vast number of opportunities for performance improvements. We are most excited about the optimized locking feature and how it can drastically help reduce locking across our customers and all their unique workloads."
- Tim Radney, SaaS Operations Manager at Infios and a Data Platform MVP
Improvements
In recent months, we have made two improvements that make a database more lock-free and make the benefits of optimized locking available to more workloads:
- Skip index locks (SIL)
- Query plan LAQ feedback persistence
These improvements are in effect if read committed snapshot isolation (RCSI) is enabled for the database, which is the default in Azure SQL Database and SQL database in Fabric. To gain the most benefit from optimized locking, enable RCSI.
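For databases where RCSI is not yet on, the following sketch checks the current setting and enables it. `YourDatabase` is a placeholder; `WITH ROLLBACK IMMEDIATE` terminates open transactions, so schedule this accordingly:

```sql
/* Check whether RCSI is enabled for a database */
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'YourDatabase';

/* Enable RCSI (rolls back in-flight transactions immediately) */
ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```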
Skip index locks
With TID locking, lock escalation typically doesn’t occur because locks are released fast enough to avoid crossing the escalation threshold.
A large transaction that modifies many rows without lock escalation must acquire and release many locks. These locks don’t consume a lot of memory because they are released quickly. However, each lock acquisition and release requires a call into the Lock Manager component within the engine. Cumulatively for a large transaction, the overhead of these calls might be significant.
The SIL improvement reduces this overhead by skipping row and page locks when they are not required to guarantee ACID semantics.
When transactions are protected by TID locks, and when RCSI and LAQ are used, an exclusive lock on a modified row is only necessary if there might be other queries accessing the row and expecting it to be stable. Examples of such queries are those that use the Repeatable Read or Serializable isolation levels, or the corresponding locking hints. We call such queries Row Locking Queries, or RLQ.
In the SIL improvement, the engine was modified to maintain a special No-RLQ bit for each page in the buffer pool. This bit indicates the absence of RLQ queries accessing any row on the page. When this bit is set, a DML transaction can skip taking row and page locks and instead modify a row with just an exclusive page latch.
Based on Azure SQL Database telemetry, RLQ queries are uncommon in workloads using RCSI or snapshot isolation transactions. Telemetry also tells us that at least half of all DML queries, on average, use LAQ. This indicates that many workloads can benefit from the SIL improvement.
Query plan LAQ feedback persistence
LAQ uses an optimistic locking strategy. Locks are taken only after a row is qualified for modification, not while qualifying the row. This means that statement processing might need to be internally restarted if another transaction modified the row after it was read but before it was modified by the current transaction. If statement restarts happen frequently, the use of LAQ might introduce a noticeable overhead because the work to process the DML statement needs to be redone.
The engine tracks the potentially wasted work and statement restarts. In the original version of optimized locking, this data is aggregated across all statements and tracked in memory at the database level. If the potentially wasted work or statement restarts exceed thresholds, LAQ is disabled for the entire database, and re-enabled when they fall back below the thresholds.
With the new improvement, in addition to tracking at the database level, the engine also tracks LAQ feedback at the query plan level and persists it in Query Store. If thresholds are exceeded for a particular query plan, LAQ is disabled only for that plan rather than for the entire database. The rest of the workload still benefits from LAQ.
Skipping locks – a demo
To demonstrate the SIL improvement, we used SQL Server 2025 to run a workload that executes an UPDATE statement 100 times in a loop. Each statement updates a random number of rows in a table with a clustered index.
We used an extended events session with a histogram target to collect the lock_acquired event. Each histogram bucket counts the number of acquired locks for a specific resource type, including KEY and PAGE.
Here are the results:
For this execution of the demo, 99.6% of key locks and 79% of page locks are skipped.
For details, see the Appendix later in this post.
Improvement at the Azure scale
To zoom out and demonstrate the SIL improvement broadly, the next chart summarizes telemetry data for about 25K databases in Azure SQL Database. The chart shows the ratio of the total skipped to total acquired locks by index type.
Broadly, the improvement is most pronounced for nonclustered indexes, where 81% of locks are skipped.
Conclusion
Improvements in optimized locking show our continuing investment in this core database engine feature. We fine-tuned optimized locking beyond an already successful v1 release and further reduced the overhead of locks.
For more information about these two improvements, see Skip index locks and LAQ heuristics.
As of this writing, the improvements are enabled in SQL Server 2025 and for most databases in Azure SQL Database and SQL database in Fabric. The rollout to the remainder of the Azure SQL fleet including Azure SQL Managed Instance (with the always-up-to-date and SQL Server 2025 update policy) is in progress.
Appendix
Here are the T-SQL scripts to demonstrate SIL in action.
Demo setup:
/*
-- Reset the demo
USE tempdb;
ALTER DATABASE olv2 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE IF EXISTS olv2;
*/
CREATE DATABASE olv2;
GO
USE olv2;
/* Enable RCSI, ADR, and optimized locking */
ALTER DATABASE olv2 SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE olv2 SET ACCELERATED_DATABASE_RECOVERY = ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE olv2 SET OPTIMIZED_LOCKING = ON WITH ROLLBACK IMMEDIATE;
/* Create a sequence to generate clustered index keys */
CREATE SEQUENCE dbo.s_id
AS int
START WITH 1 INCREMENT BY 1;
/* Create a test table */
CREATE TABLE dbo.t
(
id int NOT NULL CONSTRAINT df_t_id DEFAULT (NEXT VALUE FOR dbo.s_id),
dt datetime2 NOT NULL CONSTRAINT df_t_dt DEFAULT (SYSDATETIME()),
u uniqueidentifier NOT NULL CONSTRAINT df_t_uid DEFAULT (NEWID()),
s nchar(40) NOT NULL CONSTRAINT df_t_s DEFAULT (REPLICATE('c', 1 + 39 * RAND())),
CONSTRAINT pk_t PRIMARY KEY (id)
);
/* Insert 50,000 rows */
INSERT INTO dbo.t (s)
SELECT REPLICATE('olv2', 10) AS s
FROM GENERATE_SERIES(1, 50000);
GO
/*
Create a stored procedure that updates a random number of rows
*/
CREATE OR ALTER PROCEDURE dbo.update_rows
AS
SET NOCOUNT, XACT_ABORT ON;
/* Get the maximum key value for the clustered index */
DECLARE @MaxKey int = (
SELECT CAST(current_value AS int)
FROM sys.sequences
WHERE name = 's_id'
    AND SCHEMA_NAME(schema_id) = 'dbo'
);
/* Get a random key value within the key range */
DECLARE @StartKey int = 1 + RAND(CAST(CAST(NEWID() AS varbinary(3)) AS int)) * @MaxKey;
/* Get a random number of rows between 1 and 500 */
DECLARE @RowCount int = 1 + RAND() * 499;
/* Update rows */
UPDATE TOP (@RowCount) dbo.t
SET dt = DEFAULT,
u = DEFAULT,
s = DEFAULT
WHERE id >= @StartKey;
GO
/* Create an event session to collect lock statistics */
IF EXISTS (
SELECT 1
FROM sys.server_event_sessions
WHERE name = N'olv2_locks'
)
DROP EVENT SESSION olv2_locks ON SERVER;
CREATE EVENT SESSION olv2_locks ON SERVER
ADD EVENT sqlserver.lock_acquired
(
SET collect_database_name = 1
WHERE database_name = N'olv2'
)
ADD TARGET package0.histogram
(
SET filtering_event_name = N'sqlserver.lock_acquired',
source = N'resource_type',
source_type = 0
);
GO
Execute the workload and collect a histogram of locks by resource type:
USE olv2;
ALTER EVENT SESSION olv2_locks ON SERVER STATE = START;
GO
EXEC dbo.update_rows;
GO 100
WITH
histogram_target AS
(
SELECT TRY_CAST(st.target_data AS xml) AS target_data
FROM sys.dm_xe_sessions AS s
INNER JOIN sys.dm_xe_session_targets AS st
ON s.address = st.event_session_address
WHERE s.name = 'olv2_locks'
),
lock_type_histogram AS
(
SELECT hb.slot.value('(@count)[1]', 'bigint') AS slot_count,
hb.slot.value('(value/text())[1]', 'int') AS lock_type
FROM histogram_target AS ht
CROSS APPLY ht.target_data.nodes('/HistogramTarget/Slot') AS hb(slot)
)
SELECT mv.map_value AS lock_type_desc,
lth.slot_count AS lock_acquired_count
FROM lock_type_histogram AS lth
INNER JOIN sys.dm_xe_map_values AS mv
ON lth.lock_type = mv.map_key
WHERE mv.name = 'lock_resource_type'
ORDER BY lock_type_desc;
ALTER EVENT SESSION olv2_locks ON SERVER STATE = STOP;
GO
Each workload run can produce slightly different results because other activities (query compilation, file growth, page splits, etc.) also take locks if they occur during the run. However, in this demo, the number of key locks with SIL enabled is always reduced.
For comparison purposes, we used the undocumented and unsupported trace flag 7194 to disable SIL. To re-enable SIL, disable this trace flag and then rerun the ALTER DATABASE … SET OPTIMIZED_LOCKING = ON statement, or restart the database.