Azure SQL Blog

Stop defragmenting and start living: introducing auto index compaction

Dimitri Furman
Mar 18, 2026

Executive summary

Automatic index compaction is a new built-in feature of the MSSQL database engine that compacts indexes in the background with minimal overhead.

Now you can:

  • Stop using scheduled index maintenance jobs.
  • Reduce storage space consumption and save costs.
  • Improve performance by reducing CPU, memory, and disk I/O consumption.

Today, we announce a public preview of automatic index compaction in Azure SQL Database, Azure SQL Managed Instance with the always-up-to-date update policy, and SQL database in Fabric.

Index maintenance without maintenance jobs

Enable automatic index compaction for a database with a single T-SQL command:

ALTER DATABASE [database-name] SET AUTOMATIC_INDEX_COMPACTION = ON;

Once enabled, you no longer need to set up, maintain, and monitor resource-intensive index maintenance jobs, which remain a time-consuming operational task for many DBA teams today.

As the data in the database changes, a background process consolidates rows from partially filled data pages into a smaller number of full pages, and then removes the empty pages. Index bloat is eliminated: the same amount of data now uses a minimal amount of storage space.

A conceptual view of the index compaction process

Resource consumption is reduced because the database engine needs fewer disk I/Os, and less CPU and memory, to process the same amount of data.

By design, the background compaction process acts only on recently modified pages. This means that its own resource consumption is much lower than that of the traditional index maintenance operations (index rebuild and reorganize), which process all pages in an index or its partition.
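For comparison, here is what those traditional operations look like for the demo table used later in this post (dbo.t with primary key index pk_t, defined in the Appendix). This is for illustration only; with automatic index compaction enabled, neither command needs to be scheduled.

/* Traditional index maintenance for dbo.t (illustration only) */
ALTER INDEX pk_t ON dbo.t REORGANIZE;                  /* compacts leaf pages; always online */
ALTER INDEX pk_t ON dbo.t REBUILD WITH (ONLINE = ON);  /* rebuilds the entire index */

Both statements touch every page in the index, which is exactly the cost that the background compaction process avoids.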

For a detailed description of how the feature works, a comparison between automatic index compaction and the traditional index maintenance operations, and ways to monitor the compaction process, see the automatic index compaction documentation.

Compaction in action

To see the effects of automatic index compaction, we wrote a stored procedure that simulates a write-intensive OLTP workload. Each execution of the procedure inserts, updates, deletes, or selects a random number of rows, from 1 to 100, in a 50,000-row table with a clustered index.

We executed this stored procedure using the popular SQLQueryStress tool, with 30 threads and 400 iterations per thread.

The SQLQueryStress tool running an OLTP-like workload
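If you don't have SQLQueryStress at hand, you can drive an approximation of the same workload from a single T-SQL session using the dbo.churn procedure from the Appendix. This sketch runs one session instead of 30 concurrent threads, so the churn accumulates more slowly; 12,000 iterations matches the total number of executions in our test (30 threads × 400 iterations).

/* Single-session approximation of the stress run */
DECLARE @i int = 0;
WHILE @i < 12000
BEGIN
    EXEC dbo.churn;
    SET @i += 1;
END;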

We measured the page density, the number of pages in the leaf level of the table’s clustered index, and the number of logical reads (pages) performed by a test query reading 1,000 rows, at three points in time:

  1. After initially inserting the data and before running the workload.
  2. Once the workload stopped running.
  3. Several minutes later, once the background process completed index compaction.

Here are the results:

                Before workload    After workload    After compaction
Logical reads   25 🟢              1,610 🔴⬆️          35 🟢⬇️
Page density    99.51% 🟢          52.71% 🔴⬇️         96.11% 🟢⬆️
Pages           962 🟢             4,394 🔴⬆️          1,065 🟢⬇️

Before the workload starts, page density is high because nearly all pages are full. The number of logical reads required by the test query is minimal, and so is its resource consumption.

The workload leaves a lot of empty space on pages and increases the page count because of row updates, deletions, and page splits. As a result, immediately after the workload completes, the number of logical reads required by the same test query increases more than 60-fold, which translates into higher CPU and memory usage.

Then, within a few minutes, automatic index compaction removes the empty space from the index, increasing page density back to nearly 100%, reducing logical reads by about 98%, and returning the index very close to its initial compact state. Fewer logical reads mean that the query runs faster and uses less CPU. All of this happens without any user action.

With continuous workloads, index compaction is continuous as well, maintaining higher average page density and reducing resource usage by the workload over time.

The T-SQL code we used in this demo is available in the Appendix.

Conclusion

Automatic index compaction delegates a routine database maintenance operation to the database engine itself, letting administrators and engineers focus on more important work without worrying about index maintenance.

The public preview is a great opportunity to let us know how this new feature works for you. Please share your feedback and suggestions for any improvements we can make.

To let us know your thoughts, you can comment on this blog post, leave feedback at https://aka.ms/sqlfeedback, or email us at sqlaicpreview@microsoft.com.

Appendix

Here is the T-SQL code we used to demonstrate automatic index compaction.

The type of each executed statement and the number of affected rows are randomized to better represent an OLTP workload. While the results demonstrate the effectiveness of automatic index compaction, exact measurements may vary from one execution to the next.

/* Enable automatic index compaction */
ALTER DATABASE CURRENT SET AUTOMATIC_INDEX_COMPACTION = ON;

/* Reset to the initial state */
DROP TABLE IF EXISTS dbo.t;
DROP SEQUENCE IF EXISTS dbo.s_id;
DROP PROCEDURE IF EXISTS dbo.churn;

/* Create a sequence to generate clustered index keys */
CREATE SEQUENCE dbo.s_id 
AS int
START WITH 1 INCREMENT BY 1;

/* Create a test table */
CREATE TABLE dbo.t
(
id int NOT NULL CONSTRAINT df_t_id DEFAULT (NEXT VALUE FOR dbo.s_id),
dt datetime2 NOT NULL CONSTRAINT df_t_dt DEFAULT (SYSDATETIME()),
u uniqueidentifier NOT NULL CONSTRAINT df_t_uid DEFAULT (NEWID()),
s nvarchar(100) NOT NULL CONSTRAINT df_t_s DEFAULT (REPLICATE('c', 1 + 100 * RAND())),
CONSTRAINT pk_t PRIMARY KEY (id)
);

/* Insert 50,000 rows */
INSERT INTO dbo.t (s)
SELECT REPLICATE('c', 50) AS s
FROM GENERATE_SERIES(1, 50000);
GO

/*
Create a stored procedure that simulates 
a write-intensive OLTP workload.
*/
CREATE OR ALTER PROCEDURE dbo.churn
AS
SET NOCOUNT, XACT_ABORT ON;

DECLARE @r float = RAND(CAST(CAST(NEWID() AS varbinary(4)) AS int));

/* Get the type of statement to execute */
DECLARE @StatementType char(6) = CASE
                                 WHEN @r <= 0.15 THEN 'insert'
                                 WHEN @r <= 0.30 THEN 'delete'
                                 WHEN @r <= 0.65 THEN 'update'
                                 WHEN @r <= 1 THEN 'select'
                                 ELSE NULL
                                 END;

/* Get the maximum key value for the clustered index */
DECLARE @MaxKey int = (
                      SELECT CAST(current_value AS int)
                      FROM sys.sequences
                      WHERE name = 's_id'
                            AND
                            SCHEMA_NAME(schema_id) = 'dbo'
                      );

/* Get a random key value within the key range */
DECLARE @StartKey int = 1 + RAND() * @MaxKey;

/* Get a random number of rows, between 1 and 100, to modify or read */
DECLARE @RowCount int = 1 + RAND() * 99;

/* Execute a statement */

IF @StatementType = 'insert'
INSERT INTO dbo.t (id)
SELECT NEXT VALUE FOR dbo.s_id
FROM GENERATE_SERIES(1, @RowCount);

IF @StatementType = 'delete'
DELETE TOP (@RowCount) dbo.t
WHERE id >= @StartKey;

IF @StatementType = 'update'
UPDATE TOP (@RowCount) dbo.t
SET dt = DEFAULT,
    u = DEFAULT,
    s = DEFAULT
WHERE id >= @StartKey;

IF @StatementType = 'select'
SELECT TOP (@RowCount) id, dt, u, s
FROM dbo.t
WHERE id >= @StartKey;
GO

/*
The remainder of this script is executed three times:

1. Before running the workload using SQLQueryStress.
2. Immediately after the workload stops running. 
3. Once automatic index compaction completes several minutes later.
*/

/*
Monitor page density and the number of pages and records
in the leaf level of the clustered index.
*/
SELECT avg_page_space_used_in_percent AS page_density,
       page_count,
       record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.t'), 1, 1, 'DETAILED')
WHERE index_level = 0;

/* Run a test query and measure its logical reads. */
DROP TABLE IF EXISTS #t;

SET STATISTICS IO ON;

SELECT TOP (1000) id, dt, u, s
INTO #t
FROM dbo.t
WHERE id >= 10000;

SET STATISTICS IO OFF;

 

Updated Mar 13, 2026
Version 1.0