Modernization Best Practices and Reusable Assets Blog

Azure SQL’s Native JSON Type: Optimized for Performance

ShrustiKolsur
Feb 09, 2026

Introduction

JSON has become the de facto format for modern applications, from web APIs to microservices and event-driven systems. Azure SQL has supported JSON for years, but it was treated as plain text (stored as nvarchar or varchar). That meant every query involving JSON had to parse the text, which could get expensive as data volume grew.

The new native JSON binary type changes that story. Instead of saving JSON as raw text, Azure SQL can store it in a binary representation that’s optimized for fast reads, efficient in-place updates, and compact storage. You get the flexibility of JSON with performance that behaves more like a structured column.

Learn more about the JSON data type in the official documentation.


A few useful things to know upfront:

  • JSON data is stored in a native binary format, not as plain text
  • Reads are faster because the JSON is already parsed
  • Writes are more efficient, since individual values can be updated in place without rewriting the entire document
  • Storage is more compact, optimized for compression
  • Existing JSON functions continue to work so app changes are minimal
  • Internally, JSON is stored using UTF-8 encoding (Latin1_General_100_BIN2_UTF8)
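
As a quick illustration, here is a minimal sketch of declaring a column with the native JSON type and querying it with an existing function such as JSON_VALUE. The table and column names are illustrative only and are not part of the test schema used later in this post:

-- Declare a column using the native JSON type instead of NVARCHAR(MAX)
CREATE TABLE dbo.DeviceTelemetry (
    TelemetryID INT IDENTITY(1,1) PRIMARY KEY,
    Payload JSON NOT NULL              -- stored in the optimized binary format
);

-- Existing JSON functions continue to work unchanged against the native type
INSERT INTO dbo.DeviceTelemetry (Payload)
VALUES (N'{"deviceId": "D-100", "temperature": 21.5}');

SELECT JSON_VALUE(Payload, '$.deviceId')    AS DeviceId,
       JSON_VALUE(Payload, '$.temperature') AS Temperature
FROM dbo.DeviceTelemetry;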

This blog shares the performance gains observed after migrating JSON from nvarchar/varchar to the native JSON binary type. Results will vary across JSON structures and workloads, so consider this a guide rather than a universal benchmark.

Note: The purpose of this blog is to introduce the native JSON binary data type. We are not covering JSON indexes or JSON functions at this time in order to maintain clarity and focus.

Test Environment Details:

To measure the performance impact of migrating JSON from nvarchar/varchar to the native JSON binary type, a test environment was set up with six tables on Azure SQL Database (General Purpose, Gen5, 2 vCores).

Note: The dataset used in the testing was generated using AI for demonstration purposes.


JSON data stored as nvarchar/varchar data types:

Table Name            | Number of Records | Size (GB)
InventoryTrackingJSON | 400,003           | 4.21
OrderDetailsJSON      | 554,153           | 1.29
CustomerProfileJSON   | 55,001            | 0.16
ProductCatalogJSON    | 100,001           | 0.10
SalesAnalyticsJSON    | 10,000            | 0.08
EmployeeRecordsJSON   | 5,000             | 0.02

 

Total database size: 5.94 GB (59.43% used), based on a maximum configured size of 10 GB, with JSON stored as nvarchar/varchar.                                    
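
The database-level figures quoted here (space used and the configured maximum size) can be checked with standard catalog views and metadata functions. A minimal sketch, run in the context of the test database; the exact monitoring queries used during testing are not shown in this post:

-- Space used by the data files, converted from 8 KB pages to GB
SELECT SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS BIGINT)) * 8 / 1024.0 / 1024 AS SpaceUsedGB
FROM sys.database_files
WHERE type_desc = 'ROWS';

-- Maximum configured size for the database (Azure SQL Database)
SELECT CAST(DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS BIGINT) / 1024.0 / 1024 / 1024 AS MaxSizeGB;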

 

Example schema (OrderDetailsJSON)

One of the core tables used in testing:

CREATE TABLE JSONTest.OrderDetailsJSON (
    OrderDetailID INT IDENTITY(1,1) PRIMARY KEY,
    OrderMetadata NVARCHAR(MAX),           -- JSON: order info, source, salesperson
    ShippingDetails NVARCHAR(MAX),         -- JSON: carrier, priority, addresses
    CustomizationOptions NVARCHAR(MAX),    -- JSON: customizations and add-ons
    CreatedDate DATETIME2 DEFAULT SYSDATETIME(),
    ModifiedDate DATETIME2 DEFAULT SYSDATETIME()
);


Each JSON column simulated a realistic business structure - for example:

OrderMetadata

{
  "orderSource": "Mobile App",
  "salesPerson": "Jane Smith",
  "orderDate": "2025-11-14T10:30:00Z",
  "customerType": "Premium"
}

ShippingDetails

{
  "carrier": "FedEx",
  "priority": "standard",
  "address": { "city": "Anytown", "state": "CA" }
}

CustomizationOptions

{
  "color": "Green",
  "size": "Medium",
  "giftWrap": true
}
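
For reference, rows were loaded as ordinary JSON text into the nvarchar columns. A hypothetical insert for a single OrderDetailsJSON row, using the sample values shown above, would look like this:

INSERT INTO JSONTest.OrderDetailsJSON (OrderMetadata, ShippingDetails, CustomizationOptions)
VALUES (
    N'{"orderSource": "Mobile App", "salesPerson": "Jane Smith", "orderDate": "2025-11-14T10:30:00Z", "customerType": "Premium"}',
    N'{"carrier": "FedEx", "priority": "standard", "address": {"city": "Anytown", "state": "CA"}}',
    N'{"color": "Green", "size": "Medium", "giftWrap": true}'
);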

Performance before migration:

To measure performance differences accurately, a continuous 12-minute test session was run. The load sizes referenced in the results (500, 1K, 2.5K, 5K, 10K, and 25K) represent the number of records read, and each record goes through the following operations (a sketch of this query pattern follows the list):

  • Multiple JSON_VALUE extractions
  • JSON validation using ISJSON
  • Safe type conversions using TRY_CONVERT
  • Aggregation logic
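
The exact workload queries are not reproduced here, but a minimal sketch of the per-record pattern - combining JSON_VALUE extraction, ISJSON validation, TRY_CONVERT type conversion, and aggregation over a batch of rows - might look like the following. The TOP value stands in for the load size:

-- Illustrative read pattern: extract, validate, convert, and aggregate JSON values
SELECT
    COUNT(*)                                                              AS OrdersProcessed,
    SUM(CASE WHEN ISJSON(OrderMetadata) = 1 THEN 1 ELSE 0 END)            AS ValidDocuments,
    COUNT(DISTINCT JSON_VALUE(OrderMetadata, '$.orderSource'))            AS DistinctSources,
    MAX(TRY_CONVERT(DATETIME2, JSON_VALUE(OrderMetadata, '$.orderDate'))) AS LatestOrderDate
FROM (
    SELECT TOP (5000) OrderMetadata        -- load size: 500, 1K, 2.5K, 5K, 10K, or 25K
    FROM JSONTest.OrderDetailsJSON
    ORDER BY OrderDetailID
) AS batch;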

During the 12-minute continuous workload, JSON stored as nvarchar/varchar showed consistent resource pressure, primarily on CPU and storage IO. The monitoring tools reported:

Disclaimer: These results are for illustration purposes only. Actual performance will vary depending on system hardware (CPU cores, memory, disk I/O), database configurations, network latency, and table structures. We recommend validating performance in dev/test to establish a baseline.

[Chart: CPU and Data IO utilization reported during the 12-minute workload with JSON stored as nvarchar/varchar]


Data migration to native JSON binary data type

For testing, native JSON columns were added to the existing tables, and JSON data stored in nvarchar/varchar columns was migrated to the new native JSON binary columns using the CAST function.

Migration Script (example used for all tables)

  1. Add native JSON columns

    ALTER TABLE JSONTest.OrderDetailsJSON
        ADD OrderMetadata_native JSON,
            ShippingDetails_native JSON,
            CustomizationOptions_native JSON;

  2. Migrate existing NVARCHAR/VARCHAR JSON into native JSON

    UPDATE JSONTest.OrderDetailsJSON
    SET OrderMetadata_native        = CAST(OrderMetadata AS JSON),
        ShippingDetails_native      = CAST(ShippingDetails AS JSON),
        CustomizationOptions_native = CAST(CustomizationOptions AS JSON);

Note: After validating that the migrated data was consistent, the original nvarchar/varchar JSON columns were dropped. A rebuild index operation was then performed to remove fragmentation and reclaim space, ensuring that the subsequent storage comparison reflected the true storage footprint of the native JSON binary type.
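
A minimal sketch of that cleanup step for OrderDetailsJSON follows; the statements mirror the description above, and ALTER INDEX ALL is used here simply to rebuild every index on the table:

-- Drop the original text-based JSON columns after validating the migrated data
ALTER TABLE JSONTest.OrderDetailsJSON
    DROP COLUMN OrderMetadata, ShippingDetails, CustomizationOptions;

-- Rebuild indexes to remove fragmentation and reclaim the space freed by the drop
ALTER INDEX ALL ON JSONTest.OrderDetailsJSON REBUILD;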

The same pattern was repeated for all tables. 

Storage footprint after migration:

Table Name            | Number of Records | Size_Before (GB) | Size_After (GB)
InventoryTrackingJSON | 400,003           | 4.21             | 0.60
OrderDetailsJSON      | 554,153           | 1.29             | 0.27
ProductCatalogJSON    | 100,001           | 0.16             | 0.11
SalesAnalyticsJSON    | 10,000            | 0.10             | 0.04
CustomerProfileJSON   | 55,001            | 0.08             | 0.01
EmployeeRecordsJSON   | 5,000             | 0.02             | 0.00


Total database size: 1.06 GB (10.64% used), based on a maximum configured size of 10 GB, with JSON in native binary data type.

After migrating all JSON columns from nvarchar/varchar to the native JSON type, the total database size dropped from 5.94 GB to 1.06 GB - an ~82% reduction in storage.

Performance after migration

After moving all JSON columns from nvarchar/varchar to native JSON, the exact same 12-minute workload was rerun - same query patterns, same workload distribution. Only the JSON storage format was different. Here are the results:

[Chart: CPU and Data IO utilization during the same 12-minute workload with JSON in the native binary type]



Key Metrics (Before vs. After)

The migration didn’t just shrink storage - it made JSON workloads easier for the engine to process. With the native JSON type, the same workload completed with ~27% lower CPU and ~80% lower Data IO.

Query duration, Throughput, & Logical Reads

Query duration

A comparison was conducted using the same workload, dataset, indexes, and query patterns - with the only variable being the JSON storage format. The outcome showed a clear trend in query duration.

[Chart: average query duration by load level (500 to 25K records) for JSON stored as nvarchar/varchar vs. native JSON]


Across every single load level, native JSON cut query duration by 2.5x - 4x. Even more interesting: as the workload scaled 50x, native JSON latency stayed almost flat, while text JSON steadily slowed down.

Note: The duration values shown represent the average across multiple runs within the performance test described earlier.

Throughput improvement

The benefits also translated directly into throughput. Overall, native JSON processed roughly 3x to 4x more records per second (rps). For example:

Load       | Throughput Before (rps) | Throughput After (rps)
Small load | ~60                     | ~240
High load  | ~690                    | ~2300
Peak load  | ~1360                   | ~4700

Logical reads improvement

Native JSON significantly reduced I/O work as well:

  • Logical reads per run dropped from ~168,507 → ~33,880
  • An ~80% reduction in pages read

Lower logical reads directly correlate with improved scalability - fewer pages scanned means less work required to serve each request, especially under increasing load.
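
The logical read figures above come from the test harness's own instrumentation, but one lightweight way to observe the same metric for an individual query is SET STATISTICS IO. A minimal sketch, using one of the extraction patterns from the workload:

SET STATISTICS IO ON;

-- Any workload query, e.g. a JSON_VALUE extraction over OrderDetailsJSON
SELECT COUNT(DISTINCT JSON_VALUE(OrderMetadata_native, '$.customerType'))
FROM JSONTest.OrderDetailsJSON;

SET STATISTICS IO OFF;
-- The Messages output then reports logical reads (pages read from the buffer pool) per table.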


Sample results:

[Screenshots: sample workload output for JSON (nvarchar/varchar) and JSON (native binary)]


Cache management

To ensure the performance improvement was not simply a result of native JSON fitting more easily in memory, the test cleared the cache at regular intervals using DBCC DROPCLEANBUFFERS, forcing repeated cold-start execution. As expected, query duration increased immediately after each cache clear for both text JSON and native JSON, yet the relative benefit remained consistent: native JSON continued to show a 2.5x–4x reduction in duration across all load levels. This confirms that the gains are not due to buffer pool residency alone, but from reduced JSON parsing work during execution.
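
A minimal sketch of the cache-clearing step used between selected runs is shown below. DBCC DROPCLEANBUFFERS requires elevated permissions and should only be used in dev/test environments:

-- Flush clean pages from the buffer pool to force cold-start reads on the next run
CHECKPOINT;                 -- write dirty pages to disk first
DBCC DROPCLEANBUFFERS;      -- then drop clean buffers so data must be re-read from storage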

For example, in the chart below for the small load, runs 3 and 6 were executed right after clearing the cache. Although both formats show higher duration, the relative performance advantage remains unchanged.

[Chart: small-load query duration per run for both formats; runs 3 and 6 follow a cache clear]


Conclusion

Native JSON storage in Azure SQL isn’t just a new way to store semi-structured data - it delivers tangible performance and efficiency gains. In our case, migrating JSON from NVARCHAR to the new binary JSON type resulted in:

  • ~82% reduction in total database storage (5.94 GB → 1.06 GB)
  • ~27% lower CPU and ~80% lower Data IO for the same workload
  • 2.5x - 4x faster query duration across all load levels
  • ~80% fewer logical reads per run (~168,507 → ~33,880)
  • Roughly 3x - 4x higher throughput in records processed per second


If your workload involves frequent reading or updating of JSON documents - especially large or deeply nested ones - the native JSON type is worth evaluating. Your gains may vary based on JSON structure, indexing strategy, and workload patterns, but the benefits of eliminating repeated text parsing and reducing storage cost are difficult to ignore.

As SQL workloads continue to blend structured and semi-structured data, native JSON brings Azure SQL more in line with modern application design while preserving the maturity and stability of the relational engine.

Feedback and Suggestions

If you have feedback or suggestions, please contact the Databases SQL Customer Success Engineering (Ninja) Team (datasqlninja@microsoft.com). 

Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide.

Updated Feb 05, 2026
Version 1.0