Azure SQL
Database DevOps (preview) in SSMS 22.4.1
Database DevOps tooling for Microsoft SQL brings the benefits of database-as-code to your development workflow. At its core are SQL database projects, which enable you to source control your database schema, perform reliable deployments to any environment, and integrate code quality checks into your development process.

Zero Trust for data: Make Microsoft Entra authentication for SQL your policy baseline
A policy-driven path from enabled to enforced.

Why this matters now

Security and compliance programs were once built on the assumption that internal networks were inherently safer. Cloud adoption, remote work, and supply-chain compromise have steadily invalidated that model. U.S. federal guidance has now formalized this shift: Executive Order 14028 calls for modernizing cybersecurity and accelerating Zero Trust adoption, and OMB Memorandum M-22-09 sets a federal Zero Trust strategy with specific objectives and timelines.

Meanwhile, attacker economics are changing. Automation and AI make reconnaissance, phishing, and credential abuse cheaper and faster. That concentrates risk on identity—the control plane that sits in front of systems, applications, and data. In Zero Trust, the question is no longer "is the network trusted," but "is this request verified, governed by policy, and least-privilege?"

Why database authentication is a first-order Zero Trust control

Databases are universally treated as crown-jewel infrastructure. Yet many data estates still rely on legacy patterns: password-based SQL authentication, long-lived secrets embedded in apps, and shared administrative accounts that persist because migration feels risky. This is exactly the kind of implicit trust Zero Trust architectures aim to remove.

NIST SP 800-207 defines Zero Trust as eliminating implicit trust based solely on network location or ownership and focusing controls on protecting resources. In that model, every new database connection is not "plumbing"—it is an access decision about sensitive data. If the authentication mechanism sits outside the enterprise identity plane, governance becomes fragmented and policy enforcement becomes inconsistent.

What changes when SQL uses Microsoft Entra authentication

Microsoft Entra authentication enables users and applications to connect to SQL using enterprise identities instead of usernames and passwords. Across Azure SQL and SQL Server enabled by Azure Arc, Entra-based authentication aligns database access with the same identity controls organizations use everywhere else.

The security and compliance outcomes that leaders care about:

- Reduce password and secret risk: move away from static passwords and embedded credentials.
- Centralize governance: bring database access under the same identity policies, access reviews, and lifecycle controls used across the enterprise.
- Improve auditability: tie access to enterprise identities and create a consistent control surface for reporting.
- Enable policy enforcement at scale: move from "configured" controls to "enforced" controls through governance and tooling.

This is why Entra authentication is a high-ROI modernization step: it collapses multiple security and operational objectives into one effort (identity modernization) rather than a set of ongoing compensating programs (password rotation, bespoke exceptions, and perpetual secret-hygiene projects).

Why AI makes this a high-priority decision

AI accelerates both reconnaissance and credential abuse, which concentrates risk on identity. As a result, policy makers increasingly treat phishing-resistant authentication and centralized identity enforcement as foundational—not optional.
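To make the end state concrete before walking through the path to get there, here is a minimal T-SQL sketch of what identity-plane access looks like inside a database. The group and identity names are hypothetical placeholders; CREATE USER ... FROM EXTERNAL PROVIDER is the standard syntax for creating Microsoft Entra-based principals in Azure SQL.

```sql
-- Create a database user mapped to a Microsoft Entra security group (hypothetical name).
-- Membership, access reviews, and lifecycle are then governed in Entra, not in the database.
CREATE USER [SQL-Data-Readers] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [SQL-Data-Readers];

-- Create a database user for an application's managed identity (hypothetical name),
-- removing the need for a password or secret in the connection string.
CREATE USER [orders-api-identity] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [orders-api-identity];
ALTER ROLE db_datawriter ADD MEMBER [orders-api-identity];
```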
A practical path: from enabled to enforced

Successful security programs define a clear end state, a measurable glide path, and an enforcement model. A pragmatic approach to modernizing SQL access typically includes:

1. Discover active usage: identify which logins and users are actively connecting and which are no longer required.
2. Establish Entra as the identity authority: enable Entra authentication on SQL logical servers, starting in mixed mode to reduce disruption.
3. Recreate principals using Entra identities: replace SQL authentication logins and users with Entra users, groups, service principals, and managed identities.
4. Modernize application connectivity: update drivers and connection patterns to use Entra-based authentication and managed identities.
5. Validate, then enforce: confirm the absence of password-based SQL authentication traffic, then move to Entra-only where available and enforce via policy (a minimal inventory query is sketched after the closing section).

This sequencing mitigates risk early and defers enforcement until validation is complete. For a comprehensive migration strategy, refer to Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide.

Choosing which projects to fund — and which ones to stop

When making investment decisions, prioritize database identity projects that demonstrate clear risk reduction and durable security benefits:

- Microsoft Entra authentication as the default for new SQL workloads, with a defined migration path for existing workloads.
- Managed identities for application-to-database connectivity, to eliminate stored secrets.
- Centralized governance for privileged database access using enterprise identity controls.

At the same time, organizations should explicitly de-prioritize investments that perpetuate password risk: password rotation projects that preserve SQL authentication, bespoke scripts maintaining shared logins, and exception processes that do not scale.

Security and scale are not competing goals

Security is often framed as a drag on innovation, but database identity is a case where the two reinforce each other. When enterprise identity governs access, onboarding new applications and users shifts from handing out credentials to managing policy. Compliance reporting becomes uniform rather than bespoke, and a single control framework makes it easier to grow consistently. Modern database authentication is not solely about mitigating risk—it establishes a scalable operational framework for secure data access.

A scorecard for leadership readiness

To elevate the conversation from implementation to governance, use outcome-based metrics:

- Coverage: percentage of SQL workloads with Entra authentication enabled.
- Enforcement: percentage operating in Entra-only mode after validation.
- Secret reduction: number of applications still relying on stored database passwords.
- Privilege hygiene: admin access governed through enterprise identity controls.
- Audit evidence: ability to produce identity-backed access reports on demand.

These map directly to Zero Trust maturity expectations and provide a defensible definition of "done."

Closing

Zero Trust is an operating posture, not a single control. For most organizations, the fastest way to make that posture measurable is to standardize database access on the same identity plane used everywhere else. If you are looking for a single investment that improves security, reduces audit friction, and supports responsible AI adoption, modernizing SQL access with Microsoft Entra authentication—and driving it from enabled to enforced—is one of the most durable choices you can make.
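As a starting point for the validation step above, the following inventory query is a sketch (not an official checklist) that lists database principals by authentication type, so you can see which users still rely on passwords stored in the database versus Microsoft Entra identities. Run it in each user database.

```sql
-- DATABASE = contained SQL user with a password; EXTERNAL = Microsoft Entra user, group,
-- service principal, or managed identity; INSTANCE = login-based user.
SELECT name,
       type_desc,
       authentication_type_desc,
       create_date
FROM sys.database_principals
WHERE type IN ('S', 'U', 'E', 'X')
  AND name NOT IN ('dbo', 'guest', 'INFORMATION_SCHEMA', 'sys')
ORDER BY authentication_type_desc, name;
```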
References

- US Government sets forth Zero Trust architecture strategy and requirements (Microsoft Security Blog)
- Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide (Microsoft Tech Community)
- OMB Memorandum M-22-09: Federal Zero Trust Strategy (White House)
- NIST SP 800-207: Zero Trust Architecture
- CISA: Zero Trust
- Enforce Microsoft Entra-only authentication for Azure SQL Database and Azure SQL Managed Instance

Stop defragmenting and start living: introducing auto index compaction
Executive summary

Automatic index compaction is a new built-in feature in the MSSQL database engine that compacts indexes in the background with minimal overhead. Now you can:

- Stop using scheduled index maintenance jobs.
- Reduce storage space consumption and save costs.
- Improve performance by reducing CPU, memory, and disk I/O consumption.

Today, we announce a public preview of automatic index compaction in Azure SQL Database, Azure SQL Managed Instance with the always-up-to-date update policy, and SQL database in Fabric.

Index maintenance without maintenance jobs

Enable automatic index compaction for a database with a single T-SQL command:

```sql
ALTER DATABASE [database-name] SET AUTOMATIC_INDEX_COMPACTION = ON;
```

Once enabled, you no longer need to set up, maintain, and monitor resource-intensive index maintenance jobs, a time-consuming operational task for many DBA teams today. As the data in the database changes, a background process consolidates rows from partially filled data pages into a smaller number of full pages, and then removes the empty pages. Index bloat is eliminated: the same amount of data now uses a minimal amount of storage space. Resource consumption is reduced because the database engine needs fewer disk I/Os and less CPU and memory to process the same amount of data.

By design, the background compaction process acts only on recently modified pages. This means its own resource consumption is much lower than that of the traditional index maintenance operations (index rebuild and reorganize), which process all pages in an index or its partition. For a detailed description of how the feature works, a comparison between automatic index compaction and the traditional index maintenance operations, and the ways to monitor the compaction process, see automatic index compaction in documentation.

Compaction in action

To see the effects of automatic index compaction, we wrote a stored procedure that simulates a write-intensive OLTP workload. Each execution of the procedure inserts, updates, deletes, or selects a random number of rows, from 1 to 100, in a 50,000-row table with a clustered index. We executed this stored procedure using the popular SQLQueryStress tool, with 30 threads and 400 iterations on each thread.

We measured the page density, the number of pages in the leaf level of the table's clustered index, and the number of logical reads (pages) used by a test query reading 1,000 rows, at three points in time:

1. After initially inserting the data and before running the workload.
2. Once the workload stopped running.
3. Several minutes later, once the background process completed index compaction.

Here are the results:

|               | Before workload | After workload | After compaction |
|---------------|-----------------|----------------|------------------|
| Logical reads | 25 🟢           | 1,610 🔴⬆️      | 35 🟢⬇️           |
| Page density  | 99.51% 🟢       | 52.71% 🔴⬇️     | 96.11% 🟢⬆️       |
| Pages         | 962 🟢          | 4,394 🔴⬆️      | 1,065 🟢⬇️        |

Before the workload starts, page density is high because nearly all pages are full. The number of logical reads required by the test query is minimal, and so is its resource consumption. The workload leaves a lot of empty space on pages and increases the number of pages because of row updates and deletions, and because of page splits. As a result, immediately after the workload completes, the number of logical reads required for the same test query increases more than 60 times, which translates into higher CPU and memory usage.
But then, within a few minutes, automatic index compaction removes the empty space from the index, increasing page density back to nearly 100%, reducing logical reads by about 98%, and getting the index very close to its initial compact state. Fewer logical reads mean that the query is faster and uses less CPU. All of this happens without any user action. With continuous workloads, index compaction is continuous as well, maintaining a higher average page density and reducing resource usage by the workload over time. The T-SQL code we used in this demo is available in the Appendix.

Conclusion

Automatic index compaction delegates a routine database maintenance operation to the database engine itself, letting administrators and engineers focus on more important work without worrying about index maintenance. The public preview is a great opportunity to let us know how this new feature works for you. Please share your feedback and suggestions for any improvements we can make. To let us know your thoughts, you can comment on this blog post, leave feedback at https://aka.ms/sqlfeedback, or email us at sqlaicpreview@microsoft.com.

Appendix

Here is the T-SQL code we used to demonstrate automatic index compaction. The type of executed statement and the number of affected rows are randomized to better represent an OLTP workload. While the results demonstrate the effectiveness of automatic index compaction, exact measurements may vary from one execution to the next.

```sql
/* Enable automatic index compaction */
ALTER DATABASE CURRENT SET AUTOMATIC_INDEX_COMPACTION = ON;

/* Reset to the initial state */
DROP TABLE IF EXISTS dbo.t;
DROP SEQUENCE IF EXISTS dbo.s_id;
DROP PROCEDURE IF EXISTS dbo.churn;

/* Create a sequence to generate clustered index keys */
CREATE SEQUENCE dbo.s_id AS int START WITH 1 INCREMENT BY 1;

/* Create a test table */
CREATE TABLE dbo.t
(
    id int NOT NULL CONSTRAINT df_t_id DEFAULT (NEXT VALUE FOR dbo.s_id),
    dt datetime2 NOT NULL CONSTRAINT df_t_dt DEFAULT (SYSDATETIME()),
    u uniqueidentifier NOT NULL CONSTRAINT df_t_uid DEFAULT (NEWID()),
    s nvarchar(100) NOT NULL CONSTRAINT df_t_s DEFAULT (REPLICATE('c', 1 + 100 * RAND())),
    CONSTRAINT pk_t PRIMARY KEY (id)
);

/* Insert 50,000 rows */
INSERT INTO dbo.t (s)
SELECT REPLICATE('c', 50) AS s
FROM GENERATE_SERIES(1, 50000);
GO
```
```sql
/* Create a stored procedure that simulates a write-intensive OLTP workload. */
CREATE OR ALTER PROCEDURE dbo.churn
AS
SET NOCOUNT, XACT_ABORT ON;

DECLARE @r float = RAND(CAST(CAST(NEWID() AS varbinary(4)) AS int));

/* Get the type of statement to execute */
DECLARE @StatementType char(6) = CASE
                                     WHEN @r <= 0.15 THEN 'insert'
                                     WHEN @r <= 0.30 THEN 'delete'
                                     WHEN @r <= 0.65 THEN 'update'
                                     WHEN @r <= 1 THEN 'select'
                                     ELSE NULL
                                 END;

/* Get the maximum key value for the clustered index */
DECLARE @MaxKey int = (
    SELECT CAST(current_value AS int)
    FROM sys.sequences
    WHERE name = 's_id' AND SCHEMA_NAME(schema_id) = 'dbo'
);

/* Get a random key value within the key range */
DECLARE @StartKey int = 1 + RAND() * @MaxKey;

/* Get a random number of rows, between 1 and 100, to modify or read */
DECLARE @RowCount int = 1 + RAND() * 99;

/* Execute a statement */
IF @StatementType = 'insert'
    INSERT INTO dbo.t (id)
    SELECT NEXT VALUE FOR dbo.s_id
    FROM GENERATE_SERIES(1, @RowCount);

IF @StatementType = 'delete'
    DELETE TOP (@RowCount) dbo.t
    WHERE id >= @StartKey;

IF @StatementType = 'update'
    UPDATE TOP (@RowCount) dbo.t
    SET dt = DEFAULT, u = DEFAULT, s = DEFAULT
    WHERE id >= @StartKey;

IF @StatementType = 'select'
    SELECT TOP (@RowCount) id, dt, u, s
    FROM dbo.t
    WHERE id >= @StartKey;
GO

/* The remainder of this script is executed three times:
   1. Before running the workload using SQLQueryStress.
   2. Immediately after the workload stops running.
   3. Once automatic index compaction completes several minutes later.
*/

/* Monitor page density and the number of pages and records
   in the leaf level of the clustered index. */
SELECT avg_page_space_used_in_percent AS page_density,
       page_count,
       record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.t'), 1, 1, 'DETAILED')
WHERE index_level = 0;

/* Run a test query and measure its logical reads. */
DROP TABLE IF EXISTS #t;
SET STATISTICS IO ON;
SELECT TOP (1000) id, dt, u, s
INTO #t
FROM dbo.t
WHERE id >= 10000;
SET STATISTICS IO OFF;
```
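If you want to reproduce the demo without SQLQueryStress, a single-session loop like the one below can drive the same workload procedure before you re-run the monitoring queries. This is an illustrative sketch, not part of the original demo: one session running serially will produce smaller numbers than the 30 concurrent threads used above.

```sql
-- Run the workload procedure 12,000 times (roughly 30 threads x 400 iterations,
-- but serialized in one session), then wait a few minutes for automatic index
-- compaction and re-run the monitoring queries above.
DECLARE @i int = 0;
WHILE @i < 12000
BEGIN
    EXEC dbo.churn;
    SET @i += 1;
END;
```

Stream data in near real time from SQL MI to Azure Event Hubs - Public preview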
How do I modernize an existing application without rewriting it?

Many business-critical applications still rely on architectures where the database is the most dependable integration point. These applications may have been built years ago, long before event-driven patterns became mainstream. Even after moving such workloads to Azure, teams are often left with the same question: how do we get data changes out of the database quickly, reliably, and without adding more custom plumbing?

This is where Change Event Streaming (CES) comes in. We are happy to announce that Change Event Streaming for Azure SQL Managed Instance is now in Public Preview. CES enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time.

For workloads running on Azure SQL Managed Instance, this matters especially because many of them are existing line-of-business applications, modernized from on-premises SQL Server environments into Azure. Those applications are often still central to the business, but they were not originally designed to publish events to downstream systems. CES helps bridge that gap without requiring you to redesign the application itself.

What is Change Event Streaming?

Change Event Streaming is a capability that captures committed row changes from your database and publishes them to Azure Event Hubs or Fabric Eventstreams. Instead of relying on periodic polling, custom ETL jobs, or additional connectors, CES lets SQL push changes out as they happen. This opens the door to near-real-time integrations while keeping the architecture simpler and closer to the source of truth.

Why CES matters for Azure SQL Managed Instance

Incremental modernization for existing applications

Azure SQL Managed Instance is a database of choice where application compatibility matters and where teams want to modernize from on-premises SQL Server into Azure with minimal disruption. In these environments, the database often becomes the most practical place to tap into business events - especially when the application itself was not designed to emit events or integrate in real time.

With CES, you do not need to retrofit an older application to emit events itself. You can publish changes at the data layer and let downstream services react from there. This makes CES a practical tool for modernization programs that need to move step by step rather than through a full rewrite.

Lower operational complexity

Before CES, teams typically had to assemble integration flows out of polling processes, ETL pipelines, custom code, or third-party connectors. Those approaches can work, but they usually bring more moving parts, more credentials to manage, more monitoring overhead, and more latency tuning.

With CES, SQL Managed Instance streams changes directly to the configured destination. This reduces architectural sprawl and helps teams focus on consuming the events instead of maintaining the mechanics of moving them.

Better decoupling across the estate

Once changes are published to Azure Event Hubs or Fabric Eventstreams, multiple downstream systems can consume them independently. That is useful when one operational workload needs to feed analytics platforms, integration services, caches, search indexes, or new application components at the same time. Instead of teaching an existing application to integrate with every destination directly, you can stream once from the database and let the message bus handle fan-out.
Typical scenarios

Breaking down monoliths

Many modernization efforts start with a large existing application and a database that serves many business functions. CES can help you carve out one capability at a time. A new component (microservice) can subscribe to events from selected tables, build its own behavior around those changes, and be validated incrementally before broader cutover decisions are made.

Real-time integration for line-of-business systems

If an operational system running on SQL Managed Instance needs to notify other platforms when data changes, CES provides a direct path to doing that. This can help with partner integrations, internal workflows, or downstream business processes that should react quickly when transactions are committed.

Real-time analytics

Operational data often becomes more valuable when it can be analyzed quickly. CES can stream data changes into Fabric Eventstreams or Azure Event Hubs, from where they can be consumed by analytics and stream processing services for timely insights or actions.

Cache and index refresh

Applications often depend on caches or search indexes that need to stay aligned with transactional data. CES can provide a cleaner alternative to custom synchronization logic by publishing changes as they occur.

How it works

CES uses transaction log-based capture to stream changes with minimal impact on the publishing workload. Events are emitted in a structured JSON format that follows the CloudEvents standard and includes details such as the operation type, primary key, and before/after values. Azure SQL Managed Instance can publish these events to Azure Event Hubs or Fabric Eventstreams using AMQP or Kafka protocols, depending on how you connect your downstream consumers.

Conclusion

Change Event Streaming for Azure SQL Managed Instance is an important step for customers who want to make existing applications more connected, easier to decompose into smaller pieces, and easier to integrate with modern data and application platforms. For teams modernizing long-lived SQL Server workloads in Azure, CES offers a practical path: keep the application stable, tap into the data layer, and start enabling near-real-time scenarios without building another custom integration stack. As CES enters Public Preview for Azure SQL Managed Instance, we encourage you to explore where it can simplify your architecture and accelerate modernization efforts.

Availability notes

Besides SQL Server 2025 and Azure SQL Database, where CES is already in Public Preview, CES is available as of today in Public Preview for Azure SQL Managed Instance. Just make sure that your SQL MI update policy is set to "Always up to date" or "SQL Server 2025". This preview brings the same core CES capability to SQL Managed Instance workloads, helping customers apply event-driven patterns to existing operational systems without adding another custom integration layer.

For feature details, configuration guidance, and frequently asked questions, see:

- Feature Overview
- CES: Frequently Asked Questions

We welcome your feedback through Azure Feedback channels or support channels. The CES team can also be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources

- Try Azure SQL Managed Instance for free for one year
- What's new in Azure SQL Managed Instance?

Announcing Preview of 160 and 192vCore Premium-series Options for Azure SQL Database Hyperscale
We are excited to announce the public preview of 160 and 192 vCore compute sizes for the Premium-series hardware configuration in Azure SQL Database Hyperscale.

Since the introduction of Premium-series hardware configurations for Hyperscale in November 2022, many customers have successfully used larger vCore configurations to consolidate workloads, reduce shard counts, and improve overall application performance and stability. This preview builds on the Premium-series configuration introduced previously for Hyperscale, extending the maximum scale of a single database and elastic pools from 128 vCores to 192 vCores to support higher concurrency, faster CPU performance, and larger memory footprints for more demanding mission-critical workloads. With this preview, customers running large-scale OLTP, HTAP, and analytics-heavy workloads can evaluate even higher compute ceilings without rearchitecting their applications.

Premium-series Hyperscale hardware overview

Premium-series Hyperscale databases run on latest-generation Intel and AMD processors, delivering higher per-core performance and improved scalability compared to standard-series (Gen5) hardware. With this public preview release, Premium-series Hyperscale now supports larger vCore configurations, extending the scale-up limits for customers who need more compute and memory in a single database.

Getting started

Customers can enable the 160 or 192 vCore Premium-series options when creating a database, or when scaling up existing Hyperscale databases in supported regions (where preview capacity is available). As with other Hyperscale scale operations, moving to a larger vCore size does not require application changes and uses Hyperscale's distributed storage and compute architecture. (A T-SQL scaling sketch is included at the end of this post.)

Resource limits and key characteristics

Link to Azure SQL documentation on resource limits

Single database resource limits

| Cores | Memory (GB) | Tempdb max data size (GB) | Max local SSD IOPS | Max log rate (MiB/s) | Max concurrent workers | Max concurrent external connections | Max concurrent sessions |
|---|---|---|---|---|---|---|---|
| 128 (current limit) | 625 | 4,096 | 544,000 | 150 | 12,800 | 150 | 30,000 |
| 160 (new preview limit) | 830 | 4,096 | 680,000 | 150 | 16,000 | 150 | 30,000 |
| 192 (new preview limit) | 843* | 4,096 | 816,000 | 150 | 19,200 | 150 | 30,000 |

*Memory values will increase for 192 vCores at GA.

Elastic pool resource limits

| Cores | Memory (GB) | Tempdb max data size (GB) | Max local SSD IOPS | Max log rate (MiB/s) | Max concurrent workers per pool | Max concurrent external connections per pool | Max concurrent sessions |
|---|---|---|---|---|---|---|---|
| 128 (current limit) | 625 | 4,096 | 409,600 | 150 | 13,440 | 150 | 30,000 |
| 160 (new preview limit) | 830 | 4,096 | 800,000 | 150 | 16,800 | 150 | 30,000 |
| 192 (new preview limit) | 843* | 4,096 | 960,000 | 150 | 20,160 | 150 | 30,000 |

*Memory values will increase for 192 vCores at GA.

Key characteristics:

- Premium-series Hyperscale can now scale up to 160 and 192 vCores in public preview regions.
- High-performance CPUs optimized for compute-intensive workloads.
- Increased memory capacity proportional to vCore scale.
- Up to 128 TiB of data storage, consistent with the Hyperscale architecture.
- Full compatibility with existing Hyperscale features and capabilities.

Performance improvements with 160 and 192 vCores

- Strong scale-up efficiency observed beyond 128 vCores: moving from 128 → 160 → 192 vCores shows consistent performance gains, demonstrating that Hyperscale Premium-series continues to scale effectively at higher core counts.
- 160 vCores delivers a strong balance of single-query and concurrent performance.
- 192 vCores is ideal for customers prioritizing maximum throughput, high user concurrency, and large-scale transactional or analytical workloads.
- TPC-H Power Run (measures single-stream query performance) improves from 217 (128 vCores) to 357 (160 vCores) and remains high at 355 (192 vCores), delivering a +64% increase from 128 → 192 vCores and indicating strong single-query execution and CPU efficiency at larger sizes.
- TPC-H Throughput Run (measures multi-stream concurrency) increases from 191 → 360 → 511 QPH, a +168% gain from 128 → 192 vCores, highlighting significant benefits for highly concurrent, multi-user workloads.

Performance case study (Zava Lending example)

- Zava Lending scaled Azure SQL Hyperscale online as demand increased—supporting more users and higher transaction volume with no downtime.
- Throughput scaled linearly as compute increased, moving cleanly from 32 → 64 → 128 → 192 vCores to match real workload growth.
- 192 vCores proved to be the optimal operating point, sustaining peak transaction load without over-provisioning.
- Azure SQL Hyperscale handled mixed OLTP and analytics workloads, including nightly ETL, without becoming a bottleneck.
- Every scale operation was performed online, with no service interruption and no application changes.

Preview scope and limitations

During preview, Premium-series 160 and 192 vCores are supported in a limited set of initial regions (Australia East, Canada Central, East US 2, South Central US, UK South, West Europe, North Europe, Southeast Asia, West US 2), with broader availability planned over time. During preview:

- Zone redundancy and the Azure SQL Database maintenance window are not supported for these sizes.
- Preview features are subject to supplemental preview terms, and performance characteristics may continue to improve through GA.

Customers are encouraged to use this preview to validate scalability, concurrency, memory utilization, query parallelism, and readiness for larger single-database deployments.

Next steps

This public preview is part of our broader investment in scaling Azure SQL Hyperscale for the most demanding workloads. Feedback from preview will help inform GA configuration limits, regional rollout priorities, and performance optimizations at extreme scale.
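For reference, scaling an existing Hyperscale database to one of the new sizes is a regular service-objective change. The sketch below is illustrative: the service objective name HS_PRMS_160 is assumed to follow the existing Premium-series naming pattern (HS_PRMS_&lt;vCores&gt;), so confirm the exact name for your region in the Azure portal or documentation before using it.

```sql
-- Scale an existing Hyperscale Premium-series database to 160 vCores.
-- HS_PRMS_160 is an assumed service objective name based on the existing pattern;
-- the operation is online, as with other Hyperscale scale operations.
ALTER DATABASE [your-database-name]
MODIFY (SERVICE_OBJECTIVE = 'HS_PRMS_160');

-- Check the current service objective after the operation completes.
SELECT DATABASEPROPERTYEX('your-database-name', 'ServiceObjective') AS service_objective;
```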
Versionless keys for Transparent Data Encryption in Azure SQL Database (Generally Available)

With this release, you no longer need to reference a specific key version stored in Azure Key Vault or Managed HSM when configuring Transparent Data Encryption (TDE) with customer-managed keys. Instead, Azure SQL Database now supports a versionless key URI, automatically using the latest enabled version of your key. This means:

- Simpler key management—it is no longer necessary to specify the key version.
- Reduced operational overhead by eliminating risks tied to outdated key versions.
- Full control remains with the customer.

This enhancement streamlines encryption at rest, especially for organizations operating at scale or enforcing strict security and compliance standards. Versionless keys for TDE are available today across Azure SQL Database at no additional cost.

Versioned vs. versionless key URIs

To highlight the difference, here are real examples:

- Versioned key URI (old approach — explicit version required): https://demotdeakv.vault.azure.net/keys/TDECMK/40acafb8a7034b20ba227905df090a1f
- Versionless key URI (new approach): https://demotdeakv.vault.azure.net/keys/TDECMK

A versionless key URI references only the key name. Azure SQL Database automatically uses the newest enabled version of the key.

Learn more

- Transparent Data Encryption - Azure SQL Database
- Azure SQL transparent data encryption with customer-managed key
- Transparent data encryption with customer-managed keys at the database level
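A note on verification: versioned versus versionless is a property of the server or database key configuration, managed through the portal, PowerShell, CLI, or REST, so nothing changes in T-SQL. If you want to confirm from inside a database that TDE is active and protected by a customer-managed key, a quick DMV check such as the sketch below works; the result is the same whether the key URI is versioned or versionless.

```sql
-- Confirm TDE state and the encryptor type protecting the database encryption key.
-- ASYMMETRIC KEY indicates a customer-managed key from Azure Key Vault or Managed HSM;
-- CERTIFICATE indicates a service-managed key.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state_desc,
       encryptor_type,
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;
```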
Managed Identity Support for Azure SQL Database Import & Export services (preview)

Today we're announcing a public preview that lets the Azure SQL Database Import & Export services authenticate with a user-assigned managed identity. Azure SQL databases can now perform import and export operations with no passwords, storage keys, or SAS tokens. With this preview, customers can choose to use either a single user-assigned managed identity (UAMI) for both SQL and Storage permissions, or assign separate UAMIs, one for the Azure SQL logical server and another for the Storage account, for full separation of duties.

At a glance:

- Run Import/Export using a user-assigned managed identity (UAMI).
- Use one identity for both SQL and Storage, or split them if you prefer tighter scoping.
- Works in the portal, REST, Azure CLI, and PowerShell.

Why this matters:

Managed identity support makes SQL migrations simpler and safer: no passwords, storage keys, or SAS tokens. By using managed identity when integrating Import/Export into a pipeline, you streamline access management and enhance security: permissions are granted directly to the identity, reducing manual credential handling and the risk of exposing sensitive information. This keeps operations efficient and secure, without secrets embedded in scripts.

You've got two straightforward options:

- One UAMI for everything (simplest setup).
- Two UAMIs, one for SQL and one for Storage, recommended if you want more strictly defined permissions.

Getting started:

1. Create a user-assigned managed identity (UAMI). Decide up front whether you want one identity end-to-end, or two identities (SQL vs. Storage) for separation of duties.
2. Attach the UAMI to the Azure SQL logical server. On the server Identity blade, add the UAMI so the Import/Export job can run as that identity.
3. Set the server's Microsoft Entra ID admin to the UAMI. In Microsoft Entra ID > Set admin, select the UAMI. This is what lets the workflow authenticate to SQL without a password.
4. Grant Storage access. Use Storage Blob Data Reader for import and Storage Blob Data Contributor for export, assigned in Access control (IAM). If you can, scope the assignment to the container that holds the .bacpac.
5. Pass resource IDs (not names) in your calls. In REST/CLI/PowerShell, you pass the UAMI resource ID as the value of administratorLogin (SQL identity) and storageKey (Storage identity), and set authenticationType / storageKeyType to ManagedIdentity:
   - administratorLogin → UAMI resource ID used for SQL auth
   - storageKey → UAMI resource ID used for Storage auth
   - authenticationType / storageKeyType → ManagedIdentity
6. Run the import/export job. Kick it off from the portal, REST, Azure CLI, or PowerShell. From there, the service uses the identity you selected to reach both SQL and Storage.

Portal experience

In the Azure portal, you can choose Authentication type = Managed identity and select the user-assigned managed identity to use for the operation.

Figure 1: Azure portal Import/Export experience with Managed identity authentication selected.

Notes

- This preview supports user-assigned managed identities (UAMIs).
- For least privilege, scope Storage roles to the specific container used for the .bacpac file and use two UAMIs, one for SQL and one for Storage.
Sample 1: REST API — Export using one UAMI

```powershell
$exportBody = "{ `n `"storageKeyType`": `"ManagedIdentity`", `n `"storageKey`": `"${managedIdentityServerResourceId}`", `n `"storageUri`": `"${storageUri}`", `n `"administratorLogin`": `"${managedIdentityServerResourceId}`", `n `"authenticationType`": `"ManagedIdentity`" `n}"

$export = Invoke-AzRestMethod -Method POST -Path "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Sql/servers/${serverName}/databases/${databaseName}/export?api-version=2024-05-01-preview" -Payload $exportBody

# Poll operation status
Invoke-AzRestMethod -Method GET $export.Headers.Location.AbsoluteUri
```

Sample 2: REST API — Import to a new server using one UAMI

```powershell
$serverName = "sql-mi-demo-target"
$databaseName = "sqldb-mi-demo-target"

# Same UAMI for SQL auth + Storage access
$importBody = "{ `n `"operationMode`": `"Import`", `n `"administratorLogin`": `"${managedIdentityServerResourceId}`", `n `"authenticationType`": `"ManagedIdentity`", `n `"storageKeyType`": `"ManagedIdentity`", `n `"storageKey`": `"${managedIdentityServerResourceId}`", `n `"storageUri`": `"${storageUri}`", `n `"databaseName`": `"${databaseName}`" `n}"

$import = Invoke-AzRestMethod -Method POST -Path "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Sql/servers/${serverName}/databases/${databaseName}/import?api-version=2024-05-01-preview" -Payload $importBody

# Poll operation status
Invoke-AzRestMethod -Method GET $import.Headers.Location.AbsoluteUri
```

Sample 3: PowerShell — Export using two UAMIs

```powershell
# Server UAMI for SQL auth, Storage UAMI for storage access
New-AzSqlDatabaseExport -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -StorageKeyType ManagedIdentity -StorageKey $managedIdentityStorageResourceId -StorageUri $storageUri -AuthenticationType ManagedIdentity -AdministratorLogin $managedIdentityServerResourceId
```

Sample 4: PowerShell — Import to a new server using two UAMIs

```powershell
New-AzSqlDatabaseImport -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -DatabaseMaxSizeBytes $databaseSizeInBytes -StorageKeyType "ManagedIdentity" -StorageKey $managedIdentityStorageResourceId -StorageUri $storageUri -Edition $edition -ServiceObjectiveName $serviceObjectiveName -AdministratorLogin $managedIdentityServerResourceId -AuthenticationType ManagedIdentity
```

Sample 5: Azure CLI — Export using two UAMIs

```azurecli
az sql db export -s $serverName -n $databaseName -g $resourceGroupName --auth-type ManagedIdentity -u $managedIdentityServerResourceId --storage-key $managedIdentityStorageResourceId --storage-key-type ManagedIdentity --storage-uri $storageUri
```

Sample 6: Azure CLI — Import to a new server using two UAMIs

```azurecli
az sql db import -s $serverName -n $databaseName -g $resourceGroupName --auth-type ManagedIdentity -u $managedIdentityServerResourceId --storage-key $managedIdentityStorageResourceId --storage-key-type ManagedIdentity --storage-uri $storageUri
```

For more information and samples, please check Tutorial: Use managed identity with Azure SQL import and export (preview).

Generally Available: Azure SQL Managed Instance Next-gen General Purpose
Overview

Next-gen General Purpose is the evolution of the General Purpose service tier that brings significantly improved performance and scalability to power up your existing Azure SQL Managed Instance fleet and helps you bring more mission-critical SQL workloads to Azure. We are happy to announce that Next-gen General Purpose is now Generally Available (GA), delivering even more scalability, flexibility, and value for organizations looking to modernize their data platform in a cost-effective way.

The new #SQLMINextGen General Purpose tier delivers a built-in performance upgrade available to all customers at no extra cost. If you are an existing SQL MI General Purpose user, you get faster I/O, higher database density, and expanded storage - automatically.

Summary table: key improvements

| Capability | Current GP | Next-gen GP | Improvement |
|---|---|---|---|
| Average I/O latency | 5-10 ms | 3-4 ms | 2x lower |
| Max data IOPS | 30-50k | 80k | 60% better |
| Max storage | 16 TB | 32 TB | 2x better |
| Max databases/instance | 100 | 500 | 5x better |
| Max vCores | 80 | 128 | 40% better |

But that's just the beginning. The new configuration sliders for additional IOPS and memory provide enhanced flexibility to tailor performance to your requirements. Whether you require more resources for your application or seek to optimize resource utilization, you can adjust your instance settings to maximize efficiency and output. This release isn't just about speed - it's about giving you improved performance where it matters, and mechanisms to go further when you need them.

Customer story - A recent customer case highlights how Hexure reduced processing time by up to 97.2% using Azure SQL Managed Instance on Next-gen General Purpose.

What's new in Next-gen General Purpose (Nov 2025)?

1. Improved baseline performance with the latest storage tech

Azure SQL Managed Instance is built on Intel® Xeon® processors, ensuring a strong foundation for enterprise workloads. With the next-generation General Purpose tier, we've paired Intel's proven compute power with advanced storage technology to deliver faster performance, greater scalability, and enhanced flexibility - helping you run more efficiently and adapt to growing business needs.

The SQL Managed Instance General Purpose tier is designed with full separation of the compute and storage layers. The classic GP version uses premium page blobs for the storage layer, while the next-generation GP tier has transitioned to Azure's latest storage solution, Elastic SAN. Azure Elastic SAN is a cloud-native storage service that offers high performance and excellent scalability, making it a perfect fit for the storage layer of a data-intensive PaaS service like Azure SQL Managed Instance.

Simplified performance management

With Elastic SAN as the storage layer, performance quotas for the next-gen General Purpose tier are no longer enforced per database file. The entire performance quota for the instance is shared across all database files, making performance management much easier (one fewer thing to worry about). This adjustment brings the General Purpose tier into alignment with the Business Critical service tier experience.

2. Resource flexibility and cost optimization

The GA of Next-gen General Purpose comes together with the GA of a transformative memory slider, enabling up to 49 memory configurations per instance. This lets you right-size workloads for both performance and cost. Memory is billed only for the additional amount beyond the default allocation.
Users can independently configure vCores, memory, and IOPS for optimal efficiency. To learn more about the new option for configuring additional memory, check the article: Unlocking More Power with Flexible Memory in Azure SQL Managed Instance.

3. Enhanced resource elasticity through decoupled compute and storage scaling operations

With Next-gen GP, both storage and IOPS can be resized independently of the compute infrastructure, and these changes now typically finish within five minutes - a process known as an in-place upgrade. There are three distinct types of storage upgrade experience, depending on the kind of storage upgrade performed and whether failover occurs:

- In-place update: same storage (no data copy), same compute (no failover)
- Storage re-attach: same storage (no data copy), changed compute (with failover)
- Data copy: changed storage (data copy), changed compute (with failover)

The following matrix describes the user experience with management operations:

| Operation | Data copying | Failover | Storage upgrade type |
|---|---|---|---|
| IOPS scaling | No | No | In-place |
| Storage scaling* | No* | No | In-place |
| vCores scaling | No | Yes** | Re-attach |
| Memory scaling | No | Yes** | Re-attach |
| Maintenance window change | No | Yes** | Re-attach |
| Hardware change | No | Yes** | Re-attach |
| Update policy change | Yes | Yes | Data copy |

* If the scale-down is >5.5 TB, seeding (data copy) occurs.
** For update operations that do not require seeding and are not completed in place (for example, scaling vCores, scaling memory, or changing hardware or the maintenance window), the failover duration of databases on the Next-gen General Purpose service tier scales with the number of databases, up to 10 minutes. While the instance becomes available after 2 minutes, some databases might become available after a delay. Failover duration is measured from the moment the first database goes offline until the moment the last database comes online.

Furthermore, resizing vCores and memory is now 50% faster following the introduction of the Faster scaling operations release. Whether you have end-of-month peak periods or usage that rises and falls between weekdays and the weekend, fast and reliable management operations let you run multiple configurations on your instance and respond to peak usage periods in a cost-effective way.

4. Reserved instance (RI) pricing

With Azure Reservations, you can commit to using Azure SQL resources for either one or three years, which lets you benefit from substantial discounts on compute costs. When purchasing a reservation, you'll need to choose the Azure region, deployment type, performance tier, and reservation term. Reservations are only available for products that have reached general availability (GA), and with this update, next-generation GP instances now qualify as well. What's even better is that classic and next-gen GP share the same SKU, just with different remote storage types. This means any reservations you've purchased automatically apply to Next-gen GP, whether you're upgrading an existing classic GP instance or creating a new one.

What's next?

The product group has received considerable positive feedback and welcomes continued input. The initial release does not include zonal redundancy; however, efforts are underway to address this limitation. Next-generation General Purpose (GP) represents the future of the service tier, and all existing classic GP instances will be upgraded accordingly. Once upgrade plans are finalized, we will communicate the timeline.
Conclusion

Now in GA, Next-gen General Purpose sets a new standard for cloud database performance and flexibility. Whether you're modernizing legacy applications, consolidating workloads, or building for the future, these enhancements put more power, scalability, and control in your hands - without breaking the bank.

If you haven't already, try out the Next-gen General Purpose capabilities for free with the Azure SQL Managed Instance free offer. If you are running SQL Managed Instance on the General Purpose tier, consider upgrading existing instances to take advantage of the next-gen upgrade - for free.

Welcome to #SQLMINextGen. Boosted by default. Tuned by you.

Learn more

- What is Azure SQL Managed Instance
- Try Azure SQL Managed Instance for free
- Next-gen General Purpose – official documentation
- Analyzing the Economic Benefits of Microsoft Azure SQL Managed Instance
- How 3 customers are driving change with migration to Azure SQL
- Accelerate SQL Server Migration to Azure with Azure Arc

Improving Azure SQL Database reliability with accelerated database recovery in tempdb
We are pleased to announce that in Azure SQL Database, accelerated database recovery is now enabled in the tempdb database, bringing instant transaction rollback and aggressive log truncation for transactions in tempdb. The same improvement is coming to SQL Server and Azure SQL Managed Instance.