Azure SQL Database

Replication lag metric for Azure SQL DB is now in public preview
Azure SQL Database offers business continuity capabilities to recover quickly from regional disasters. Features such as active geo-replication and failover groups provide continuous replication of the data in your primary database to a secondary database in a different Azure region. In the event of a regional disruption, these features allow you to perform quick disaster recovery to your secondary database to meet your business's recovery time objective (RTO) and recovery point objective (RPO). RTO for Azure SQL Database is typically less than 60 seconds, but RPO depends on the amount of data changed before the disruptive event that has not yet been replicated. Consequently, monitoring the replication lag between the primary and secondary databases is critical to ensuring your RPO goals are met.

Until now, the main way to measure the replication lag between the primary and secondary databases was the replication_lag_sec column of the dynamic management view (DMV) sys.dm_geo_replication_link_status, queried from your primary database. With the introduction of the Replication lag metric, you can now monitor lag with respect to RPO in near real time in the Azure portal, in addition to using the DMV. Replication lag is a new Azure Monitor metric that is emitted at a one-minute frequency and stored for up to 93 days. You can visualize the metric in Azure Monitor and set up alerts on it. The metric measures the time span in seconds from the point of transaction commit on the primary database to acknowledgement by the secondary database that the transaction log update has been persisted.

The Replication lag metric is available for databases in both the DTU and vCore purchasing models and in all service tiers (Basic, Standard, Premium, General Purpose, Business Critical, and Hyperscale). Both single database and elastic pool deployments are supported. You can monitor the metric by adding Replication lag (preview) for your primary database in the portal. The metric provides three dimensions - Partner Database Name, Partner Server Name, and Partner Resource ID - that you can use to further filter or split the data to view specific replication links. If your database is configured to send Metrics to Log Analytics under "Diagnostic settings", you can also query the Replication lag metric data from your Log Analytics workspace.

Next Steps

- Learn more about monitoring geo-replication and other commonly used metrics in Azure SQL Database.
- See how you can achieve your business continuity goals with Azure SQL Database using Active geo-replication and Failover groups.
- Prepare for disasters with the Disaster recovery checklist.

Frequently Asked Questions

What is the Replication lag metric and what does it measure? Replication lag is a new metric in Azure Monitor that measures the time span in seconds between a transaction being committed on the primary and being hardened to the transaction log on the secondary.

How do you view the metric in the portal? In the Azure portal, select your primary SQL database and under "Monitoring", select "Metrics". In the "Metrics" dropdown, choose "Replication lag (preview)".

What is the granularity of the Replication lag metric? One minute.

What is the latency in displaying the Replication lag data in the Metrics screen? Typically, the latency to display the replication lag is less than three minutes.

Are there dimensions available for the Replication lag metric? Yes, there are three dimensions available for the metric - Partner Database Name, Partner Server Name, and Partner Resource ID.
These dimensions can be used for filtering and splitting the view in the Metrics screen for easier comparison of multiple secondary geo-replicas.
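If you prefer the DMV route mentioned above, here is a minimal T-SQL sketch, run against the primary database, that returns the lag per replication link (the column list follows the documented sys.dm_geo_replication_link_status schema):

-- Run on the primary database: returns one row per geo-replication link.
SELECT
    partner_server,            -- secondary logical server
    partner_database,          -- secondary database
    replication_state_desc,    -- e.g. CATCH_UP
    last_replication,          -- time of the last acknowledged transaction
    replication_lag_sec        -- lag, in seconds, for the last acknowledged transaction
FROM sys.dm_geo_replication_link_status;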
Azure SQL is Retiring the "No Minimum TLS" (MinTLS None) Configuration

As part of the retirement of the lower TLS versions 1.0 and 1.1 and the enforcement of 1.2 as the new default minimum TLS version, we will be removing the No Minimum TLS (MinTLS = "None" or "0") option and updating these configurations to TLS 1.2. No Minimum TLS allowed Azure SQL Database and Azure SQL Managed Instance resources to accept client connections using any TLS protocol version, as well as unencrypted connections.

Over the past year, Azure has retired TLS 1.0 and 1.1 for all Azure databases, due to known security vulnerabilities in these older protocols. As of August 31, 2025, creating servers configured with versions 1.0 and 1.1 was disallowed and migration to 1.2 began. With legacy TLS versions retired, TLS 1.2 becomes the secure default minimum TLS version for new Azure SQL Database and Managed Instance configurations and for all client-server connections, rendering the MinTLS = None setting obsolete. As a result, the MinTLS = None configuration option will be retired for new servers, and existing servers configured with No Minimum TLS will be upgraded to 1.2.

What is changing?

After July 31, 2026, we will disallow the minimum TLS value "None" for the creation of new SQL Database and Managed Instance resources using PowerShell, Azure CLI, and any other REST-based interface. This configuration option has already been removed from the portal as part of the retirement of TLS versions 1.0 and 1.1. Creating new Azure SQL Database and Managed Instance servers with MinTLS = None (previously the default) will no longer be a supported configuration. If the server parameter value for the minimum TLS version is left blank, it will default to TLS 1.2. Attempts to create an Azure SQL server with MinTLS = None will fail with an "Invalid operation" error, and downgrades to None will be disallowed. Attempts to connect with TLS 1.0, TLS 1.1, or unencrypted connections will fail with "Error: 47072/171 on Gateway."

| Effective date (retirement milestone) | MinTLS = None (0) | MinTLS left blank (defaults to supported minimum) |
| --- | --- | --- |
| Before 8/31/25 | Any + Unencrypted | Any + Unencrypted |
| After 8/31/25 | 1.2 + Unencrypted | 1.2 |
| After July 31, 2026 | Invalid operation error (for new server creates); downgrades will be disallowed; TLS error 47072/171 (for unencrypted connections) | 1.2 |

In summary, after July 31, 2026, Azure SQL Database and Azure SQL Managed Instance will require all client connections to use TLS 1.2 or higher, and unencrypted connections will be denied. The minimum TLS version setting will no longer accept the value "None" for new or existing servers, and servers currently configured with this value will be upgraded to explicitly enforce TLS 1.2.

Who is impacted?

For most Azure SQL customers, no action is required; most clients already use TLS 1.2 or higher. After July 31, 2026, if your Azure SQL Database or Managed Instance is still configured with No Minimum TLS and accepting 1.0, 1.1, or unencrypted connections, it will automatically update to TLS 1.2 to reflect the current minimum protocol enforcement in client-server connectivity. We do recommend that you verify your client applications - especially any older or third-party client drivers - to ensure they can communicate with TLS 1.2 or above. In some rare cases, very old applications, such as those using an outdated JDBC driver or an older .NET Framework version, may need an update or need TLS 1.2 explicitly enabled.

Conclusion

This retirement is part of Azure's broader security strategy to ensure connections are protected by modern encryption standards.
TLS 1.2 is more secure than older versions and is now the industry standard, required by regulations such as PCI DSS and HIPAA. This change also eliminates unencrypted connections, which ensures all database connections meet current security standards. If you have already migrated to TLS 1.2 (as most customers have), you will most likely not notice any change, except that the No Minimum TLS option will disappear from configurations.
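As a readiness check, here is a minimal T-SQL sketch (it assumes you hold the VIEW DATABASE STATE or VIEW SERVER STATE permission) that lists current sessions arriving over unencrypted connections - the ones that will be rejected once enforcement begins:

-- Sessions whose connections are not encrypted (encrypt_option = 'FALSE')
-- will fail once unencrypted connections are denied.
SELECT
    c.session_id,
    s.login_name,
    s.program_name,
    c.client_net_address,
    c.encrypt_option
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = c.session_id
WHERE c.encrypt_option = 'FALSE';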
Dynamic Data Masking – What it is, What it isn't, and How to use it effectively

In this post, we'll explain the core purpose of Dynamic Data Masking (to ease application development), how it works, and its proper use cases - as well as its limitations. If you're considering using Dynamic Data Masking or reviewing your data security strategy, this information will help you make informed decisions.

What Dynamic Data Masking is designed for

Dynamic Data Masking (see Dynamic Data Masking - SQL Server | Microsoft Learn) is a database feature that alters how certain data elements are presented in query results for users who do not have privileged access or the required permission. For example, a query on an email column may return a masked value such as jXXX@XXXX.com rather than the full address, depending on user permissions, while the original data remains unchanged in storage. Masking rules are defined within the database schema and are applied to query results for applicable users at runtime. This approach can simplify application developers' jobs and reduce the need for application-level logic that modifies how sensitive values are displayed across different applications or reports. DDM can help prevent accidental or casual exposure of sensitive information.

How does DDM differ from other security features?

Dynamic Data Masking affects only what users see in query results - it does not protect the underlying data. Unlike encryption (Always Encrypted - SQL Server | Microsoft Learn) or Row-Level Security (Row-Level Security - SQL Server | Microsoft Learn), DDM does not encrypt data, filter rows, or override SQL permissions. Users with elevated privileges (such as UNMASK, db_owner, or sysadmin) always see unmasked data and can modify or remove masking rules.

What DDM doesn't protect against

Because Dynamic Data Masking is applied when query results are returned, there are several considerations to be aware of:

- Inference through queries: In some scenarios, users with database access may be able to infer masked values by applying query filters or conditions that operate on the underlying stored data. The database still compares the real values under the hood, so these queries work. This is expected behavior given DDM's design.
- Privileged users: Users who are granted sufficient database permissions, such as the ability to alter table schemas, can directly disable or remove masking. Users with sysadmin, db_owner, or CONTROL permission can view unmasked data. Controlling and auditing who holds such privileges is therefore vital.
- Metadata visibility: Masking rules and the columns they apply to are discoverable through system metadata.
- Data movement: Because masking is defined at the schema level in a given database instance, backups or exported datasets may contain unmasked values depending on permissions and configuration.

Understanding these design characteristics is important when incorporating DDM into a broader data governance or privacy strategy.

Proper use and best practices for DDM

Organizations may consider using Dynamic Data Masking in scenarios where a consistent display of sensitive values is needed across applications or reporting environments. Some implementation considerations include (a minimal setup sketch follows at the end of this post):

- Using DDM to help standardize how sensitive fields are displayed in query results and reduce development effort for data masking
- Combining DDM with other database or access-control features as part of a layered data protection strategy
- Reviewing which users are granted permissions to view unmasked data or alter masking configurations
- Implementing auditing or monitoring of database activity as part of broader governance practices
- Educating internal stakeholders on how masking operates at the query-result level
- Testing masking configurations in non-production environments prior to deployment

Conclusion

Dynamic Data Masking can be useful in scenarios where organizations want to manage how sensitive data is displayed in application outputs without modifying stored values. It is designed to operate as part of a broader data access or governance approach rather than as a standalone protection mechanism for stored data. When implemented alongside complementary database features and appropriate access controls, DDM may help support more consistent handling of sensitive values across environments.
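As promised above, here is a minimal T-SQL sketch of DDM in action; the table and user names are hypothetical:

-- Define masked columns at the schema level.
CREATE TABLE dbo.Customers
(
    CustomerId  int IDENTITY PRIMARY KEY,
    Email       nvarchar(256) MASKED WITH (FUNCTION = 'email()'),
    CreditScore int           MASKED WITH (FUNCTION = 'random(300, 850)')
);

INSERT INTO dbo.Customers (Email, CreditScore)
VALUES (N'jane@contoso.com', 712);

-- A non-privileged user sees masked values (e.g. jXXX@XXXX.com).
CREATE USER AppReader WITHOUT LOGIN;
GRANT SELECT ON dbo.Customers TO AppReader;

EXECUTE AS USER = 'AppReader';
SELECT Email, CreditScore FROM dbo.Customers;  -- returns masked results
REVERT;

-- Granting UNMASK would reveal the stored values:
-- GRANT UNMASK TO AppReader;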
Stream data in near real time from SQL to Azure Event Hubs - Public preview

If near-real-time integration is something you are looking to implement, and you were looking for a simpler way to get data out of SQL, keep reading. SQL is making it easier to integrate, and Change Event Streaming is a feature continuing this trend.

Modern applications and analytics platforms increasingly rely on event-driven architectures and real-time data pipelines. As businesses speed up, real-time decisioning becomes especially important. Traditionally, capturing changes from a relational database requires complex ETL jobs, periodic polling, or third-party tools. These approaches often consume significant cycles on the data source, introduce operational overhead, and pose challenges with scalability, especially if you need one data source to feed multiple destinations. In this context, we are happy to release the Change Event Streaming ("CES") feature into public preview for Azure SQL Database. This feature enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time.

Change Event Streaming addresses the above challenges by:

- Reducing latency: Changes are streamed (pushed by SQL) as they happen. This is in contrast with traditional CDC (change data capture) or CT (change tracking) based approaches, where an external component needs to poll SQL at regular intervals. Traditional approaches allow you to increase polling frequency, but it gets difficult to find a sweet spot between minimal latency and minimal overhead from too-frequent polls.
- Simplifying architecture: No need for Change Data Capture (CDC), Change Tracking, custom polling, or external connectors - SQL streams directly to the configured destination. This means a simpler security profile (fewer authentication points), fewer failure points, easier monitoring, and a lower skill bar to deploy and run the service. There is no need to worry about cleanup jobs: SQL keeps track of which changes were successfully received by the destination, handles the retry logic, and releases the log truncation point. Finally, with CES you have fewer components to procure and get approved for production use.
- Decoupling: The integration is done at the database level. This eliminates the problem of dual writes - changes are streamed at transaction boundaries, once your source of truth (the database) has saved them. You do not need to modify your application workloads to get the data streamed - you tap right into the data layer. This is useful if your apps are dated and do not possess real-time integration capabilities; with some third-party apps, you may not even have an option other than database-level integration, and CES makes that simpler. Also, the publishing database does not concern itself with the final destination of the data. Stream the data once to a common message bus, and you can consume it from multiple downstream systems, irrespective of their number or capacity - the number of consumers does not affect the publishing load on the SQL side. Serving consumers is handled by the message bus, Azure Event Hubs, which is purpose-built for high-throughput data transfers.

(Figure: conceptual data flow from SQL Server to Azure Event Hubs, fanning out to multiple final destinations.)

Key Scenarios for CES

- Event-driven microservices: Microservices need to exchange data, typically through a common message bus. With CES, you can automate data publishing from each of the microservices.
  This allows you to trigger business processes immediately when data changes.
- Real-time analytics: Stream operational data into platforms like Fabric Real-Time Intelligence or Azure Stream Analytics for quick insights.
- Breaking down monoliths: Typical monolithic systems with complex schemas sitting on top of a single database can be broken down one piece at a time: create a new component (typically a microservice), set up streaming from the relevant tables of the monolith database, and tap into the stream from the new component. You can then test-run the component, validate the results against the original monolith, and cut over once you are confident the new component is stable.
- Cache and search index updates: Keep distributed caches and search indexes in sync without custom triggers.
- Data lake ingestion: Capture changes continuously into storage for incremental processing.
- Data availability: This is not a scenario per se, but the amount of data you can tap into for business process mining, or intelligence in general, goes up whenever you plug another database into the message bus. For example, you plug your eCommerce system into the message bus to integrate with shipping providers, and consequently the same data stream is immediately available for any other system to tap into.

How It Works

CES uses transaction log-based capture to stream changes with minimal impact on your workload. Events are published in a structured JSON format following the CloudEvents standard, including operation type, primary key, and before/after values. You can configure CES to target Azure Event Hubs via the AMQP or Kafka protocols. For details on configuration, message format, and FAQs, see the official documentation:

- CES: Feature Overview
- CES: Frequently Asked Questions

Get Started

Public preview: CES is available today in public preview for Azure SQL Database and as a preview feature in SQL Server 2025. [Update 20-Mar-2026] Change Event Streaming is now in public preview for Azure SQL Managed Instance. Read more here.

Private preview: CES is also available as a private preview for Azure SQL Managed Instance and Fabric SQL database; you can request to join the private preview by signing up here: https://aka.ms/sql-ces-signup

We encourage you to try the feature out and start building real-time integrations on top of your existing data. We welcome your feedback - please share your experience through the Azure Feedback portal or support channels. The comments below on this blog post will also be monitored if you want to engage with us. Finally, the CES team can be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources

- Free Azure SQL Database
- Free Azure SQL Managed Instance
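To make the How It Works description concrete, here is an illustrative sketch of what a single row-change event could look like. The envelope fields (specversion, type, source, id, time, data) come from the CloudEvents standard the post references; the field names and values inside data are assumptions for illustration only - consult the CES documentation linked above for the authoritative schema:

{
  "specversion": "1.0",
  "type": "update",
  "source": "/myserver/mydatabase/dbo.Orders",
  "id": "00000000-0000-0000-0000-000000000042",
  "time": "2025-11-04T12:00:00Z",
  "data": {
    "operation": "update",
    "key":    { "OrderId": 42 },
    "before": { "Status": "Pending" },
    "after":  { "Status": "Shipped" }
  }
}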
Announcing Public Preview: Auditing for Fabric SQL Database

We're excited to announce the public preview of Auditing for Fabric SQL Database - a powerful feature designed to help organizations strengthen security, ensure compliance, and gain deep operational insights into their data environments.

Why Auditing Matters

Auditing is a cornerstone of data governance. With Fabric SQL Database auditing, you can now easily track and log database activities - answering critical questions like who accessed what data, when, and how. This supports compliance requirements (such as HIPAA and SOX), enables robust threat detection, and provides a foundation for forensic investigations.

Key Highlights

- Flexible configuration: Choose from default "audit everything", preconfigured scenarios (like permission changes, login attempts, data reads/writes, schema changes), or define custom action groups and predicate filters for advanced needs.
- Seamless access: Audit logs are stored in OneLake, making them easily accessible via T-SQL or OneLake Explorer.
- Role-based access control: Configuration and log access are governed by both Fabric workspace roles and SQL-level permissions, ensuring only authorized users can view or manage audit data.
- Retention settings: Customize how long audit logs are retained to meet your organization's policy.

How It Works

Audit logs are written to a secure, read-only folder in OneLake and can be queried using the sys.fn_get_audit_file_v2 T-SQL function. Workspace and artifact IDs are used as identifiers, ensuring logs remain consistent even if databases move across logical servers. Access controls at both the workspace and SQL database level ensure only the right people can configure or view audit logs.

Example Use Cases

- Compliance monitoring: Validate a full audit trail for regulatory requirements.
- Security investigations: Track specific events like permission changes or failed login attempts.
- Operational insights: Focus on specific operations (e.g., DML only) or test retention policies.
- Role-based access: Verify audit visibility across different user roles.

Getting Started

You can configure auditing directly from the Manage SQL Auditing blade in the Fabric portal. Choose your preferred scenario, set retention, and (optionally) define custom filters - all through a simple, intuitive interface.

Learn more about auditing for Fabric SQL database here. A Data Exposed session with a demo is available here.
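As a quick illustration, here is a minimal T-SQL sketch of querying the audit logs; the OneLake file path is a hypothetical placeholder (the actual folder for your database is described in the documentation), and the time-range arguments assume the documented sys.fn_get_audit_file_v2 signature:

-- Read audit records for a one-day window, newest first.
SELECT event_time,
       action_id,
       succeeded,
       session_server_principal_name,
       database_name,
       statement
FROM sys.fn_get_audit_file_v2(
         'https://onelake.dfs.fabric.microsoft.com/<workspaceId>/<artifactId>/<audit-folder-placeholder>/*.xel',
         DEFAULT,                  -- initial_file_name
         DEFAULT,                  -- audit_record_offset
         '2025-11-01T00:00:00Z',   -- start_time (UTC)
         '2025-11-02T00:00:00Z')   -- end_time (UTC)
ORDER BY event_time DESC;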
Zero Trust for data: Make Microsoft Entra authentication for SQL your policy baseline

A policy-driven path from enabled to enforced.

Why this matters now

Security and compliance programs were once built on an assumption that internal networks were inherently safer. Cloud adoption, remote work, and supply-chain compromise have steadily invalidated that model. U.S. federal guidance has now formalized this shift: Executive Order 14028 calls for modernizing cybersecurity and accelerating Zero Trust adoption, and OMB Memorandum M-22-09 sets a federal Zero Trust strategy with specific objectives and timelines.

Meanwhile, attacker economics are changing. Automation and AI make reconnaissance, phishing, and credential abuse cheaper and faster. That concentrates risk on identity - the control plane that sits in front of systems, applications, and data. In Zero Trust, the question is no longer "is the network trusted", but "is this request verified, governed by policy, and least-privilege?"

Why database authentication is a first-order Zero Trust control

Databases are universally treated as crown-jewel infrastructure. Yet many data estates still rely on legacy patterns: password-based SQL authentication, long-lived secrets embedded in apps, and shared administrative accounts that persist because migration feels risky. This is exactly the kind of implicit trust Zero Trust architectures aim to remove. NIST SP 800-207 defines Zero Trust as eliminating implicit trust based solely on network location or ownership and focusing controls on protecting resources. In that model, every new database connection is not "plumbing" - it is an access decision about sensitive data. If the authentication mechanism sits outside the enterprise identity plane, governance becomes fragmented and policy enforcement becomes inconsistent.

What changes when SQL uses Microsoft Entra authentication

Microsoft Entra authentication enables users and applications to connect to SQL using enterprise identities instead of usernames and passwords. Across Azure SQL and SQL Server enabled by Azure Arc, Entra-based authentication aligns database access with the same identity controls organizations use everywhere else.

The security and compliance outcomes that leaders care about:

- Reduce password and secret risk: move away from static passwords and embedded credentials.
- Centralize governance: bring database access under the same identity policies, access reviews, and lifecycle controls used across the enterprise.
- Improve auditability: tie access to enterprise identities and create a consistent control surface for reporting.
- Enable policy enforcement at scale: move from "configured" controls to "enforced" controls through governance and tooling.

This is why Entra authentication is a high-ROI modernization step: it collapses multiple security and operational objectives into one effort (identity modernization) rather than a set of ongoing compensating programs (password rotation, bespoke exceptions, and perpetual secret-hygiene projects).

Why AI makes this a high-priority decision

AI accelerates both reconnaissance and credential abuse, which concentrates risk on identity. As a result, policy makers increasingly treat phishing-resistant authentication and centralized identity enforcement as foundational - not optional.

A practical path: from enabled to enforced

Successful security programs define a clear end state, a measurable glide path, and an enforcement model.
A pragmatic approach to modernizing SQL access typically includes:

1. Discover active usage: Identify which logins and users are actively connecting and which are no longer required.
2. Establish Entra as the identity authority: Enable Entra authentication on SQL logical servers, starting in mixed mode to reduce disruption.
3. Recreate principals using Entra identities: Replace SQL authentication logins/users with Entra users, groups, service principals, and managed identities (a minimal T-SQL sketch follows at the end of this post).
4. Modernize application connectivity: Update drivers and connection patterns to use Entra-based authentication and managed identities.
5. Validate, then enforce: Confirm the absence of password-based SQL authentication traffic, then move to Entra-only where available and enforce via policy.

This sequencing mitigates risk early and defers enforcement until validation is complete. For a comprehensive migration strategy, refer to Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide.

Choosing which projects to fund - and which ones to stop

When making investment decisions, give priority to database identity projects that demonstrate clear risk reduction and lasting security benefits:

- Microsoft Entra authentication as the default for new SQL workloads, with a defined migration path for existing workloads.
- Managed identities for application-to-database connectivity, to eliminate stored secrets.
- Centralized governance for privileged database access using enterprise identity controls.

At the same time, organizations should explicitly de-prioritize investments that perpetuate password risk: password rotation projects that preserve SQL authentication, bespoke scripts maintaining shared logins, and exception processes that do not scale.

Security and scale are not competing goals

Security is often seen as something that slows down innovation, but database identity offers unique benefits. When enterprise identity is used for access controls, bringing in new applications and users shifts from handing out credentials to overseeing policies. Compliance reporting also becomes uniform rather than customized, making it easier to grow consistently thanks to a single control framework. Modern database authentication is not solely about mitigating risk - it establishes a scalable operational framework for secure data access.

A scorecard designed for leadership readiness

To elevate the conversation from implementation to governance, use outcome-based metrics:

- Coverage: Percentage of SQL workloads with Entra authentication enabled.
- Enforcement: Percentage operating in Entra-only mode after validation.
- Secret reduction: Applications still relying on stored database passwords.
- Privilege hygiene: Admin access governed through enterprise identity controls.
- Audit evidence: Ability to produce identity-backed access reports on demand.

These map directly to Zero Trust maturity expectations and provide a defensible definition of "done".

Closing

Zero Trust is an operating posture, not a single control. For most organizations, the fastest way to make that posture measurable is to standardize database access on the same identity plane used everywhere else. If you are looking for a single investment that improves security, reduces audit friction, and supports responsible AI adoption, modernizing SQL access with Microsoft Entra authentication - and driving it from enabled to enforced - is one of the most durable choices you can make.
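As referenced in step 3 of the glide path, here is a minimal T-SQL sketch of recreating principals as Entra identities; the principal names are hypothetical, and the statements assume you are connected as an Entra-authenticated administrator:

-- Create contained database users from Microsoft Entra ID.
CREATE USER [jane.doe@contoso.com] FROM EXTERNAL PROVIDER;    -- Entra user
CREATE USER [SQL-Readers] FROM EXTERNAL PROVIDER;             -- Entra group
CREATE USER [payments-api-identity] FROM EXTERNAL PROVIDER;   -- managed identity / service principal

-- Grant least-privilege access through roles, then retire the old SQL logins.
ALTER ROLE db_datareader ADD MEMBER [SQL-Readers];
ALTER ROLE db_datareader ADD MEMBER [payments-api-identity];
-- DROP USER [legacy_sql_user];  -- once traffic is confirmed migrated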
References

- US Government sets forth Zero Trust architecture strategy and requirements (Microsoft Security Blog)
- Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide (Microsoft Tech Community)
- OMB Memorandum M-22-09: Federal Zero Trust Strategy (White House)
- NIST SP 800-207: Zero Trust Architecture
- CISA: Zero Trust
- Enforce Microsoft Entra-only authentication for Azure SQL Database and Azure SQL Managed Instance
Stop defragmenting and start living: introducing auto index compaction

Executive summary

Automatic index compaction is a new built-in feature in the MSSQL database engine that compacts indexes in the background with minimal overhead. Now you can:

- Stop using scheduled index maintenance jobs.
- Reduce storage space consumption and save costs.
- Improve performance by reducing CPU, memory, and disk I/O consumption.

Today, we announce the public preview of automatic index compaction in Azure SQL Database, Azure SQL Managed Instance with the always-up-to-date update policy, and SQL database in Fabric.

Index maintenance without maintenance jobs

Enable automatic index compaction for a database with a single T-SQL command:

ALTER DATABASE [database-name] SET AUTOMATIC_INDEX_COMPACTION = ON;

Once enabled, you no longer need to set up, maintain, and monitor resource-intensive index maintenance jobs, a time-consuming operational task for many DBA teams today. As the data in the database changes, a background process consolidates rows from partially filled data pages into a smaller number of full pages, and then removes the empty pages. Index bloat is eliminated - the same amount of data now uses a minimal amount of storage space. Resource consumption is reduced because the database engine needs fewer disk I/Os and less CPU and memory to process the same amount of data.

By design, the background compaction process acts on recently modified pages only. This means that its own resource consumption is much lower than that of traditional index maintenance operations (index rebuild and reorganize), which process all pages in an index or its partition. For a detailed description of how the feature works, a comparison between automatic index compaction and the traditional index maintenance operations, and the ways to monitor the compaction process, see automatic index compaction in the documentation.

Compaction in action

To see the effects of automatic index compaction, we wrote a stored procedure that simulates a write-intensive OLTP workload. Each execution of the procedure inserts, updates, deletes, or selects a random number of rows, from 1 to 100, in a 50,000-row table with a clustered index. We executed this stored procedure using the popular SQLQueryStress tool, with 30 threads and 400 iterations on each thread.

We measured the page density, the number of pages in the leaf level of the table's clustered index, and the number of logical reads (pages) used by a test query reading 1,000 rows, at three points in time:

1. After initially inserting the data and before running the workload.
2. Once the workload stopped running.
3. Several minutes later, once the background process completed index compaction.

Here are the results:

| | Before workload | After workload | After compaction |
| --- | --- | --- | --- |
| Logical reads | 25 🟢 | 1,610 🔴⬆️ | 35 🟢⬇️ |
| Page density | 99.51% 🟢 | 52.71% 🔴⬇️ | 96.11% 🟢⬆️ |
| Pages | 962 🟢 | 4,394 🔴⬆️ | 1,065 🟢⬇️ |

Before the workload starts, page density is high because nearly all pages are full. The number of logical reads required by the test query is minimal, and so is its resource consumption. The workload leaves a lot of empty space on pages and increases the number of pages because of row updates and deletions, and because of page splits. As a result, immediately after workload completion, the number of logical reads required for the same test query increases more than 60 times, which translates into higher CPU and memory usage.
But then, within a few minutes, automatic index compaction removes the empty space from the index, increasing page density back to nearly 100%, reducing logical reads by about 98%, and getting the index very close to its initial compact state. Fewer logical reads mean the query is faster and uses less CPU. All of this happens without any user action. With continuous workloads, index compaction is continuous as well, maintaining higher average page density and reducing resource usage by the workload over time. The T-SQL code we used in this demo is available in the Appendix.

Conclusion

Automatic index compaction delegates a routine database maintenance operation to the database engine itself, letting administrators and engineers focus on more important work without worrying about index maintenance. The public preview is a great opportunity to let us know how this new feature works for you. Please share your feedback and suggestions for any improvements we can make. To let us know your thoughts, you can comment on this blog post, leave feedback at https://aka.ms/sqlfeedback, or email us at sqlaicpreview@microsoft.com.

Appendix

Here is the T-SQL code we used to demonstrate automatic index compaction. The type of executed statements and the number of affected rows are randomized to better represent an OLTP workload. While the results demonstrate the effectiveness of automatic index compaction, exact measurements may vary from one execution to the next.

/* Enable automatic index compaction */
ALTER DATABASE CURRENT SET AUTOMATIC_INDEX_COMPACTION = ON;

/* Reset to the initial state */
DROP TABLE IF EXISTS dbo.t;
DROP SEQUENCE IF EXISTS dbo.s_id;
DROP PROCEDURE IF EXISTS dbo.churn;

/* Create a sequence to generate clustered index keys */
CREATE SEQUENCE dbo.s_id AS int START WITH 1 INCREMENT BY 1;

/* Create a test table */
CREATE TABLE dbo.t
(
    id int NOT NULL CONSTRAINT df_t_id DEFAULT (NEXT VALUE FOR dbo.s_id),
    dt datetime2 NOT NULL CONSTRAINT df_t_dt DEFAULT (SYSDATETIME()),
    u uniqueidentifier NOT NULL CONSTRAINT df_t_uid DEFAULT (NEWID()),
    s nvarchar(100) NOT NULL CONSTRAINT df_t_s DEFAULT (REPLICATE('c', 1 + 100 * RAND())),
    CONSTRAINT pk_t PRIMARY KEY (id)
);

/* Insert 50,000 rows */
INSERT INTO dbo.t (s)
SELECT REPLICATE('c', 50) AS s
FROM GENERATE_SERIES(1, 50000);
GO

/* Create a stored procedure that simulates a write-intensive OLTP workload. */
CREATE OR ALTER PROCEDURE dbo.churn
AS
SET NOCOUNT, XACT_ABORT ON;

DECLARE @r float = RAND(CAST(CAST(NEWID() AS varbinary(4)) AS int));

/* Get the type of statement to execute */
DECLARE @StatementType char(6) = CASE
                                     WHEN @r <= 0.15 THEN 'insert'
                                     WHEN @r <= 0.30 THEN 'delete'
                                     WHEN @r <= 0.65 THEN 'update'
                                     WHEN @r <= 1 THEN 'select'
                                     ELSE NULL
                                 END;

/* Get the maximum key value for the clustered index */
DECLARE @MaxKey int = (
    SELECT CAST(current_value AS int)
    FROM sys.sequences
    WHERE name = 's_id' AND SCHEMA_NAME(schema_id) = 'dbo'
);

/* Get a random key value within the key range */
DECLARE @StartKey int = 1 + RAND() * @MaxKey;

/* Get a random number of rows, between 1 and 100, to modify or read */
DECLARE @RowCount int = 1 + RAND() * 99;

/* Execute a statement */
IF @StatementType = 'insert'
    INSERT INTO dbo.t (id)
    SELECT NEXT VALUE FOR dbo.s_id
    FROM GENERATE_SERIES(1, @RowCount);

IF @StatementType = 'delete'
    DELETE TOP (@RowCount) dbo.t
    WHERE id >= @StartKey;

IF @StatementType = 'update'
    UPDATE TOP (@RowCount) dbo.t
    SET dt = DEFAULT, u = DEFAULT, s = DEFAULT
    WHERE id >= @StartKey;

IF @StatementType = 'select'
    SELECT TOP (@RowCount) id, dt, u, s
    FROM dbo.t
    WHERE id >= @StartKey;
GO

/* The remainder of this script is executed three times:
   1. Before running the workload using SQLQueryStress.
   2. Immediately after the workload stops running.
   3. Once automatic index compaction completes several minutes later. */

/* Monitor page density and the number of pages and records
   in the leaf level of the clustered index. */
SELECT avg_page_space_used_in_percent AS page_density, page_count, record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.t'), 1, 1, 'DETAILED')
WHERE index_level = 0;

/* Run a test query and measure its logical reads. */
DROP TABLE IF EXISTS #t;

SET STATISTICS IO ON;

SELECT TOP (1000) id, dt, u, s
INTO #t
FROM dbo.t
WHERE id >= 10000;

SET STATISTICS IO OFF;
Announcing Preview of 160 and 192 vCore Premium-series Options for Azure SQL Database Hyperscale

We are excited to announce the public preview of 160 and 192 vCore compute sizes for the Premium-series hardware configuration in Azure SQL Database Hyperscale. Since the introduction of Premium-series hardware configurations for Hyperscale in November 2022, many customers have successfully used larger vCore configurations to consolidate workloads, reduce shard counts, and improve overall application performance and stability. This preview builds on the Premium-series configuration introduced previously for Hyperscale, extending the maximum scale of a single database and elastic pools from 128 vCores to 192 vCores to support higher concurrency, faster CPU performance, and larger memory footprints for more demanding mission-critical workloads. With this preview, customers running large-scale OLTP, HTAP, and analytics-heavy workloads can evaluate even higher compute ceilings without rearchitecting their applications.

Premium-series Hyperscale Hardware Overview

Premium-series Hyperscale databases run on latest-generation Intel and AMD processors, delivering higher per-core performance and improved scalability compared to standard-series (Gen5) hardware. With this public preview release, Premium-series Hyperscale now supports larger vCore configurations, extending the scale-up limits for customers who need more compute and memory in a single database.

Getting started

Customers can enable the 160 or 192 vCore Premium-series options when creating a database, or when scaling up existing Hyperscale databases in supported regions (where preview capacity is available). As with other Hyperscale scale operations, moving to a larger vCore size does not require application changes and uses Hyperscale's distributed storage and compute architecture.

Resource Limits & Key characteristics

See the Azure SQL documentation on resource limits.

Single Database Resource Limits

| Cores | Memory (GB) | Tempdb max data size (GB) | Max Local SSD IOPS | Max Log Rate (MiB/s) | Max concurrent workers | Max concurrent external connections | Max concurrent sessions |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 128 (current limit) | 625 | 4,096 | 544,000 | 150 | 12,800 | 150 | 30,000 |
| 160 (new preview limit) | 830 | 4,096 | 680,000 | 150 | 16,000 | 150 | 30,000 |
| 192 (new preview limit) | 843* | 4,096 | 816,000 | 150 | 19,200 | 150 | 30,000 |

*Memory values will increase for 192 vCores at GA.

Elastic Pool Resource Limits

| Cores | Memory (GB) | Tempdb max data size (GB) | Max Local SSD IOPS | Max Log Rate (MiB/s) | Max concurrent workers per pool | Max concurrent external connections per pool | Max concurrent sessions |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 128 (current limit) | 625 | 4,096 | 409,600 | 150 | 13,440 | 150 | 30,000 |
| 160 (new preview limit) | 830 | 4,096 | 800,000 | 150 | 16,800 | 150 | 30,000 |
| 192 (new preview limit) | 843* | 4,096 | 960,000 | 150 | 20,160 | 150 | 30,000 |

*Memory values will increase for 192 vCores at GA.

Key characteristics:

- Premium-series Hyperscale can now scale up to 160 and 192 vCores in public preview regions.
- High-performance CPUs optimized for compute-intensive workloads.
- Increased memory capacity proportional to vCore scale.
- Up to 128 TiB of data storage, consistent with the Hyperscale architecture.
- Full compatibility with existing Hyperscale features and capabilities.

Performance Improvements with 160 and 192 vCores

Strong scale-up efficiency is observed beyond 128 vCores: moving from 128 to 160 to 192 vCores shows consistent performance gains, demonstrating that Hyperscale Premium-series continues to scale effectively at higher core counts. 160 vCores delivers a strong balance of single-query and concurrent performance.
192 vCores is ideal for customers prioritizing maximum throughput, high user concurrency, and large-scale transactional or analytical workloads.

- TPC-H Power Run (measures single-stream query performance) improves from 217 (128 vCores) to 357 (160 vCores) and remains high at 355 (192 vCores), a +64% increase from 128 to 192 vCores, indicating strong single-query execution and CPU efficiency at larger sizes.
- TPC-H Throughput Run (measures multi-stream concurrency) increases from 191 to 360 to 511 QPH, a +168% gain from 128 to 192 vCores, highlighting significant benefits for highly concurrent, multi-user workloads.

Performance case study (Zava Lending example)

- Zava Lending scaled Azure SQL Hyperscale online as demand increased, supporting more users and higher transaction volume with no downtime.
- Throughput scaled linearly as compute increased, moving cleanly from 32 to 64 to 128 to 192 vCores to match real workload growth.
- 192 vCores proved to be the optimal operating point, sustaining peak transaction load without over-provisioning.
- Azure SQL Hyperscale handled mixed OLTP and analytics workloads, including nightly ETL, without becoming a bottleneck.
- Every scale operation was performed online, with no service interruption and no application changes.

Preview scope and limitations

During preview, Premium-series 160 and 192 vCores are supported in a limited set of initial regions (Australia East, Canada Central, East US 2, South Central US, UK South, West Europe, North Europe, Southeast Asia, West US 2), with broader availability planned over time. During preview:

- Zone redundancy and the Azure SQL Database maintenance window are not supported for these sizes.
- Preview features are subject to supplemental preview terms, and performance characteristics may continue to improve through GA.

Customers are encouraged to use this preview to validate scalability, concurrency, memory utilization, query parallelism, and readiness for larger single-database deployments.

Next Steps

This public preview is part of our broader investment in scaling Azure SQL Hyperscale for the most demanding workloads. Feedback from the preview will help inform GA configuration limits, regional rollout priorities, and performance optimizations at extreme scale.
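If you want to try the new sizes from T-SQL, here is a minimal sketch; it assumes the premium-series Hyperscale service objective naming pattern HS_PRMS_<vcores>, uses a hypothetical database name, and must be run in a region where preview capacity is available:

-- Run while connected to the master database of the logical server.
ALTER DATABASE [your_database]
MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_PRMS_192');

-- Verify the new service objective once the scale operation completes.
SELECT DATABASEPROPERTYEX('your_database', 'ServiceObjective');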
Versionless keys for Transparent Data Encryption in Azure SQL Database (Generally Available)

With this release, you no longer need to reference a specific key version stored in Azure Key Vault or Managed HSM when configuring Transparent Data Encryption (TDE) with customer-managed keys. Instead, Azure SQL Database now supports a versionless key URI, automatically using the latest enabled version of your key. This means:

- Simpler key management - it is no longer necessary to specify the key version.
- Reduced operational overhead, by eliminating risks tied to outdated key versions.
- Full control remains with the customer.

This enhancement streamlines encryption at rest, especially for organizations operating at scale or enforcing strict security and compliance standards. Versionless keys for TDE are available today across Azure SQL Database with no additional cost.

Versioned vs. Versionless Key URIs

To highlight the difference, here are real examples:

Versioned key URI (old approach - explicit version required):
https://demotdeakv.vault.azure.net/keys/TDECMK/40acafb8a7034b20ba227905df090a1f

Versionless key URI (new approach):
https://demotdeakv.vault.azure.net/keys/TDECMK

A versionless key URI references only the key name. Azure SQL Database automatically uses the newest enabled version of the key.

Learn more

- Transparent Data Encryption - Azure SQL Database
- Azure SQL transparent data encryption with customer-managed key
- Transparent data encryption with customer-managed keys at the database level
Managed Identity Support for Azure SQL Database Import & Export services (preview)

Today we're announcing a public preview that lets the Azure SQL Database Import & Export services authenticate with a user-assigned managed identity. Azure SQL databases can now perform import and export operations with no passwords, storage keys, or SAS tokens. With this preview, customers can choose to use either a single user-assigned managed identity (UAMI) for both SQL and Storage permissions, or assign separate UAMIs - one for the Azure SQL logical server and another for the storage account - for full separation of duties.

At a glance:

- Run Import/Export using a user-assigned managed identity (UAMI).
- Use one identity for both SQL and Storage, or split them if you prefer tighter scoping.
- Works in the portal, REST, Azure CLI, and PowerShell.

Why this matters:

Managed identity support makes SQL migrations simpler and safer - no passwords, storage keys, or SAS tokens. By leveraging managed identity when integrating Import/Export into a pipeline, you streamline access management and enhance security: permissions are granted directly to the identity, reducing manual credential handling and the risk of exposing sensitive information. This keeps operations efficient and secure, without secrets embedded in scripts.

You've got two straightforward options:

- One UAMI for everything (simplest setup).
- Two UAMIs - one for SQL and one for Storage - recommended if you wish to maintain more strictly defined permissions.

Getting started:

1. Create a user-assigned managed identity (UAMI). Decide up front whether you want one identity end-to-end, or two identities (SQL vs. Storage) for separation of duties.
2. Attach the UAMI to the Azure SQL logical server. On the server's Identity blade, add the UAMI so the Import/Export job can run as that identity.
3. Set the server's Microsoft Entra ID admin to the UAMI. In Microsoft Entra ID > Set admin, select the UAMI. This is what lets the workflow authenticate to SQL without a password.
4. Grant Storage access. Use Storage Blob Data Reader for import and Storage Blob Data Contributor for export, assigned in Access control (IAM). If you can, scope the assignment to the container that holds the .bacpac file.
5. Pass resource IDs (not names) in your calls. In REST/CLI/PowerShell:
   - administratorLogin → UAMI resource ID used for SQL auth
   - storageKey → UAMI resource ID used for Storage auth
   - authenticationType / storageKeyType → ManagedIdentity
6. Run the import/export job. Kick it off from the portal, REST, Azure CLI, or PowerShell. From there, the service uses the identity you selected to reach both SQL and Storage.

Portal experience

In the Azure portal, you can choose Authentication type = Managed identity and select the user-assigned managed identity to use for the operation.

(Figure 1: Azure portal Import/Export experience with Managed identity authentication selected.)

Notes

- This preview supports user-assigned managed identities (UAMIs).
- For least privilege, scope Storage roles to the specific container used for the .bacpac file and use two UAMIs, one for SQL and one for Storage.
Sample 1: REST API - Export using one UAMI:

$exportBody = "{
`n `"storageKeyType`": `"ManagedIdentity`",
`n `"storageKey`": `"${managedIdentityServerResourceId}`",
`n `"storageUri`": `"${storageUri}`",
`n `"administratorLogin`": `"${managedIdentityServerResourceId}`",
`n `"authenticationType`": `"ManagedIdentity`"
`n}"

$export = Invoke-AzRestMethod -Method POST -Path "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Sql/servers/${serverName}/databases/${databaseName}/export?api-version=2024-05-01-preview" -Payload $exportBody

# Poll operation status
Invoke-AzRestMethod -Method GET $export.Headers.Location.AbsoluteUri

Sample 2: REST API - Import to a new server using one UAMI:

$serverName = "sql-mi-demo-target"
$databaseName = "sqldb-mi-demo-target"

# Same UAMI for SQL auth + Storage access
$importBody = "{
`n `"operationMode`": `"Import`",
`n `"administratorLogin`": `"${managedIdentityServerResourceId}`",
`n `"authenticationType`": `"ManagedIdentity`",
`n `"storageKeyType`": `"ManagedIdentity`",
`n `"storageKey`": `"${managedIdentityServerResourceId}`",
`n `"storageUri`": `"${storageUri}`",
`n `"databaseName`": `"${databaseName}`"
`n}"

$import = Invoke-AzRestMethod -Method POST -Path "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Sql/servers/${serverName}/databases/${databaseName}/import?api-version=2024-05-01-preview" -Payload $importBody

# Poll operation status
Invoke-AzRestMethod -Method GET $import.Headers.Location.AbsoluteUri

Sample 3: PowerShell - Export using two UAMIs:

# Server UAMI for SQL auth, Storage UAMI for storage access
New-AzSqlDatabaseExport -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -StorageKeyType ManagedIdentity -StorageKey $managedIdentityStorageResourceId -StorageUri $storageUri -AuthenticationType ManagedIdentity -AdministratorLogin $managedIdentityServerResourceId

Sample 4: PowerShell - Import to a new server using two UAMIs:

New-AzSqlDatabaseImport -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -DatabaseMaxSizeBytes $databaseSizeInBytes -StorageKeyType "ManagedIdentity" -StorageKey $managedIdentityStorageResourceId -StorageUri $storageUri -Edition $edition -ServiceObjectiveName $serviceObjectiveName -AdministratorLogin $managedIdentityServerResourceId -AuthenticationType ManagedIdentity

Sample 5: Azure CLI - Export using two UAMIs:

az sql db export -s $serverName -n $databaseName -g $resourceGroupName --auth-type ManagedIdentity -u $managedIdentityServerResourceId --storage-key $managedIdentityStorageResourceId --storage-key-type ManagedIdentity --storage-uri $storageUri

Sample 6: Azure CLI - Import to a new server using two UAMIs:

az sql db import -s $serverName -n $databaseName -g $resourceGroupName --auth-type ManagedIdentity -u $managedIdentityServerResourceId --storage-key $managedIdentityStorageResourceId --storage-key-type ManagedIdentity --storage-uri $storageUri

For more information and samples, see Tutorial: Use managed identity with Azure SQL import and export (preview).