Azure SQL is Deprecating the “No Minimum TLS” (MinTLS None) Configuration
As part of the retirement of TLS versions 1.0 and 1.1 and the enforcement of TLS 1.2 as the new default minimum, we will be removing the No Minimum TLS (MinTLS = “None” or "0") option and updating these configurations to TLS 1.2. No Minimum TLS allowed Azure SQL Database and Azure SQL Managed Instance resources to accept client connections using any TLS protocol version, as well as unencrypted connections. Over the past year, Azure has retired TLS 1.0 and 1.1 for all Azure databases due to known security vulnerabilities in these older protocols. As of August 31, 2025, creating servers configured with versions 1.0 and 1.1 was disallowed and migration to 1.2 began.

With legacy TLS versions deprecated, TLS 1.2 becomes the secure default minimum TLS version for new Azure SQL Database and Managed Instance configurations and for all client-server connections, rendering the MinTLS = None setting obsolete. As a result, the MinTLS = None configuration option will be deprecated for new servers, and existing servers configured with No Minimum TLS will be upgraded to 1.2.

What is changing?

After July 15, 2026, we will disallow the minimum TLS value "None" for the creation of new SQL Database and Managed Instance resources using PowerShell, Azure CLI, and any other REST-based interface. This configuration option has already been removed from the portal during the deprecation of TLS 1.0 and 1.1. Creating new Azure SQL Database and Managed Instance servers with MinTLS = None (previously the default) will no longer be a supported configuration. If the minimum TLS server parameter is left blank, it will default to minimum TLS version 1.2. Attempts to create an Azure SQL server with MinTLS = None will fail with an “Invalid operation” error, and downgrades to None will be disallowed.
Attempts to connect with TLS 1.0, TLS 1.1, or unencrypted connections will fail with error 47072/171 on the gateway.

| Effective date (retirement milestone) | MinTLS = None (0) | MinTLS left blank (defaults to supported minimum) |
| --- | --- | --- |
| Before 8/31/25 | Any + Unencrypted | Any + Unencrypted |
| After 8/31/25 | 1.2 + Unencrypted | 1.2 |
| After July 15, 2026 | Invalid operation error (for new server creates); downgrades disallowed; TLS error 47072/171 (for unencrypted connections) | 1.2 |

In summary, after July 15, 2026, Azure SQL Database and Azure SQL Managed Instance will require all client connections to use TLS 1.2 or higher, and unencrypted connections will be denied. The minimum TLS version setting will no longer accept the value "None" for new or existing servers, and servers currently configured with this value will be upgraded to explicitly enforce TLS 1.2.

Who is impacted?

For most Azure SQL customers, no action is required; most clients already use TLS 1.2 or higher. After July 15, 2026, if your Azure SQL Database or Managed Instance is still configured with No Minimum TLS and accepting 1.0, 1.1, or unencrypted connections, it will automatically update to TLS 1.2 to reflect the current minimum protocol enforcement in client-server connectivity. We recommend you verify your client applications - especially any older or third-party client drivers - to ensure they can communicate with TLS 1.2 or above. In rare cases, very old applications, such as an outdated JDBC driver or an older .NET Framework version, may need an update or need TLS 1.2 explicitly enabled.

Conclusion

This deprecation is part of Azure’s broader security strategy to ensure connections are protected by modern encryption standards. TLS 1.2 is more secure than older versions and is now the industry standard, required by regulations such as PCI DSS and HIPAA. This change also eliminates unencrypted connections, ensuring all database connections meet current security standards.
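Ahead of the enforcement date, you can look for at-risk sessions from inside the database itself. The following is a sketch using the `sys.dm_exec_connections` and `sys.dm_exec_sessions` DMVs; it only shows sessions connected at the moment you run it, so sample during representative workload hours.

```sql
/* Flag sessions that connected without encryption - these will be
   refused once TLS 1.2 is enforced. Run as a server administrator. */
SELECT c.session_id,
       c.encrypt_option,      -- 'FALSE' means the connection is unencrypted
       c.auth_scheme,
       c.client_net_address,
       s.program_name         -- helps identify the client application
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = c.session_id
WHERE c.encrypt_option = 'FALSE';
```

An empty result during peak hours is a good sign that enforcement will be a non-event for your workload; any rows returned point at the client applications to update first.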
If you’ve already migrated to TLS 1.2 (as most customers have), you will most likely not notice any change, except that the No Minimum TLS option will disappear from configurations.

Zero Trust for data: Make Microsoft Entra authentication for SQL your policy baseline
A policy-driven path from enabled to enforced.

Why this matters now

Security and compliance programs were once built on an assumption that internal networks were inherently safer. Cloud adoption, remote work, and supply-chain compromise have steadily invalidated that model. U.S. federal guidance has now formalized this shift: Executive Order 14028 calls for modernizing cybersecurity and accelerating Zero Trust adoption, and OMB Memorandum M-22-09 sets a federal Zero Trust strategy with specific objectives and timelines.

Meanwhile, attacker economics are changing. Automation and AI make reconnaissance, phishing, and credential abuse cheaper and faster. That concentrates risk on identity - the control plane that sits in front of systems, applications, and data. In Zero Trust, the question is no longer "is the network trusted," but "is this request verified, governed by policy, and least-privilege?"

Why database authentication is a first-order Zero Trust control

Databases are universally treated as crown-jewel infrastructure. Yet many data estates still rely on legacy patterns: password-based SQL authentication, long-lived secrets embedded in apps, and shared administrative accounts that persist because migration feels risky. This is exactly the kind of implicit trust Zero Trust architectures aim to remove. NIST SP 800-207 defines Zero Trust as eliminating implicit trust based solely on network location or ownership and focusing controls on protecting resources. In that model, every new database connection is not "plumbing" - it is an access decision about sensitive data. If the authentication mechanism sits outside the enterprise identity plane, governance becomes fragmented and policy enforcement becomes inconsistent.

What changes when SQL uses Microsoft Entra authentication

Microsoft Entra authentication enables users and applications to connect to SQL using enterprise identities instead of usernames and passwords.
Across Azure SQL and SQL Server enabled by Azure Arc, Entra-based authentication helps align database access with the same identity controls organizations use elsewhere.

The security and compliance outcomes that leaders care about

- Reduce password and secret risk: move away from static passwords and embedded credentials.
- Centralize governance: bring database access under the same identity policies, access reviews, and lifecycle controls used across the enterprise.
- Improve auditability: tie access to enterprise identities and create a consistent control surface for reporting.
- Enable policy enforcement at scale: move from "configured" controls to "enforced" controls through governance and tooling.

This is why Entra authentication is a high-ROI modernization step: it collapses multiple security and operational objectives into one effort (identity modernization) rather than a set of ongoing compensating programs (password rotation programs, bespoke exceptions, and perpetual secret hygiene projects).

Why AI makes this a high-priority decision

AI accelerates both reconnaissance and credential abuse, which concentrates risk on identity. As a result, policy makers increasingly treat phishing-resistant authentication and centralized identity enforcement as foundational, not optional.

A practical path: from enabled to enforced

Successful security programs define a clear end state, a measurable glide path, and an enforcement model. A pragmatic approach to modernizing SQL access typically includes:

- Discover active usage: identify which logins and users are actively connecting and which are no longer required.
- Establish Entra as the identity authority: enable Entra authentication on SQL logical servers, starting in mixed mode to reduce disruption.
- Recreate principals using Entra identities: replace SQL authentication logins/users with Entra users, groups, service principals, and managed identities.
- Modernize application connectivity: update drivers and connection patterns to use Entra-based authentication and managed identities.
- Validate, then enforce: confirm the absence of password-based SQL authentication traffic, then move to Entra-only where available and enforce via policy.

By adopting this sequencing, organizations can mitigate risks early and postpone enforcement until validation concludes. For a comprehensive migration strategy, refer to Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide.

Choosing which projects to fund - and which ones to stop

When making investment decisions, priority goes to database identity projects that demonstrate clear risk reduction and lasting security benefits:

- Microsoft Entra authentication as the default for new SQL workloads, with a defined migration path for existing workloads.
- Managed identities for application-to-database connectivity to eliminate stored secrets.
- Centralized governance for privileged database access using enterprise identity controls.

At the same time, organizations should explicitly de-prioritize investments that perpetuate password risk: password rotation projects that preserve SQL authentication, bespoke scripts maintaining shared logins, and exception processes that do not scale.

Security and scale are not competing goals

Security is often seen as something that slows down innovation, but database identity offers unique benefits. When enterprise identity is used for access control, onboarding new applications and users shifts from handing out credentials to managing policy. Compliance reporting also becomes uniform rather than bespoke, making it easier to grow consistently on a single control framework. Modern database authentication is not solely about mitigating risk: it establishes a scalable operational framework for secure data access.
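As an illustration of the "recreate principals" step in the path above, here is a minimal T-SQL sketch using the standard `CREATE USER ... FROM EXTERNAL PROVIDER` syntax for contained Entra users. The group and identity names are placeholders, not names from this article.

```sql
/* Hypothetical names: replace [DataPlatform-Readers] and [payments-api]
   with your own Entra group and managed identity display names. */

/* Entra group: members inherit access, so onboarding becomes group management */
CREATE USER [DataPlatform-Readers] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [DataPlatform-Readers];

/* Managed identity of an application: no password or stored secret involved */
CREATE USER [payments-api] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [payments-api];
ALTER ROLE db_datawriter ADD MEMBER [payments-api];
```

On the application side, recent Microsoft.Data.SqlClient versions can then connect with an `Authentication=Active Directory Managed Identity` connection-string option instead of a username and password, which is what removes the stored secret.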
A scorecard designed for leadership readiness

To elevate the conversation from implementation to governance, use outcome-based metrics:

- Coverage: percentage of SQL workloads with Entra authentication enabled.
- Enforcement: percentage operating in Entra-only mode after validation.
- Secret reduction: applications still relying on stored database passwords.
- Privilege hygiene: admin access governed through enterprise identity controls.
- Audit evidence: ability to produce identity-backed access reports on demand.

These map directly to Zero Trust maturity expectations and provide a defensible definition of "done."

Closing

Zero Trust is an operating posture, not a single control. For most organizations, the fastest way to make that posture measurable is to standardize database access on the same identity plane used everywhere else. If you are looking for a single investment that improves security, reduces audit friction, and supports responsible AI adoption, modernizing SQL access with Microsoft Entra authentication - and driving it from enabled to enforced - is one of the most durable choices you can make.

References

- US Government sets forth Zero Trust architecture strategy and requirements (Microsoft Security Blog)
- Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide (Microsoft Tech Community)
- OMB Memorandum M-22-09: Federal Zero Trust Strategy (White House)
- NIST SP 800-207: Zero Trust Architecture
- CISA: Zero Trust
- Enforce Microsoft Entra-only authentication for Azure SQL Database and Azure SQL Managed Instance

Stop defragmenting and start living: introducing auto index compaction
Executive summary

Automatic index compaction is a new built-in feature in the MSSQL database engine that compacts indexes in the background with minimal overhead. Now you can:

- Stop using scheduled index maintenance jobs.
- Reduce storage space consumption and save costs.
- Improve performance by reducing CPU, memory, and disk I/O consumption.

Today, we announce a public preview of automatic index compaction in Azure SQL Database, Azure SQL Managed Instance with the always-up-to-date update policy, and SQL database in Fabric.

Index maintenance without maintenance jobs

Enable automatic index compaction for a database with a single T-SQL command:

```sql
ALTER DATABASE [database-name] SET AUTOMATIC_INDEX_COMPACTION = ON;
```

Once enabled, you no longer need to set up, maintain, and monitor resource-intensive index maintenance jobs, a time-consuming operational task for many DBA teams today. As the data in the database changes, a background process consolidates rows from partially filled data pages into a smaller number of full pages, and then removes the empty pages. Index bloat is eliminated: the same amount of data now uses a minimal amount of storage space. Resource consumption is reduced because the database engine needs fewer disk I/Os and less CPU and memory to process the same amount of data.

By design, the background compaction process acts on recently modified pages only. This means that its own resource consumption is much lower than that of the traditional index maintenance operations (index rebuild and reorganize), which process all pages in an index or its partition. For a detailed description of how the feature works, a comparison between automatic index compaction and the traditional index maintenance operations, and the ways to monitor the compaction process, see automatic index compaction in documentation.

Compaction in action

To see the effects of automatic index compaction, we wrote a stored procedure that simulates a write-intensive OLTP workload.
Each execution of the procedure inserts, updates, deletes, or selects a random number of rows, from 1 to 100, in a 50,000-row table with a clustered index. We executed this stored procedure using the popular SQLQueryStress tool, with 30 threads and 400 iterations on each thread. We measured the page density, the number of pages in the leaf level of the table’s clustered index, and the number of logical reads (pages) used by a test query reading 1,000 rows, at three points in time:

1. After initially inserting the data and before running the workload.
2. Once the workload stopped running.
3. Several minutes later, once the background process completed index compaction.

Here are the results:

| | Before workload | After workload | After compaction |
| --- | --- | --- | --- |
| Logical reads | 25 🟢 | 1,610 🔴⬆️ | 35 🟢⬇️ |
| Page density | 99.51% 🟢 | 52.71% 🔴⬇️ | 96.11% 🟢⬆️ |
| Pages | 962 🟢 | 4,394 🔴⬆️ | 1,065 🟢⬇️ |

Before the workload starts, page density is high because nearly all pages are full. The number of logical reads required by the test query is minimal, and so is its resource consumption. The workload leaves a lot of empty space on pages and increases the page count because of row updates and deletions, and because of page splits. As a result, immediately after workload completion, the number of logical reads required for the same test query increases more than 60 times, which translates into higher CPU and memory usage.

But then, within a few minutes, automatic index compaction removes the empty space from the index, increasing page density back to nearly 100%, reducing logical reads by about 98%, and getting the index very close to its initial compact state. Fewer logical reads mean that the query is faster and uses less CPU. All of this without any user action. With continuous workloads, index compaction is continuous as well, maintaining higher average page density and reducing the workload’s resource usage over time. The T-SQL code we used in this demo is available in the Appendix.
Conclusion

Automatic index compaction delegates a routine database maintenance operation to the database engine itself, letting administrators and engineers focus on more important work without worrying about index maintenance. The public preview is a great opportunity to let us know how this new feature works for you. Please share your feedback and suggestions for any improvements we can make. To let us know your thoughts, you can comment on this blog post, leave feedback at https://aka.ms/sqlfeedback, or email us at sqlaicpreview@microsoft.com.

Appendix

Here is the T-SQL code we used to demonstrate automatic index compaction. The types of executed statements and the number of affected rows are randomized to better represent an OLTP workload. While the results demonstrate the effectiveness of automatic index compaction, exact measurements may vary from one execution to the next.

```sql
/* Enable automatic index compaction */
ALTER DATABASE CURRENT SET AUTOMATIC_INDEX_COMPACTION = ON;

/* Reset to the initial state */
DROP TABLE IF EXISTS dbo.t;
DROP SEQUENCE IF EXISTS dbo.s_id;
DROP PROCEDURE IF EXISTS dbo.churn;

/* Create a sequence to generate clustered index keys */
CREATE SEQUENCE dbo.s_id AS int START WITH 1 INCREMENT BY 1;

/* Create a test table */
CREATE TABLE dbo.t
(
    id int NOT NULL CONSTRAINT df_t_id DEFAULT (NEXT VALUE FOR dbo.s_id),
    dt datetime2 NOT NULL CONSTRAINT df_t_dt DEFAULT (SYSDATETIME()),
    u uniqueidentifier NOT NULL CONSTRAINT df_t_uid DEFAULT (NEWID()),
    s nvarchar(100) NOT NULL CONSTRAINT df_t_s DEFAULT (REPLICATE('c', 1 + 100 * RAND())),
    CONSTRAINT pk_t PRIMARY KEY (id)
);

/* Insert 50,000 rows */
INSERT INTO dbo.t (s)
SELECT REPLICATE('c', 50) AS s
FROM GENERATE_SERIES(1, 50000);
GO

/* Create a stored procedure that simulates a write-intensive OLTP workload */
CREATE OR ALTER PROCEDURE dbo.churn
AS
SET NOCOUNT, XACT_ABORT ON;

DECLARE @r float = RAND(CAST(CAST(NEWID() AS varbinary(4)) AS int));

/* Get the type of statement to execute */
DECLARE @StatementType char(6) = CASE
                                     WHEN @r <= 0.15 THEN 'insert'
                                     WHEN @r <= 0.30 THEN 'delete'
                                     WHEN @r <= 0.65 THEN 'update'
                                     WHEN @r <= 1 THEN 'select'
                                     ELSE NULL
                                 END;

/* Get the maximum key value for the clustered index */
DECLARE @MaxKey int = (
                      SELECT CAST(current_value AS int)
                      FROM sys.sequences
                      WHERE name = 's_id' AND SCHEMA_NAME(schema_id) = 'dbo'
                      );

/* Get a random key value within the key range */
DECLARE @StartKey int = 1 + RAND() * @MaxKey;

/* Get a random number of rows, between 1 and 100, to modify or read */
DECLARE @RowCount int = 1 + RAND() * 99;

/* Execute a statement */
IF @StatementType = 'insert'
    INSERT INTO dbo.t (id)
    SELECT NEXT VALUE FOR dbo.s_id
    FROM GENERATE_SERIES(1, @RowCount);

IF @StatementType = 'delete'
    DELETE TOP (@RowCount) dbo.t
    WHERE id >= @StartKey;

IF @StatementType = 'update'
    UPDATE TOP (@RowCount) dbo.t
    SET dt = DEFAULT, u = DEFAULT, s = DEFAULT
    WHERE id >= @StartKey;

IF @StatementType = 'select'
    SELECT TOP (@RowCount) id, dt, u, s
    FROM dbo.t
    WHERE id >= @StartKey;
GO

/* The remainder of this script is executed three times:
   1. Before running the workload using SQLQueryStress.
   2. Immediately after the workload stops running.
   3. Once automatic index compaction completes several minutes later.
*/

/* Monitor page density and the number of pages and records
   in the leaf level of the clustered index */
SELECT avg_page_space_used_in_percent AS page_density,
       page_count,
       record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.t'), 1, 1, 'DETAILED')
WHERE index_level = 0;

/* Run a test query and measure its logical reads */
DROP TABLE IF EXISTS #t;

SET STATISTICS IO ON;

SELECT TOP (1000) id, dt, u, s
INTO #t
FROM dbo.t
WHERE id >= 10000;

SET STATISTICS IO OFF;
```

Stream data in near real time from SQL MI to Azure Event Hubs - Public preview
How do I modernize an existing application without rewriting it?

Many business-critical applications still rely on architectures where the database is the most dependable integration point. These applications may have been built years ago, long before event-driven patterns became mainstream. Even after moving such workloads to Azure, teams are often left with the same question: how do we get data changes out of the database quickly, reliably, and without adding more custom plumbing?

This is where Change Event Streaming (CES) comes in. We are happy to announce that Change Event Streaming for Azure SQL Managed Instance is now in Public Preview. CES enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time.

For workloads running on Azure SQL Managed Instance, this matters especially because many of them are existing line-of-business applications, modernized from on-premises SQL Server environments into Azure. Those applications are often still central to the business, but they were not originally designed to publish events to downstream systems. CES helps bridge that gap without requiring you to redesign the application itself.

What is Change Event Streaming?

Change Event Streaming is a capability that captures committed row changes from your database and publishes them to Azure Event Hubs or Fabric Eventstreams. Instead of relying on periodic polling, custom ETL jobs, or additional connectors, CES lets SQL push changes out as they happen. This opens the door to near-real-time integrations while keeping the architecture simpler and closer to the source of truth.

Why CES matters for Azure SQL Managed Instance

Incremental modernization for existing applications

Azure SQL Managed Instance is a database of choice where application compatibility matters and where teams want to modernize from on-premises SQL Server into Azure with minimal disruption.
In these environments, the database often becomes the most practical place to tap into business events, especially when the application itself was not designed to emit events or integrate in real time. With CES, you do not need to retrofit an older application to emit events itself. You can publish changes at the data layer and let downstream services react from there. This makes CES a practical tool for modernization programs that need to move step by step rather than through a full rewrite.

Lower operational complexity

Before CES, teams typically had to assemble integration flows out of polling processes, ETL pipelines, custom code, or third-party connectors. Those approaches can work, but they usually bring more moving parts, more credentials to manage, more monitoring overhead, and more latency tuning. With CES, SQL Managed Instance streams changes directly to the configured destination. This reduces architectural sprawl and helps teams focus on consuming the events instead of maintaining the mechanics of moving them.

Better decoupling across the estate

Once changes are published to Azure Event Hubs or Fabric Eventstreams, multiple downstream systems can consume them independently. That is useful when one operational workload needs to feed analytics platforms, integration services, caches, search indexes, or new application components at the same time. Instead of teaching an existing application to integrate with every destination directly, you can stream once from the database and let the message bus handle fan-out.

Typical scenarios

Breaking down monoliths

Many modernization efforts start with a large existing application and a database that serves many business functions. CES can help you carve out one capability at a time. A new component (microservice) can subscribe to events from selected tables, build its own behavior around those changes, and be validated incrementally before broader cutover decisions are made.
Real-time integration for line-of-business systems

If an operational system running on SQL Managed Instance needs to notify other platforms when data changes, CES provides a direct path to doing that. This can help with partner integrations, internal workflows, or downstream business processes that should react quickly when transactions are committed.

Real-time analytics

Operational data often becomes more valuable when it can be analyzed quickly. CES can stream data changes into Fabric Eventstreams or Azure Event Hubs, from where they can be consumed by analytics and stream processing engines for timely insights or actions.

Cache and index refresh

Applications often depend on caches or search indexes that need to stay aligned with transactional data. CES can provide a cleaner alternative to custom synchronization logic by publishing changes as they occur.

How it works

CES uses transaction log-based capture to stream changes with minimal impact on the publishing workload. Events are emitted in a structured JSON format that follows the CloudEvents standard and includes details such as the operation type, primary key, and before/after values. Azure SQL Managed Instance can publish these events to Azure Event Hubs or Fabric Eventstreams using AMQP or Kafka protocols, depending on how you connect your downstream consumers.

Conclusion

Change Event Streaming for Azure SQL Managed Instance is an important step for customers who want to make existing applications more connected, decomposed into smaller pieces, or easier to integrate with modern data and application platforms. For teams modernizing long-lived SQL Server workloads in Azure, CES offers a practical path: keep the application stable, tap into the data layer, and start enabling near-real-time scenarios without building another custom integration stack.
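To make the event shape described under "How it works" concrete, here is an illustrative CloudEvents-style payload for an update. The envelope attributes (specversion, id, source, type, time, data) come from the CloudEvents 1.0 specification; the `source` and `type` values and the field names inside `data` (operation, table, key, before/after images) are assumptions for illustration only - consult the CES feature documentation for the exact schema your consumers will receive.

```json
{
  "specversion": "1.0",
  "id": "00001111-aaaa-bbbb-cccc-000011112222",
  "source": "/server/contoso-mi/database/SalesDb",
  "type": "example.sql.rowchange.update",
  "time": "2026-01-15T10:30:00Z",
  "data": {
    "operation": "update",
    "table": "dbo.Orders",
    "key": { "OrderId": 1042 },
    "before": { "Status": "Pending" },
    "after": { "Status": "Shipped" }
  }
}
```

Because each event carries the key and the change images, a downstream consumer can update a cache, index, or analytics table idempotently without querying the source database.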
As CES enters Public Preview for Azure SQL Managed Instance, we encourage you to explore where it can simplify your architecture and accelerate modernization efforts.

Availability notes

Besides SQL Server 2025 and Azure SQL Database, where CES is already in Public Preview, CES is available as of today in Public Preview for Azure SQL Managed Instance. Just make sure that your SQL MI update policy is set to "Always up to date" or "SQL Server 2025". This preview brings the same core CES capability to SQL Managed Instance workloads, helping customers apply event-driven patterns to existing operational systems without adding another custom integration layer.

For feature details, configuration guidance, and frequently asked questions, see:

- Feature Overview
- CES: Frequently Asked Questions

We welcome your feedback through Azure Feedback channels or support channels. The CES team can also be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources

- Try Azure SQL Managed Instance for free for one year
- What's new in Azure SQL Managed Instance?

Generally Available: Azure SQL Managed Instance Next-gen General Purpose
Overview

Next-gen General Purpose is the evolution of the General Purpose service tier that brings significantly improved performance and scalability to power up your existing Azure SQL Managed Instance fleet and helps you bring more mission-critical SQL workloads to Azure. We are happy to announce that Next-gen General Purpose is now Generally Available (GA), delivering even more scalability, flexibility, and value for organizations looking to modernize their data platform in a cost-effective way. The new #SQLMINextGen General Purpose tier delivers a built-in performance upgrade available to all customers at no extra cost. If you are an existing SQL MI General Purpose user, you get faster I/O, higher database density, and expanded storage - automatically.

Summary Table: Key Improvements

| Capability | Current GP | Next-gen GP | Improvement |
| --- | --- | --- | --- |
| Average I/O latency | 5-10 ms | 3-4 ms | 2x lower |
| Max data IOPS | 30-50k | 80k | 60% better |
| Max storage | 16 TB | 32 TB | 2x better |
| Max databases/instance | 100 | 500 | 5x better |
| Max vCores | 80 | 128 | 40% better |

But that’s just the beginning. The new configuration sliders for additional IOPS and memory provide enhanced flexibility to tailor performance to your requirements. Whether you require more resources for your application or seek to optimize resource utilization, you can adjust your instance settings to maximize efficiency and output. This release isn’t just about speed: it’s about giving you improved performance where it matters, and mechanisms to go further when you need them.

Customer story: a recent customer case highlights how Hexure reduced processing time by up to 97.2% using Azure SQL Managed Instance on Next-gen General Purpose.

What’s new in Next-gen General Purpose (Nov 2025)?

1. Improved baseline performance with the latest storage tech

Azure SQL Managed Instance is built on Intel® Xeon® processors, ensuring a strong foundation for enterprise workloads.
With the next-generation General Purpose tier, we’ve paired Intel’s proven compute power with advanced storage technology to deliver faster performance, greater scalability, and enhanced flexibility - helping you run more efficiently and adapt to growing business needs. The SQL Managed Instance General Purpose tier is designed with full separation of the compute and storage layers. The classic GP version uses premium page blobs for the storage layer, while the next-generation GP tier has transitioned to Azure’s latest storage solution, Elastic SAN. Azure Elastic SAN is a cloud-native storage service that offers high performance and excellent scalability, making it a perfect fit for the storage layer of a data-intensive PaaS service like Azure SQL Managed Instance.

Simplified Performance Management

With ESAN as the storage layer, performance quotas for the Next-gen General Purpose tier are no longer enforced per database file. The entire performance quota for the instance is shared across all database files, making performance management much easier (one fewer thing to worry about). This adjustment brings the General Purpose tier into alignment with the Business Critical service tier experience.

2. Resource flexibility and cost optimization

The GA of Next-gen General Purpose comes together with the GA of a transformative memory slider, enabling up to 49 memory configurations per instance. This lets you right-size workloads for both performance and cost. Memory is billed only for the additional amount beyond the default allocation. Users can independently configure vCores, memory, and IOPS for optimal efficiency. To learn more about the new option for configuring additional memory, check the article: Unlocking More Power with Flexible Memory in Azure SQL Managed Instance.

3. Enhanced resource elasticity through decoupled compute and storage scaling operations

With Next-gen GP, both storage and IOPS can be resized independently of the compute infrastructure, and these changes now typically finish within five minutes - a process known as an in-place upgrade. There are three distinct types of storage upgrade experiences, depending on the kind of storage upgrade performed and whether failover occurs:

- In-place update: same storage (no data copy), same compute (no failover)
- Storage re-attach: same storage (no data copy), changed compute (with failover)
- Data copy: changed storage (data copy), changed compute (with failover)

The following matrix describes the user experience with management operations:

| Operation | Data copying | Failover | Storage upgrade type |
| --- | --- | --- | --- |
| IOPS scaling | No | No | In-place |
| Storage scaling* | No* | No | In-place |
| vCores scaling | No | Yes** | Re-attach |
| Memory scaling | No | Yes** | Re-attach |
| Maintenance window change | No | Yes** | Re-attach |
| Hardware change | No | Yes** | Re-attach |
| Update policy change | Yes | Yes | Data copy |

\* A scale-down of more than 5.5 TB requires seeding (data copy).

\*\* For update operations that do not require seeding and are not completed in place (for example, scaling vCores, scaling memory, changing hardware or the maintenance window), failover duration of databases on the Next-gen General Purpose service tier scales with the number of databases, up to 10 minutes. While the instance becomes available after 2 minutes, some databases might become available after a delay. Failover duration is measured from the moment the first database goes offline until the moment the last database comes online.

Furthermore, resizing vCores and memory is now 50% faster following the introduction of the Faster scaling operations release.
Whether you have end-of-month peak periods or usage that rises and falls between weekdays and the weekend, fast and reliable management operations let you run multiple configurations on your instance and respond to peak usage periods in a cost-effective way.

4. Reserved instance (RI) pricing

With Azure Reservations, you can commit to using Azure SQL resources for either one or three years, which lets you benefit from substantial discounts on compute costs. When purchasing a reservation, you'll need to choose the Azure region, deployment type, performance tier, and reservation term. Reservations are only available for products that have reached general availability (GA), and with this update, next-generation GP instances now qualify as well. Even better, classic and next-gen GP share the same SKU, just with different remote storage types. This means any reservations you've purchased automatically apply to Next-gen GP, whether you're upgrading an existing classic GP instance or creating a new one.

What’s Next?

The product group has received considerable positive feedback and welcomes continued input. The initial release does not include zonal redundancy; however, efforts are underway to address this limitation. Next-generation General Purpose (GP) represents the future of the service tier, and all existing classic GP instances will be upgraded accordingly. Once upgrade plans are finalized, we will communicate them in a timely manner.

Conclusion

Now in GA, Next-gen General Purpose sets a new standard for cloud database performance and flexibility. Whether you’re modernizing legacy applications, consolidating workloads, or building for the future, these enhancements put more power, scalability, and control in your hands - without breaking the bank. If you haven’t already, try out the Next-gen General Purpose capabilities for free with the Azure SQL Managed Instance free offer.
If you operate SQL Managed Instance on the General Purpose tier, we recommend upgrading existing instances to take advantage of the next-gen upgrade - for free.

Welcome to #SQLMINextGen. Boosted by default. Tuned by you.

## Learn more

- What is Azure SQL Managed Instance
- Try Azure SQL Managed Instance for free
- Next-gen General Purpose - official documentation
- Analyzing the Economic Benefits of Microsoft Azure SQL Managed Instance
- How 3 customers are driving change with migration to Azure SQL
- Accelerate SQL Server Migration to Azure with Azure Arc
# GA of update policy SQL Server 2025 for Azure SQL Managed Instance

We're happy to announce that the update policy SQL Server 2025 for Azure SQL Managed Instance is now generally available (GA). The SQL Server 2025 update policy contains all the latest SQL engine innovation while retaining database portability to the most recent major release of SQL Server.

The update policy is an instance configuration option that provides flexibility: it allows you to choose between instant access to the latest SQL engine features and a fixed SQL engine feature set corresponding to the 2022 and 2025 major releases of SQL Server. Regardless of the update policy chosen, you continue to benefit from Azure SQL platform innovation. New features and capabilities not related to the SQL engine – everything that makes Azure SQL Managed Instance a true PaaS service – are continuously delivered to your Azure SQL Managed Instance resources.

## What's new in the SQL Server 2025 update policy

In short, instances with the SQL Server 2025 update policy benefit from all the SQL engine features that were gradually added to the Always-up-to-date policy over the past few years and are not available in the SQL Server 2022 update policy. Here are a few of the most notable features, with the complete list available in the update policy documentation:

- Optimized locking
- Mirroring in Fabric
- Regular expression functions
- Vector data type and functions
- JSON data type and aggregate functions
- Invoking HTTP REST endpoints
- Manual (copy-only) backup to immutable Azure Blob Storage
- Change Event Streaming (private preview)

## An update policy for each modernization strategy

Always-up-to-date is a "perpetual" update policy. It has no end of lifetime and brings new SQL engine features to instances as soon as they are available in Azure. It enables you to always be at the forefront - to quickly adopt new yet production-ready SQL engine features, benefit from them in everyday operations, and keep a competitive edge without waiting for the next major release of SQL Server.
In contrast, the SQL Server 2025 and SQL Server 2022 update policies contain fixed sets of SQL engine features corresponding to the respective releases of SQL Server. They're optimized to fulfill regulatory compliance, contractual, or other requirements for database/workload portability from managed instance to SQL Server. Over time, they receive security patches, fixes, and incremental functional improvements in the form of Cumulative Updates, but not new SQL engine features. They also have a limited lifetime, aligned with the mainstream support period of the corresponding SQL Server releases. As the end of mainstream support for an update policy approaches, you should upgrade instances to a newer policy. Instances will be automatically upgraded to the next more recent update policy at the end of mainstream support of their existing update policy.

## Best practices with the update policy feature

- Plan for the end of lifetime of the SQL Server 2022 update policy if you're using it today, and upgrade to a newer policy on your terms before the automatic upgrade kicks in. Choose between the Always-up-to-date and SQL Server 2025 update policies.
- Make sure to add the update policy configuration to your deployment templates and scripts, so that you don't rely on service defaults that may change in the future.
- Be aware that using some of the newly introduced features may require changing the database compatibility level. Consult the feature documentation for details.

## What's coming next

SQL Server 2025 will become the default update policy in the Azure portal during March 2026. Future versions of the REST API, PowerShell, and the CLI will also have the default value changed to SQL Server 2025 for the "database format" parameter, which corresponds to the instance's update policy.

The SQL Server 2022 policy will reach its end of lifetime on January 11, 2028, when mainstream support for SQL Server 2022 ends. Plan ahead and change the update policy of your instances before that date.
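Following the best practice above, the update policy can be pinned explicitly in a deployment template rather than left to service defaults. This is a minimal Bicep sketch under stated assumptions: the `databaseFormat` property name, the `SQLServer2025` value, and the API version should be verified against the current `Microsoft.Sql/managedInstances` schema.

```bicep
// Sketch only: pin the update policy explicitly instead of relying on defaults.
// Property name, allowed values, and API version are assumptions to verify.
resource mi 'Microsoft.Sql/managedInstances@2024-05-01-preview' = {
  name: 'my-managed-instance'
  location: 'westeurope'
  properties: {
    databaseFormat: 'SQLServer2025' // or 'AlwaysUpToDate' / 'SQLServer2022'
    // ...remaining required properties (administratorLogin, subnetId, etc.)
  }
}
```

Pinning the value in source control means a future change of the service default cannot silently alter the feature set of newly deployed instances.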
## Summary

The SQL Server 2025 update policy for Azure SQL Managed Instance is now generally available. It brings the same set of SQL engine features that exist in the new SQL Server 2025. Consider it if you have regulatory compliance, contractual, or other reasons for database/workload portability from Azure SQL Managed Instance to SQL Server 2025. Otherwise, use the Always-up-to-date policy, which always provides the latest features and benefits available to Azure SQL Managed Instance. If your instances are currently configured with the SQL Server 2022 update policy, update them to a newer policy before the end of mainstream support.

For more details, visit the update policy documentation. To stay up to date with the latest feature additions to Azure SQL Managed Instance, subscribe to the Azure SQL video channel, subscribe to the Azure SQL Blog feed, or bookmark the What's new in Azure SQL Managed Instance article, which is regularly updated.
# Why Developers and DBAs love SQL’s Dynamic Data Masking (Series-Part 1)

Dynamic Data Masking (DDM) is one of those SQL features (available in SQL Server, Azure SQL DB, Azure SQL MI, and SQL database in Microsoft Fabric) that both developers and DBAs can rally behind. Why? Because it delivers a simple, built-in way to protect sensitive data - like phone numbers, emails, or IDs - without rewriting application logic or duplicating security rules across layers. With just a single line of T-SQL, you can configure masking directly at the column level, ensuring that non-privileged users see only obfuscated values while privileged users retain full access. This not only streamlines development but also supports compliance with data privacy regulations such as GDPR and HIPAA by minimizing exposure to personally identifiable information (PII).

In this first post of our DDM series, we'll walk through a real-world scenario using the default masking function to show how easy it is to implement and how much development effort it can save.

## Scenario: Hiding customer phone numbers from support queries

Imagine you have a support application where agents can look up customer profiles. They need to know whether a phone number exists for a customer but, for privacy, shouldn't see the actual digits. In a traditional approach, a developer might implement custom logic in the app (or a SQL view) to replace phone numbers with placeholders like "XXXX" for non-privileged users. This adds complexity and duplicates logic across the app.

With DDM's default masking, the database handles this automatically. By applying a mask to the phone number column, any query by a non-privileged user returns a generic masked value (e.g. "XXXX") instead of the real number. The support agent gets the information they need (that a number is on file) without seeing the actual phone number, and the developer writes zero masking code in the app. This not only simplifies the application codebase but also ensures consistent data protection across all query access paths.
As Microsoft's documentation puts it, DDM lets you control how much sensitive data to reveal "with minimal effect on the application layer" - exactly what our scenario achieves.

## Using the 'default' mask in T-SQL

The 'default' masking function is the simplest mask: it fully replaces the actual value with a fixed default based on the data type. For text data, that default is XXXX. Let's apply this to our phone number example. The following T-SQL snippet works in Azure SQL Database, Azure SQL MI, and SQL Server:

```sql
-- Step 1: Create the table with a default mask on the Phone column
CREATE TABLE SupportCustomers (
    CustomerID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Phone NVARCHAR(15) MASKED WITH (FUNCTION = 'default()') -- Apply default masking
);
GO

-- Step 2: Insert sample data
INSERT INTO SupportCustomers (CustomerID, Name, Phone)
VALUES (1, 'Alice Johnson', '222-555-1234');
GO

-- Step 3: Create a non-privileged user (no login for simplicity)
CREATE USER SupportAgent WITHOUT LOGIN;
GO

-- Step 4: Grant SELECT permission on the table to the user
GRANT SELECT ON SupportCustomers TO SupportAgent;
GO

-- Step 5: Execute a SELECT as the non-privileged user
EXECUTE AS USER = 'SupportAgent';
SELECT Name, Phone FROM SupportCustomers WHERE CustomerID = 1;
REVERT; -- Return to the original security context
```

Alternatively, you can use the Azure portal to configure masking, as shown in the following screenshot.

**Expected result:** The query above returns Alice's name and a masked phone number. Instead of seeing 222-555-1234, the Phone column shows XXXX. Alice's actual number remains safely stored in the database, but it's dynamically obscured for the support agent's query. Meanwhile, privileged users - such as an administrator, a member of db_owner (which has CONTROL permission on the database), or a user with the UNMASK permission - see the real phone number when running the same query.
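To let a specific principal see real values without making them a db_owner, the UNMASK permission can be granted (and revoked) explicitly, and the mask itself can be dropped from the column when no longer needed. A short sketch continuing the scenario above:

```sql
-- Allow SupportAgent to see unmasked data, then take it away again
GRANT UNMASK TO SupportAgent;

EXECUTE AS USER = 'SupportAgent';
SELECT Name, Phone FROM SupportCustomers WHERE CustomerID = 1; -- Phone is now unmasked
REVERT;

REVOKE UNMASK FROM SupportAgent;

-- Remove the mask from the column entirely if it is no longer required
ALTER TABLE SupportCustomers
    ALTER COLUMN Phone DROP MASKED;
```

Because UNMASK is a database permission, it can be granted to a role rather than individual users, keeping the "who sees real data" decision in one place.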
## How this helps developers

By pushing the masking logic down to the database, developers and DBAs avoid writing repetitive masking code in every app or report that touches this data. In our scenario, without DDM you might implement a check in the application like: if user_role == "Support", then show "XXXX" for the phone number, else show the full phone. With DDM, such conditional code isn't needed - the database takes care of it. This means:

- Less application code to write and maintain for masking.
- Consistent masking everywhere (whether data is accessed via app, report, or ad-hoc query).
- Quick changes to masking rules in one place if requirements change, without hunting through application code.

From a security standpoint, DDM reduces the risk of accidental data exposure and helps in compliance scenarios where personal data must be protected in lower environments or from certain roles, while drastically reducing developer effort.

In the next posts of this series, we'll explore other masking functions (like email, partial, and random) with different scenarios. By the end, you'll see how each built-in mask can be applied to make data security and compliance more developer-friendly!

## Reference links

- Dynamic Data Masking - SQL Server | Microsoft Learn
- Dynamic Data Masking - Azure SQL Database & Azure SQL Managed Instance & Azure Synapse Analytics | Microsoft Learn
# Improving Azure SQL Database reliability with accelerated database recovery in tempdb

We are pleased to announce that in Azure SQL Database, accelerated database recovery is now enabled in the tempdb database, bringing instant transaction rollback and aggressive log truncation for transactions in tempdb. The same improvement is coming to SQL Server and Azure SQL Managed Instance.
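You can verify whether accelerated database recovery is active for tempdb (or any database) by querying the `is_accelerated_database_recovery_on` column of `sys.databases` - a quick sketch:

```sql
-- Check the ADR flag for tempdb (1 = enabled)
SELECT name, is_accelerated_database_recovery_on
FROM sys.databases
WHERE name = N'tempdb';
```

On an Azure SQL Database where this improvement has rolled out, the flag for tempdb should report 1.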
# Azure Data Studio Retirement

We're announcing the upcoming retirement of Azure Data Studio (ADS) on February 6, 2025, as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until February 28, 2026, giving developers ample time to transition. This decision aligns with our commitment to simplifying SQL development by consolidating efforts on Visual Studio Code (VS Code) with the MSSQL extension, a powerful and versatile tool designed for modern developers.

## Why retire Azure Data Studio?

Azure Data Studio has been an essential tool for SQL developers, but evolving developer needs and the rise of more versatile platforms like VS Code have made it the right time to transition. Here's why:

- **Focus on innovation:** VS Code, widely adopted across the developer community, provides a robust platform for delivering advanced features like cutting-edge schema management and improved query execution.
- **Streamlined tools:** Consolidating SQL development on VS Code eliminates duplication, reduces engineering maintenance overhead, and accelerates feature delivery, ensuring developers have access to the latest innovations.

## Why transition to Visual Studio Code?

VS Code is the #1 developer tool, trusted by millions worldwide. It is a modern, versatile platform that meets the evolving demands of SQL and application developers. By transitioning, you gain access to cutting-edge tools, seamless workflows, and an expansive ecosystem designed to enhance productivity and innovation. We're committed to meeting developers where they are, providing a modern SQL development experience within VS Code. Here's how:

- **Modern development environment:** VS Code is a lightweight, extensible, community-supported code editor trusted by millions of developers. It provides regular updates, an active extension marketplace, and a seamless cross-platform experience for Windows, macOS, and Linux.
- **Comprehensive SQL features:** With the MSSQL extension in VS Code, you can:
  - Execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV.
  - Manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints.
  - Connect to SQL Server, Azure SQL (all offerings), and SQL database in Fabric using an improved Connection Dialog.
  - Streamline development with scripting, object modifications, and a unified SQL experience.
  - Optimize performance with an enhanced Query Results Pane and execution plans.
  - Integrate with DevOps and CI/CD pipelines using SQL Database Projects.

  Stay tuned for upcoming features - we're continuously building new experiences based on feedback from the community. Make sure to follow the MSSQL repository on GitHub to stay updated and contribute to the project!

- **Streamlined workflow:** VS Code supports cloud-native development, real-time collaboration, and thousands of extensions to enhance your workflows.

## Transitioning to Visual Studio Code: what you need to know

We understand that transitioning tools can raise concerns, but moving from Azure Data Studio (ADS) to Visual Studio Code (VS Code) with the MSSQL extension is designed to be straightforward and hassle-free. Here's why you can feel confident about this transition:

- **No loss of functionality:** If you use ADS to connect to Azure SQL databases, SQL Server, or SQL database in Fabric, you'll find that the MSSQL extension supports these scenarios seamlessly. Your database projects, queries, and scripts created in ADS are fully compatible with VS Code and can be opened without additional migration steps.
- **Familiar features, enhanced experience:** VS Code provides advanced tools like improved query execution, modern schema management, and CI/CD integration. Additionally, alternative tools and extensions are available to replace ADS capabilities like SQL Server Agent and Schema Compare.
- **Cross-platform and extensible:** Like ADS, VS Code runs on Windows, macOS, and Linux, ensuring a consistent experience across operating systems. Its extensibility allows you to adapt it to your workflow with thousands of extensions.

If you have further questions or need detailed guidance, visit the ADS Retirement page. The page includes step-by-step instructions, recommended alternatives, and additional resources.

## Continued support

With the Azure Data Studio retirement, we're committed to supporting you during this transition:

- **Documentation:** Find detailed guides, tutorials, and FAQs on the ADS Retirement page.
- **Community support:** Engage with the active Visual Studio Code community for tips and solutions. You can also explore forums like Stack Overflow.
- **GitHub issues:** If you encounter any issues, submit a request or report bugs on the MSSQL extension's GitHub repository.
- **Microsoft Support:** For critical issues, reach out to Microsoft Support directly through your account.

Transitioning to VS Code opens the door to a more modern and versatile SQL development experience. We encourage you to explore the new possibilities and start your journey today!

## Conclusion

Azure Data Studio has served the SQL community well, but the Azure Data Studio retirement marks an opportunity to embrace the modern capabilities of Visual Studio Code. Transitioning now ensures you're equipped with cutting-edge tools and a future-ready platform to enhance your SQL development experience. For a detailed guide on ADS retirement, visit aka.ms/ads-retirement. To get started with the MSSQL extension, check out the official documentation. We're excited to see what you build with VS Code!