Azure SQL is Deprecating the “No Minimum TLS” (MinTLS None) Configuration
As part of the retirement of the lower TLS versions 1.0 and 1.1 and the enforcement of TLS 1.2 as the new default minimum, we are removing the No Minimum TLS (MinTLS = “None” or "0") option and updating these configurations to TLS 1.2. No Minimum TLS allowed Azure SQL Database and Azure SQL Managed Instance resources to accept client connections using any TLS protocol version, as well as unencrypted connections. Over the past year, Azure has retired TLS 1.0 and 1.1 for all Azure databases because of known security vulnerabilities in these older protocols. As of August 31, 2025, creating servers configured with versions 1.0 and 1.1 was disallowed and migration to 1.2 began. With legacy TLS versions deprecated, TLS 1.2 becomes the secure default minimum TLS version for new Azure SQL Database and Managed Instance configurations and for all client-server connections, rendering the MinTLS = None setting obsolete. As a result, the MinTLS = None configuration option will be deprecated for new servers, and existing servers configured with No Minimum TLS will be upgraded to 1.2.

What is changing?

After July 15, 2026, we will disallow the minimum TLS value "None" for the creation of new SQL Database and Managed Instance resources using PowerShell, Azure CLI, and any other REST-based interface. This configuration option has already been removed from the portal during the deprecation of TLS versions 1.0 and 1.1. Creating new Azure SQL Database and Managed Instance servers with MinTLS = None (previously considered the default) will no longer be a supported configuration. If the server parameter value for the minimum TLS is left blank, it will default to minimum TLS version 1.2. Attempts to create an Azure SQL server with MinTLS = None will fail with an “Invalid operation” error, and downgrades to None will be disallowed.
Attempts to connect with TLS 1.0, TLS 1.1, or unencrypted connections will fail with error 47072/171 on the gateway.

| Effective date (retirement milestone) | MinTLS = None (0) | MinTLS left blank (defaults to supported minimum) |
|---|---|---|
| Before 8/31/25 | Any + Unencrypted | Any + Unencrypted |
| After 8/31/25 | 1.2 + Unencrypted | 1.2 |
| After July 15, 2026 | Invalid operation error (for new server creates); downgrades disallowed; TLS error 47072/171 (for unencrypted connections) | 1.2 |

In summary, after July 15, 2026, Azure SQL Database and Azure SQL Managed Instance will require all client connections to use TLS 1.2 or higher, and unencrypted connections will be denied. The minimum TLS version setting will no longer accept the value "None" for new or existing servers, and servers currently configured with this value will be upgraded to explicitly enforce TLS 1.2.

Who is impacted?

For most Azure SQL customers, no action is required; most clients already use TLS 1.2 or higher. After July 15, 2026, if your Azure SQL Database or Managed Instance is still configured with No Minimum TLS and using 1.0, 1.1, or unencrypted connections, it will automatically update to TLS 1.2 to reflect the current minimum protocol enforcement in client-server connectivity. We do recommend you verify your client applications – especially any older or third-party client drivers – to ensure they can communicate with TLS 1.2 or above. In some rare cases, very old applications, such as an outdated JDBC driver or an older .NET Framework version, may need an update or need TLS 1.2 enabled.

Conclusion

This deprecation is part of Azure’s broader security strategy to ensure connections are protected by modern encryption standards. TLS 1.2 is more secure than older versions and is now the industry standard (required by regulations such as PCI DSS and HIPAA). This change also eliminates unencrypted connections, ensuring all database connections meet current security standards.
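If you want to confirm that a client environment can negotiate TLS 1.2 ahead of the enforcement date, you can pin the minimum protocol version on the client side. A minimal Python sketch using the standard `ssl` module (this checks the local runtime's TLS support, not your Azure SQL configuration):

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2,
# mirroring the new Azure SQL minimum. If the runtime's TLS stack cannot
# offer TLS 1.2 or higher, connections made with this context will fail.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True on a compliant client
```

A client library that lets you pass an `SSLContext` will then never negotiate a legacy protocol, which surfaces incompatibilities before the server starts rejecting them.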
If you’ve already migrated to TLS 1.2 (as most customers have), you will most likely not notice any change, except that the No Minimum TLS option will disappear from configurations.

Connect to Azure SQL Database using a custom domain name with Microsoft Entra ID authentication
Many of us might prefer to connect to Azure SQL Server using a custom domain name (like devsqlserver.mycompany.com) rather than the default fully qualified domain name (devsqlserver.database.windows.net), often for application-specific or compliance reasons. This article details how you can accomplish this when logging in with Microsoft Entra ID (for example, user@mycompany.com) in an Azure SQL Database environment. Users frequently encounter errors similar to the one described below during this process.

Before you start: If you use SQL authentication (SQL username/password), the steps are different. Refer to the following article for that scenario: How to use different domain name to connect to Azure SQL DB Server | Microsoft Community Hub. With SQL authentication, you can include the server name in the login (for example, username@servername). With Microsoft Entra ID authentication, you don’t do that—so your custom DNS name must follow one important rule.

Key requirement for Microsoft Entra ID authentication

In an Azure SQL Database (PaaS) environment, the platform relies on the server name portion of the fully qualified domain name (FQDN) to correctly route incoming connection requests to the appropriate logical server. When you use a custom DNS name, it is important that the name starts with the exact Azure SQL server name (the part before .database.windows.net).

Why this is required: Azure SQL Database is a multi-tenant PaaS service, where multiple logical servers are hosted behind shared infrastructure. During the connection process (especially with Microsoft Entra ID authentication), Azure SQL uses the server name extracted from the FQDN to:

- Identify the correct logical server
- Route the connection internally within the platform
- Validate the authentication context

This behavior aligns with how Azure SQL endpoints are designed and resolved within Microsoft’s managed infrastructure.
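This routing rule can be expressed as a quick pre-flight check before you create any DNS records. A minimal Python sketch (the function name is mine, not part of any Azure SDK):

```python
def routes_to_server(custom_fqdn: str, server_fqdn: str) -> bool:
    """Return True when the custom DNS name begins with the Azure SQL
    logical server name (the label before .database.windows.net)."""
    server_name = server_fqdn.split(".")[0]
    return custom_fqdn.split(".")[0] == server_name

# The custom name must start with the logical server name:
print(routes_to_server("devsqlserver.mycompany.com",
                       "devsqlserver.database.windows.net"))  # True
print(routes_to_server("othername.mycompany.com",
                       "devsqlserver.database.windows.net"))  # False
```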
If your custom DNS name doesn’t start with the Azure SQL server name, Azure can’t route the connection to the correct server. Sign-in may fail and you might see error 40532. To fix this, change the custom DNS name so it starts with your Azure SQL server name. For example, if your server is devsqlserver.database.windows.net, your custom name must start with 'devsqlserver':

- devsqlserver.mycompany.com
- devsqlserver.contoso.com
- devsqlserver.mydomain.com

Step-by-step: set up and connect

1. Pick the custom name. It must start with your server name. Example: use devsqlserver.mycompany.com (not othername.mycompany.com).
2. Create DNS records for the custom name. Create a CNAME or DNS alias to point the custom name to your Azure SQL server endpoint (public) or to the private endpoint IP (private), as per the blog mentioned above.
3. Check DNS from your computer. Make sure devsqlserver.mycompany.com resolves to the right address before you try to connect.
4. Connect with Microsoft Entra ID. In SSMS/Azure Data Studio, set Server to your custom server name and select a Microsoft Entra ID authentication option (for example, Universal with MFA).
5. Sign in and connect. Use your Entra ID (for example, user@mycompany.com).

Also, when you connect to Azure SQL Database using a custom domain name, you might see the following error: “The target principal name is incorrect”

This happens because Azure SQL’s SSL/TLS certificate is issued for the default server name (for example, servername.database.windows.net), not for your custom DNS name. During the secure connection process, the client validates that the server name you are connecting to matches the name in the certificate. Since the custom domain does not match the certificate, this validation fails, resulting in the error. This is expected behavior and is part of standard security checks to prevent connecting to an untrusted or impersonated server.
To proceed with the connection, you can configure the client to trust the server certificate by:

- Setting Trust Server Certificate = True in the client settings, or
- Adding TrustServerCertificate=True to the connection string

This bypasses the strict name validation and allows the connection to succeed.

Note: Use the latest client drivers (ODBC, JDBC, .NET, and so on). In some old driver versions, the TrustServerCertificate setting may not work properly, and you may still face connection issues with the same 'target principal name is incorrect' error. It is always better to keep drivers updated for smooth connectivity with Azure SQL.

Applies to both public and private endpoints: This naming requirement and approach work whether you connect over the public endpoint or through a private endpoint for Azure SQL Database, as long as DNS resolution for the custom name is set up correctly for your network.

Database DevOps (preview) in SSMS 22.4.1
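Putting the pieces together, a connection string for a custom DNS name combines an Entra authentication mode with TrustServerCertificate. A hedged Python sketch that only assembles the string (the helper function and sample names are mine; the ODBC keywords shown are standard for ODBC Driver 18 for SQL Server):

```python
def build_connection_string(custom_host: str, database: str) -> str:
    """Assemble an ODBC connection string for a custom DNS name.
    TrustServerCertificate=yes skips the hostname check that otherwise
    fails because the server certificate names *.database.windows.net."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{custom_host},1433;"
        f"Database={database};"
        "Authentication=ActiveDirectoryInteractive;"
        "Encrypt=yes;"
        "TrustServerCertificate=yes;"
    )

cs = build_connection_string("devsqlserver.mycompany.com", "AppDb")
print("TrustServerCertificate=yes;" in cs)  # True
```

With pyodbc, such a string would be passed to `pyodbc.connect(cs)`; the MFA prompt comes from the driver when ActiveDirectoryInteractive is used.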
Database DevOps tooling for Microsoft SQL brings the benefits of database-as-code to your development workflow. At its core are SQL database projects, which enable you to source control your database schema, perform reliable deployments to any environment, and integrate code quality checks into your development process.

Zero Trust for data: Make Microsoft Entra authentication for SQL your policy baseline
A policy-driven path from enabled to enforced.

Why this matters now

Security and compliance programs were once built on an assumption that internal networks were inherently safer. Cloud adoption, remote work, and supply-chain compromise have steadily invalidated that model. U.S. federal guidance has now formalized this shift: Executive Order 14028 calls for modernizing cybersecurity and accelerating Zero Trust adoption, and OMB Memorandum M-22-09 sets a federal Zero Trust strategy with specific objectives and timelines.

Meanwhile, attacker economics are changing. Automation and AI make reconnaissance, phishing, and credential abuse cheaper and faster. That concentrates risk on identity—the control plane that sits in front of systems, applications, and data. In Zero Trust, the question is no longer “is the network trusted,” but “is this request verified, governed by policy, and least-privilege?”

Why database authentication is a first-order Zero Trust control

Databases are universally treated as crown-jewel infrastructure. Yet many data estates still rely on legacy patterns: password-based SQL authentication, long-lived secrets embedded in apps, and shared administrative accounts that persist because migration feels risky. This is exactly the kind of implicit trust Zero Trust architectures aim to remove. NIST SP 800-207 defines Zero Trust as eliminating implicit trust based solely on network location or ownership and focusing controls on protecting resources. In that model, every new database connection is not “plumbing”—it is an access decision about sensitive data. If the authentication mechanism sits outside the enterprise identity plane, governance becomes fragmented and policy enforcement becomes inconsistent.

What changes when SQL uses Microsoft Entra authentication

Microsoft Entra authentication enables users and applications to connect to SQL using enterprise identities, instead of usernames and passwords.
Across Azure SQL and SQL Server enabled by Azure Arc, Entra-based authentication helps align database access with the same identity controls organizations use elsewhere.

The security and compliance outcomes that leaders care about

- Reduce password and secret risk: move away from static passwords and embedded credentials.
- Centralize governance: bring database access under the same identity policies, access reviews, and lifecycle controls used across the enterprise.
- Improve auditability: tie access to enterprise identities and create a consistent control surface for reporting.
- Enable policy enforcement at scale: move from “configured” controls to “enforced” controls through governance and tooling.

This is why Entra authentication is a high-ROI modernization step: it collapses multiple security and operational objectives into one effort (identity modernization) rather than a set of ongoing compensating programs (password rotation programs, bespoke exceptions, and perpetual secret hygiene projects).

Why AI makes this a high-priority decision

AI accelerates both reconnaissance and credential abuse, which concentrates risk on identity. As a result, policy makers increasingly treat phishing-resistant authentication and centralized identity enforcement as foundational—not optional.

A practical path: from enabled to enforced

Successful security programs define a clear end state, a measurable glide path, and an enforcement model. A pragmatic approach to modernizing SQL access typically includes:

1. Discover active usage: Identify which logins and users are actively connecting and which are no longer required.
2. Establish Entra as the identity authority: Enable Entra authentication on SQL logical servers, starting in mixed mode to reduce disruption.
3. Recreate principals using Entra identities: Replace SQL authentication logins/users with Entra users, groups, service principals, and managed identities.
4. Modernize application connectivity: Update drivers and connection patterns to use Entra-based authentication and managed identities.
5. Validate, then enforce: Confirm the absence of password-based SQL authentication traffic, then move to Entra-only where available and enforce via policy.

By adopting this sequencing, organizations can mitigate risks at an early stage and postpone enforcement until the validation process concludes. For a comprehensive migration strategy, refer to Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide.

Choosing which projects to fund — and which ones to stop

When making investment decisions, priority is given to database identity projects that can demonstrate clear risk reduction and lasting security benefits:

- Microsoft Entra authentication as the default for new SQL workloads, with a defined migration path for existing workloads.
- Managed identities for application-to-database connectivity to eliminate stored secrets.
- Centralized governance for privileged database access using enterprise identity controls.

At the same time, organizations should explicitly de-prioritize investments that perpetuate password risk: password rotation projects that preserve SQL authentication, bespoke scripts maintaining shared logins, and exception processes that do not scale.

Security and scale are not competing goals

Security is often seen as something that slows down innovation, but database identity offers unique benefits. When enterprise identity is used for access controls, bringing in new applications and users shifts from handing out credentials to overseeing policies. Compliance reporting also becomes uniform rather than customized, making it easier to grow consistently thanks to a single control framework. Modern database authentication is not solely about mitigating risk—it establishes a scalable operational framework for secure data access.
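Modernizing application connectivity without passwords usually means acquiring a Microsoft Entra access token and handing it to the driver. The token packing below is a common pattern for the SQL Server ODBC driver; the helper name is mine, and the surrounding pyodbc/azure-identity calls (shown as comments) are an illustrative sketch rather than a definitive implementation:

```python
import struct

SQL_COPT_SS_ACCESS_TOKEN = 1256  # pre-connect attribute recognized by the SQL Server ODBC driver

def to_odbc_token_struct(token: str) -> bytes:
    """Pack an Entra access token into the length-prefixed,
    UTF-16-LE structure the ODBC driver expects."""
    raw = token.encode("utf-16-le")
    return struct.pack("<I", len(raw)) + raw

# Illustrative usage (requires azure-identity, pyodbc, and network access):
#   from azure.identity import DefaultAzureCredential
#   token = DefaultAzureCredential().get_token("https://database.windows.net/.default").token
#   conn = pyodbc.connect(connection_string,
#                         attrs_before={SQL_COPT_SS_ACCESS_TOKEN: to_odbc_token_struct(token)})

packed = to_odbc_token_struct("example-token")
print(len(packed) == 4 + 2 * len("example-token"))  # True: 4-byte length prefix + UTF-16-LE body
```

With a managed identity, no secret is stored anywhere in the application: the credential chain obtains the token at runtime.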
A scorecard designed for leadership readiness

To elevate the conversation from implementation to governance, use outcome-based metrics:

- Coverage: Percentage of SQL workloads with Entra authentication enabled.
- Enforcement: Percentage operating in Entra-only mode after validation.
- Secret reduction: Applications still relying on stored database passwords.
- Privilege hygiene: Admin access governed through enterprise identity controls.
- Audit evidence: Ability to produce identity-backed access reports on demand.

These map directly to Zero Trust maturity expectations and provide a defensible definition of “done.”

Closing

Zero Trust is an operating posture, not a single control. For most organizations, the fastest way to make that posture measurable is to standardize database access on the same identity plane used everywhere else. If you are looking for a single investment that improves security, reduces audit friction, and supports responsible AI adoption, modernizing SQL access with Microsoft Entra authentication — and driving it from enabled to enforced — is one of the most durable choices you can make.

References

- US Government sets forth Zero Trust architecture strategy and requirements (Microsoft Security Blog)
- Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide (Microsoft Tech Community)
- OMB Memorandum M-22-09: Federal Zero Trust Strategy (White House)
- NIST SP 800-207: Zero Trust Architecture
- CISA: Zero Trust
- Enforce Microsoft Entra-only authentication for Azure SQL Database and Azure SQL Managed Instance

Stop defragmenting and start living: introducing auto index compaction
Executive summary

Automatic index compaction is a new built-in feature in the MSSQL database engine that compacts indexes in the background with minimal overhead. Now you can:

- Stop using scheduled index maintenance jobs.
- Reduce storage space consumption and save costs.
- Improve performance by reducing CPU, memory, and disk I/O consumption.

Today, we announce a public preview of automatic index compaction in Azure SQL Database, Azure SQL Managed Instance with the always-up-to-date update policy, and SQL database in Fabric.

Index maintenance without maintenance jobs

Enable automatic index compaction for a database with a single T-SQL command:

```sql
ALTER DATABASE [database-name] SET AUTOMATIC_INDEX_COMPACTION = ON;
```

Once enabled, you no longer need to set up, maintain, and monitor resource-intensive index maintenance jobs, a time-consuming operational task for many DBA teams today. As the data in the database changes, a background process consolidates rows from partially filled data pages into a smaller number of filled-up pages, and then removes the empty pages. Index bloat is eliminated – the same amount of data now uses a minimal amount of storage space. Resource consumption is reduced because the database engine needs fewer disk IOs and less CPU and memory to process the same amount of data.

By design, the background compaction process acts on recently modified pages only. This means that its own resource consumption is much lower than that of the traditional index maintenance operations (index rebuild and reorganize), which process all pages in an index or its partition. For a detailed description of how the feature works, a comparison between automatic index compaction and the traditional index maintenance operations, and the ways to monitor the compaction process, see automatic index compaction in documentation.

Compaction in action

To see the effects of automatic index compaction, we wrote a stored procedure that simulates a write-intensive OLTP workload.
Each execution of the procedure inserts, updates, deletes, or selects a random number of rows, from 1 to 100, in a 50,000-row table with a clustered index. We executed this stored procedure using the popular SQLQueryStress tool, with 30 threads and 400 iterations on each thread. We measured the page density, the number of pages in the leaf level of the table’s clustered index, and the number of logical reads (pages) used by a test query reading 1,000 rows, at three points in time:

1. After initially inserting the data and before running the workload.
2. Once the workload stopped running.
3. Several minutes later, once the background process completed index compaction.

Here are the results:

| | Before workload | After workload | After compaction |
|---|---|---|---|
| Logical reads | 25 🟢 | 1,610 🔴⬆️ | 35 🟢⬇️ |
| Page density | 99.51% 🟢 | 52.71% 🔴⬇️ | 96.11% 🟢⬆️ |
| Pages | 962 🟢 | 4,394 🔴⬆️ | 1,065 🟢⬇️ |

Before the workload starts, page density is high because nearly all pages are full. The number of logical reads required by the test query is minimal, and so is its resource consumption. The workload leaves a lot of empty space on pages and increases the number of pages because of row updates and deletions, and because of page splits. As a result, immediately after workload completion, the number of logical reads required for the same test query increases more than 60 times, which translates into higher CPU and memory usage.

But then, within a few minutes, automatic index compaction removes the empty space from the index, increasing page density back to nearly 100%, reducing logical reads by about 98%, and getting the index very close to its initial compact state. Fewer logical reads mean that the query is faster and uses less CPU. All of this without any user action. With continuous workloads, index compaction is continuous as well, maintaining higher average page density and reducing resource usage by the workload over time.

The T-SQL code we used in this demo is available in the Appendix.
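The headline figures in the results can be sanity-checked with a few lines of arithmetic (values copied from the measurements above):

```python
before, after_workload, after_compaction = 25, 1610, 35  # logical reads

growth = after_workload / before                    # how much the workload inflated reads
reduction = 1 - after_compaction / after_workload   # fraction removed by compaction

print(round(growth, 1))        # 64.4 -> "more than 60 times"
print(round(reduction * 100))  # 98   -> "about 98%"
```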
Conclusion

Automatic index compaction delegates a routine database maintenance operation to the database engine itself, letting administrators and engineers focus on more important work without worrying about index maintenance. The public preview is a great opportunity to let us know how this new feature works for you. Please share your feedback and suggestions for any improvements we can make. To let us know your thoughts, you can comment on this blog post, leave feedback at https://aka.ms/sqlfeedback, or email us at sqlaicpreview@microsoft.com.

Appendix

Here is the T-SQL code we used to demonstrate automatic index compaction. The type of executed statements and the number of affected rows is randomized to better represent an OLTP workload. While the results demonstrate the effectiveness of automatic index compaction, exact measurements may vary from one execution to the next.

```sql
/* Enable automatic index compaction */
ALTER DATABASE CURRENT SET AUTOMATIC_INDEX_COMPACTION = ON;

/* Reset to the initial state */
DROP TABLE IF EXISTS dbo.t;
DROP SEQUENCE IF EXISTS dbo.s_id;
DROP PROCEDURE IF EXISTS dbo.churn;

/* Create a sequence to generate clustered index keys */
CREATE SEQUENCE dbo.s_id AS int START WITH 1 INCREMENT BY 1;

/* Create a test table */
CREATE TABLE dbo.t
(
    id int NOT NULL CONSTRAINT df_t_id DEFAULT (NEXT VALUE FOR dbo.s_id),
    dt datetime2 NOT NULL CONSTRAINT df_t_dt DEFAULT (SYSDATETIME()),
    u uniqueidentifier NOT NULL CONSTRAINT df_t_uid DEFAULT (NEWID()),
    s nvarchar(100) NOT NULL CONSTRAINT df_t_s DEFAULT (REPLICATE('c', 1 + 100 * RAND())),
    CONSTRAINT pk_t PRIMARY KEY (id)
);

/* Insert 50,000 rows */
INSERT INTO dbo.t (s)
SELECT REPLICATE('c', 50) AS s
FROM GENERATE_SERIES(1, 50000);
GO

/* Create a stored procedure that simulates a write-intensive OLTP workload. */
CREATE OR ALTER PROCEDURE dbo.churn
AS
SET NOCOUNT, XACT_ABORT ON;

DECLARE @r float = RAND(CAST(CAST(NEWID() AS varbinary(4)) AS int));

/* Get the type of statement to execute */
DECLARE @StatementType char(6) = CASE
                                     WHEN @r <= 0.15 THEN 'insert'
                                     WHEN @r <= 0.30 THEN 'delete'
                                     WHEN @r <= 0.65 THEN 'update'
                                     WHEN @r <= 1 THEN 'select'
                                     ELSE NULL
                                 END;

/* Get the maximum key value for the clustered index */
DECLARE @MaxKey int = (
    SELECT CAST(current_value AS int)
    FROM sys.sequences
    WHERE name = 's_id' AND SCHEMA_NAME(schema_id) = 'dbo'
);

/* Get a random key value within the key range */
DECLARE @StartKey int = 1 + RAND() * @MaxKey;

/* Get a random number of rows, between 1 and 100, to modify or read */
DECLARE @RowCount int = 1 + RAND() * 99;

/* Execute a statement */
IF @StatementType = 'insert'
    INSERT INTO dbo.t (id)
    SELECT NEXT VALUE FOR dbo.s_id
    FROM GENERATE_SERIES(1, @RowCount);

IF @StatementType = 'delete'
    DELETE TOP (@RowCount) dbo.t
    WHERE id >= @StartKey;

IF @StatementType = 'update'
    UPDATE TOP (@RowCount) dbo.t
    SET dt = DEFAULT, u = DEFAULT, s = DEFAULT
    WHERE id >= @StartKey;

IF @StatementType = 'select'
    SELECT TOP (@RowCount) id, dt, u, s
    FROM dbo.t
    WHERE id >= @StartKey;
GO

/* The remainder of this script is executed three times:
   1. Before running the workload using SQLQueryStress.
   2. Immediately after the workload stops running.
   3. Once automatic index compaction completes several minutes later. */

/* Monitor page density and the number of pages and records
   in the leaf level of the clustered index. */
SELECT avg_page_space_used_in_percent AS page_density, page_count, record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.t'), 1, 1, 'DETAILED')
WHERE index_level = 0;

/* Run a test query and measure its logical reads. */
DROP TABLE IF EXISTS #t;
SET STATISTICS IO ON;

SELECT TOP (1000) id, dt, u, s
INTO #t
FROM dbo.t
WHERE id >= 10000;

SET STATISTICS IO OFF;
```

Stream data in near real time from SQL MI to Azure Event Hubs - Public preview
How do I modernize an existing application without rewriting it?

Many business-critical applications still rely on architectures where the database is the most dependable integration point. These applications may have been built years ago, long before event-driven patterns became mainstream. Even after moving such workloads to Azure, teams are often left with the same question: how do we get data changes out of the database quickly, reliably, and without adding more custom plumbing? This is where Change Event Streaming (CES) comes in.

We are happy to announce that Change Event Streaming for Azure SQL Managed Instance is now in Public Preview. CES enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time. For workloads running on Azure SQL Managed Instance, this matters especially because many of them are existing line-of-business applications, modernized from on-premises SQL Server environments into Azure. Those applications are often still central to the business, but they were not originally designed to publish events to downstream systems. CES helps bridge that gap without requiring you to redesign the application itself.

What is Change Event Streaming?

Change Event Streaming is a capability that captures committed row changes from your database and publishes them to Azure Event Hubs or Fabric Eventstreams. Instead of relying on periodic polling, custom ETL jobs, or additional connectors, CES lets SQL push changes out as they happen. This opens the door to near-real-time integrations while keeping the architecture simpler and closer to the source of truth.

Why CES matters for Azure SQL Managed Instance

Incremental modernization for existing applications

Azure SQL Managed Instance is a database of choice where application compatibility matters and where teams want to modernize from on-premises SQL Server into Azure with minimal disruption.
In these environments, the database often becomes the most practical place to tap into business events - especially when the application itself was not designed to emit events or integrate in real time. With CES, you do not need to retrofit an older application to emit events itself. You can publish changes at the data layer and let downstream services react from there. This makes CES a practical tool for modernization programs that need to move step by step rather than through a full rewrite.

Lower operational complexity

Before CES, teams typically had to assemble integration flows out of polling processes, ETL pipelines, custom code, or third-party connectors. Those approaches can work, but they usually bring more moving parts, more credentials to manage, more monitoring overhead, and more latency tuning. With CES, SQL Managed Instance streams changes directly to the configured destination. This reduces architectural sprawl and helps teams focus on consuming the events instead of maintaining the mechanics of moving them.

Better decoupling across the estate

Once changes are published to Azure Event Hubs or Fabric Eventstreams, multiple downstream systems can consume them independently. That is useful when one operational workload needs to feed analytics platforms, integration services, caches, search indexes, or new application components at the same time. Instead of teaching an existing application to integrate with every destination directly, you can stream once from the database and let the message bus handle fan-out.

Typical scenarios

Breaking down monoliths

Many modernization efforts start with a large existing application and a database that serves many business functions. CES can help you carve out one capability at a time. A new component (microservice) can subscribe to events from selected tables, build its own behavior around those changes, and be validated incrementally before broader cutover decisions are made.
Real-time integration for line-of-business systems

If an operational system running on SQL Managed Instance needs to notify other platforms when data changes, CES provides a direct path to doing that. This can help with partner integrations, internal workflows, or downstream business processes that should react quickly when transactions are committed.

Real-time analytics

Operational data often becomes more valuable when it can be analyzed quickly. CES can stream data changes into Fabric Eventstreams or Azure Event Hubs, from where they can be consumed by analytics and stream processing pipelines for timely insights or actions.

Cache and index refresh

Applications often depend on caches or search indexes that need to stay aligned with transactional data. CES can provide a cleaner alternative to custom synchronization logic by publishing changes as they occur.

How it works

CES uses transaction log-based capture to stream changes with minimal impact on the publishing workload. Events are emitted in a structured JSON format that follows the CloudEvents standard and includes details such as the operation type, primary key, and before/after values. Azure SQL Managed Instance can publish these events to Azure Event Hubs or Fabric Eventstreams using AMQP or Kafka protocols, depending on how you connect your downstream consumers.

Conclusion

Change Event Streaming for Azure SQL Managed Instance is an important step for customers who want to make existing applications more connected, decomposed into smaller pieces, or easier to integrate with modern data and application platforms. For teams modernizing long-lived SQL Server workloads in Azure, CES offers a practical path: keep the application stable, tap into the data layer, and start enabling near-real-time scenarios without building another custom integration stack.
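To make the "How it works" description concrete, here is a hedged Python sketch of handling one such event on the consumer side. The envelope fields (`specversion`, `type`, `source`, `data`) follow the CloudEvents convention mentioned above; the payload field names (`operation`, `keys`, `before`, `after`) are my illustration of the kind of detail CES emits, not the exact documented schema:

```python
import json

# A hypothetical CES change event in CloudEvents JSON format.
event_json = """{
  "specversion": "1.0",
  "type": "UPDATE",
  "source": "/server/mi01/database/Sales/table/dbo.Orders",
  "id": "0001",
  "data": {
    "operation": "UPDATE",
    "keys": {"OrderId": 42},
    "before": {"Status": "Pending"},
    "after": {"Status": "Shipped"}
  }
}"""

def summarize_change(raw: str) -> str:
    """Turn a change event into the one-line summary a consumer might log."""
    event = json.loads(raw)
    data = event["data"]
    return f'{data["operation"]} {event["source"]} key={data["keys"]} -> {data["after"]}'

print(summarize_change(event_json))
```

In a real deployment the raw payloads would arrive via an Event Hubs consumer (AMQP or Kafka protocol); the parsing logic is the same either way.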
As CES enters Public Preview for Azure SQL Managed Instance, we encourage you to explore where it can simplify your architecture and accelerate modernization efforts.

Availability notes

Besides SQL Server 2025 and Azure SQL Database, where CES is already in Public Preview, CES is available as of today in Public Preview for Azure SQL Managed Instance. Just make sure that your SQL MI update policy is set to "Always up to date" or "SQL Server 2025". This preview brings the same core CES capability to SQL Managed Instance workloads, helping customers apply event-driven patterns to existing operational systems without adding another custom integration layer.

For feature details, configuration guidance, and frequently asked questions, see:

- Feature Overview
- CES: Frequently Asked Questions

We welcome your feedback through Azure Feedback channels or support channels. The CES team can also be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources

- Try Azure SQL Managed Instance for free for one year
- What's new in Azure SQL Managed Instance?

Announcing Preview of 160 and 192vCore Premium-series Options for Azure SQL Database Hyperscale
We are excited to announce the public preview of 160 and 192 vCore compute sizes for the Premium-series hardware configuration in Azure SQL Database Hyperscale. Since the introduction of Premium-series hardware configurations for Hyperscale in November 2022, many customers have successfully used larger vCore configurations to consolidate workloads, reduce shard counts, and improve overall application performance and stability. This preview builds on the Premium-series configuration introduced previously for Hyperscale, extending the maximum scale of a single database and elastic pools from 128 vCores to 192 vCores to support higher concurrency, faster CPU performance, and larger memory footprints for more demanding mission-critical workloads. With this preview, customers running large-scale OLTP, HTAP, and analytics-heavy workloads can evaluate even higher compute ceilings without rearchitecting their applications.

Premium-series Hyperscale hardware overview

Premium-series Hyperscale databases run on latest-generation Intel and AMD processors, delivering higher per-core performance and improved scalability compared to standard-series (Gen5) hardware. With this public preview release, Premium-series Hyperscale now supports larger vCore configurations, extending the scale-up limits for customers who need more compute and memory in a single database.

Getting started

Customers can enable the 160 or 192 vCore Premium-series options when creating a database, or when scaling up existing Hyperscale databases in supported regions (where preview capacity is available). As with other Hyperscale scale operations, moving to a larger vCore size does not require application changes and uses Hyperscale’s distributed storage and compute architecture.
Resource Limits & Key characteristics

Link to Azure SQL documentation on resource limits

Single Database Resource Limits

| Cores | Memory (GB) | Tempdb max data size (GB) | Max Local SSD IOPS | Max Log Rate (MiB/s) | Max concurrent workers | Max concurrent external connections | Max concurrent sessions |
|---|---|---|---|---|---|---|---|
| 128 (Current limit) | 625 | 4,096 | 544,000 | 150 | 12,800 | 150 | 30,000 |
| 160 (New preview limit) | 830 | 4,096 | 680,000 | 150 | 16,000 | 150 | 30,000 |
| 192 (New preview limit) | 843* | 4,096 | 816,000 | 150 | 19,200 | 150 | 30,000 |

*Memory values will increase for 192 vCores at GA.

Elastic Pool Resource Limits

| Cores | Memory (GB) | Tempdb max data size (GB) | Max Local SSD IOPS | Max Log Rate (MiB/s) | Max concurrent workers per pool | Max concurrent external connections per pool | Max concurrent sessions |
|---|---|---|---|---|---|---|---|
| 128 (Current limit) | 625 | 4,096 | 409,600 | 150 | 13,440 | 150 | 30,000 |
| 160 (New preview limit) | 830 | 4,096 | 800,000 | 150 | 16,800 | 150 | 30,000 |
| 192 (New preview limit) | 843* | 4,096 | 960,000 | 150 | 20,160 | 150 | 30,000 |

*Memory values will increase for 192 vCores at GA.

- Premium-series Hyperscale can now scale up to 160 vCores and 192 vCores in public preview regions.
- High-performance CPUs optimized for compute-intensive workloads.
- Increased memory capacity proportional to vCore scale.
- Up to 128 TiB of data storage, consistent with the Hyperscale architecture.
- Full compatibility with existing Hyperscale features and capabilities.

Performance Improvements with 160 and 192 vCores

- Strong scale-up efficiency observed beyond 128 vCores: moving from 128 → 160 → 192 vCores shows consistent performance gains, demonstrating that Hyperscale Premium-series continues to scale effectively at higher core counts.
- 160 vCores delivers a strong balance of single-query and concurrent performance.
- 192 vCores is ideal for customers prioritizing maximum throughput, high user concurrency, and large-scale transactional or analytical workloads.
- TPC-H Power Run (measures single-stream query performance) improves from 217 (128 vCores) to 357 (160 vCores) and remains high at 355 (192 vCores), delivering a +64% increase from 128 → 192 vCores, indicating strong single-query execution and CPU efficiency at larger sizes.
- TPC-H Throughput Run (measures multi-stream concurrency) increases from 191 → 360 → 511 QPH, resulting in a +168% gain from 128 → 192 vCores, highlighting significant benefits for highly concurrent, multi-user workloads.

Performance case study (Zava Lending example)

- Zava Lending scaled Azure SQL Hyperscale online as demand increased, supporting more users and higher transaction volume with no downtime.
- Throughput scaled linearly as compute increased, moving cleanly from 32 → 64 → 128 → 192 vCores to match real workload growth.
- 192 vCores proved to be the optimal operating point, sustaining peak transaction load without over-provisioning.
- Azure SQL Hyperscale handled mixed OLTP and analytics workloads, including nightly ETL, without becoming a bottleneck.
- Every scale operation was performed online, with no service interruption and no application changes.

Preview scope and limitations

During preview, Premium-series 160 and 192 vCores are supported in a limited set of initial regions (Australia East, Canada Central, East US 2, South Central US, UK South, West Europe, North Europe, Southeast Asia, West US 2), with broader availability planned over time.
During preview:

- Zone redundancy and the Azure SQL Database maintenance window are not supported for these sizes.
- Preview features are subject to supplemental preview terms, and performance characteristics may continue to improve through GA.

Customers are encouraged to use this preview to validate scalability, concurrency, memory utilization, query parallelism, and readiness for larger single database deployments.

Next Steps

This public preview is part of our broader investment in scaling Azure SQL Hyperscale for the most demanding workloads. Feedback from preview will help inform GA configuration limits, regional rollout priorities, and performance optimizations at extreme scale.

Versionless keys for Transparent Data Encryption in Azure SQL Database (Generally Available)
With this release, you no longer need to reference a specific key version stored in Azure Key Vault or Managed HSM when configuring Transparent Data Encryption (TDE) with customer-managed keys. Instead, Azure SQL Database now supports a versionless key URI, automatically using the latest enabled version of your key.

This means:

- Simpler key management: it is no longer necessary to specify the key version.
- Reduced operational overhead by eliminating risks tied to outdated key versions.
- Full control remains with the customer.

This enhancement streamlines encryption at rest, especially for organizations operating at scale or enforcing strict security and compliance standards. Versionless keys for TDE are available today across Azure SQL Database at no additional cost.

Versioned vs. Versionless Key URIs

To highlight the difference, here are real examples:

Versioned key URI (old approach, explicit version required):
https://demotdeakv.vault.azure.net/keys/TDECMK/40acafb8a7034b20ba227905df090a1f

Versionless key URI (new approach):
https://demotdeakv.vault.azure.net/keys/TDECMK

A versionless key URI references only the key name. Azure SQL Database automatically uses the newest enabled version of the key.

Learn more

Transparent Data Encryption - Azure SQL Database
Azure SQL transparent data encryption with customer-managed key
Transparent data encryption with customer-managed keys at the database level

Counting distinct values
I’m trying to create a view but I’m struggling to find a solution to this issue. I need a method that counts multiple visits to the same location as one if they occur within 14 days of each other. If there are multiple visits to the same location and the gap between them is more than 14 days, then each should be counted separately.

For example, in the attached screenshot:

- Brussels had visits on 08/05 and 15/05, which are less than 14 days apart, so this should be counted as one visit.
- Dublin had visits more than a month apart, so these should be counted as two separate visits.

Could someone please guide me on how to achieve this? Thanks.

SSMS 21/22 Error Upload BACPAC file to Azure Storage
Hello All,

In SSMS 20, I can use "Export Data-tier Application" to export a BACPAC file of an Azure SQL database and upload it to Azure Storage from the same machine. SSMS 21 gives an error when doing the same export: it creates the BACPAC file but fails on the last step, "Uploading BACPAC file to Microsoft Azure Storage". The error message is:

"Could not load file or assembly 'System.IO.Hashing, Version=6.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' or one of its dependencies. The system cannot find the file specified. (Azure.Storage.Blobs)"

I tried a fresh installation of SSMS 21 on a brand-new machine (Windows 11) and hit the same issue. Can anyone advise? Thanks.
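Returning to the visit-counting question above: a common approach is the "gaps and islands" pattern, starting a new visit whenever the gap from the previous visit to the same location exceeds 14 days. The following is a sketch under assumed names — the table `dbo.Visits` and columns `Location` and `VisitDate` are placeholders for the actual schema:

```sql
-- Assumed schema: dbo.Visits(Location nvarchar(100), VisitDate date).
-- A row starts a new visit when there is no prior visit to the same
-- location, or when the previous one is more than 14 days earlier.
CREATE OR ALTER VIEW dbo.DistinctVisits
AS
WITH Flagged AS
(
    SELECT
        Location,
        VisitDate,
        CASE
            WHEN DATEDIFF(DAY,
                          LAG(VisitDate) OVER (PARTITION BY Location
                                               ORDER BY VisitDate),
                          VisitDate) <= 14
                THEN 0   -- within 14 days of the previous visit: same visit
            ELSE 1       -- first visit per location (LAG is NULL) or gap > 14 days
        END AS IsNewVisit
    FROM dbo.Visits
)
SELECT Location, SUM(IsNewVisit) AS VisitCount
FROM Flagged
GROUP BY Location;
```

With the example data, Brussels (08/05 and 15/05, seven days apart) counts as one visit and Dublin (over a month apart) as two. Note the chaining behavior: three visits each within 14 days of the previous one merge into a single visit even if the first and last are more than 14 days apart; if you instead need fixed 14-day windows measured from the first visit, the logic requires a recursive CTE rather than a simple LAG comparison.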