<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>rss.livelink.threads-in-node</title>
    <link>https://techcommunity.microsoft.com/t5/azure-data/ct-p/AzureDatabases</link>
    <description>rss.livelink.threads-in-node</description>
    <pubDate>Sun, 15 Mar 2026 21:17:57 GMT</pubDate>
    <dc:creator>AzureDatabases</dc:creator>
    <dc:date>2026-03-15T21:17:57Z</dc:date>
    <item>
      <title>Recovering Missing Rows (“Gaps”) in Azure SQL Data Sync — Supported Approaches (and What to Avoid)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/recovering-missing-rows-gaps-in-azure-sql-data-sync-supported/ba-p/4502185</link>
      <description>&lt;P&gt;Azure SQL Data Sync is commonly used to keep selected tables synchronized between a hub database and one or more member databases. In some cases, you may discover a&amp;nbsp;&lt;STRONG&gt;data “gap”&lt;/STRONG&gt;: a subset of rows that exist in the source but are missing on the destination for a specific time window, even though synchronization continues afterward for new changes. This post explains &lt;STRONG&gt;supported recovery patterns&lt;/STRONG&gt; and &lt;STRONG&gt;what not to do&lt;/STRONG&gt;, based on a real support scenario where a customer reported missing rows for a single table within a sync group and requested a way to synchronize &lt;STRONG&gt;only the missing records&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;The scenario: Data Sync continues, but some rows are missing&lt;/H2&gt;
&lt;P&gt;In the referenced case, the customer observed that:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A specific table had a &lt;STRONG&gt;gap&lt;/STRONG&gt; on the member side (missing rows for a period), while &lt;STRONG&gt;newer data continued to sync normally&lt;/STRONG&gt; afterward.&lt;/LI&gt;
&lt;LI&gt;They asked for a Microsoft-supported method to &lt;STRONG&gt;sync only the missing rows&lt;/STRONG&gt;, without rebuilding or fully reinitializing the table.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This is a reasonable goal—but the recovery method matters, because Data Sync relies on service-managed tracking artifacts.&lt;/P&gt;
&lt;H2&gt;Temptation: “Can we push missing data by editing tracking tables or calling internal triggers?”&lt;/H2&gt;
&lt;P&gt;A frequent idea is to “force” Data Sync to pick up missing rows by manipulating internal artifacts:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Writing directly into Data Sync tracking tables (for example, tables under the DataSync schema such as *_dss_tracking), or altering provisioning markers.&lt;/LI&gt;
&lt;LI&gt;Manually invoking Data Sync–generated triggers or relying on their internal logic. The case discussion specifically referenced internal triggers such as _dss_insert_trigger, _dss_update_trigger, and _dss_delete_trigger.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Why this is not recommended / not supported as a customer-facing solution&lt;/H3&gt;
&lt;P&gt;In the case, the guidance from Microsoft engineering was clear:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Manually invoking internal Data Sync triggers is not supported&lt;/STRONG&gt; and can increase the risk of data corruption because these triggers are service-generated at runtime and are not intended for manual use.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Directly manipulating Data Sync tracking/metadata tables is not recommended&lt;/STRONG&gt;. The customer thread also highlights that these tracking tables are part of Data Sync internals, and using them for manual “push” scenarios is not a supported approach.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Also, the customer conversation highlights an important conceptual point: tracking tables are part of how the service tracks changes; they are not meant to be treated as a user-managed replication queue.&lt;/P&gt;
&lt;H2&gt;Supported recovery option #1 (recommended): Re-drive change detection via the base table&lt;/H2&gt;
&lt;P&gt;The &lt;STRONG&gt;most supportable&lt;/STRONG&gt; approach is to make Data Sync detect the missing rows through its normal change tracking path—by operating on the &lt;STRONG&gt;base/source table&lt;/STRONG&gt;, not the service-managed internals.&lt;/P&gt;
&lt;H3&gt;A practical pattern: “No-op update” to re-fire tracking&lt;/H3&gt;
&lt;P&gt;In the internal discussion with the product team, the recommended pattern was to &lt;STRONG&gt;update the source/base table&lt;/STRONG&gt; (even with a “no-op” assignment) so that Data Sync’s normal tracking logic is triggered, without manually invoking internal triggers.&lt;/P&gt;
&lt;P&gt;Example pattern (conceptual):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;UPDATE t
SET some_column = some_column  -- no-op: value unchanged
FROM dbo.YourTable AS t
WHERE &amp;lt;filter identifying the rows that are missing on the destination&amp;gt;;&lt;/LI-CODE&gt;
&lt;P&gt;This approach is called out explicitly in the thread as a way to “re-drive” change detection safely through supported mechanisms.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Operational guidance (practical):&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Apply the update in &lt;STRONG&gt;small batches&lt;/STRONG&gt;, especially for large tables, to reduce transaction/log impact and avoid long-running operations.&lt;/LI&gt;
&lt;LI&gt;Validate the impacted row set first (for example, by comparing keys between hub and member).&lt;/LI&gt;
&lt;/UL&gt;
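&lt;P&gt;For illustration, the batching and validation guidance above could be sketched in T-SQL as follows. This is a conceptual example: the table, key column, and #MissingKeys work list are placeholders you would adapt to your own schema.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Assumes the missing keys were identified first (for example, by comparing
-- key lists between hub and member with EXCEPT) and loaded into #MissingKeys.
DECLARE @batch TABLE (Id int PRIMARY KEY);

WHILE 1 = 1
BEGIN
    -- Take the next batch of keys and remove them from the work list
    DELETE TOP (1000) k
    OUTPUT deleted.Id INTO @batch
    FROM #MissingKeys AS k;

    IF @@ROWCOUNT = 0 BREAK;  -- all batches processed

    -- No-op update on the base table so Data Sync's normal tracking fires
    UPDATE t
    SET    some_column = some_column
    FROM   dbo.YourTable AS t
    INNER JOIN @batch AS b ON b.Id = t.Id;

    DELETE FROM @batch;       -- reset for the next iteration
END;&lt;/LI-CODE&gt;
&lt;P&gt;Keeping each batch small bounds transaction log growth and lock duration, and the loop ends naturally once the work list is empty.&lt;/P&gt;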
&lt;H2&gt;Supported recovery option #2: Deprovision and re-provision the affected table (safe “reset” path)&lt;/H2&gt;
&lt;P&gt;If the gap is large, the row set is hard to isolate, or you want a clean realignment of tracking artifacts, the operational approach discussed in the case was:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Stop sync&lt;/LI&gt;
&lt;LI&gt;Remove the table from the sync group (so the service deprovisions tracking objects)&lt;/LI&gt;
&lt;LI&gt;Fix/clean the destination state as needed&lt;/LI&gt;
&lt;LI&gt;Add the table back and let Data Sync re-provision and sync again&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This option is often the safest when the goal is to avoid touching system-managed artifacts directly.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Note: In production environments, customers may not be able to truncate/empty tables due to operational constraints. In that situation, the sync may take longer because the service might need to do more row-by-row evaluation. This “tradeoff” was discussed in the case context.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Diagnostics: Use the Azure SQL Data Sync Health Checker&lt;/H2&gt;
&lt;P&gt;When you suspect metadata drift, missing objects, or provisioning inconsistencies, the case recommended using the &lt;STRONG&gt;AzureSQLDataSyncHealthChecker&lt;/STRONG&gt; script.&lt;/P&gt;
&lt;P&gt;This tool:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Validates hub/member metadata and scopes against the sync metadata database&lt;/LI&gt;
&lt;LI&gt;Produces logs that can highlight missing artifacts and other inconsistencies&lt;/LI&gt;
&lt;LI&gt;Is intended to help troubleshoot Data Sync issues faster&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;A likely contributor to “gaps”: schema changes during Data Sync (snapshot isolation conflict)&lt;/H2&gt;
&lt;P&gt;In the case discussion, telemetry referenced an error consistent with &lt;STRONG&gt;concurrent DDL/schema changes while the sync process is enumerating changes&lt;/STRONG&gt; (snapshot isolation + metadata changes).&lt;/P&gt;
&lt;P&gt;A well-known related error is &lt;STRONG&gt;SQL Server error 3961&lt;/STRONG&gt;, which occurs when a snapshot isolation transaction fails because metadata was modified by a concurrent DDL statement, since metadata is not versioned. Microsoft documents this behavior and explains why metadata changes conflict with snapshot isolation semantics.&lt;/P&gt;
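&lt;P&gt;If your own workloads also read under snapshot isolation while sync is running, a small retry wrapper can make them resilient to this error. The following T-SQL is a conceptual sketch (the SELECT stands in for whatever snapshot read your job performs):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;DECLARE @attempt int = 1;
WHILE @attempt &amp;lt;= 3
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
        SELECT COUNT(*) FROM dbo.YourTable;  -- the snapshot read
        COMMIT;
        BREAK;  -- success
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT &amp;gt; 0 ROLLBACK;
        IF ERROR_NUMBER() &amp;lt;&amp;gt; 3961 THROW;  -- only retry the snapshot/DDL conflict
        SET @attempt += 1;
    END CATCH
END;&lt;/LI-CODE&gt;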
&lt;H3&gt;Prevention guidance (practical)&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Avoid running schema deployments (DDL) during active sync windows.&lt;/LI&gt;
&lt;LI&gt;Use a controlled workflow for schema changes with Data Sync—pause/coordinate changes to prevent mid-sync metadata shifts. (General best practices exist for ongoing Data Sync operations and maintenance.)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Key takeaways&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Do not&lt;/STRONG&gt; treat Data Sync tracking tables/triggers as user-managed “replication internals.” Manually invoking internal triggers or editing tracking tables is not a supported customer-facing recovery mechanism.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Do&lt;/STRONG&gt; recover gaps via &lt;STRONG&gt;base table operations&lt;/STRONG&gt; (insert/update) so the service captures changes through its normal path—“no-op update” is one practical pattern when you already know the missing row set.&lt;/LI&gt;
&lt;LI&gt;For large/complex gaps, consider the &lt;STRONG&gt;safe reset&lt;/STRONG&gt; approach: remove the table from the sync group and re-add it to re-provision artifacts.&lt;/LI&gt;
&lt;LI&gt;Use the &lt;STRONG&gt;AzureSQLDataSyncHealthChecker&lt;/STRONG&gt; to validate metadata consistency and reduce guesswork.&lt;/LI&gt;
&lt;LI&gt;If you see intermittent failures around deployments, consider the &lt;STRONG&gt;schema-change + snapshot isolation&lt;/STRONG&gt; pattern (e.g., error 3961) as a possible contributor and schedule DDL changes accordingly.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 14 Mar 2026 02:27:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/recovering-missing-rows-gaps-in-azure-sql-data-sync-supported/ba-p/4502185</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-03-14T02:27:30Z</dc:date>
    </item>
    <item>
      <title>February 2026 Recap: Azure Database for PostgreSQL</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/february-2026-recap-azure-database-for-postgresql/ba-p/4501093</link>
      <description>&lt;P&gt;Hello Azure Community,&lt;/P&gt;
&lt;P&gt;We’re excited to share the February 2026 recap for &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/" target="_blank" rel="noopener"&gt;Azure Database for PostgreSQL&lt;/A&gt;, featuring a set of updates focused on speed, simplicity, and better visibility. From Terraform support for Elastic Clusters and a refreshed VM SKU selection experience in the Azure portal to built‑in Grafana dashboards, these improvements make it easier to build, operate, and scale PostgreSQL on Azure. This recap also includes practical GIN index tuning guidance, enhancements to the PostgreSQL VS Code extension, and improved connectivity for azure_pg_admin users.&lt;/P&gt;
&lt;H1&gt;Features&lt;/H1&gt;
&lt;OL&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;&lt;A href="#community--1-tf" target="_self" rel="noopener"&gt;Terraform support for Elastic Clusters - Generally Available&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;&lt;A href="#community--1-dashboard" target="_self" rel="noopener"&gt;Dashboards with Grafana - Generally Available&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;&lt;A href="#community--1-sku" target="_self" rel="noopener"&gt;Easier way to choose VM SKUs on portal – Generally Available&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;&lt;A href="#community--1-vscode" target="_self" rel="noopener"&gt;What’s New in the PostgreSQL VS Code Extension&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;&lt;A href="#community--1-admin" target="_self" rel="noopener"&gt;Priority Connectivity to azure_pg_admin users&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;&lt;A href="#community--1-gin" target="_self" rel="noopener"&gt;Guide on 'gin_pending_list_limit' indexes&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2 id="tf"&gt;Terraform support for Elastic Clusters&lt;/H2&gt;
&lt;P&gt;Terraform now supports provisioning and managing Azure Database for PostgreSQL Elastic Clusters, enabling customers to define and operate elastic clusters through infrastructure‑as‑code workflows. With this support, you can create, scale, and manage multi‑node PostgreSQL clusters with Terraform, automate deployments, replicate environments, and integrate elastic clusters into CI/CD pipelines. This improves operational consistency and simplifies management for horizontally scalable PostgreSQL workloads.&lt;/P&gt;
&lt;P&gt;Learn more about building and scaling with &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/elastic-clusters/concepts-elastic-clusters" target="_blank" rel="noopener"&gt;Azure Database for PostgreSQL elastic clusters.&lt;/A&gt;&lt;/P&gt;
&lt;H2 id="dashboard"&gt;Dashboards with Grafana — Now Built-In&lt;/H2&gt;
&lt;P&gt;Grafana dashboards are now natively integrated into the Azure Portal for Azure Database for PostgreSQL. This removes the need to deploy or manage a separate Grafana instance. With just a few clicks, you can visualize key metrics and logs side by side, correlate events by timestamp, and gain deep insights into performance, availability, and query behavior all in one place.&lt;/P&gt;
&lt;P&gt;Whether you're troubleshooting a spike, monitoring trends, or sharing insights with your team, this built-in experience simplifies day-to-day observability with no added cost or complexity.&lt;/P&gt;
&lt;P&gt;Try it under &lt;STRONG&gt;Azure Portal &amp;gt; Dashboards with Grafana&lt;/STRONG&gt; in your PostgreSQL server view.&lt;/P&gt;
&lt;P&gt;For more details, see the &lt;A href="https://aka.ms/azure-postgres-dashboards-grafana" target="_blank" rel="noopener"&gt;blog post: &lt;EM&gt;Dashboards with Grafana — Now in Azure Portal for PostgreSQL&lt;/EM&gt;&lt;/A&gt;.&lt;/P&gt;
&lt;H2 id="sku"&gt;Easier way to choose VM SKUs on portal&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;We’ve improved the VM SKU selection experience in the Azure portal to make it easier to find and compare the right compute options for your PostgreSQL workload. The updated experience organizes SKUs in a clearer, more scannable view, helping you quickly compare key attributes like vCores and memory without extra clicks. This streamlined approach reduces guesswork and makes selecting the right SKU faster and more intuitive.&lt;/SPAN&gt;&lt;/P&gt;
&lt;A href="&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;h2 id=" target="_blank" rel="noopener"&gt;
&lt;H2 id="vscode"&gt;What’s New in the PostgreSQL VS Code Extension&lt;/H2&gt;
&lt;/A&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;h2 id=" target="_blank" rel="noopener"&gt;The&amp;nbsp;&lt;/A&gt;&lt;A href="https://marketplace.visualstudio.com/items?itemName=ms-ossdata.vscode-pgsql" target="_blank" rel="noopener"&gt;VS Code extension for PostgreSQL&lt;/A&gt; helps developers and database administrators work with PostgreSQL directly from VS Code. It provides capabilities for querying, schema exploration, diagnostics, and Azure PostgreSQL management allowing users to stay within their editor while building and troubleshooting. &lt;A class="lia-external-url" href="https://github.com/microsoft/vscode-pgsql/blob/main/CHANGELOG.md" target="_blank" rel="noopener"&gt;This release focuses&lt;/A&gt; on improving developer productivity and diagnostics. It introduces new visualization capabilities, Copilot-powered experiences, enhanced schema navigation, and deeper Azure PostgreSQL management directly from VS Code.&lt;/P&gt;
&lt;H4&gt;New Features &amp;amp; Enhancements&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Query Plan Visualization:&lt;/STRONG&gt; Graphical execution plans can now be viewed directly in the editor, making it easier to diagnose slow queries without leaving VS Code.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;AGE Graph Rendering:&lt;/STRONG&gt; Support is now available for automatically rendering graph visualizations from Cypher queries, improving the experience of working with graph data in PostgreSQL.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Object Explorer Search: &lt;/STRONG&gt;A new graphical search experience in Object Explorer allows users to quickly find tables, views, functions, and other objects across large schemas, addressing one of the highest-rated user feedback requests.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure PostgreSQL Backup Management:&lt;/STRONG&gt; Users can now manage Azure Database for PostgreSQL backups directly from the Server Dashboard, including listing backups and configuring retention policies.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Server Logs Dashboard:&lt;/STRONG&gt; A new Server Dashboard view surfaces Azure Database for PostgreSQL server logs and retention settings for faster diagnostics. Logs can be opened directly in VS Code and analyzed using the built-in GitHub Copilot integration.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This release also includes several reliability improvements and bug fixes, including resolving connection pool exhaustion issues, fixing Docker container creation failures when no password is provided, and improving stability around connection profiles and schema-related operations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="admin"&gt;Priority Connectivity to azure_pg_admin Users&lt;/H2&gt;
&lt;P&gt;Members of the azure_pg_admin role can now use connections from the pg_use_reserved_connections pool. This ensures that an admin always has at least one available connection, even if all standard client connections from the server connection pool are in use. By making sure admin users can log in when the client connection pool is full, this change prevents lockout situations and lets admins handle emergencies without competing for available open connection slots.&lt;/P&gt;
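&lt;P&gt;A quick way to verify this from psql is to check role membership and the reserved slot count. The reserved_connections setting and the pg_use_reserved_connections role are standard PostgreSQL objects available in PostgreSQL 16 and later:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Does the current user inherit the reserved-connection privilege?
SELECT pg_has_role(current_user, 'azure_pg_admin', 'member')              AS is_admin,
       pg_has_role(current_user, 'pg_use_reserved_connections', 'member') AS can_use_reserved;

-- How many connection slots are set aside for such users?
SHOW reserved_connections;&lt;/LI-CODE&gt;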
&lt;H2 id="gin"&gt;Guide on 'gin_pending_list_limit' indexes&lt;/H2&gt;
&lt;P&gt;Struggling with slow GIN index inserts in PostgreSQL? This post dives into the often-overlooked &lt;EM&gt;gin_pending_list_limit&lt;/EM&gt; parameter and how it directly impacts insert performance. Learn how GIN’s pending list works, why the right limit matters, and practical guidance on tuning it to strike the perfect balance between write performance and index maintenance overhead.&lt;/P&gt;
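&lt;P&gt;As a quick reference, the parameter can be inspected globally and tuned per index with standard PostgreSQL commands (the index name and value below are illustrative):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Current server-wide setting (in kB)
SHOW gin_pending_list_limit;

-- Raise the limit for a single write-heavy GIN index
ALTER INDEX my_gin_idx SET (gin_pending_list_limit = 16384);

-- Flush the pending list on demand instead of waiting for (auto)vacuum
SELECT gin_clean_pending_list('my_gin_idx'::regclass);&lt;/LI-CODE&gt;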
&lt;P&gt;For a deeper dive into &lt;EM&gt;gin_pending_list_limit&lt;/EM&gt; and tuning guidance, see the full blog &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/adforpostgresql/mastering-gin-pending-list-limit-how-this-parameter-shapes-gin-index-insert-perf/4494203" target="_blank" rel="noopener" data-lia-auto-title="here" data-lia-auto-title-active="0"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;H1&gt;Learning Bytes&lt;/H1&gt;
&lt;P&gt;Create Azure Database for PostgreSQL elastic clusters with Terraform:&lt;/P&gt;
&lt;P&gt;Elastic clusters in Azure Database for PostgreSQL let you scale PostgreSQL horizontally using a managed, multi‑node architecture. With elastic clusters now generally available, you can provision and manage them using infrastructure‑as‑code, making it easier to automate deployments, standardize environments, and integrate PostgreSQL into CI/CD workflows.&lt;/P&gt;
&lt;P&gt;Elastic clusters are a good fit when you need:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Horizontal scale for large or fast‑growing PostgreSQL workloads&lt;/LI&gt;
&lt;LI&gt;Multi‑tenant applications or sharded data models&lt;/LI&gt;
&lt;LI&gt;Repeatable and automated deployments across environments&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The following example shows a basic Terraform configuration to create an Azure Database for PostgreSQL flexible server configured as an elastic cluster.&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;resource "azurerm_postgresql_flexible_server" "elastic_cluster" {
  name                   = "pg-elastic-cluster"
  resource_group_name    = &amp;lt;rg-name&amp;gt;
  location               = &amp;lt;region&amp;gt;
  administrator_login    = var.admin_username
  administrator_password = var.admin_password
  version                = "17"
  sku_name               = "GP_Standard_D4ds_v5"
  storage_mb             = 131072
  cluster {
    size = 3
  }
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;That’s a wrap for the February 2026 Azure Database for PostgreSQL recap. We’re continuing to focus on making PostgreSQL on Azure easier to build, operate, and scale, whether that’s through better automation with Terraform, improved observability, or a smoother day‑to‑day developer and admin experience. Your feedback is important to us. Have suggestions, ideas, or questions? We’d love to hear from you:&amp;nbsp;&lt;A href="https://aka.ms/pgfeedback" target="_blank" rel="noopener"&gt;https://aka.ms/pgfeedback&lt;/A&gt;.&lt;/P&gt;
      <pubDate>Wed, 11 Mar 2026 17:09:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/february-2026-recap-azure-database-for-postgresql/ba-p/4501093</guid>
      <dc:creator>gauri-kasar</dc:creator>
      <dc:date>2026-03-11T17:09:35Z</dc:date>
    </item>
    <item>
      <title>February 2026 Recap: Azure Database for MySQL</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/february-2026-recap-azure-database-for-mysql/ba-p/4501191</link>
      <description>&lt;P&gt;We're excited to share a summary of the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-in/azure/mysql/" target="_blank"&gt;Azure Database for MySQL&lt;/A&gt; updates from the last couple of months.&lt;/P&gt;
&lt;H4&gt;Extended Support Timeline Update&lt;/H4&gt;
&lt;P&gt;Based on customer feedback requesting additional time to complete major version upgrades, we have extended the grace period before &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/adformysql/announcing-extended-support-for-azure-database-for-mysql/4442924" data-lia-auto-title="extended support" data-lia-auto-title-active="0" target="_blank"&gt;extended support&lt;/A&gt; billing begins for Azure Database for MySQL:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;MySQL 5.7&lt;/STRONG&gt;: Extended support billing start date moved from &lt;STRONG&gt;April 1, 2026&lt;/STRONG&gt; to &lt;STRONG&gt;August 1, 2026&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;MySQL 8.0&lt;/STRONG&gt;: Extended support billing start date moved from &lt;STRONG&gt;June 1, 2026&lt;/STRONG&gt; to &lt;STRONG&gt;January 1, 2027&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This update provides customers additional time to plan, validate, and complete upgrades while maintaining service continuity and security. We continue to recommend upgrading to a supported MySQL version as early as possible to avoid extended support charges and benefit from the latest improvements. &lt;A href="https://techcommunity.microsoft.com/blog/adformysql/guide-to-upgrade-azure-database-for-mysql-from-8-0-to-8-4/4493669" target="_blank"&gt;Learn more&lt;/A&gt; about performing a major version upgrade in Azure Database for MySQL.&lt;/P&gt;
&lt;P&gt;When upgrading using a read replica, you can optionally use the Rename Server feature to promote the replica and avoid application connection‑string updates after the upgrade completes. Rename Server is currently in Private Preview and is expected to enter Public Preview around the April 2026 timeframe.&lt;/P&gt;
&lt;H4&gt;Private Preview - Fabric Mirroring for Azure Database for MySQL&lt;/H4&gt;
&lt;P&gt;This capability enables real‑time replication of MySQL data into Microsoft Fabric with a zero‑ETL experience, allowing data to land directly in OneLake in analytics‑ready formats. Customers can seamlessly analyze mirrored data using Microsoft Fabric experiences, while isolating analytical workloads from their operational MySQL databases.&amp;nbsp;&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Stay Connected&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;We welcome your feedback and invite you to share your experiences or suggestions at&amp;nbsp;&lt;A href="mailto:AskAzureDBforMySQL@service.microsoft.com" target="_blank"&gt;AskAzureDBforMySQL@service.microsoft.com&lt;/A&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Stay up to date by visiting&amp;nbsp;&lt;A href="https://learn.microsoft.com/azure/mysql/flexible-server/whats-new" target="_blank"&gt;What's new in Azure Database for MySQL&lt;/A&gt;, and follow us on&amp;nbsp;&lt;A href="https://aka.ms/mysql-yt-subscribe" target="_blank"&gt;YouTube&lt;/A&gt; | &lt;A href="https://www.linkedin.com/company/azure-database-for-mysql/" target="_blank"&gt;LinkedIn&lt;/A&gt; | &lt;A href="https://twitter.com/AzureDBMySQL" target="_blank"&gt;X&lt;/A&gt;&amp;nbsp;for ongoing updates.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for choosing Azure Database for MySQL!&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 11 Mar 2026 13:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/february-2026-recap-azure-database-for-mysql/ba-p/4501191</guid>
      <dc:creator>SaurabhKirtani</dc:creator>
      <dc:date>2026-03-11T13:00:00Z</dc:date>
    </item>
    <item>
      <title>Counting distinct values</title>
      <link>https://techcommunity.microsoft.com/t5/azure-sql/counting-distinct-values/m-p/4500858#M256</link>
      <description>&lt;P&gt;I’m trying to create a view but I’m struggling to find a solution to this issue.&lt;/P&gt;&lt;P&gt;I need a method that counts multiple visits to the same location as one if they occur within 14 days of each other. If there are multiple visits to the same location and the gap between them is more than 14 days, then each should be counted separately.&lt;/P&gt;&lt;P&gt;For example, in the attached screenshot:&lt;/P&gt;&lt;P&gt;Brussels had visits on 08/05 and 15/05, which are less than 14 days apart, so this should be counted as one visit.&lt;/P&gt;&lt;P&gt;Dublin had visits more than a month apart, so these should be counted as two separate visits.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could someone please guide me on how to achieve this?&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 10 Mar 2026 10:59:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-sql/counting-distinct-values/m-p/4500858#M256</guid>
      <dc:creator>KrishKK</dc:creator>
      <dc:date>2026-03-10T10:59:41Z</dc:date>
    </item>
    <item>
      <title>Using Different Azure Key Vault Keys for Azure SQL Servers in a Failover Group</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/using-different-azure-key-vault-keys-for-azure-sql-servers-in-a/ba-p/4500775</link>
      <description>&lt;P&gt;Transparent Data Encryption (TDE) in Azure SQL Database encrypts data at rest using a database encryption key (DEK). The DEK itself is protected by a TDE protector, which can be either:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A service-managed key (default), or&lt;/LI&gt;
&lt;LI&gt;A customer-managed key (CMK) stored in Azure Key Vault.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Many enterprises adopt customer-managed keys to meet compliance, key rotation, and security governance requirements.&lt;/P&gt;
&lt;P&gt;When configuring Geo-replication or Failover Groups, Azure SQL supports using different Key Vault keys on the primary and secondary servers. This provides additional flexibility and allows organizations to isolate encryption keys across regions or subscriptions.&lt;/P&gt;
&lt;P&gt;Microsoft documentation describes this scenario for new deployments in detail. If you are configuring geo-replication from scratch, refer to the following article: &lt;A href="https://techcommunity.microsoft.com/blog/azuresqlblog/geo-replication-and-transparent-data-encryption-key-management-in-azure-sql-data/4459670" target="_blank" rel="noopener"&gt;Geo-Replication and Transparent Data Encryption Key Management in Azure SQL Database | Microsoft Community Hub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;However, customers often ask:&lt;/STRONG&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;How can we change the Azure Key Vault and use different customer-managed keys on the primary and secondary servers when the failover group already exists without breaking the replication?&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This article walks through the safe process to update Key Vault keys in an existing failover group configuration.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Architecture Overview&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In a failover group configuration:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Primary Server&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Uses CMK stored in Key Vault A&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;Secondary Server&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Uses CMK stored in Key Vault B&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;Even though each server uses its own key, both servers must be able to access both keys. This requirement exists because during failover operations the database must still be able to decrypt the existing Database Encryption Key (DEK).&lt;/P&gt;
&lt;P&gt;Therefore:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Each logical server must have both Key Vault keys registered and must hold the required permissions to access them&lt;/LI&gt;
&lt;LI&gt;Only one key is configured as the active TDE protector&lt;/LI&gt;
&lt;/UL&gt;
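&lt;P&gt;You can verify this state at any time with Az PowerShell: list the keys registered on a logical server and show which one is the active TDE protector (the server and resource group names below reuse the lab values from this walkthrough):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# All Key Vault keys registered with the logical server
Get-AzSqlServerKeyVaultKey -ServerName 'tdeprimarytest' -ResourceGroupName 'harshitha_lab'

# The key currently acting as the TDE protector
Get-AzSqlServerTransparentDataEncryptionProtector -ServerName 'tdeprimarytest' -ResourceGroupName 'harshitha_lab'&lt;/LI-CODE&gt;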
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Existing environment:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Failover Group configured&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;TDE enabled with a customer-managed key, currently the same key on both servers.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Primary server:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Secondary server:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Customer wants to:&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Move to different Azure Key Vaults&lt;/LI&gt;
&lt;LI&gt;Use separate CMKs for primary and secondary servers&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step-by-Step Implementation&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt; – &lt;STRONG&gt;Add the Secondary Server Key to the Primary Server&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Before changing the TDE protector on the secondary server, the secondary CMK must first be registered on the primary logical server.&lt;/P&gt;
&lt;P&gt;This ensures that both servers are aware of both encryption keys.&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Add-AzSqlServerKeyVaultKey -KeyId ‘https://testanu2.vault.azure.net/keys/tey/29497babb0cb4af58a773104a5dd61e5' -ServerName 'tdeprimarytest' -ResourceGroupName ‘harshitha_lab’&lt;/LI-CODE&gt;
&lt;P&gt;This command registers the Key Vault key with the logical SQL server but does not change the active TDE protector.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt; – &lt;STRONG&gt;Change the TDE Protector on the Secondary Server&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;After registering the key, update the TDE protector on the secondary server to use its designated CMK.&lt;/P&gt;
&lt;P&gt;This can be done from the Azure Portal:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Navigate to Azure SQL Server (Secondary)&lt;/LI&gt;
&lt;LI&gt;Select Transparent Data Encryption&lt;/LI&gt;
&lt;LI&gt;Choose Customer-managed key&lt;/LI&gt;
&lt;LI&gt;Select the Key Vault key intended for the secondary server&lt;/LI&gt;
&lt;LI&gt;Save the configuration&lt;/LI&gt;
&lt;/OL&gt;
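&lt;P&gt;The same change can be scripted instead of using the portal. A minimal sketch with Set-AzSqlServerTransparentDataEncryptionProtector, reusing this lab's names (the KeyId must match the key registered in Step 1):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Make the registered Key Vault key the active TDE protector on the secondary server
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName 'harshitha_lab' `
    -ServerName 'tdesecondarytest' `
    -Type AzureKeyVault `
    -KeyId 'https://testanu2.vault.azure.net/keys/tey/29497babb0cb4af58a773104a5dd61e5'&lt;/LI-CODE&gt;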
&lt;img /&gt;
&lt;P&gt;At this stage:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Secondary server uses Key Vault B&lt;/LI&gt;
&lt;LI&gt;Primary server still uses Key Vault A&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3 – Add the Primary Server Key to the Secondary Server&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Next, perform the same operation in reverse.&lt;/P&gt;
&lt;P&gt;Register the primary server CMK on the secondary server.&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Add-AzSqlServerKeyVaultKey -KeyId ‘https://testprimarysubham.vault.azure.net/keys/subham/e0637ed7e3734f989b928101c79ca565' -ServerName ‘tdesecondarytest’ -ResourceGroupName ‘harshitha_lab’&lt;/LI-CODE&gt;
&lt;P&gt;Now both servers contain references to both keys.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4 – Change the TDE Protector on the Primary Server&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once the key is registered, update the primary server TDE protector to use its intended CMK.&lt;/P&gt;
&lt;P&gt;This can also be done through the Azure Portal or PowerShell.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;After this step:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Server&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Key Vault&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Active CMK&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Primary&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Key Vault A&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Primary CMK&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Secondary&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Key Vault B&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Secondary CMK&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5 – Verify Keys Registered on Both Servers&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Use the following PowerShell command to confirm that both keys are registered on each server.&lt;/P&gt;
&lt;P&gt;Primary Server&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Get-AzSqlServerKeyVaultKey -ServerName "tdeprimarytest" -ResourceGroupName "harshitha_lab"&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;P&gt;Secondary Server&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Get-AzSqlServerKeyVaultKey -ServerName "tdesecondarytest" -ResourceGroupName "harshitha_lab"&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;P&gt;Both outputs should list both Key Vault keys.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 6 – Verify Database Encryption State&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Connect to the database using SQL Server Management Studio (SSMS) and run the following query to verify the encryption configuration.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT
    DB_NAME(db_id()) AS database_name,
    dek.encryption_state,
    dek.encryption_state_desc,
    dek.key_algorithm,
    dek.key_length,
    dek.encryptor_type,
    dek.encryptor_thumbprint
FROM sys.dm_database_encryption_keys AS dek
WHERE database_id &amp;lt;&amp;gt; 2;&lt;/LI-CODE&gt;
&lt;P&gt;Key columns to check:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Column&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Description&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;encryption_state&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Shows if the database is encrypted&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;encryptor_type&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Indicates if the key is from Azure Key Vault&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;encryptor_thumbprint&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Identifies the CMK used for encryption&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;At this stage, the encryptor thumbprint reported by the database on the secondary server should still match the primary server&amp;rsquo;s CMK, because the geo-secondary database follows the primary&amp;rsquo;s active TDE protector until failover.&lt;/P&gt;
&lt;P&gt;Primary:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Secondary:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 7 – Validate After Failover&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Perform a planned failover of the failover group.&lt;/P&gt;
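&lt;P&gt;A planned failover can also be triggered with PowerShell. A sketch using this lab's names; the failover group name is a placeholder, and the command runs against the secondary server being promoted:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Planned (no data loss) failover: promote the secondary server
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName 'harshitha_lab' `
    -ServerName 'tdesecondarytest' `
    -FailoverGroupName '&amp;lt;your-failover-group-name&amp;gt;'&lt;/LI-CODE&gt;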
&lt;P&gt;After failover, run the same query again.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT
    DB_NAME(db_id()) AS database_name,
    dek.encryption_state,
    dek.encryption_state_desc,
    dek.key_algorithm,
    dek.key_length,
    dek.encryptor_type,
    dek.encryptor_thumbprint
FROM sys.dm_database_encryption_keys AS dek
WHERE database_id &amp;lt;&amp;gt; 2;&lt;/LI-CODE&gt;
&lt;P&gt;You should observe:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The encryptor_thumbprint changes&lt;/LI&gt;
&lt;LI&gt;The database is now encrypted with the former secondary server&amp;rsquo;s CMK, since that server is now the primary&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Primary server after failover:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;New secondary server (the primary before failover):&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;This confirms that the failover group is successfully using different Key Vault keys for each server.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Key Takeaways&lt;/STRONG&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Failover Groups support different Customer Managed Keys across regions.&lt;/LI&gt;
&lt;LI&gt;Both logical servers must have access to both keys.&lt;/LI&gt;
&lt;LI&gt;Only one key acts as the active TDE protector per server.&lt;/LI&gt;
&lt;LI&gt;Validation should always include checking the encryptor thumbprint before and after failover.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This approach allows organizations to implement regional key isolation, meet compliance requirements, and maintain high availability with secure key management.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 10 Mar 2026 09:59:01 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/using-different-azure-key-vault-keys-for-azure-sql-servers-in-a/ba-p/4500775</guid>
      <dc:creator>Harshitha_Reddy_Chappidi</dc:creator>
      <dc:date>2026-03-10T09:59:01Z</dc:date>
    </item>
    <item>
      <title>Understanding Hash Join Memory Usage and OOM Risks in PostgreSQL</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/understanding-hash-join-memory-usage-and-oom-risks-in-postgresql/ba-p/4500308</link>
      <description>&lt;H2&gt;Background: Why Memory Usage May Exceed work_mem&lt;/H2&gt;
&lt;P&gt;work_mem is commonly assumed to be a hard upper bound on per‑query memory usage.&lt;/P&gt;
&lt;P&gt;However, for Hash Join operations, memory consumption depends not only on this parameter but also on:&lt;/P&gt;
&lt;P&gt;✅ Data cardinality&lt;/P&gt;
&lt;P&gt;✅ Hash table internal bucket distribution&lt;/P&gt;
&lt;P&gt;✅ Join column characteristics&lt;/P&gt;
&lt;P&gt;✅ Number of batches created&lt;/P&gt;
&lt;P&gt;✅ Parallel workers involved&lt;/P&gt;
&lt;P&gt;Under low‑cardinality conditions, a Hash Join may place an extremely large number of rows into very few buckets—sometimes a single bucket. This causes unexpectedly large memory allocations that exceed the nominal work_mem limit.&lt;/P&gt;
&lt;H2&gt;Background: What work_mem&amp;nbsp;&lt;EM&gt;really&lt;/EM&gt; means for Hash Joins&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;work_mem controls the amount of memory available &lt;STRONG&gt;per operation&lt;/STRONG&gt; (e.g., a sort or a hash) &lt;STRONG&gt;per node&lt;/STRONG&gt; (and per parallel worker) before spilling to disk. Hash operations can additionally use hash_mem_multiplier×work_mem for their hash tables. &lt;A href="https://www.postgresql.org/docs/current/runtime-config-resource.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;, &lt;A href="https://postgresqlco.nf/doc/en/param/hash_mem_multiplier/" target="_blank" rel="noopener"&gt;[postgresqlco.nf]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;The &lt;STRONG&gt;Hash Join&lt;/STRONG&gt; algorithm builds a hash table for the “build/inner” side and probes it with the “outer” side. The table is split into &lt;STRONG&gt;buckets&lt;/STRONG&gt;; if it doesn’t fit in memory, PostgreSQL partitions work into &lt;STRONG&gt;batches&lt;/STRONG&gt; (spilling to temporary files). &lt;STRONG&gt;Skewed distributions&lt;/STRONG&gt; (e.g., very few distinct join keys) pack many rows into the same bucket(s), exploding memory usage even when work_mem is small. &lt;A href="https://postgrespro.com/blog/pgsql/5969673" target="_blank" rel="noopener"&gt;[postgrespro.com]&lt;/A&gt;, &lt;A href="https://www.interdb.jp/pg/pgsql03/05/03.html" target="_blank" rel="noopener"&gt;[interdb.jp]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;In EXPLAIN (ANALYZE) you’ll see &lt;STRONG&gt;Buckets:&lt;/STRONG&gt;, &lt;STRONG&gt;Batches:&lt;/STRONG&gt;, and &lt;STRONG&gt;Memory Usage:&lt;/STRONG&gt; on the Hash node; Batches &amp;gt; 1 indicates spilling/partitioning. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;, &lt;A href="https://thoughtbot.com/blog/reading-an-explain-analyze-query-plan" target="_blank" rel="noopener"&gt;[thoughtbot.com]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;The default for hash_mem_multiplier is &lt;STRONG&gt;version‑dependent&lt;/STRONG&gt; (introduced in PG13; 1.0 in early versions, later 2.0). Tune with care; it scales the memory that hash operations may consume relative to work_mem. &lt;A href="https://pgpedia.info/h/hash_mem_multiplier.html" target="_blank" rel="noopener"&gt;[pgpedia.info]&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
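&lt;P&gt;As a quick sanity check, the effective hash-table ceiling for the current session can be inspected directly (a minimal sketch; the ceiling is roughly work_mem &amp;times; hash_mem_multiplier):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Per-session settings that bound a hash table before it spills to batches
SHOW work_mem;
SHOW hash_mem_multiplier;
-- Or both at once
SELECT current_setting('work_mem')            AS work_mem,
       current_setting('hash_mem_multiplier') AS hash_mem_multiplier;&lt;/LI-CODE&gt;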
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;A safe, reproducible demo (containerized community PostgreSQL)&lt;/H2&gt;
&lt;P&gt;The goal is to show that &lt;STRONG&gt;data distribution alone&lt;/STRONG&gt; can drive &lt;STRONG&gt;order(s) of magnitude&lt;/STRONG&gt; difference in hash table memory, using conservative settings.&lt;/P&gt;
&lt;P&gt;To simulate the behavior, we&amp;rsquo;ll use the &lt;STRONG&gt;pg_hint_plan&lt;/STRONG&gt; extension to guide the execution plans, and we&amp;rsquo;ll create a data distribution that a real application is unlikely to have, purely to force and demonstrate the behavior.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Start PostgreSQL 16 container&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="bash"&gt;docker run --name=postgresql16.8 -p 5414:5432 -e POSTGRES_PASSWORD=&amp;lt;password&amp;gt; -d postgres:16.8
docker exec -it postgresql16.8 /bin/bash -c "apt-get update -y;apt-get install procps -y;apt-get install postgresql-16-pg-hint-plan -y;apt-get install vim -y;apt-get install htop -y"
docker exec -it postgresql16.8 /bin/bash

vi /var/lib/postgresql/data/postgresql.conf
-- Adding pg_hint_plan to shared_preload_libraries
psql -h localhost -U postgres
create extension pg_hint_plan;

docker stop postgresql16.8
docker start postgresql16.8
&lt;/LI-CODE&gt;
&lt;P&gt;To connect to our docker container we use:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;psql -h localhost -p 5414 -U postgres&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Connect and apply conservative session-level settings&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We&amp;rsquo;ll &lt;EM&gt;discourage&lt;/EM&gt; the alternative join methods and parallelism so the planner prefers a serial &lt;STRONG&gt;Hash Join&lt;/STRONG&gt;, and we&amp;rsquo;ll enable pg_hint_plan so the join order and method can be pinned explicitly.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;set hash_mem_multiplier=1;
set max_parallel_workers=0;
set max_parallel_workers_per_gather=0;
set enable_parallel_hash=off;
set enable_material=off;
set enable_sort=off;
set pg_hint_plan.debug_print=verbose;
set client_min_messages=notice;
set pg_hint_plan.enable_hint_table=on;&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Create tables and load data&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We&amp;rsquo;ll create two tables for the join: table_s, with a single row, and table_h, initially with 10 million rows.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;drop table table_s;
create table table_s (column_a text);
insert into table_s values ('30020');
vacuum full table_s;

drop table table_h;
create table table_h(column_a text,column_b text);

INSERT INTO table_h(column_a,column_b)
SELECT i::text, i::text
FROM generate_series(1, 10000000) AS t(i);
vacuum full table_h;&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Run Hash Join (high cardinality)&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We&amp;rsquo;ll run the join on column_a, which, as loaded above, has high cardinality in table_h.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;explain (analyze,buffers,costs,verbose) SELECT /*+ HashJoin(s h) Leading((s h)) */ COUNT(*)
FROM table_s s
JOIN table_h h
  ON s.column_a= h.column_a;&lt;/LI-CODE&gt;
&lt;P&gt;You should see a Hash node with &lt;STRONG&gt;small Memory Usage (a few MB)&lt;/STRONG&gt; and &lt;STRONG&gt;Batches: 256 or similar&lt;/STRONG&gt; due to the modest default work_mem, but no ballooning. Exact numbers vary by hardware/version/stats. (EXPLAIN fields and interpretation are documented here.) &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;                                                                   QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=280930.01..280930.02 rows=1 width=8) (actual time=1902.965..1902.968 rows=1 loops=1)
   Output: count(*)
   Buffers: shared read=54055, temp read=135 written=34041
   -&amp;gt;  Hash Join  (cost=279054.00..280805.01 rows=50000 width=0) (actual time=1900.539..1902.949 rows=1 loops=1)
         Hash Cond: (s.column_a = h.column_a)
         Buffers: shared read=54055, temp read=135 written=34041
         -&amp;gt;  Seq Scan on public.table_s s  (cost=0.00..1.01 rows=1 width=32) (actual time=0.021..0.022 rows=1 loops=1)
               Output: s.column_a
               Buffers: shared read=1
         -&amp;gt;  Hash  (cost=154054.00..154054.00 rows=10000000 width=32) (actual time=1896.895..1896.896 rows=10000000 loops=1)
               Output: h.column_a
               Buckets: 65536  Batches: 256  Memory Usage: 2031kB
               Buffers: shared read=54054, temp written=33785
               -&amp;gt;  Seq Scan on public.table_h h  (cost=0.00..154054.00 rows=10000000 width=32) (actual time=2.538..638.830 rows=10000000 loops=1)
                     Output: h.column_a
                     Buffers: shared read=54054
 Query Identifier: 334721522907995613
 Planning:
   Buffers: shared hit=10
 Planning Time: 0.302 ms
 JIT:
   Functions: 11
   Options: Inlining false, Optimization false, Expressions true, Deforming true
   Timing: Generation 0.441 ms, Inlining 0.000 ms, Optimization 0.236 ms, Emission 2.339 ms, Total 3.017 ms
 Execution Time: 1903.472 ms
(25 rows)&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Findings (1)&lt;/STRONG&gt; With the data fully distributed (high cardinality), the hash table peaks at only &lt;STRONG&gt;2031kB&lt;/STRONG&gt; of memory (within work_mem), with &lt;STRONG&gt;shared hit/read=54055&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Force&amp;nbsp;&lt;STRONG&gt;low cardinality / skew&lt;/STRONG&gt; and re‑run&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We&amp;rsquo;ll update table_h so that column_a is '30020' in every row, leaving only one distinct value across the whole table.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;update table_h set column_a='30020', column_b='30020';
vacuum full table_h;&lt;/LI-CODE&gt;
&lt;P&gt;Checking execution plan:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;                                                                   QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=279056.04..279056.05 rows=1 width=8) (actual time=3568.936..3568.938 rows=1 loops=1)
   Output: count(*)
   Buffers: shared read=54056, temp read=63480 written=63480
   -&amp;gt;  Hash Join  (cost=279055.00..279056.03 rows=1 width=0) (actual time=2650.696..3228.610 rows=10000000 loops=1)
         Hash Cond: (s.column_a = h.column_a)
         Buffers: shared read=54056, temp read=63480 written=63480
         -&amp;gt;  Seq Scan on public.table_s s  (cost=0.00..1.01 rows=1 width=32) (actual time=0.007..0.008 rows=1 loops=1)
               Output: s.column_a
               Buffers: shared read=1
         -&amp;gt;  Hash  (cost=154055.00..154055.00 rows=10000000 width=7) (actual time=1563.987..1563.989 rows=10000000 loops=1)
               Output: h.column_a
               Buckets: 131072 (originally 131072)  Batches: 512 (originally 256)  Memory Usage: 371094kB
               Buffers: shared read=54055, temp written=31738
               -&amp;gt;  Seq Scan on public.table_h h  (cost=0.00..154055.00 rows=10000000 width=7) (actual time=2.458..606.422 rows=10000000 loops=1)
                     Output: h.column_a
                     Buffers: shared read=54055
 Query Identifier: 334721522907995613
 Planning:
   Buffers: shared hit=6 read=1 dirtied=1
 Planning Time: 0.237 ms
 JIT:
   Functions: 11
   Options: Inlining false, Optimization false, Expressions true, Deforming true
   Timing: Generation 0.330 ms, Inlining 0.000 ms, Optimization 0.203 ms, Emission 2.311 ms, Total 2.844 ms
 Execution Time: 3584.439 ms
(25 rows)&lt;/LI-CODE&gt;
&lt;P&gt;Now, the Hash node typically reports &lt;STRONG&gt;hundreds of MB&lt;/STRONG&gt; of Memory Usage, with more/larger temp spills (higher Batches, more temp_blks_*). What changed? &lt;STRONG&gt;Only the distribution (cardinality)&lt;/STRONG&gt;. (Why buckets/batches behave this way is covered in the algorithm references.) &lt;A href="https://postgrespro.com/blog/pgsql/5969673" target="_blank" rel="noopener"&gt;[postgrespro.com]&lt;/A&gt;, &lt;A href="https://www.interdb.jp/pg/pgsql03/05/03.html" target="_blank" rel="noopener"&gt;[interdb.jp]&lt;/A&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Findings (2)&lt;/STRONG&gt;&amp;nbsp;With LOW cardinality, the hash table peaks at &lt;STRONG&gt;371094kB&lt;/STRONG&gt; of memory (work_mem), with shared hit/read=&lt;STRONG&gt;54056&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;So the query handles the same amount of data in terms of shared buffers, but the work_mem usage pattern is completely different: with low cardinality, the Hash Join packs most rows into a single bucket, and that allocation is not bounded by default, so it can cause OOM errors at any time.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Scale up rows to observe linear growth&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We&amp;rsquo;ll duplicate the rows in table_h, repeating the insert as needed, so we can work with more low-cardinality data.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;insert into table_h select * from table_h;
vacuum full table_h;&lt;/LI-CODE&gt;
&lt;P&gt;You’ll see &lt;STRONG&gt;Memory Usage and temp I/O scale with rowcount under skew&lt;/STRONG&gt;. (Beware: this can become I/O and RAM heavy—do this incrementally.) &lt;A href="https://thoughtbot.com/blog/reading-an-explain-analyze-query-plan" target="_blank" rel="noopener"&gt;[thoughtbot.com]&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;NumRows table_h&lt;/th&gt;&lt;th&gt;Shared read/hit&lt;/th&gt;&lt;th&gt;Dirtied&lt;/th&gt;&lt;th&gt;Written&lt;/th&gt;&lt;th&gt;Temp read/written&lt;/th&gt;&lt;th&gt;Memory Usage (work_mem)&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;10M&lt;/td&gt;&lt;td&gt;54056&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;&amp;nbsp;&lt;/td&gt;&lt;td&gt;63480+63480&lt;/td&gt;&lt;td&gt;371094kB&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;20M&lt;/td&gt;&lt;td&gt;108110&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;&amp;nbsp;&lt;/td&gt;&lt;td&gt;126956+126956&lt;/td&gt;&lt;td&gt;742188kB&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;80M&lt;/td&gt;&lt;td&gt;432434 (1,64GB)&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;253908+253908&lt;/td&gt;&lt;td&gt;2968750kB (2,8GB)&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Observability: what you will (and won’t) see&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;EXPLAIN is your friend&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;EXPLAIN (ANALYZE, BUFFERS) exposes &lt;STRONG&gt;Memory Usage&lt;/STRONG&gt;, &lt;STRONG&gt;Buckets:&lt;/STRONG&gt;, &lt;STRONG&gt;Batches:&lt;/STRONG&gt; in the Hash node and &lt;STRONG&gt;temp block&lt;/STRONG&gt; I/O. &lt;STRONG&gt;Batches &amp;gt; 1&lt;/STRONG&gt; is a near‑certain sign of spilling. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;, &lt;A href="https://thoughtbot.com/blog/reading-an-explain-analyze-query-plan" target="_blank" rel="noopener"&gt;[thoughtbot.com]&lt;/A&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Query Store / pg_stat_statements limitations&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Database for PostgreSQL – Flexible Server&lt;/STRONG&gt; Query Store aggregates runtime and (optionally) wait stats over time windows and stores them in azure_sys, with views under query_store.*. It&amp;rsquo;s great for finding &lt;STRONG&gt;which&lt;/STRONG&gt; queries consume CPU/I/O or wait, &lt;STRONG&gt;but&lt;/STRONG&gt; it doesn&amp;rsquo;t report &lt;STRONG&gt;per&amp;#8209;query transient memory usage&lt;/STRONG&gt; (e.g., &amp;ldquo;how many MB did that hash table peak at?&amp;rdquo;); you can only estimate it by reviewing the temporary block counters. &lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/monitor/concepts-query-store" target="_blank" rel="noopener"&gt;[learn.microsoft.com]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Under the hood, what you &lt;EM&gt;do&lt;/EM&gt; get—whether via Query Store or vanilla PostgreSQL pg_stat_statements—are cumulative counters like shared_blks_read, shared_blks_hit, temp_blks_read, temp_blks_written, timings, etc. Those help confirm buffer/temp activity, yet &lt;STRONG&gt;no direct hash table memory metric exists&lt;/STRONG&gt;. Combine them with EXPLAIN and server metrics to triangulate. &lt;A href="https://www.postgresql.org/docs/current/pgstatstatements.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
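&lt;P&gt;To find which statements generate the most temp I/O, those cumulative counters can be queried directly. A sketch assuming the pg_stat_statements extension is installed (the column names are from the standard view):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Top statements by temporary blocks written (spills)
SELECT queryid,
       calls,
       temp_blks_read,
       temp_blks_written,
       shared_blks_hit,
       shared_blks_read
FROM pg_stat_statements
ORDER BY temp_blks_written DESC
LIMIT 10;&lt;/LI-CODE&gt;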
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Tip (Azure Flexible Server)&lt;/STRONG&gt;&lt;BR /&gt;Enable Query Store in &lt;EM&gt;Server parameters&lt;/EM&gt; via pg_qs.query_capture_mode and (optionally) wait sampling via pgms_wait_sampling.query_capture_mode, then use query_store.qs_view to correlate &lt;STRONG&gt;temp block&lt;/STRONG&gt; usage and execution times across intervals. &lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/monitor/concepts-query-store" target="_blank" rel="noopener"&gt;[learn.microsoft.com]&lt;/A&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Typical OOM symptom in logs&lt;/H2&gt;
&lt;P&gt;In extreme skew with concurrent executions, you may encounter:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;ERROR: out of memory&lt;/P&gt;
&lt;P&gt;DETAIL: Failed on request of size 32800 in memory context "HashBatchContext".&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is a classic signature of hash join memory pressure. &lt;A href="https://www.postgresql.org/message-id/B743D886-5469-4FB1-A75E-F262F399E7BA%40gmail.com" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;, &lt;A href="https://thisistheway.wiki/posts/software/postgres_memory/" target="_blank" rel="noopener"&gt;[thisistheway.wiki]&lt;/A&gt;&lt;/P&gt;
&lt;H2&gt;What to do about it (mitigations &amp;amp; best practices)&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Don’t force Hash Join unless required&lt;/STRONG&gt;&lt;BR /&gt;If you used planner hints (e.g., pg_hint_plan) or GUCs (Grand Unified Configuration) to force Hash Join, remove them and let the planner re‑evaluate. (If you &lt;EM&gt;must&lt;/EM&gt; hint, be aware pg_hint_plan is a third‑party extension and not available in all environments.) &lt;A href="https://pg-hint-plan.readthedocs.io/en/latest/" target="_blank" rel="noopener"&gt;[pg-hint-pl...thedocs.io]&lt;/A&gt;, &lt;A href="https://pg-hint-plan.readthedocs.io/en/latest/hint_table.html" target="_blank" rel="noopener"&gt;[pg-hint-pl...thedocs.io]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Fix skew / cardinality at the source&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Re‑model data to avoid low‑NDV (Number of Distinct Values in a column) joins (e.g., pre‑aggregate, filter earlier, or exclude degenerate keys).&lt;/LI&gt;
&lt;LI&gt;Ensure &lt;STRONG&gt;statistics are current&lt;/STRONG&gt; so the planner estimates are realistic. (Skew awareness is limited; poor estimates → risky sizing.) &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Pick a safer join strategy when appropriate&lt;/STRONG&gt;&lt;BR /&gt;If distribution is highly skewed, &lt;STRONG&gt;Merge Join&lt;/STRONG&gt; (with supporting indexes/sort order) or &lt;STRONG&gt;Nested Loop&lt;/STRONG&gt; (for selective probes) might be more memory‑predictable. Let the planner choose, or enable alternatives by undoing GUCs that disabled them. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Bound memory consciously&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Keep work_mem modest for mixed/OLTP workloads; remember it’s &lt;STRONG&gt;per operation, per node, per worker&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Adjust hash_mem_multiplier judiciously (introduced in PG13; default now commonly 2.0) if you understand the spill trade‑offs. &lt;A href="https://postgresqlco.nf/doc/en/param/hash_mem_multiplier/" target="_blank" rel="noopener"&gt;[postgresqlco.nf]&lt;/A&gt;, &lt;A href="https://pgpedia.info/h/hash_mem_multiplier.html" target="_blank" rel="noopener"&gt;[pgpedia.info]&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Observe spills and tune iteratively&lt;/STRONG&gt;&lt;BR /&gt;Use EXPLAIN (ANALYZE, BUFFERS) to see Batches (spills) and Memory Usage; use Query Store/pg_stat_statements to find &lt;EM&gt;which&lt;/EM&gt; queries generate the most &lt;STRONG&gt;temp I/O&lt;/STRONG&gt;. Raise work_mem for a &lt;EM&gt;session&lt;/EM&gt; only when justified. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;, &lt;A href="https://www.postgresql.org/docs/current/pgstatstatements.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Parallelism awareness&lt;/STRONG&gt;&lt;BR /&gt;Each worker can perform its own memory‑using operations; parallel hash join has distinct behavior. If you aren’t sure, temporarily disable parallelism to simplify analysis, then re‑enable once you understand the footprint. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
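&lt;P&gt;When a single, well-understood query genuinely needs more memory, scope the increase to one transaction instead of raising it server-wide. A minimal sketch; SET LOCAL reverts automatically at COMMIT or ROLLBACK:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;BEGIN;
-- Applies only inside this transaction
SET LOCAL work_mem = '64MB';
-- run the memory-hungry query here
COMMIT;&lt;/LI-CODE&gt;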
&lt;H2&gt;Validating on Azure Database for PostgreSQL – Flexible Server&lt;/H2&gt;
&lt;P&gt;The behavior is &lt;STRONG&gt;not Azure‑specific&lt;/STRONG&gt;, but you can reproduce the same sequence on Flexible Server (e.g., General Purpose). A few notes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Confirm/adjust work_mem, hash_mem_multiplier, enable_* planner toggles as session settings. (Azure exposes standard PostgreSQL parameters.) &lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/server-parameters/param-resource-usage-memory" target="_blank" rel="noopener"&gt;[learn.microsoft.com]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Use &lt;STRONG&gt;Query Store&lt;/STRONG&gt; to confirm stable &lt;STRONG&gt;shared/temporary block&lt;/STRONG&gt; patterns across executions, then use EXPLAIN (ANALYZE, BUFFERS) per query to spot &lt;STRONG&gt;hash table memory&lt;/STRONG&gt; footprints. &lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/monitor/concepts-query-store" target="_blank" rel="noopener"&gt;[learn.microsoft.com]&lt;/A&gt;, &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;H4&gt;Changing some default parameters&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We&amp;rsquo;ll repeat the previous steps in Azure Database for PostgreSQL Flexible Server:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;set hash_mem_multiplier=1;
set max_parallel_workers=0;
set max_parallel_workers_per_gather=0;
set enable_parallel_hash=off;
set enable_material=off;
set enable_sort=off;
set pg_hint_plan.debug_print=verbose;
set client_min_messages=notice;
set pg_hint_plan.enable_hint_table=on;&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Creating and populating tables&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;drop table table_s;
create table table_s (column_a text);
insert into table_s values ('30020');
vacuum full table_s;
 
drop table table_h;
create table table_h(column_a text,column_b text);
 
INSERT INTO table_h(column_a,column_b)
SELECT i::text, i::text
FROM generate_series(1, 10000000) AS t(i);
vacuum full table_h;
vacuum full table_s;&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Query &amp;amp; Execution plan&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;explain (analyze,buffers,costs,verbose) SELECT /*+ HashJoin(s h) Leading((s h)) */ COUNT(*)
FROM table_s s
JOIN table_h h
  ON s.column_a= h.column_a;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;                                                                   QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=279052.88..279052.89 rows=1 width=8) (actual time=3171.186..3171.191 rows=1 loops=1)
   Output: count(*)
   Buffers: shared hit=33 read=54023, temp read=135 written=34042
   I/O Timings: shared read=184.869, temp read=0.278 write=333.970
   -&amp;gt;  Hash Join  (cost=279051.84..279052.88 rows=1 width=0) (actual time=3147.288..3171.182 rows=1 loops=1)
         Hash Cond: (s.column_a = h.column_a)
         Buffers: shared hit=33 read=54023, temp read=135 written=34042
         I/O Timings: shared read=184.869, temp read=0.278 write=333.970
         -&amp;gt;  Seq Scan on public.table_s s  (cost=0.00..1.01 rows=1 width=32) (actual time=0.315..0.316 rows=1 loops=1)
               Output: s.column_a
               Buffers: shared read=1
               I/O Timings: shared read=0.018
         -&amp;gt;  Hash  (cost=154053.04..154053.04 rows=9999904 width=7) (actual time=3109.278..3109.279 rows=10000000 loops=1)
               Output: h.column_a
               Buckets: 131072  Batches: 256  Memory Usage: 2551kB
               Buffers: shared hit=32 read=54022, temp written=33786
               I/O Timings: shared read=184.851, temp write=332.059
               -&amp;gt;  Seq Scan on public.table_h h  (cost=0.00..154053.04 rows=9999904 width=7) (actual time=0.019..1258.472 rows=10000000 loops=1)
                     Output: h.column_a
                     Buffers: shared hit=32 read=54022
                     I/O Timings: shared read=184.851
 Query Identifier: 5636209387670245929
 Planning:
   Buffers: shared hit=37
 Planning Time: 0.575 ms
 Execution Time: 3171.375 ms
(26 rows)&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Findings (3)&lt;/STRONG&gt;&amp;nbsp;In Azure Database for PostgreSQL Flexible server, when the data is evenly distributed with high cardinality, the hash join uses only &lt;STRONG&gt;2551kB&lt;/STRONG&gt; of memory (well within work_mem), with shared hit/read=54056.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;Skew it to LOW cardinality&lt;/H4&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As before, we update column_a so that every row in table_h contains the same single value:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;update table_h set column_a='30020', column_b='30020';
vacuum full table_h;&lt;/LI-CODE&gt;
&lt;P&gt;In this case we force the join method with pg_hint_plan:&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;explain (analyze,buffers,costs,verbose) SELECT /*+ HashJoin(s h) Leading((s h)) */ COUNT(*)
FROM table_s s
JOIN table_h h
  ON s.column_a= h.column_a;

                                                                   QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=279056.04..279056.05 rows=1 width=8) (actual time=4397.556..4397.560 rows=1 loops=1)
   Output: count(*)
   Buffers: shared hit=2 read=54055, temp read=63480 written=63480
   I/O Timings: shared read=89.396, temp read=90.377 write=300.290
   -&amp;gt;  Hash Join  (cost=279055.00..279056.03 rows=1 width=0) (actual time=3271.145..3987.154 rows=10000000 loops=1)
         Hash Cond: (s.column_a = h.column_a)
         Buffers: shared hit=2 read=54055, temp read=63480 written=63480
         I/O Timings: shared read=89.396, temp read=90.377 write=300.290
         -&amp;gt;  Seq Scan on public.table_s s  (cost=0.00..1.01 rows=1 width=32) (actual time=0.006..0.008 rows=1 loops=1)
               Output: s.column_a
               Buffers: shared hit=1
         -&amp;gt;  Hash  (cost=154055.00..154055.00 rows=10000000 width=7) (actual time=1958.729..1958.731 rows=10000000 loops=1)
               Output: h.column_a
               Buckets: 262144 (originally 262144)  Batches: 256 (originally 128)  Memory Usage: 371094kB
               Buffers: shared read=54055, temp written=31738
               I/O Timings: shared read=89.396, temp write=149.076
               -&amp;gt;  Seq Scan on public.table_h h  (cost=0.00..154055.00 rows=10000000 width=7) (actual time=0.159..789.449 rows=10000000 loops=1)
                     Output: h.column_a
                     Buffers: shared read=54055
                     I/O Timings: shared read=89.396
 Query Identifier: 8893575855188549861
 Planning:
   Buffers: shared hit=5
 Planning Time: 0.157 ms
 Execution Time: 4414.268 ms
(25 rows)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;NumRows table_h&lt;/th&gt;&lt;th&gt;Shared read/hit&lt;/th&gt;&lt;th&gt;Dirtied&lt;/th&gt;&lt;th&gt;Written&lt;/th&gt;&lt;th&gt;Temp read/written&lt;/th&gt;&lt;th&gt;Memory Usage (work_mem)&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;10M&lt;/td&gt;&lt;td&gt;54056&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;&amp;nbsp;&lt;/td&gt;&lt;td&gt;63480+63480&lt;/td&gt;&lt;td&gt;371094kB&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;20M&lt;/td&gt;&lt;td&gt;108110&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;&amp;nbsp;&lt;/td&gt;&lt;td&gt;126956+126956&lt;/td&gt;&lt;td&gt;742188kB&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;80M&lt;/td&gt;&lt;td&gt;432434&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;253908+253908&lt;/td&gt;&lt;td&gt;2968750kB&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;We observe the same numbers as in our Docker installation.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;See the extension docs for installation/usage and the&amp;nbsp;&lt;STRONG&gt;hint table&lt;/STRONG&gt; for cases where you want to force a specific join method.&amp;nbsp;&lt;A href="https://pg-hint-plan.readthedocs.io/en/latest/" target="_blank" rel="noopener"&gt;[pg-hint-pl...thedocs.io]&lt;/A&gt;, &lt;A href="https://pg-hint-plan.readthedocs.io/en/latest/hint_table.html" target="_blank" rel="noopener"&gt;[pg-hint-pl...thedocs.io]&lt;/A&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
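&lt;P&gt;As a sketch of how the hint table can be used (assuming pg_hint_plan is installed and pg_hint_plan.enable_hint_table is on, as set earlier; the normalized query string below is illustrative and must match your workload's normalized form):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Hints in hint_plan.hints are matched by normalized query string
-- (constants in the query are replaced with ? placeholders).
INSERT INTO hint_plan.hints (norm_query_string, application_name, hints)
VALUES (
  'SELECT COUNT(*) FROM table_s s JOIN table_h h ON s.column_a = h.column_a;',
  '',                 -- empty = applies to any application_name
  'MergeJoin(s h)'    -- steer away from the skew-sensitive hash join
);&lt;/LI-CODE&gt;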
&lt;H2&gt;FAQ&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Q: I set work_mem = 4MB. Why did my Hash Join report ~371MB Memory Usage?&lt;/STRONG&gt;&lt;BR /&gt;A: Hash joins can use up to hash_mem_multiplier × work_mem &lt;EM&gt;per hash table&lt;/EM&gt;, and skew can cause large per‑bucket chains. Multiple nodes/workers multiply usage. work_mem is not a global hard cap. &lt;A href="https://postgresqlco.nf/doc/en/param/hash_mem_multiplier/" target="_blank" rel="noopener"&gt;[postgresqlco.nf]&lt;/A&gt;, &lt;A href="https://pgpedia.info/h/hash_mem_multiplier.html" target="_blank" rel="noopener"&gt;[pgpedia.info]&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q: How do I know if a Hash Join spilled to disk?&lt;/STRONG&gt;&lt;BR /&gt;A: In EXPLAIN (ANALYZE), the Hash node reports Batches: N. &lt;STRONG&gt;N &amp;gt; 1&lt;/STRONG&gt; indicates partitioning and temp I/O; you’ll also see temp_blks_read/written in the buffers counters and Temp I/O timings. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;, &lt;A href="https://thoughtbot.com/blog/reading-an-explain-analyze-query-plan" target="_blank" rel="noopener"&gt;[thoughtbot.com]&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q: Can Query Store tell me per‑query memory consumption?&lt;/STRONG&gt;&lt;BR /&gt;A: Not directly. It gives time‑sliced runtime and wait stats (plus temp/shared block counters via underlying stats), but no “peak MB used by this hash table” metric. Use EXPLAIN and server metrics. &lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/monitor/concepts-query-store" target="_blank" rel="noopener"&gt;[learn.microsoft.com]&lt;/A&gt;, &lt;A href="https://www.postgresql.org/docs/current/pgstatstatements.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/P&gt;
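&lt;P&gt;For example, pg_stat_statements can surface the heaviest temp spillers over time (a sketch, assuming the pg_stat_statements extension is enabled on the server):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Top statements by temp file writes (hash join spills show up here)
SELECT queryid,
       left(query, 60)  AS query_start,
       calls,
       temp_blks_read,
       temp_blks_written
FROM pg_stat_statements
ORDER BY temp_blks_written DESC
LIMIT 10;&lt;/LI-CODE&gt;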
&lt;P&gt;&lt;STRONG&gt;Q: I hit “Failed on request … in HashBatchContext.” What’s that?&lt;/STRONG&gt;&lt;BR /&gt;A: That’s an OOM raised by the executor while allocating memory. Reduce skew, avoid forced hash joins, or review per‑query memory and concurrency.&amp;nbsp;&lt;A href="https://www.postgresql.org/message-id/B743D886-5469-4FB1-A75E-F262F399E7BA%40gmail.com" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/P&gt;
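&lt;P&gt;A minimal mitigation sketch: disable hash joins for one problematic statement only, so the planner falls back to a merge or nested-loop join without changing server-wide settings:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;BEGIN;
SET LOCAL enable_hashjoin = off;  -- scoped to this transaction only
SELECT COUNT(*)
FROM table_s s
JOIN table_h h ON s.column_a = h.column_a;
COMMIT;&lt;/LI-CODE&gt;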
&lt;H2&gt;Further reading&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Server parameters &amp;amp; memory&lt;/STRONG&gt; (official docs): guidance on work_mem, shared_buffers, parallelism. &lt;A href="https://www.postgresql.org/docs/current/runtime-config-resource.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Hash joins under the hood&lt;/STRONG&gt;: deep dives into buckets, batches, and memory footprints. &lt;A href="https://postgrespro.com/blog/pgsql/5969673" target="_blank" rel="noopener"&gt;[postgrespro.com]&lt;/A&gt;, &lt;A href="https://www.pgcon.org/2017/schedule/attachments/455_pgcon-2017-hash-joins.pdf" target="_blank" rel="noopener"&gt;[pgcon.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;hash_mem_multiplier&lt;/STRONG&gt;: history and defaults by version. &lt;A href="https://pgpedia.info/h/hash_mem_multiplier.html" target="_blank" rel="noopener"&gt;[pgpedia.info]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;EXPLAIN primer&lt;/STRONG&gt;: how to read Hash node details, Batches, Memory Usage. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;, &lt;A href="https://thoughtbot.com/blog/reading-an-explain-analyze-query-plan" target="_blank" rel="noopener"&gt;[thoughtbot.com]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Query Store (Azure Flexible)&lt;/STRONG&gt;: enable, query, and interpret. &lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/monitor/concepts-query-store" target="_blank" rel="noopener"&gt;[learn.microsoft.com]&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Ready‑to‑use mitigation checklist (DBA quick wins)&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Remove join hints/GUC overrides that force Hash Join; re‑plan. &lt;A href="https://pg-hint-plan.readthedocs.io/en/latest/" target="_blank" rel="noopener"&gt;[pg-hint-pl...thedocs.io]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Refresh stats; confirm realistic rowcount/NDV estimates. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Consider alternate join strategies (Merge/Index‑Nested‑Loop) when skew is high. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Keep work_mem conservative for OLTP; consider session‑scoped bumps only for specific analytic queries. &lt;A href="https://www.postgresql.org/docs/current/runtime-config-resource.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Tune hash_mem_multiplier carefully only after understanding spill patterns. &lt;A href="https://postgresqlco.nf/doc/en/param/hash_mem_multiplier/" target="_blank" rel="noopener"&gt;[postgresqlco.nf]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Use EXPLAIN (ANALYZE, BUFFERS) to verify Batches and Memory Usage. &lt;A href="https://www.postgresql.org/docs/current/using-explain.html" target="_blank" rel="noopener"&gt;[postgresql.org]&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Use Query Store/pg_stat_statements to find heavy temp/shared I/O offenders over time.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 09 Mar 2026 13:38:54 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/understanding-hash-join-memory-usage-and-oom-risks-in-postgresql/ba-p/4500308</guid>
      <dc:creator>FranciscoPardillo</dc:creator>
      <dc:date>2026-03-09T13:38:54Z</dc:date>
    </item>
    <item>
      <title>Guide to Upgrade Azure Database for MySQL from 8.0 to 8.4</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/guide-to-upgrade-azure-database-for-mysql-from-8-0-to-8-4/ba-p/4493669</link>
      <description>&lt;P&gt;This guide merges Azure official documentation and best practices from real-world upgrades. Always refer to the latest Azure documentation and portal/CLI for any updates or discrepancies.&lt;/P&gt;
&lt;H2&gt;1. Background&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Why Upgrade?&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;MySQL 5.7 reached end of community support in October 2023, and will enter extended support in August 2026.&lt;/LI&gt;
&lt;LI&gt;MySQL 8.0 will reach end of community support in April 2026 and will enter extended support in January 2027.&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-olk-copy-source="MessageBody"&gt;MySQL 8.4 is the first official LTS (Long Term Support) version, officially supported by Oracle until April 2032.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Upgrading ensures security, performance, and continued support.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Supported Upgrade Paths:&lt;/STRONG&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;5.7 → 8.0 (must be completed first)&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;8.0 → 8.4&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Direct 5.7 → 8.4 is NOT supported.&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Downgrades are NOT supported.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Key Notes:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Irreversible:&lt;/STRONG&gt; Major version upgrades are irreversible and cannot be rolled back directly.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Downtime:&lt;/STRONG&gt; The server is offline during the upgrade. Downtime depends on database size and table count.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;HA Limitation:&lt;/STRONG&gt; High Availability (HA) servers cannot achieve near-zero downtime during major upgrades due to cross-version replication instability.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance:&lt;/STRONG&gt; Some workloads may not see performance improvements after upgrade.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;2. Key Changes&lt;/H2&gt;
&lt;H3&gt;Authentication Plugin Changes&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;mysql_native_password is disabled by default in the MySQL 8.4 community version. However, it is enabled by default in Azure MySQL 8.4. Older client versions may not be able to connect using mysql_native_password. If you encounter this issue, upgrade your client version first.&lt;/LI&gt;
&lt;LI&gt;The default_authentication_plugin variable should be removed.&lt;/LI&gt;
&lt;LI&gt;All users should be migrated to the caching_sha2_password plugin.&lt;/LI&gt;
&lt;/UL&gt;
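&lt;P&gt;A quick audit sketch to find accounts still on the legacy plugin and migrate them (the account name and password below are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- List accounts using the legacy plugin
SELECT user, host, plugin
FROM mysql.user
WHERE plugin = 'mysql_native_password';

-- Migrate an account to caching_sha2_password (example account/password)
ALTER USER 'app_user'@'%' IDENTIFIED WITH caching_sha2_password BY '&amp;lt;new-password&amp;gt;';&lt;/LI-CODE&gt;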
&lt;H3&gt;Foreign Key Constraint Enforcement&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Referenced columns must have a unique key in 8.4; audit all foreign key constraints before upgrading.&lt;/LI&gt;
&lt;/UL&gt;
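&lt;P&gt;One way to enumerate foreign keys and their referenced columns for review (a sketch using information_schema; verify that each referenced column is covered by a primary or unique key):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT constraint_name,
       table_schema,
       table_name,
       column_name,
       referenced_table_name,
       referenced_column_name
FROM information_schema.key_column_usage
WHERE referenced_table_name IS NOT NULL
ORDER BY table_schema, table_name;&lt;/LI-CODE&gt;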
&lt;H3&gt;Removed Features&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;FLUSH HOSTS command → use TRUNCATE TABLE performance_schema.host_cache&lt;/LI&gt;
&lt;LI&gt;PASSWORD() function → use ALTER USER&lt;/LI&gt;
&lt;LI&gt;tx_isolation option → use transaction_isolation&lt;/LI&gt;
&lt;LI&gt;expire_logs_days → use binlog_expire_logs_seconds&lt;/LI&gt;
&lt;LI&gt;AUTO_INCREMENT is no longer allowed on FLOAT/DOUBLE columns&lt;/LI&gt;
&lt;/UL&gt;
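&lt;P&gt;The replacements above in SQL form (values are illustrative; on Azure, global parameters are normally set via Server Parameters in the portal):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Instead of FLUSH HOSTS:
TRUNCATE TABLE performance_schema.host_cache;

-- Instead of the removed PASSWORD() function:
ALTER USER 'app_user'@'%' IDENTIFIED BY '&amp;lt;new-password&amp;gt;';

-- Instead of tx_isolation:
SET SESSION transaction_isolation = 'READ-COMMITTED';

-- Instead of expire_logs_days = 10 (10 days = 864000 seconds):
SET GLOBAL binlog_expire_logs_seconds = 864000;&lt;/LI-CODE&gt;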
&lt;H3&gt;New Reserved Keywords&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;MANUAL, PARALLEL, QUALIFY, TABLESAMPLE — do not use these as unquoted identifiers&lt;/LI&gt;
&lt;/UL&gt;
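&lt;P&gt;A sketch to scan user schemas for column names that collide with the new reserved keywords (extend the IN list as needed):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE UPPER(column_name) IN ('MANUAL', 'PARALLEL', 'QUALIFY', 'TABLESAMPLE')
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');&lt;/LI-CODE&gt;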
&lt;H3&gt;Terminology Changes&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;MASTER/SLAVE terminology updated to SOURCE/REPLICA&lt;/LI&gt;
&lt;LI&gt;Update all scripts and applications that use the old terms&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;InnoDB 8.0 → 8.4 Default Parameter Changes&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;innodb_adaptive_hash_index: ON → OFF&lt;/LI&gt;
&lt;LI&gt;innodb_change_buffering: All → None&lt;/LI&gt;
&lt;LI&gt;innodb_buffer_pool_in_core_file: ON → OFF&lt;/LI&gt;
&lt;LI&gt;innodb_io_capacity: 200 → 10000&lt;/LI&gt;
&lt;LI&gt;innodb_log_buffer_size: 16MB → 64MB&lt;/LI&gt;
&lt;/UL&gt;
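&lt;P&gt;To compare your current values against the new 8.4 defaults before upgrading:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('innodb_adaptive_hash_index',
                        'innodb_change_buffering',
                        'innodb_buffer_pool_in_core_file',
                        'innodb_io_capacity',
                        'innodb_log_buffer_size');&lt;/LI-CODE&gt;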
&lt;H2&gt;3. Upgrade Prerequisites&lt;/H2&gt;
&lt;H3&gt;Basic Requirements&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Item&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Requirement&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Deployment Type&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Azure Database for MySQL Flexible Server instance only&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Region&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Region must support MySQL 8.4 in Azure Portal&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Current Version&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Must be 8.0.x (check via SELECT VERSION(); or Portal)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Server Status&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Server must be running, with no ongoing scaling/restart/updates&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;Read Replica Version&lt;/H3&gt;
&lt;P&gt;If read replicas exist, upgrade them before upgrading the primary server. Cross-version replication is not guaranteed to be stable. Follow official documentation for handling replicas (stop replication and upgrade separately, or delete and recreate after upgrade).&lt;/P&gt;
&lt;H3&gt;Create On-Demand Backup&lt;/H3&gt;
&lt;P&gt;Before upgrading production, create an on-demand backup for rollback if needed:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Portal &amp;gt; Your MySQL Server &amp;gt; Backup &amp;amp; Restore &amp;gt; Backup Now&lt;/LI&gt;
&lt;LI&gt;Backups created before the upgrade restore to the old version; backups created after the upgrade restore to the new version.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;XA Transactions&lt;/H3&gt;
&lt;P&gt;Ensure no active or pending XA transactions:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;XA RECOVER;&lt;/LI-CODE&gt;
&lt;P&gt;If any results are returned, roll back these transactions:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;XA ROLLBACK 'xid';&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;SQL Mode&lt;/H3&gt;
&lt;P&gt;Remove deprecated sql_mode values before upgrade.&lt;/P&gt;
&lt;P&gt;Deprecated values: NO_AUTO_CREATE_USER, NO_FIELD_OPTIONS, NO_KEY_OPTIONS, NO_TABLE_OPTIONS.&lt;/P&gt;
&lt;P&gt;Check the current SQL mode:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT @@sql_mode;&lt;/LI-CODE&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;In Azure Portal: Go to Server Parameters &amp;gt; Search “sql_mode” &amp;gt; Remove deprecated values &amp;gt; Save&lt;/P&gt;
&lt;H2&gt;4. Pre-Upgrade Checks&lt;/H2&gt;
&lt;H3&gt;Use Upgrade Check Tool (&lt;A class="lia-external-url" href="https://dev.mysql.com/doc/mysql-shell/8.4/en/mysql-shell-utilities-upgrade.html" target="_blank" rel="noopener"&gt;MySQL Shell&lt;/A&gt;)&lt;/H3&gt;
&lt;H4&gt;Check Items&lt;/H4&gt;
&lt;P&gt;The tool performs 40+ automated checks, mainly including:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-solid" border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Check Category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Deprecated Time Types&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Checks for old-format time columns&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Reserved Keyword Conflicts&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Checks for conflicts with reserved keywords in the new version&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;UTF8MB3 Charset&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Checks for objects needing migration to UTF8MB4&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Removed System Variables&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Checks for configuration items no longer supported&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Authentication Plugin Change&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Checks for users needing updated authentication methods&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Partition Table Restrictions&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Checks for partition compatibility&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Foreign Key Names&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Checks for foreign key name conflicts&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Oracle provides util.checkForServerUpgrade() to detect compatibility issues.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Install MySQL Shell 8.4 or higher.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Connect to Azure MySQL server:&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="shell"&gt;mysqlsh --host=&amp;lt;your-server&amp;gt;.mysql.database.azure.com \
  --user=&amp;lt;admin-user&amp;gt; \
  --password \
  --ssl-mode=REQUIRED&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;Run upgrade check:&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="shell"&gt;util.checkForServerUpgrade()
util.checkForServerUpgrade({"targetVersion": "8.4.0"})&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;Command-line (recommended):&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;mysqlsh &amp;lt;admin-user&amp;gt;@&amp;lt;your-server&amp;gt;.mysql.database.azure.com:3306 \
  --ssl-mode=REQUIRED \
  -- util checkForServerUpgrade \
  --target-version=8.4.0 \
  --output-format=JSON&lt;/LI-CODE&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Required privileges: RELOAD, PROCESS, SELECT.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;The tool performs 40+ checks (deprecated types, reserved keywords, charset, removed variables, auth plugins, partitioning, FK constraints, etc.).&lt;/P&gt;
&lt;H4&gt;Review Results&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Errors:&lt;/STRONG&gt; Must be fixed before upgrade&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Warnings:&lt;/STRONG&gt; Strongly recommended to fix&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Notices:&lt;/STRONG&gt; Informational&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Additional Compatibility and Validation Checks&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Client Library Compatibility:&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Some client libraries may be incompatible with MySQL 8.0+ due to outdated drivers. Collect and review all client driver versions and test compatibility in a staging environment before upgrade.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Authentication Changes&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;After upgrade, applications may fail to connect if not compatible with new authentication plugins. MySQL 8.0 defaults to caching_sha2_password, and MySQL 8.4 disables mysql_native_password by default.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Solution&lt;STRONG&gt;:&lt;/STRONG&gt; Upgrade client drivers to versions supporting the new authentication method. If legacy drivers must be used, temporarily set user authentication to mysql_native_password and plan to upgrade drivers as soon as possible.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Charset and Collation Changes:&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;MySQL 8.0 defaults to utf8mb4 charset and utf8mb4_0900_ai_ci collation, which may cause changes in string comparison, index length limits, sort order, or JOIN-related errors due to charset mismatches.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Recommendations:&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Check if ORM and drivers support utf8mb4.&lt;/LI&gt;
&lt;LI&gt;Adjust index lengths (utf8mb4 uses up to 4 bytes per character).&lt;/LI&gt;
&lt;LI&gt;Specify COLLATE explicitly if you need to preserve original sort behavior.&lt;/LI&gt;
&lt;LI&gt;Standardize charset and collation settings for all tables and columns.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Object Statistics:&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Review table and object statistics to estimate upgrade and rollback time.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance Considerations:&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Most SQL workloads benefit from upgrades, but some queries may experience degraded performance (e.g., queries with ORDER BY, complex JOINs, subqueries, or many bound parameters). The optimizer evolves between versions (e.g., max_length_for_sort_data was deprecated in 8.0.20, execution plan changes, etc.).&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Recommendations:&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Use standard SQL tuning methods: review execution plans, optimize queries, and add indexes as needed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Test in Staging&lt;/H3&gt;
&lt;P&gt;💡 &lt;STRONG&gt;Strong Recommendation:&lt;/STRONG&gt; First perform the upgrade in a staging environment (such as a restored backup copy of the original server) and validate the following:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Whether the upgrade process completes successfully&lt;/LI&gt;
&lt;LI&gt;Estimated downtime&lt;/LI&gt;
&lt;LI&gt;Application compatibility&lt;/LI&gt;
&lt;LI&gt;Performance baseline comparison&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;5. Upgrade Methods &amp;amp; Scenarios&lt;/H2&gt;
&lt;H3&gt;Upgrade Method Comparison&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Method&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Pros&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Cons&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Recommended For&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;In-Place Upgrade&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Simple, no connection string change, no extra cost&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Longer downtime, can be hours or even days, rollback only via backup&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Dev/test, small prod, downtime allowed&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Replica Based Upgrade&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Low risk, fast rollback, minimal downtime&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Extra cost, more steps, connection string change&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Production, downtime sensitive&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;Method 1: Azure Portal (In-Place Upgrade, Recommended)&lt;/H3&gt;
&lt;P&gt;For General Purpose and &lt;SPAN data-olk-copy-source="MessageBody"&gt;Memory Optimized&lt;/SPAN&gt;&amp;nbsp;SKUs:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Sign in to Azure Portal&lt;/LI&gt;
&lt;LI&gt;Navigate to your MySQL Flexible Server&lt;/LI&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;Overview&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Find&amp;nbsp;&lt;STRONG&gt;MySQL Version&lt;/STRONG&gt; and click &lt;STRONG&gt;Upgrade&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Select target version 8.4&lt;/LI&gt;
&lt;LI&gt;Review pre-upgrade validation&lt;/LI&gt;
&lt;LI&gt;Confirm and start upgrade&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;For Burstable SKU: May require temporary SKU change to General Purpose before upgrade, then revert after.&lt;/P&gt;
&lt;H3&gt;Method 2: Azure CLI (In-Place Upgrade)&lt;/H3&gt;
&lt;LI-CODE lang="powershell"&gt;az cloud set --name AzureCloud
az login
az mysql flexible-server upgrade \
  --name &amp;lt;your-server-name&amp;gt; \
  --resource-group &amp;lt;your-resource-group&amp;gt; \
  --version 8.4&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Method 3: Minimal Downtime (Read Replica)&lt;/H3&gt;
&lt;P&gt;For production environments sensitive to downtime:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Create read replica (Portal &amp;gt; Replication &amp;gt; Add Replica)&lt;/LI&gt;
&lt;LI&gt;Wait for replica sync (check SHOW REPLICA STATUS\G, ensure Seconds_Behind_Source = 0)&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Upgrade replica to 8.4 (Portal or CLI)&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Validate replica upgrade (SELECT VERSION();)&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Promote replica to primary (Portal &amp;gt; Replication &amp;gt; Stop Replication)&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Update application connection strings to new primary&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;(Optional) Delete old primary&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This approach is an architecture migration + failover, not the same as in-place upgrade. Cross-version replication stability is not guaranteed. Fully test in staging and communicate risks to stakeholders.&lt;/P&gt;
&lt;H2&gt;6. Post-Upgrade Validation&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;Check MySQL version:&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang="sql"&gt;SELECT VERSION();&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Check database and table status:&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang="sql"&gt;SHOW DATABASES;
USE your_database;
SHOW TABLE STATUS;
CHECK TABLE your_table;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Validate application connectivity and key business functions&lt;/LI&gt;
&lt;LI&gt;Monitor metrics for 24-48 hours: CPU, memory, I/O, query latency, error logs&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;7. Rollback Strategy&lt;/H2&gt;
&lt;P&gt;Major version upgrades are irreversible. Rollback requires restoring from backup:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Azure Portal &amp;gt; MySQL Server &amp;gt; Backup &amp;amp; Restore &amp;gt; Select pre-upgrade backup &amp;gt; Restore&lt;/LI&gt;
&lt;LI&gt;Specify new server name&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Wait for restore to complete&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Update application connection strings to restored server&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;💡 The restored server is a new, independent server. The original server remains.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Business Rollback Advice:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class=""&gt;- Confirm data loss/replay strategy for changes between upgrade and rollback &lt;BR /&gt;- Assess impact on dependent systems (reports, ETL, etc.)&lt;BR /&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;- After rollback, point applications to old server, clean up new objects, and re-validate key functions&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;8. FAQs&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Q1: Can I upgrade directly from MySQL 5.7 to 8.4?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;No. Upgrade must be sequential: 5.7 → 8.0 → 8.4&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q2: How long does the upgrade take?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Downtime depends on database size, storage, and table count. Test in staging to estimate.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q3: Can HA servers achieve zero downtime?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;No. Major upgrades cannot use HA for near-zero downtime due to cross-version replication instability.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q4: Will performance improve after upgrade?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Not guaranteed. Some workloads may see no change or slight decrease. Benchmark before and after.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q5: Can I change other server properties during upgrade?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Not recommended. Avoid changing other properties via REST API or SDK during upgrade.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q6: How to estimate downtime?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Upgrade time scales with the number of objects in the database: the more objects, the longer the upgrade takes. The most accurate way to estimate downtime is:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Restore a test server from backup&lt;/LI&gt;
&lt;LI&gt;Perform the upgrade on the test server&lt;/LI&gt;
&lt;LI&gt;Record the actual upgrade time&lt;/LI&gt;
&lt;LI&gt;Add a buffer for production&lt;/LI&gt;
&lt;/OL&gt;
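&lt;P&gt;As a rough illustration of the final step, the measured test upgrade time can be scaled by a safety buffer. This is a minimal sketch; the 1.5&amp;times; factor and the 40-minute figure are illustrative assumptions, not guidance.&lt;/P&gt;

```python
# Rough production-downtime estimate from a timed test-server upgrade.
# The buffer factor is an assumed safety margin, not an official value.
def estimate_production_downtime(test_minutes: float, buffer_factor: float = 1.5) -> float:
    """Scale the measured test upgrade time by a safety buffer."""
    return test_minutes * buffer_factor

print(estimate_production_downtime(40))  # 40-minute test upgrade -> 60.0
```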
&lt;P&gt;&lt;STRONG&gt;Q7: What happens to backups during/after upgrade?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Backups before upgrade restore to old version; after upgrade, to 8.4. Confirm retention and PITR settings before upgrade.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q8: Is upgrade downtime included in SLA?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Planned downtime for upgrade is not counted in availability SLA. Schedule during maintenance window and communicate with stakeholders.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q9: Charset and collation issues after upgrade?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;MySQL 8.0+ defaults to utf8mb4 and utf8mb4_0900_ai_ci. Check ORM/driver support, adjust index length, specify COLLATE if needed, and unify charset settings.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q10: Performance issues after upgrade?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Some queries may slow down due to optimizer changes. Use EXPLAIN/ANALYZE, add/optimize indexes, use query hints, and review execution plans.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q11: Authentication issues after upgrade?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;MySQL 8.0+ defaults to caching_sha2_password. Upgrade client drivers. If needed, temporarily revert user auth to mysql_native_password (not recommended for 8.4).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q12: Client library compatibility?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Test all client libraries in staging. Upgrade outdated connectors/ORMs.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Q13: What is Azure Extended Support?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure provides paid extended support for MySQL versions after community EOL. See&amp;nbsp;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/adformysql/announcing-extended-support-for-azure-database-for-mysql/4442924" target="_blank" rel="noopener" data-lia-auto-title="Extended Support for MySQL." data-lia-auto-title-active="0"&gt;Extended Support for MySQL.&lt;/A&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Stay Connected&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;We welcome your feedback and invite you to share your experiences or suggestions at&amp;nbsp;&lt;A href="mailto:AskAzureDBforMySQL@service.microsoft.com" target="_blank" rel="noopener"&gt;AskAzureDBforMySQL@service.microsoft.com&lt;/A&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Stay up to date by visiting&amp;nbsp;&lt;A href="https://learn.microsoft.com/azure/mysql/flexible-server/whats-new" target="_blank" rel="noopener"&gt;What's new in Azure Database for MySQL&lt;/A&gt;, and follow us on&amp;nbsp;&lt;A href="https://aka.ms/mysql-yt-subscribe" target="_blank" rel="noopener"&gt;YouTube&lt;/A&gt; | &lt;A href="https://www.linkedin.com/company/azure-database-for-mysql/" target="_blank" rel="noopener"&gt;LinkedIn&lt;/A&gt; | &lt;A href="https://twitter.com/AzureDBMySQL" target="_blank" rel="noopener"&gt;X&lt;/A&gt;&amp;nbsp;for ongoing updates.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for choosing Azure Database for MySQL!&amp;nbsp;&lt;/P&gt;
&lt;H5&gt;References:&lt;/H5&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/mysql/flexible-server/how-to-upgrade" target="_blank" rel="noopener"&gt;Azure Database for MySQL Flexible Server - Major Version Upgrade&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/mysql/flexible-server/how-to-upgrade" target="_blank" rel="noopener"&gt;Azure Database for MySQL Flexible Server - Upgrade FAQ&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://dev.mysql.com/doc/mysql-shell/8.4/en/mysql-shell-utilities-upgrade.html" target="_blank" rel="noopener"&gt;MySQL Shell Upgrade Check Tool Documentation&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2026 14:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/guide-to-upgrade-azure-database-for-mysql-from-8-0-to-8-4/ba-p/4493669</guid>
      <dc:creator>SamZhao</dc:creator>
      <dc:date>2026-03-05T14:00:00Z</dc:date>
    </item>
    <item>
      <title>Troubleshooting Intermittent Query Failures on Azure SQL DB Read Replicas (Error 3947)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-intermittent-query-failures-on-azure-sql-db-read/ba-p/4499635</link>
      <description>&lt;P&gt;Recently, I worked on an interesting customer case involving intermittent query failures while fetching reporting data from a read replica.&lt;/P&gt;
&lt;P&gt;The customer was running their database on the Azure SQL Database Hyperscale service tier, utilizing the &lt;STRONG&gt;read scale-out replica&lt;/STRONG&gt; to offload reporting traffic from the primary compute node.&lt;/P&gt;
&lt;P&gt;However, while loading reports, the application occasionally took longer than usual and then failed with the following error:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;[DataSource.Error] ERROR [HY000] [Microsoft][ODBC Driver 18 for SQL Server] Unspecified error occurred on SQL Server. Connection may have been terminated by the server. &lt;BR /&gt;&lt;BR /&gt;ERROR [HY000] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server] The service has encountered an error processing your request. Please try again. &lt;BR /&gt;Error code 3947.&lt;/STRONG&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;BR /&gt;In this post, I’ll walk through how we analyzed the issue, identified the root cause, and the mitigation strategies that helped the customer.&lt;/P&gt;
&lt;H3&gt;Understanding Error 3947&lt;/H3&gt;
&lt;P&gt;The key indicator in the error message was &lt;STRONG&gt;Error 3947&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;3947&lt;/STRONG&gt; – The transaction was aborted because the secondary compute failed to catch up redo. Retry the transaction.&lt;/P&gt;
&lt;P&gt;This error typically occurs when the secondary compute node &lt;STRONG&gt;(read replica)&lt;/STRONG&gt; cannot keep up with redo operations being replayed from the primary replica.&lt;/P&gt;
&lt;P&gt;When this happens, the system may terminate long-running queries on the replica to prevent the replica from falling too far behind the primary.&lt;/P&gt;
&lt;H3&gt;Step 1 – Connect to the Read Replica&lt;/H3&gt;
&lt;P&gt;To begin troubleshooting, connect to the read replica using SSMS or your client tool with the following connection parameter:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;ApplicationIntent=ReadOnly&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This ensures the session connects to the read scale-out replica instead of the primary compute node.&lt;/P&gt;
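&lt;P&gt;When scripting this check, the same intent flag goes into the connection string. A minimal sketch in Python, with placeholder server, database, and credential values (the string is only assembled here; no connection is opened):&lt;/P&gt;

```python
# Minimal sketch: build an ODBC connection string that targets the read
# scale-out replica. Server, database, and credentials are placeholders.
def replica_connection_string(server: str, database: str, user: str, password: str) -> str:
    """Return a connection string routed to the read-only replica."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server},1433;"
        f"Database={database};"
        f"Uid={user};Pwd={password};"
        "Encrypt=yes;"
        "ApplicationIntent=ReadOnly;"  # routes the session to the replica
    )

conn_str = replica_connection_string(
    "myserver.database.windows.net", "reportingdb", "reader", "your-password"
)
print("ApplicationIntent=ReadOnly" in conn_str)  # True
```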
&lt;H3&gt;Step 2 – Check for Active Blocking Sessions&lt;/H3&gt;
&lt;P&gt;Once connected to the read replica, the first step is to check whether queries are blocking each other.&lt;/P&gt;
&lt;P&gt;The following query helps identify active requests and blocking chains.&lt;/P&gt;
&lt;LI-CODE&gt;SELECT
    r.session_id,
    r.status,
    r.command,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time,
    r.wait_resource,
    r.cpu_time,
    r.total_elapsed_time,
    DB_NAME(r.database_id) AS database_name,
    SUBSTRING(t.text,
        (r.statement_start_offset/2) + 1,
        ((CASE r.statement_end_offset
              WHEN -1 THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset END
          - r.statement_start_offset)/2) + 1) AS running_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id &amp;lt;&amp;gt; @@SPID
ORDER BY r.total_elapsed_time DESC;&lt;/LI-CODE&gt;
&lt;P&gt;Columns to observe:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;blocking_session_id&lt;/STRONG&gt; – indicates if a query is being blocked&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;wait_type&lt;/STRONG&gt; – shows what the query is waiting for&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;total_elapsed_time&lt;/STRONG&gt; – identifies long-running queries&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Step 3 – Identify Compile Locks and Schema Locks&lt;/H3&gt;
&lt;P&gt;While investigating the issue, we observed multiple compile locks and schema locks on several table objects.&lt;/P&gt;
&lt;P&gt;Even though the database is read-only, queries still need access to metadata, and this can result in schema-related locks.&lt;/P&gt;
&lt;P&gt;Use the following query to identify schema locks on objects.&lt;/P&gt;
&lt;LI-CODE&gt;SELECT
    tl.request_session_id,
    tl.resource_type,
    tl.resource_associated_entity_id AS object_id,
    OBJECT_NAME(tl.resource_associated_entity_id) AS object_name,
    tl.request_mode,
    tl.request_status
FROM sys.dm_tran_locks tl
WHERE tl.resource_type = 'OBJECT'
  AND tl.request_mode LIKE '%SCH%'
ORDER BY tl.request_session_id;&lt;/LI-CODE&gt;
&lt;P&gt;Typical schema locks include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;SCH-S&lt;/STRONG&gt; – Schema Stability&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;SCH-M&lt;/STRONG&gt; – Schema Modification&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These locks indicate metadata access or schema changes.&lt;/P&gt;
&lt;H3&gt;Step 4 – Identify Waiting Tasks&lt;/H3&gt;
&lt;P&gt;Next, check whether sessions are waiting due to schema lock contention.&lt;/P&gt;
&lt;LI-CODE&gt;SELECT
    wt.session_id,
    wt.wait_type,
    wt.wait_duration_ms,
    wt.blocking_session_id,
    wt.resource_description
FROM sys.dm_os_waiting_tasks wt
WHERE wt.session_id &amp;gt; 50
ORDER BY wt.wait_duration_ms DESC;&lt;/LI-CODE&gt;
&lt;P&gt;Common wait types that may appear include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;LCK_M_SCH_S&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;LCK_M_SCH_M&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These wait types usually indicate schema lock contention between queries and redo operations.&lt;/P&gt;
&lt;H3&gt;Why Blocking Happens on Read Replicas&lt;/H3&gt;
&lt;P&gt;In normal scenarios, read-only databases eliminate most user-generated blocking since users cannot modify data.&lt;/P&gt;
&lt;P&gt;However, schema changes performed on the primary replica are still replayed on the read-only replica through the redo process.&lt;/P&gt;
&lt;P&gt;Examples include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;ALTER TABLE&lt;/LI&gt;
&lt;LI&gt;Index rebuilds&lt;/LI&gt;
&lt;LI&gt;Statistics updates&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;During this replay process, the redo thread temporarily acquires &lt;STRONG&gt;schema locks&lt;/STRONG&gt; to maintain metadata consistency.&lt;/P&gt;
&lt;P&gt;If a read query accesses the same object at the same time, the query may need to wait until the schema operation completes.&lt;/P&gt;
&lt;H3&gt;When Long Running Queries Block the Redo Process&lt;/H3&gt;
&lt;P&gt;Queries running on read replicas must still access metadata such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Tables&lt;/LI&gt;
&lt;LI&gt;Indexes&lt;/LI&gt;
&lt;LI&gt;Statistics&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In rare cases, the following scenario can occur:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;A query on the read replica acquires metadata locks.&lt;/LI&gt;
&lt;LI&gt;The primary replica executes schema changes.&lt;/LI&gt;
&lt;LI&gt;The redo process attempts to replay those changes on the secondary.&lt;/LI&gt;
&lt;LI&gt;The redo process becomes blocked by the query.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;If the blocking query runs for too long, the system may &lt;STRONG&gt;terminate the query automatically&lt;/STRONG&gt; to prevent the read replica from falling behind the primary.&lt;/P&gt;
&lt;P&gt;When this occurs, the session may receive errors such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Error 1219&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Error 3947&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Root Cause in This Customer Scenario&lt;/H3&gt;
&lt;P&gt;In this particular case, the customer had an ETL pipeline running every 5 minutes on the primary replica.&lt;/P&gt;
&lt;P&gt;The ETL process frequently modified database objects, which resulted in metadata changes that needed to be replayed on the read replica.&lt;/P&gt;
&lt;P&gt;At the same time, reporting queries were running on the read replica. Occasionally, these queries conflicted with schema operations being replayed through the redo process.&lt;/P&gt;
&lt;P&gt;This caused the redo process to lag behind, and eventually the system terminated the queries to maintain replication health, resulting in &lt;STRONG&gt;Error 3947&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H3&gt;Mitigation and Best Practices&lt;/H3&gt;
&lt;P&gt;While this behavior is &lt;STRONG&gt;by design&lt;/STRONG&gt;, the following best practices can help minimize the impact.&lt;/P&gt;
&lt;H3&gt;Avoid Frequent Schema Changes During Peak Workloads&lt;/H3&gt;
&lt;P&gt;Frequent schema modifications increase the likelihood of blocking on read replicas.&lt;/P&gt;
&lt;P&gt;Where possible:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Schedule schema changes during low workload windows&lt;/LI&gt;
&lt;LI&gt;Reduce frequent metadata modifications in ETL jobs&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Monitor Long Running Queries&lt;/H3&gt;
&lt;P&gt;Long-running queries increase the risk of blocking redo operations.&lt;/P&gt;
&lt;P&gt;Use the following query to identify such queries:&lt;/P&gt;
&lt;LI-CODE&gt;SELECT
    r.session_id,
    r.start_time,
    r.total_elapsed_time/1000 AS elapsed_seconds,
    r.status,
    r.wait_type,
    r.blocking_session_id,
    t.text AS query_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id &amp;lt;&amp;gt; @@SPID
ORDER BY r.total_elapsed_time DESC;&lt;/LI-CODE&gt;
&lt;H3&gt;Implement Retry Logic in Applications&lt;/H3&gt;
&lt;P&gt;Because these errors can occur intermittently, applications should implement retry logic when encountering:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Error 3947&lt;/LI&gt;
&lt;LI&gt;Error 1219&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Retrying the transaction typically succeeds once the redo process catches up.&lt;/P&gt;
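&lt;P&gt;A minimal sketch of such retry logic in Python. &lt;EM&gt;TransientReplicaError&lt;/EM&gt; is a hypothetical stand-in for whatever exception your SQL driver raises; real code would read the error number from the driver's exception instead.&lt;/P&gt;

```python
# Illustrative retry wrapper with exponential backoff for the retryable
# replica errors discussed above (3947 and 1219). TransientReplicaError
# is a stand-in for the real driver exception.
import time

RETRYABLE_ERRORS = {3947, 1219}

class TransientReplicaError(Exception):
    def __init__(self, number: int):
        super().__init__(f"SQL error {number}")
        self.number = number

def run_with_retry(operation, max_attempts: int = 3, base_delay: float = 1.0):
    """Run the operation, retrying on retryable SQL errors with backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientReplicaError as exc:
            # Re-raise non-retryable errors, or give up on the last attempt.
            if exc.number not in RETRYABLE_ERRORS or attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off, then retry
```

&lt;P&gt;Once the redo process catches up, the retried query normally completes on the next attempt.&lt;/P&gt;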
&lt;H3&gt;Conclusion&lt;/H3&gt;
&lt;P&gt;Read replicas significantly improve scalability by offloading reporting workloads, but they still depend on the redo process to stay synchronized with the primary replica.&lt;/P&gt;
&lt;P&gt;If long-running queries interfere with redo operations, the system may terminate those queries to protect replication health and availability.&lt;/P&gt;
&lt;P&gt;By monitoring blocking, schema locks, and long-running queries on read replicas, it becomes easier to identify and mitigate these scenarios.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2026 07:58:51 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-intermittent-query-failures-on-azure-sql-db-read/ba-p/4499635</guid>
      <dc:creator>Anasuah_Chakraborty</dc:creator>
      <dc:date>2026-03-05T07:58:51Z</dc:date>
    </item>
    <item>
      <title>Managed Identity Support for Azure SQL Database Import &amp; Export services (preview)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-sql-blog/managed-identity-support-for-azure-sql-database-import-export/ba-p/4498732</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Today&amp;nbsp;we’re&amp;nbsp;announcing&amp;nbsp;a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;public preview&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;that lets Azure SQL Database Import&amp;nbsp;&amp;amp;&amp;nbsp;Export&amp;nbsp;services&amp;nbsp;authenticate with&amp;nbsp;user-assigned&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;managed identity&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;Now Azure SQL Databases can perform import and export operations with no passwords, storage&amp;nbsp;keys&amp;nbsp;or SAS tokens.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With this preview, customers can choose to use either a single user-assigned managed identity (UAMI) for both SQL and Storage permissions or assign separate UAMIs, one for the Azure SQL logical server and another for the Storage account, for full separation of duties.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;At a glance:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Run Import/Export using a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;user-assigned managed identity (UAMI).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Use one identity for both SQL and Storage, or split them if you prefer tighter scoping.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Works in the portal, REST, Azure CLI, and PowerShell.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P aria-level="2"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Why &lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;this matters:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Managed identity support makes SQL migrations simpler and safer, no passwords, storage keys, or SAS tokens. By leveraging managed identity when integrating Import/Export into a pipeline, you streamline access management and enhance security: permissions are granted directly to the identity, reducing manual credential handling and the risk of exposing sensitive information. This keeps operations efficient and secure, without secrets embedded in scripts&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;You’ve&amp;nbsp;got two straightforward options:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;One UAMI for everything&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;(simplest setup).&lt;/SPAN&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Two UAMIs, &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;one for SQL and one for Storage, recommended if you wish to maintain more strictly defined permissions.&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Getting started:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Create a user-assigned managed identity (UAMI)&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;Decide up front whether you want one identity end-to-end, or two identities (SQL vs Storage) for separation of duties.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Attach the UAMI to the Azure SQL logical server&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;On the server&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Identity&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;blade, add the UAMI so the Import/Export job can run as that identity.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Set the server’s Microsoft Entra ID admin to the UAMI&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;In&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Microsoft Entra ID&amp;nbsp;&amp;gt;&amp;nbsp;Set admin,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;select the UAMI. This is what lets the workflow authenticate to SQL without a password.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Grant Storage access&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;Use&amp;nbsp;Storage Blob Data Reader&amp;nbsp;for import and&amp;nbsp;Storage Blob Data Contributor&amp;nbsp;for export, assigned in&amp;nbsp;Access control (IAM). If you can, scope the assignment to the container that holds the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.bacpac&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Pass resource IDs (not names) in your calls&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;
&lt;P&gt;In REST/CLI/PowerShell, you pass the UAMI&amp;nbsp;&lt;STRONG&gt;resource ID&lt;/STRONG&gt; as the value of &lt;EM&gt;administratorLogin&lt;/EM&gt; (SQL identity) and &lt;EM&gt;storageKey&lt;/EM&gt; (Storage identity), and set &lt;EM&gt;authenticationType&lt;/EM&gt; / &lt;EM&gt;storageKeyType&lt;/EM&gt; to &lt;EM&gt;ManagedIdentity&lt;/EM&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="1"&gt;
&lt;P&gt;&lt;EM&gt;administratorLogin → UAMI resource ID used for SQL auth&lt;/EM&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI style="font-style: italic;" aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="1"&gt;
&lt;P&gt;&lt;EM&gt;storageKey → UAMI resource ID used for Storage &lt;/EM&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI style="font-style: italic;" aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="1"&gt;
&lt;P&gt;&lt;EM&gt;authenticationType / storageKeyType → ManagedIdentity&lt;/EM&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="6" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Run the import/export job&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;Kick it off from the portal, REST, Azure CLI, or PowerShell. From there, the service uses the identity you selected to reach both SQL and Storage.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P aria-level="3"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Portal experience&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In the Azure portal, you can choose&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Authentication type&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;=&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Managed identity&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and select the user-assigned managed identity to use for the operation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Figure 1: Azure portal Import/Export experience with Managed identity authentication selected.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Notes&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;This preview supports&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;user-assigned&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;managed identities (UAMIs).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;For least privilege, scope Storage roles to the specific container used for the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.bacpac&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; file and use two user-assigned managed identities (UAMIs), one for SQL and one for the storage.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
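&lt;P&gt;As a sketch of the least-privilege setup (the subscription, resource group, storage account, and container names below are hypothetical placeholders), you can build a container-level scope string and assign the &lt;EM&gt;Storage Blob Data Contributor&lt;/EM&gt; role to the storage UAMI at that scope:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Hypothetical names; substitute your own values
subscriptionId="00000000-0000-0000-0000-000000000000"
resourceGroupName="rg-demo"
storageAccountName="stdemo"
containerName="bacpacs"

# Container-level scope string for a least-privilege role assignment
scope="/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Storage/storageAccounts/${storageAccountName}/blobServices/default/containers/${containerName}"
echo "$scope"

# Assign the role to the storage UAMI at container scope (requires az login; shown for illustration):
# az role assignment create --assignee-object-id "$storageUamiPrincipalId" \
#   --assignee-principal-type ServicePrincipal \
#   --role "Storage Blob Data Contributor" \
#   --scope "$scope"&lt;/LI-CODE&gt;
&lt;P&gt;Scoping the assignment to the container (rather than the whole storage account) limits what the identity can touch if it is ever misused.&lt;/P&gt;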
&lt;P aria-level="3"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Sample 1: REST API — Export using one UAMI:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;$exportBody = "{
`n  `"storageKeyType`": `"ManagedIdentity`",
`n  `"storageKey`": `"${managedIdentityServerResourceId}`",
`n  `"storageUri`": `"${storageUri}`",
`n  `"administratorLogin`": `"${managedIdentityServerResourceId}`",
`n  `"authenticationType`": `"ManagedIdentity`"
`n}"

$export = Invoke-AzRestMethod -Method POST -Path "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Sql/servers/${serverName}/databases/${databaseName}/export?api-version=2024-05-01-preview" -Payload $exportBody

# Poll operation status
Invoke-AzRestMethod -Method GET -Uri $export.Headers.Location.AbsoluteUri&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Sample 2: REST API — Import to a new server using &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;one UAMI:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;$serverName = "sql-mi-demo-target"
$databaseName = "sqldb-mi-demo-target"

# Same UAMI for SQL auth + Storage access
$importBody = "{
`n  `"operationMode`": `"Import`",
`n  `"administratorLogin`": `"${managedIdentityServerResourceId}`",
`n  `"authenticationType`": `"ManagedIdentity`",
`n  `"storageKeyType`": `"ManagedIdentity`",
`n  `"storageKey`": `"${managedIdentityServerResourceId}`",
`n  `"storageUri`": `"${storageUri}`",
`n  `"databaseName`": `"${databaseName}`"
`n}"

$import = Invoke-AzRestMethod -Method POST -Path "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Sql/servers/${serverName}/databases/${databaseName}/import?api-version=2024-05-01-preview" -Payload $importBody

# Poll operation status
Invoke-AzRestMethod -Method GET -Uri $import.Headers.Location.AbsoluteUri&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Sample 3: PowerShell — Export using two UAMIs:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Server UAMI for SQL auth, Storage UAMI for storage access
New-AzSqlDatabaseExport -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -StorageKeyType ManagedIdentity -StorageKey $managedIdentityStorageResourceId -StorageUri $storageUri -AuthenticationType ManagedIdentity -AdministratorLogin $managedIdentityServerResourceId&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Sample 4: PowerShell — Import to a new server using two UAMIs:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;New-AzSqlDatabaseImport -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -DatabaseMaxSizeBytes $databaseSizeInBytes -StorageKeyType "ManagedIdentity" -StorageKey $managedIdentityStorageResourceId -StorageUri $storageUri -Edition $edition -ServiceObjectiveName $serviceObjectiveName -AdministratorLogin $managedIdentityServerResourceId -AuthenticationType ManagedIdentity&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Sample 5: Azure CLI — Export using two UAMIs:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az sql db export -s $serverName -n $databaseName -g $resourceGroupName --auth-type ManagedIdentity -u $managedIdentityServerResourceId --storage-key $managedIdentityStorageResourceId --storage-key-type ManagedIdentity --storage-uri $storageUri&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Sample 6: Azure CLI — Import to a new server using two UAMIs:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az sql db import -s $serverName -n $databaseName -g $resourceGroupName --auth-type ManagedIdentity -u $managedIdentityServerResourceId --storage-key $managedIdentityStorageResourceId --storage-key-type ManagedIdentity --storage-uri $storageUrib&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For more information and samples, please check &lt;/SPAN&gt;&lt;A href="https://aka.ms/importMIpupr" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Tutorial: Use managed identity with Azure SQL import and export (preview)&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Mar 2026 18:05:24 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-sql-blog/managed-identity-support-for-azure-sql-database-import-export/ba-p/4498732</guid>
      <dc:creator>HugoQueiroz_MSFT</dc:creator>
      <dc:date>2026-03-03T18:05:24Z</dc:date>
    </item>
    <item>
      <title>GA of update policy SQL Server 2025 for Azure SQL Managed Instance</title>
      <link>https://techcommunity.microsoft.com/t5/azure-sql-blog/ga-of-update-policy-sql-server-2025-for-azure-sql-managed/ba-p/4498802</link>
      <description>&lt;P&gt;We’re happy to announce that the update policy&amp;nbsp;&lt;EM&gt;SQL Server 2025&lt;/EM&gt;&amp;nbsp;for Azure SQL Managed Instance is now generally available (GA). &lt;EM&gt;SQL Server 2025&lt;/EM&gt;&amp;nbsp;update policy contains all the latest SQL engine innovation while retaining database portability to the recent major release of SQL Server.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://aka.ms/sqlmiupdatepolicydocs" target="_blank" rel="noopener"&gt;Update policy&lt;/A&gt;&amp;nbsp;is an instance configuration option that provides flexibility and allows you to choose between instant access to the latest SQL engine features and fixed SQL engine feature set corresponding to 2022 and 2025 major releases of SQL Server. Regardless of the update policy chosen, you continue to benefit from Azure SQL platform innovation. New features and capabilities not related to the SQL engine – everything that makes Azure SQL Managed Instance a true PaaS service – are successively delivered to your Azure SQL Managed Instance resources.&lt;/P&gt;
&lt;H2&gt;What’s new in SQL Server 2025 update policy&lt;/H2&gt;
&lt;P&gt;In short, instances with update policy&amp;nbsp;&lt;EM&gt;SQL Server 2025&lt;/EM&gt;&amp;nbsp;benefit from all the SQL engine features that were gradually added to the&amp;nbsp;&lt;EM&gt;Always-up-to-date&lt;/EM&gt;&amp;nbsp;policy over the past few years and are not available in the&amp;nbsp;&lt;EM&gt;SQL Server 2022&lt;/EM&gt;&amp;nbsp;update policy. Let’s name a few of the most notable features; the complete list is available in the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/update-policy?view=azuresql&amp;amp;tabs=azure-portal#feature-comparison" target="_blank" rel="noopener"&gt;update policy documentation&lt;/A&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/performance/optimized-locking" target="_blank" rel="noopener"&gt;Optimized locking&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/mirroring/azure-sql-managed-instance" target="_blank" rel="noopener"&gt;Mirroring in Fabric&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/regular-expressions/overview" target="_blank" rel="noopener"&gt;Regular expression functions&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/data-types/vector-data-type" target="_blank" rel="noopener"&gt;Vector data type&lt;/A&gt;&amp;nbsp;and&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/functions/vector-functions-transact-sql" target="_blank" rel="noopener"&gt;functions&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/data-types/json-data-type" target="_blank" rel="noopener"&gt;JSON data type&lt;/A&gt;&amp;nbsp;and&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/functions/json-arrayagg-transact-sql" target="_blank" rel="noopener"&gt;aggregate functions&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-invoke-external-rest-endpoint-transact-sql" target="_blank" rel="noopener"&gt;Invoking HTTP REST endpoints&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/copy-only-backups-sql-server" target="_blank" rel="noopener"&gt;Manual (copy-only) backup to immutable Azure Blob Storage&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/azuresqlblog/stream-data-in-near-real-time-from-sql-to-azure-event-hubs---public-preview/4470724" target="_blank" rel="noopener" data-lia-auto-title="Change Event Streaming (private preview)" data-lia-auto-title-active="0"&gt;Change Event Streaming (private preview)&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Update policy for each modernization strategy&lt;/H2&gt;
&lt;P&gt;&lt;EM&gt;Always-up-to-date&lt;/EM&gt;&amp;nbsp;is a “perpetual” update policy. It has no end of lifetime and brings new SQL engine features to instances as soon as they are available in Azure. It enables you to always be at the forefront - to quickly adopt new yet production-ready SQL engine features, benefit from them in everyday operations and keep a competitive edge without waiting for the next major release of SQL Server.&lt;/P&gt;
&lt;P&gt;In contrast, update policies&amp;nbsp;&lt;EM&gt;SQL Server 2025&lt;/EM&gt;&amp;nbsp;and&amp;nbsp;&lt;EM&gt;SQL Server 2022&lt;/EM&gt;&amp;nbsp;contain fixed sets of SQL engine features corresponding to the respective releases of SQL Server. They’re optimized to fulfill regulatory compliance, contractual, or other requirements for database/workload portability from managed instance to SQL Server. Over time, they get security patches, fixes, and incremental functional improvements in the form of Cumulative Updates, but not new SQL engine features. They also have a limited lifetime, aligned with the period of mainstream support of SQL Server releases. As the end of mainstream support for an update policy approaches, you should upgrade instances to a newer policy. Instances will be &lt;STRONG&gt;automatically upgraded&lt;/STRONG&gt; to the next, more recent update policy at the end of mainstream support of their existing update policy.&lt;/P&gt;
&lt;H2&gt;Best practices with the Update policy feature&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Plan for the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/lifecycle/products/azure-sql-managed-instance" target="_blank" rel="noopener"&gt;end of lifetime&lt;/A&gt;&amp;nbsp;of the&amp;nbsp;&lt;EM&gt;SQL Server 2022&lt;/EM&gt;&amp;nbsp;update policy if you’re using it today, and upgrade to a newer policy on your terms before automatic upgrade kicks in. Choose between the &lt;EM&gt;Always-up-to-date&lt;/EM&gt; and &lt;EM&gt;SQL Server 2025&lt;/EM&gt;&amp;nbsp;update policies.&lt;/LI&gt;
&lt;LI&gt;Make sure to add update policy configuration to your&amp;nbsp;&lt;STRONG&gt;deployment templates and scripts&lt;/STRONG&gt;, so that you don’t rely on service defaults that may change in the future.&lt;/LI&gt;
&lt;LI&gt;Be aware that using some of the newly introduced features may require changing the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database" target="_blank" rel="noopener"&gt;database compatibility level&lt;/A&gt;. Consult feature documentation for details.&lt;/LI&gt;
&lt;/UL&gt;
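&lt;P&gt;As a minimal sketch of the compatibility-level check mentioned above (the database name is a placeholder; 170 is the compatibility level corresponding to SQL Server 2025):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Check the current compatibility level
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'YourDatabase';

-- Raise it to the SQL Server 2025 level once your testing allows
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 170;&lt;/LI-CODE&gt;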
&lt;H2&gt;What’s coming next&lt;/H2&gt;
&lt;P&gt;&lt;EM&gt;SQL Server 2025&lt;/EM&gt; will become the default update policy in Azure portal during March 2026.&lt;/P&gt;
&lt;P&gt;Future versions of the REST API, PowerShell, and CLI will also change the default value of the “database format” parameter, which corresponds to the instance’s update policy, to &lt;EM&gt;SQL Server 2025&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;The &lt;EM&gt;SQL Server 2022&lt;/EM&gt; policy will reach end of lifetime on January 11, 2028, when &lt;A href="https://learn.microsoft.com/en-us/lifecycle/products/sql-server-2022" target="_blank" rel="noopener"&gt;mainstream support for SQL Server 2022&lt;/A&gt; ends. Plan ahead and change the update policy of your instances before that date.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure: Update policy transitions&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;Summary&lt;/H2&gt;
&lt;P&gt;Update policy&amp;nbsp;&lt;EM&gt;SQL Server 2025 &lt;/EM&gt;for Azure SQL Managed Instance is now&lt;EM&gt; &lt;/EM&gt;&lt;STRONG&gt;generally available&lt;/STRONG&gt;. It brings the same set of SQL engine features that exist in the new SQL Server 2025. Consider it if you have regulatory compliance, contractual, or other reasons for database/workload portability from Azure SQL Managed Instance to SQL Server 2025. Otherwise, use the &lt;EM&gt;Always-up-to-date&lt;/EM&gt; policy which always provides the latest features and benefits available to Azure SQL Managed Instance.&lt;/P&gt;
&lt;P&gt;If your instances are currently configured with &lt;EM&gt;SQL Server 2022&lt;/EM&gt; update policy, &lt;STRONG&gt;update them to a newer policy&lt;/STRONG&gt; before the end of mainstream support.&lt;/P&gt;
&lt;P&gt;For more details, visit the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/update-policy?view=azuresql&amp;amp;tabs=azure-portal" target="_blank" rel="noopener"&gt;Update policy documentation&lt;/A&gt;. To stay up to date with the latest feature additions to Azure SQL Managed Instance, subscribe to the&amp;nbsp;&lt;A href="https://www.youtube.com/@AzureSQL" target="_blank" rel="noopener"&gt;Azure SQL video channel&lt;/A&gt;, subscribe to the&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/azure-sql-blog/bg-p/AzureSQLBlog" target="_blank" rel="noopener"&gt;Azure SQL Blog&lt;/A&gt;&amp;nbsp;feed, or bookmark the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new" target="_blank" rel="noopener"&gt;What’s new in Azure SQL Managed Instance&lt;/A&gt; article, which is updated regularly.&lt;/P&gt;
      <pubDate>Tue, 03 Mar 2026 00:22:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-sql-blog/ga-of-update-policy-sql-server-2025-for-azure-sql-managed/ba-p/4498802</guid>
      <dc:creator>Mladen_Andzic</dc:creator>
      <dc:date>2026-03-03T00:22:29Z</dc:date>
    </item>
    <item>
      <title>Why Developers and DBAs love SQL’s Dynamic Data Masking (Series-Part 1)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-sql-blog/why-developers-and-dbas-love-sql-s-dynamic-data-masking-series/ba-p/4498450</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Dynamic Data Masking (DDM) is one of those SQL features (available in SQL Server, Azure SQL DB, Azure SQL MI, SQL Database in Microsoft Fabric) that both developers and DBAs can rally behind. Why? Because it delivers a simple, built-in way to protect sensitive data—like phone numbers, emails, or IDs—without rewriting application logic or duplicating security rules across layers. With just a single line of T-SQL, you can configure masking directly at the column level, ensuring that non-privileged users see only obfuscated values while privileged users retain full access. This not only streamlines development but also supports compliance with data privacy regulations like GDPR and HIPAA, etc. by minimizing exposure to personally identifiable information (PII).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In this first post of our DDM series, we’ll walk through a real-world scenario using the default masking function to show how easy it is to implement and how much development effort it can save.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Scenario: Hiding customer phone numbers from support queries&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Imagine you have a support application where agents can look up customer profiles. They need to&amp;nbsp;know if&amp;nbsp;a phone number exists for the&amp;nbsp;customer but&amp;nbsp;shouldn’t&amp;nbsp;see the actual digits for privacy. In a traditional approach, a developer might implement custom logic in the app (or a SQL view) to replace phone numbers with placeholders like “XXXX” for non-privileged users. This adds complexity and duplicate logic across the app.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With DDM’s default masking, the database can handle this automatically.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;By applying a mask to the phone number column, any query by a non-privileged user will return a generic masked value (e.g.&amp;nbsp;“XXXX”) instead of the real number. The support agent gets the information they need (that a number is on file) without revealing the actual phone number, and the developer writes&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;zero&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;masking code in the app.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This not only simplifies the application codebase but also ensures consistent data protection across all query access paths. As Microsoft’s documentation puts it, DDM lets you control how much sensitive data to reveal&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;“with minimal effect on the application layer”&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;– exactly what our scenario achieves.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Using the ‘Default’ Mask in T-SQL :&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The ‘Default’ masking function is the simplest mask: it fully replaces the actual value with a fixed default based on data type. For text data, that default is XXXX.&amp;nbsp;Let’s&amp;nbsp;apply this to our phone&amp;nbsp;number&amp;nbsp;example. The&amp;nbsp;following T-SQL snippet&amp;nbsp;works in Azure SQL Database, Azure SQL&amp;nbsp;MI&amp;nbsp;and SQL Server:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;SQL&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;-- Step 1: Create the table with a default mask on the Phone column&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;CREATE TABLE&amp;nbsp;SupportCustomers&amp;nbsp;(&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;   &amp;nbsp;CustomerID   INT PRIMARY KEY,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;    Name        &amp;nbsp;NVARCHAR(100),&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;    Phone       &amp;nbsp;NVARCHAR(15) MASKED WITH (FUNCTION = 'default()')  --&amp;nbsp;Apply default masking&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;);&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;GO&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;-- Step 2: Insert sample data&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;INSERT INTO&amp;nbsp;SupportCustomers&amp;nbsp;(CustomerID, Name, Phone)&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;VALUES (1, 'Alice Johnson', '222-555-1234');&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;GO&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;-- Step 3: Create a non-privileged user (no login for simplicity)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;CREATE USER&amp;nbsp;SupportAgent&amp;nbsp;WITHOUT LOGIN;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;GO&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;-- Step 4: Grant SELECT permission on the table to the user&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;GRANT SELECT ON&amp;nbsp;SupportCustomers&amp;nbsp;TO&amp;nbsp;SupportAgent;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;GO&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;-- Step 5: Execute a SELECT as the non-privileged user&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;EXECUTE AS USER = 'SupportAgent';&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;SELECT Name, Phone FROM&amp;nbsp;SupportCustomers&amp;nbsp;WHERE&amp;nbsp;CustomerID&amp;nbsp;= 1&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Alternatively, you can use Azure Portal to configure masking as shown in the following screenshot:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Expected result:&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;The query above would return Alice’s name and a masked phone number. Instead of seeing&amp;nbsp;222-555-1234, the Phone column would show XXXX. Alice’s actual number&amp;nbsp;remains&amp;nbsp;safely stored in the database, but&amp;nbsp;it’s&amp;nbsp;dynamically obscured for the support agent’s query.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Meanwhile, privileged users such as administrator or db_owner which has CONTROL permission on the database or user with proper UNMASK permission would see the real phone number when running the same query.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;How this helps Developers :&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;By pushing the masking logic down to the database, developers and DBAs avoid writing repetitive masking code in every app or report that touches this data. In our scenario, without DDM you might implement a check in the application like:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;If&amp;nbsp;user_role&amp;nbsp;== “Support”,&amp;nbsp;then show “XXXX” for phone number, else show full phone.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With DDM, such conditional code&amp;nbsp;isn’t&amp;nbsp;needed – the database takes care of it. This means:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Less application code&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;to write and&amp;nbsp;maintain&amp;nbsp;for masking&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Consistent masking&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;everywhere (whether data is accessed via app, report, or ad-hoc query).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Quick changes&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;to masking rules in one place if requirements change, without hunting through application code.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;From a security standpoint, DDM reduces the risk of accidental data exposure and helps in compliance scenarios where personal data must be protected in lower environments or from certain roles, while drastically reducing developer effort.&lt;/SPAN&gt;&lt;/P&gt;
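&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;As a minimal sketch (the dbo.Customers table and DataAnalystRole role are hypothetical, for illustration only), applying a built-in mask is a single DDL statement:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Hypothetical table and role names, for illustration only
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Non-privileged users now see masked values such as aXXX@XXXX.com;
-- grant UNMASK to principals that need the real data
GRANT UNMASK TO DataAnalystRole;&lt;/LI-CODE&gt;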
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In the next posts of this series, we’ll explore other masking functions (such as Email, Partial, and Random) in different scenarios. By the end, you’ll see how each built-in mask can be applied to make data security and compliance more developer-friendly!&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Reference Links:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking?view=sql-server-ver17" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Dynamic Data Masking - SQL Server | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/dynamic-data-masking-overview?view=azuresql" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Dynamic Data Masking - Azure SQL Database &amp;amp; Azure SQL Managed Instance &amp;amp; Azure Synapse Analytics | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 02 Mar 2026 10:17:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-sql-blog/why-developers-and-dbas-love-sql-s-dynamic-data-masking-series/ba-p/4498450</guid>
      <dc:creator>MadhumitaTripathyMSFT</dc:creator>
      <dc:date>2026-03-02T10:17:08Z</dc:date>
    </item>
    <item>
      <title>Part 2: Safely Cleaning Orphaned Records in Change Tracking Side Tables</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/part-2-safely-cleaning-orphaned-records-in-change-tracking-side/ba-p/4497998</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Applies to:&lt;/STRONG&gt; Azure SQL Database (Change Tracking enabled)&lt;/P&gt;
&lt;H2&gt;Recap (Part 1)&lt;/H2&gt;
&lt;P&gt;In &lt;STRONG&gt;Part 1&lt;/STRONG&gt;, we covered how to &lt;EM&gt;detect&lt;/EM&gt; “orphaned” records in Change Tracking (CT) side tables — rows whose sys_change_xdes_id no longer has a matching transaction entry in the commit table (sys.syscommittab). This situation often leads to &lt;STRONG&gt;unexpected CT growth&lt;/STRONG&gt; and “stuck cleanup” symptoms because the mapping data required for normal cleanup is missing.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Part 1 link:&lt;/STRONG&gt; &lt;A href="https://techcommunity.microsoft.com/blog/azuredbsupport/identifying-orphaned-records-in-change-tracking-side-tables-read%E2%80%91only-health-che/4495617" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Identifying Orphaned Records in Change Tracking Side Tables (Read‑Only Health Check)&lt;/A&gt;&lt;/P&gt;
&lt;H2&gt;Why Part 2 is needed&lt;/H2&gt;
&lt;P&gt;A common “root pattern” we see in the field is:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Side-table cleanup &lt;EM&gt;attempts&lt;/EM&gt; to delete expired metadata&lt;/LI&gt;
&lt;LI&gt;Some side-table deletions fail (locks/timeouts/errors)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Commit-table cleanup proceeds anyway&lt;/STRONG&gt; (or a custom workflow deletes from commit table without validating side-table deletes)&lt;/LI&gt;
&lt;LI&gt;Remaining side-table rows now reference xdes_id values that no longer exist in sys.syscommittab → &lt;STRONG&gt;orphans&lt;/STRONG&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Microsoft Learn also emphasizes that &lt;STRONG&gt;syscommittab cleanup depends on side-table cleanup&lt;/STRONG&gt; — commit-table cleanup should happen only after side tables are cleaned.&lt;/P&gt;
&lt;P&gt;This Part 2 script focuses on &lt;STRONG&gt;removing orphaned rows from side tables&lt;/STRONG&gt; (and does &lt;EM&gt;not&lt;/EM&gt; touch sys.syscommittab), so cleanup logic can stabilize again.&lt;/P&gt;
&lt;H2&gt;Important prerequisites &amp;amp; constraints (read this first)&lt;/H2&gt;
&lt;H3&gt;1) Internal table access on Azure SQL Database&lt;/H3&gt;
&lt;P&gt;In Azure SQL Database, customers may not be able to access certain internal CT artifacts directly (even when attempting DAC-style workflows). In the related case discussions, internal testing noted that &lt;STRONG&gt;self‑service cleanup against internal tables can be infeasible&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H3&gt;2) CHECKPOINT note (why it’s in the script)&lt;/H3&gt;
&lt;P&gt;sys.dm_tran_commit_table exposes commit-table data and is backed by sys.syscommittab. Microsoft Learn notes that &lt;STRONG&gt;read-only users may not see live changes until a CHECKPOINT occurs&lt;/STRONG&gt;. &lt;BR /&gt;That’s why the script includes the optional CHECKPOINT comment before reading commit-table state.&lt;/P&gt;
&lt;H3&gt;3) Supported guidance vs. custom remediation&lt;/H3&gt;
&lt;P&gt;Microsoft Learn provides official troubleshooting/mitigation guidance for CT cleanup issues (including checking dbo.MSChange_tracking_history, assessing stale rows, and using sp_flush_commit_table_on_demand for commit-table cleanup). &lt;BR /&gt;The script below is a &lt;STRONG&gt;targeted remediation pattern&lt;/STRONG&gt; for a specific failure mode (orphaned side-table rows). Use it carefully, test it first, and follow your organization’s approval processes.&lt;/P&gt;
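&lt;P&gt;For example, a quick, read-only look at the cleanup history table mentioned above can confirm whether autocleanup has been running recently (a sketch; verify the table exists in your environment first):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Most recent Change Tracking cleanup runs (read-only)
SELECT TOP (20) *
FROM dbo.MSChange_tracking_history
ORDER BY start_time DESC;&lt;/LI-CODE&gt;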
&lt;H2&gt;What this script does (high-level)&lt;/H2&gt;
&lt;P&gt;The T-SQL script is essentially &lt;STRONG&gt;Part 1 detection + optional targeted delete generation&lt;/STRONG&gt;:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Compute the “safe cleanup point”&lt;/STRONG&gt; from CT retention (wall-clock → CSN) using sp_changetracking_time_to_csn — the same concept used in Part 1.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Enumerate &lt;STRONG&gt;CT side tables&lt;/STRONG&gt; via sys.internal_tables where internal_type = 209 (CT side tables).&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;For each side table, identify &lt;STRONG&gt;candidate orphaned transaction IDs&lt;/STRONG&gt; (sys_change_xdes_id) that:
&lt;UL&gt;
&lt;LI&gt;are older than a computed boundary (@minXdesId derived from sys.dm_tran_commit_table), and&lt;/LI&gt;
&lt;LI&gt;have &lt;STRONG&gt;no matching&lt;/STRONG&gt; xdes_id in sys.syscommittab at/before the cleanup point&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Print orphan counts per side table using RAISERROR … WITH NOWAIT (operator-friendly, streaming output).&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Safety cross-check:&lt;/STRONG&gt; abort if any “orphan” unexpectedly exists in sys.syscommittab (defensive sanity gate)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Generate a DELETE statement&lt;/STRONG&gt; for the current side table (execution is commented out)&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;The script (Part 2) — “detect + generate delete”&lt;/H2&gt;
&lt;P&gt;Below is the T-SQL script. I kept the &lt;STRONG&gt;delete step disabled by default,&lt;/STRONG&gt; so it remains safe to share. (You can enable execution only after approvals/testing.)&lt;/P&gt;
&lt;DIV&gt;&lt;LI-CODE lang="sql"&gt;-- use &amp;lt;[DBName]&amp;gt; -- switch to the right database


-- run checkpoint first to ensure all in-memory commit table data is persisted to disk (syscommittab)
-- checkpoint


SET NOCOUNT ON


-- find the invalid clean version based on configured retention
DECLARE @time DATETIME, @csn BIGINT = 0, @minCleanupPoint BIGINT = 0
DECLARE @retention_period INT, @retention_period_units NVARCHAR(10)
SELECT @retention_period = retention_period,
@retention_period_units = retention_period_units
FROM sys.change_tracking_databases where database_id = DB_ID()
SELECT @time = CASE WHEN @retention_period_units = 1 then DATEADD(minute, (-1 * @retention_period), GETUTCDATE())
WHEN @retention_period_units = 2 then DATEADD(hour, (-1 * @retention_period), GETUTCDATE())
ELSE DATEADD(day, (-1 * @retention_period), GETUTCDATE()) END

EXEC sp_changetracking_time_to_csn @time = @time, @csn = @csn OUTPUT


SELECT @minCleanupPoint = @csn
SELECT @minCleanupPoint as minCsn -- 688118


-- iterate over all the change tracking side tables
DECLARE @sideTable SYSNAME;
DECLARE ct_cursor CURSOR FAST_FORWARD FOR
SELECT name FROM sys.internal_tables WHERE internal_type = 209; -- internal_type = 209 is for change tracking side tables


OPEN ct_cursor;
FETCH NEXT FROM ct_cursor INTO @sideTable;


WHILE @@FETCH_STATUS = 0
BEGIN
-- find the minimum expired xdes id
declare @minXdesId BIGINT
SELECT @minXdesId = min(xdes_id) FROM sys.dm_tran_commit_table where commit_ts &amp;lt;= @minCleanupPoint
-- SELECT @minXdesId as minXdes

-- create temp table for storing orphaned xdes id
DROP TABLE IF EXISTS #OrphanedXdes;
    CREATE TABLE #OrphanedXdes
    (
        sys_change_xdes_id BIGINT NOT NULL
    );


DECLARE @sql NVARCHAR(MAX);
SET @sql = N'
    INSERT INTO #OrphanedXdes(sys_change_xdes_id)
    SELECT ct.sys_change_xdes_id
    FROM sys.' + QUOTENAME(@sideTable) + N' AS ct
    WHERE ct.sys_change_xdes_id &amp;lt; @minXdesId
      AND NOT EXISTS
      (
          SELECT 1
          FROM sys.syscommittab AS s
          WHERE s.xdes_id = ct.sys_change_xdes_id AND s.commit_ts &amp;lt;= @minCleanupPoint
      );';


    EXEC sys.sp_executesql
        @sql,
        N'@minXdesId BIGINT, @minCleanupPoint BIGINT',
        @minXdesId = @minXdesId,
@minCleanupPoint = @minCleanupPoint;


DECLARE @orphanedIdsCount BIGINT;
SET @sql = N'
SELECT @cnt = COUNT_BIG(sys_change_xdes_id)
FROM #OrphanedXdes;
';


EXEC sys.sp_executesql
@sql,
N'@cnt BIGINT OUTPUT',
@cnt = @orphanedIdsCount OUTPUT;


-- Raise error if any orphaned xdes exists
IF (@orphanedIdsCount &amp;gt; 0)
BEGIN
DECLARE @msg NVARCHAR(4000) =
@sideTable + N' : ' + CONVERT(NVARCHAR(30), @orphanedIdsCount);
RAISERROR (@msg, 16, 1) WITH NOWAIT;

DECLARE @newLine NVARCHAR(10) = CHAR(13) + CHAR(10)
PRINT (@newLine)
END




-- Cross-check that none of the candidate orphaned xdes ids still exist in syscommittab
-- !!!IMPORTANT!!! RAISE an error and stop the cleanup if it does
SET @sql = N'
DECLARE @nonMatchingXdesCount BIGINT;


SELECT @nonMatchingXdesCount = COUNT_BIG(*)
FROM #OrphanedXdes AS ct
WHERE EXISTS (
SELECT 1
FROM sys.syscommittab AS s
WHERE s.xdes_id = ct.sys_change_xdes_id
);


-- SELECT @nonMatchingXdesCount as nonMatchingXdesCount

IF (COALESCE(@nonMatchingXdesCount, 0)  &amp;gt; 0)
BEGIN TRY
DECLARE @msg NVARCHAR(1024);
SET @msg = N''Cleanup aborted: orphan cross-check failed for side table [' + @sideTable + N'].'';
RAISERROR(@msg, 16, 1) WITH NOWAIT;
RETURN;
END TRY
BEGIN CATCH
THROW;
END CATCH
';


EXEC sys.sp_executesql @sql;


IF (@orphanedIdsCount &amp;gt; 0)
BEGIN
-- Prepare the query to delete the orphaned rows from the side table
SET @sql = N'DELETE ct FROM sys.' + QUOTENAME(@sideTable) + N' ct WHERE EXISTS (SELECT 1 FROM #OrphanedXdes AS o WHERE o.sys_change_xdes_id = ct.sys_change_xdes_id);';


SELECT @sql -- validate the delete query is correctly generated
-- Sample delete statement: DELETE ct FROM sys.[change_tracking_1221579390] ct WHERE EXISTS (SELECT 1 FROM #OrphanedXdes AS o WHERE o.sys_change_xdes_id = ct.sys_change_xdes_id);


-- NOTE: Uncomment the below query to execute the delete statement and remove the orphaned records
-- EXEC sys.sp_executesql @sql;
END


DROP TABLE IF EXISTS #OrphanedXdes;


    FETCH NEXT FROM ct_cursor INTO @sideTable;
END


CLOSE ct_cursor;
DEALLOCATE ct_cursor;


SET NOCOUNT OFF&lt;/LI-CODE&gt;
&lt;H2&gt;Why the “cross-check abort” is a good idea&lt;/H2&gt;
&lt;P&gt;Notice the safety gate:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You first define orphans as those &lt;STRONG&gt;not existing&lt;/STRONG&gt; in sys.syscommittab (for the cleanup horizon).&lt;/LI&gt;
&lt;LI&gt;Then you re-check: “If any of these appear in syscommittab, abort.”&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This prevents an accidental delete if:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;the cleanup horizon math is wrong,&lt;/LI&gt;
&lt;LI&gt;the environment has unexpected visibility differences, or&lt;/LI&gt;
&lt;LI&gt;the temp-table contents are not what you expect.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;That defensive posture aligns well with the general principle documented in Microsoft Learn: &lt;STRONG&gt;commit-table cleanup should only occur after side-table cleanup&lt;/STRONG&gt;, and troubleshooting should be data-driven and cautious.&lt;/P&gt;
&lt;H2&gt;How to interpret the output&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;If you see &lt;STRONG&gt;no RAISERROR lines&lt;/STRONG&gt;, the script did not find orphaned rows under the defined criteria.&lt;/LI&gt;
&lt;LI&gt;If you see:
&lt;UL&gt;
&lt;LI&gt;change_tracking_&amp;lt;id&amp;gt; : &amp;lt;count&amp;gt;&lt;/LI&gt;
&lt;/UL&gt;
that indicates &amp;lt;count&amp;gt; orphaned transaction references in that CT side table. This is the same style used in Part 1 for long-running, streaming progress.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Next steps (recommended order)&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Confirm CT configuration&lt;/STRONG&gt; (retention + AUTO_CLEANUP status) using official guidance.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Run Part 1 / Part 2 detection&lt;/STRONG&gt; to quantify scope (which side tables, how many).&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;If you need to remediate:
&lt;UL&gt;
&lt;LI&gt;Prefer &lt;STRONG&gt;supported mitigations&lt;/STRONG&gt; where possible (for example, disabling and re-enabling CT on a table to purge its tracking metadata is listed in Microsoft Learn as the “quickest remedy” for certain cleanup lock-conflict scenarios).&lt;/LI&gt;
&lt;LI&gt;If table-level disable/enable isn’t acceptable, use an approval-driven approach for targeted cleanup.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
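&lt;P&gt;Step 1 above can be done with read-only catalog views, for example:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Database-level CT settings: retention and autocleanup status
SELECT DB_NAME(database_id) AS database_name,
       is_auto_cleanup_on,
       retention_period,
       retention_period_units_desc
FROM sys.change_tracking_databases
WHERE database_id = DB_ID();

-- Scope: how many tables are CT-enabled in this database
SELECT COUNT(*) AS ct_enabled_tables
FROM sys.change_tracking_tables;&lt;/LI-CODE&gt;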
&lt;H2&gt;Closing&lt;/H2&gt;
&lt;P&gt;Orphaned CT side-table records are one of those “silent growth” conditions that can be easy to miss until storage or CHANGETABLE performance becomes painful. Part 1 helps you &lt;STRONG&gt;spot&lt;/STRONG&gt; the issue early; Part 2 helps you &lt;STRONG&gt;prepare a safe, targeted cleanup&lt;/STRONG&gt; workflow — with explicit safety gates and a delete step that remains disabled by default.&lt;/P&gt;
&lt;H3&gt;References&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/azuredbsupport/identifying-orphaned-records-in-change-tracking-side-tables-read%E2%80%91only-health-che/4495617" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Identifying Orphaned Records in Change Tracking Side Tables (Read‑Only Health Check)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/cleanup-and-troubleshoot-change-tracking-sql-server?view=sql-server-ver17" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Troubleshoot change tracking auto cleanup issues&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/change-tracking-sys-dm-tran-commit-table?view=sql-server-ver17" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;sys.dm_tran_commit_table (Change Tracking)&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;</description>
      <pubDate>Sat, 28 Feb 2026 02:11:17 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/part-2-safely-cleaning-orphaned-records-in-change-tracking-side/ba-p/4497998</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-02-28T02:11:17Z</dc:date>
    </item>
    <item>
      <title>Troubleshooting Change Tracking cleanup growth and orphaned rows in Azure SQL Database</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-change-tracking-cleanup-growth-and-orphaned-rows/ba-p/4497997</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Applies to:&lt;/STRONG&gt; Azure SQL Database&lt;BR /&gt;&lt;STRONG&gt;Scenario:&lt;/STRONG&gt; Change Tracking (CT) side tables grow unexpectedly, and “orphaned” rows appear after switching between auto-cleanup and custom/scheduled manual cleanup.&lt;/P&gt;
&lt;H2&gt;The problem (what we observed)&lt;/H2&gt;
&lt;P&gt;In the case tracked, the discussion focused on &lt;STRONG&gt;Change Tracking cleanup behavior&lt;/STRONG&gt;—including &lt;STRONG&gt;unexpected growth in CT side tables&lt;/STRONG&gt; and &lt;STRONG&gt;orphaned records&lt;/STRONG&gt;. The customer also referenced earlier guidance to move away from auto-cleanup due to locking concerns during upgrades, and the team needed to propose safe next steps quickly.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A parallel concern was that &lt;STRONG&gt;CT auto-cleanup could block DDL during upgrades&lt;/STRONG&gt; (schema lock behavior), which triggered work to deploy a fix and validate it in a lab before broader rollout.&lt;/P&gt;
&lt;H2&gt;Why this is tricky&lt;/H2&gt;
&lt;P&gt;The engagement highlighted that &lt;STRONG&gt;manual cleanup&lt;/STRONG&gt; and &lt;STRONG&gt;auto-cleanup&lt;/STRONG&gt; can behave differently in real-world, high-scale environments (large number of CT-enabled tables, heavy activity, and operational constraints like access and auditing). Investigation efforts included:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;validating where orphaned rows exist and how many CT side tables are affected,&lt;/LI&gt;
&lt;LI&gt;checking whether auto-cleanup is enabled/disabled, and&lt;/LI&gt;
&lt;LI&gt;using auditing / Extended Events to identify who/what is dropping related history objects.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Additionally, &lt;STRONG&gt;Snapshot Isolation&lt;/STRONG&gt; can prevent cleanup from progressing in some cases. It was noted that long-running snapshot transactions can prevent a safe cleanup point from advancing, which can block removal of expired entries from internal commit tracking tables until those transactions complete.&lt;/P&gt;
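&lt;P&gt;A read-only way to spot such transactions is the snapshot-transaction DMV, for example:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Long-running snapshot transactions that may hold back the safe cleanup point
SELECT transaction_id,
       session_id,
       elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;&lt;/LI-CODE&gt;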
&lt;H2&gt;Practical troubleshooting steps (what helped)&lt;/H2&gt;
&lt;H3&gt;1) Confirm CT configuration (retention + auto-cleanup)&lt;/H3&gt;
&lt;P&gt;Use the Change Tracking configuration options to validate retention and whether auto-cleanup is enabled. Microsoft Learn documents enabling CT at the database level (including CHANGE_RETENTION and AUTO_CLEANUP). &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server?view=sql-server-ver17" target="_blank"&gt;Enable and Disable Change Tracking - SQL Server | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
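&lt;P&gt;As a sketch, the database-level configuration documented there looks like this (adjust the retention window to your requirements):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Enable CT at the database level with explicit retention and autocleanup
ALTER DATABASE CURRENT
SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);&lt;/LI-CODE&gt;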
&lt;H3&gt;2) Quick backlog signal: commit table “oldest commit_time”&lt;/H3&gt;
&lt;P&gt;During the investigation, the team used a lightweight query to sanity-check backlog in the commit table:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT TOP (1) *
FROM sys.dm_tran_commit_table
ORDER BY commit_time ASC;&lt;/LI-CODE&gt;
&lt;P&gt;If the returned commit_time is close to the retention horizon,&amp;nbsp;&lt;STRONG&gt;auto-cleanup is likely keeping up&lt;/STRONG&gt; (this query doesn’t require DAC).&lt;/P&gt;
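&lt;P&gt;To make the backlog age explicit, a small read-only variant of the same check is:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Age (in hours) of the oldest entry still in the commit table;
-- compare this against the configured retention window
SELECT DATEDIFF(hour, MIN(commit_time), GETUTCDATE()) AS oldest_commit_age_hours
FROM sys.dm_tran_commit_table;&lt;/LI-CODE&gt;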
&lt;H3&gt;3) Detect orphaned rows in CT side tables (read-only script)&lt;/H3&gt;
&lt;P&gt;A key artifact from this case is the following T-SQL script, which calculates a cleanup point based on configured retention and then &lt;STRONG&gt;iterates over CT side tables&lt;/STRONG&gt; (sys.internal_tables where internal_type = 209) to identify rows whose sys_change_xdes_id no longer has a matching entry in sys.syscommittab at/below the cleanup point.&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;LI-CODE lang="sql"&gt;-- use &amp;lt;[DBName]&amp;gt; -- switch to the right database


SET NOCOUNT ON


-- find the invalid clean version based on configured retention
DECLARE @time DATETIME, @csn BIGINT = 0, @minCleanupPoint BIGINT = 0
DECLARE @retention_period INT, @retention_period_units NVARCHAR(10)
SELECT @retention_period = retention_period,
@retention_period_units = retention_period_units
FROM sys.change_tracking_databases where database_id = DB_ID()
SELECT @time = CASE WHEN @retention_period_units = 1 then DATEADD(minute, (-1 * @retention_period), GETUTCDATE())
WHEN @retention_period_units = 2 then DATEADD(hour, (-1 * @retention_period), GETUTCDATE())
ELSE DATEADD(day, (-1 * @retention_period), GETUTCDATE()) END

EXEC sp_changetracking_time_to_csn @time = @time, @csn = @csn OUTPUT


SELECT @minCleanupPoint = @csn
SELECT @minCleanupPoint as minCsn -- 688118


-- iterate over all the change tracking side tables
DECLARE @sideTable SYSNAME;
DECLARE ct_cursor CURSOR FAST_FORWARD FOR
SELECT name FROM sys.internal_tables WHERE internal_type = 209; -- internal_type = 209 is for change tracking side tables


OPEN ct_cursor;
FETCH NEXT FROM ct_cursor INTO @sideTable;


WHILE @@FETCH_STATUS = 0
BEGIN
-- find the minimum expired xdes id
declare @minXdesId BIGINT
SELECT @minXdesId = min(xdes_id) FROM sys.dm_tran_commit_table where commit_ts &amp;lt;= @minCleanupPoint
-- SELECT @minXdesId as minXdes

-- create temp table for storing orphaned xdes id
DROP TABLE IF EXISTS #OrphanedXdes;
    CREATE TABLE #OrphanedXdes
    (
        sys_change_xdes_id BIGINT NOT NULL
    );


DECLARE @sql NVARCHAR(MAX);
SET @sql = N'
    INSERT INTO #OrphanedXdes(sys_change_xdes_id)
    SELECT ct.sys_change_xdes_id
    FROM sys.' + QUOTENAME(@sideTable) + N' AS ct
    WHERE ct.sys_change_xdes_id &amp;lt; @minXdesId
      AND NOT EXISTS
      (
          SELECT 1
          FROM sys.syscommittab AS s
          WHERE s.xdes_id = ct.sys_change_xdes_id AND s.commit_ts &amp;lt;= @minCleanupPoint
      );';


    EXEC sys.sp_executesql
        @sql,
        N'@minXdesId BIGINT, @minCleanupPoint BIGINT',
        @minXdesId = @minXdesId,
@minCleanupPoint = @minCleanupPoint;


DECLARE @orphanedIdsCount BIGINT;
SET @sql = N'
SELECT @cnt = COUNT_BIG(sys_change_xdes_id)
FROM #OrphanedXdes;
';


EXEC sys.sp_executesql
@sql,
N'@cnt BIGINT OUTPUT',
@cnt = @orphanedIdsCount OUTPUT;


-- Raise error if any orphaned xdes exists
IF (@orphanedIdsCount &amp;gt; 0)
BEGIN
DECLARE @msg NVARCHAR(4000) =
@sideTable + N' : ' + CONVERT(NVARCHAR(30), @orphanedIdsCount);
RAISERROR (@msg, 16, 1) WITH NOWAIT;

DECLARE @newLine NVARCHAR(10) = CHAR(13) + CHAR(10)
PRINT (@newLine)
END


DROP TABLE IF EXISTS #OrphanedXdes;


    FETCH NEXT FROM ct_cursor INTO @sideTable;
END


CLOSE ct_cursor;
DEALLOCATE ct_cursor;


SET NOCOUNT OFF&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;P&gt;Here’s the high-level logic (excerpted/annotated from the script):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Read retention settings from sys.change_tracking_databases&lt;/LI&gt;
&lt;LI&gt;Convert “retention window” to a cleanup CSN using sp_changetracking_time_to_csn&lt;/LI&gt;
&lt;LI&gt;For each CT side table (sys.internal_tables internal_type = 209):
&lt;UL&gt;
&lt;LI&gt;compare side-table sys_change_xdes_id vs. sys.syscommittab and count “orphaned” xdes ids&lt;/LI&gt;
&lt;LI&gt;emit a message when orphaned counts are present&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Tip:&lt;/STRONG&gt; This is &lt;STRONG&gt;read-only&lt;/STRONG&gt; diagnostic logic. In your environment, validate permissions and impact before running in production.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H3&gt;4) If auto-cleanup is unexpectedly disabled, re-enable and monitor&lt;/H3&gt;
&lt;P&gt;In the email thread, the team observed auto-cleanup was disabled in at least one environment and recommended re-enabling it, then monitoring the CT history table to confirm cleanup activity resumes.&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;5) Use auditing / Extended Events to identify unexpected object drops&lt;/H3&gt;
&lt;P&gt;When investigating why a “history table” disappeared, the team reviewed extended event data and noted evidence of a specific application context associated with the drop (shared in the meeting discussion).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is a key lesson: &lt;STRONG&gt;without auditing&lt;/STRONG&gt;, it can be difficult to determine who/what disabled auto-cleanup or dropped relevant objects; the email thread explicitly called this out.&lt;/P&gt;
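&lt;P&gt;As an illustration (a sketch, not the exact session used in this case), a database-scoped Extended Events session can capture object drops together with the application name:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Database-scoped XEvent session to attribute object drops (sketch)
CREATE EVENT SESSION track_object_drops ON DATABASE
ADD EVENT sqlserver.object_deleted
    (ACTION (sqlserver.client_app_name, sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION track_object_drops ON DATABASE STATE = START;&lt;/LI-CODE&gt;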
&lt;H2&gt;Mitigation options discussed&lt;/H2&gt;
&lt;H3&gt;Often the safest option: disable &amp;amp; re-enable Change Tracking on affected tables&lt;/H3&gt;
&lt;P&gt;As an alternative to running manual deletion scripts, the troubleshooting guidance recommended &lt;STRONG&gt;disabling and re-enabling Change Tracking&lt;/STRONG&gt; on the set of tables containing orphaned rows. This is a well-established, safer cleanup method that avoids the elevated access needed to run cleanup scripts directly.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Trade-off:&lt;/STRONG&gt; disabling CT on a table removes existing change data from the corresponding side tables for that table.&amp;nbsp;&lt;/P&gt;
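&lt;P&gt;The per-table operation is two DDL statements (the table name is hypothetical; remember that existing change data for the table is discarded, so downstream consumers must re-initialize afterward):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Hypothetical table name, for illustration only
ALTER TABLE dbo.Orders DISABLE CHANGE_TRACKING;

ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = OFF);&lt;/LI-CODE&gt;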
&lt;H2&gt;About auto-cleanup performance improvements (why “stay on auto” may be preferred)&lt;/H2&gt;
&lt;P&gt;The engagement also highlighted that&amp;nbsp;&lt;STRONG&gt;auto-cleanup is the area that continues to receive improvements&lt;/STRONG&gt; and that performance enhancements exist (for example, improved adaptive behavior in newer SQL Server versions). Microsoft Learn describes that SQL Server 2025 introduces an “adaptive shallow cleanup approach” for large side tables, enabled by default, and explains how cleanup behavior changes compared to prior versions.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;While Azure SQL Database implementation details differ from boxed SQL Server, the key operational takeaway from the discussion was: if possible, &lt;STRONG&gt;prefer auto-cleanup&lt;/STRONG&gt; and avoid manual cleanup unless you have a strong reason, because manual cleanup may lack telemetry and can be harder to reason about at scale.&lt;/P&gt;
&lt;H2&gt;Key takeaways&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Start with read-only validation&lt;/STRONG&gt;: use a backlog signal query and an orphan-detection script to quantify scope before making changes.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Auditability matters&lt;/STRONG&gt;: without auditing/trace evidence, identifying who disabled auto-cleanup or dropped related objects is difficult.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Snapshot Isolation can block cleanup progress&lt;/STRONG&gt;: long-running snapshot transactions may prevent safe cleanup from advancing. Keep snapshot transactions short where possible.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;A safe mitigation exists&lt;/STRONG&gt;: disabling/re-enabling Change Tracking on affected tables (with awareness of change-data loss) can be safer than running deletion scripts.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Microsoft Learn: Change Tracking management overview (permissions, internal tables, behavior considerations). &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/manage-change-tracking-sql-server?view=sql-server-ver17" target="_blank"&gt;Manage Change Tracking - SQL Server | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Microsoft Learn: Enable and Disable Change Tracking (retention + auto-cleanup configuration). &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server?view=sql-server-ver17" target="_blank"&gt;Enable and Disable Change Tracking - SQL Server | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Sat, 28 Feb 2026 01:49:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-change-tracking-cleanup-growth-and-orphaned-rows/ba-p/4497997</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-02-28T01:49:00Z</dc:date>
    </item>
    <item>
      <title>Cannot enable Change Data Capture (CDC) on Azure SQL Database: Msg 22830 + Error 40529 (SUSER_SNAME)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/cannot-enable-change-data-capture-cdc-on-azure-sql-database-msg/ba-p/4497995</link>
      <description>&lt;H2&gt;Issue&lt;/H2&gt;
&lt;P&gt;While enabling CDC at the database level on Azure SQL Database using:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;EXEC sys.sp_cdc_enable_db;
GO&lt;/LI-CODE&gt;
&lt;P&gt;the operation fails.&lt;/P&gt;
&lt;H2&gt;Error&lt;/H2&gt;
&lt;P&gt;The customer observed the following failure when running sys.sp_cdc_enable_db:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Msg 22830&lt;/STRONG&gt;: Could not update the metadata that indicates the database is enabled for Change Data Capture.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Failure occurred when executing &lt;STRONG&gt;drop user cdc&lt;/STRONG&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Error 40529&lt;/STRONG&gt;: &lt;EM&gt;"Built-in function 'SUSER_SNAME' in impersonation context is not supported in this version of SQL Server."&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;What we checked (quick validation)&lt;/H2&gt;
&lt;P&gt;Before applying any changes, we confirmed CDC wasn’t partially enabled and no CDC artifacts were created:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Is CDC enabled for this database?
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = DB_NAME();&lt;/LI-CODE&gt;&lt;LI-CODE lang="sql"&gt;-- Does CDC schema exist?
SELECT name
FROM sys.schemas
WHERE name = 'cdc';&lt;/LI-CODE&gt;&lt;LI-CODE lang="sql"&gt;-- Does CDC user exist?
SELECT name
FROM sys.database_principals
WHERE name = 'cdc';&lt;/LI-CODE&gt;
&lt;P&gt;These checks were used during troubleshooting in the SR.&amp;nbsp;&lt;BR /&gt;(Also note: Microsoft Learn documents that enabling CDC creates the cdc schema/user and requires exclusive use of that schema/user.)&lt;/P&gt;
&lt;H2&gt;Cause&lt;/H2&gt;
&lt;P&gt;In this case, the failure aligned with a known Azure SQL Database CDC scenario: &lt;STRONG&gt;enabling CDC can fail if there is an active database-level trigger that calls SUSER_SNAME()&lt;/STRONG&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To identify active database-level triggers, we used:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT name, object_id
FROM sys.triggers
WHERE parent_class_desc = 'DATABASE'
AND is_disabled = 0;&lt;/LI-CODE&gt;
&lt;H2&gt;Resolution / Workaround&lt;/H2&gt;
&lt;P&gt;The customer resolved the issue by:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Identifying the active &lt;STRONG&gt;database-level trigger&lt;/STRONG&gt;.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Disabling&lt;/STRONG&gt; the trigger temporarily.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Enabling CDC at the &lt;STRONG&gt;database&lt;/STRONG&gt; level and then enabling CDC on the required &lt;STRONG&gt;tables&lt;/STRONG&gt;.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Re-enabling&lt;/STRONG&gt; the trigger after CDC was successfully enabled.&lt;/LI&gt;
&lt;/OL&gt;
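&lt;P&gt;Putting those steps together, a minimal T-SQL sketch could look like the following (the trigger, schema, and table names are placeholders; substitute the objects you identified with the query above):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- 1) Temporarily disable the database-level trigger (placeholder name)
DISABLE TRIGGER [MyDatabaseTrigger] ON DATABASE;
GO

-- 2) Enable CDC at the database level
EXEC sys.sp_cdc_enable_db;
GO

-- 3) Enable CDC on the required table(s) (placeholder schema/table)
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;
GO

-- 4) Re-enable the trigger
ENABLE TRIGGER [MyDatabaseTrigger] ON DATABASE;
GO&lt;/LI-CODE&gt;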
&lt;H2&gt;Post-resolution verification&lt;/H2&gt;
&lt;P&gt;After enabling CDC, you can validate the state using:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Confirm CDC enabled at DB level
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = DB_NAME();&lt;/LI-CODE&gt;
&lt;P&gt;And for table-level tracking, Microsoft Learn recommends checking the is_tracked_by_cdc column in sys.tables.&lt;/P&gt;
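&lt;P&gt;For example, a quick way to list the tables currently tracked by CDC:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Confirm CDC enabled at table level
SELECT SCHEMA_NAME(schema_id) AS schema_name, name, is_tracked_by_cdc
FROM sys.tables
WHERE is_tracked_by_cdc = 1;&lt;/LI-CODE&gt;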
&lt;H2&gt;Notes / Requirements&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;To enable CDC for Azure SQL Database, membership in the &lt;STRONG&gt;db_owner&lt;/STRONG&gt; role is required.&lt;/LI&gt;
&lt;LI&gt;Azure SQL Database uses a &lt;STRONG&gt;CDC scheduler&lt;/STRONG&gt; (instead of SQL Server Agent jobs) for capture/cleanup.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 28 Feb 2026 01:25:09 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/cannot-enable-change-data-capture-cdc-on-azure-sql-database-msg/ba-p/4497995</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-02-28T01:25:09Z</dc:date>
    </item>
    <item>
      <title>DORA exit planning for Azure SQL Database: a practical, “general guidance” blueprint</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/dora-exit-planning-for-azure-sql-database-a-practical-general/ba-p/4497992</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Why this matters:&lt;/STRONG&gt; Under the EU &lt;STRONG&gt;Digital Operational Resilience Act (DORA)&lt;/STRONG&gt;, many financial entities are strengthening requirements around &lt;STRONG&gt;ICT risk management&lt;/STRONG&gt;, &lt;STRONG&gt;third‑party risk oversight&lt;/STRONG&gt;, and—critically—&lt;STRONG&gt;exit planning / substitutability&lt;/STRONG&gt;. Microsoft provides resources to help customers navigate DORA, including a DORA compliance hub in the Microsoft Trust Center.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This post distills &lt;STRONG&gt;general guidance&lt;/STRONG&gt; based on a real-world support thread where a customer requested a &lt;STRONG&gt;formal advisory&lt;/STRONG&gt; describing an exit strategy for an Azure SQL Database workload (including a large database scenario).&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;EM&gt;(Note: The content here is intentionally generalized and not legal advice—always align with your compliance team and regulators.)&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;The customer asked Microsoft Support for a &lt;STRONG&gt;formal response&lt;/STRONG&gt; to support DORA regulatory expectations, focusing on &lt;STRONG&gt;data portability&lt;/STRONG&gt;, &lt;STRONG&gt;exit planning&lt;/STRONG&gt;, and &lt;STRONG&gt;substitution capabilities&lt;/STRONG&gt; for workloads running on &lt;STRONG&gt;Azure SQL Database&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;The support response framed the need as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;a &lt;STRONG&gt;regulatory submission&lt;/STRONG&gt; use case under DORA,&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;where Microsoft can provide &lt;STRONG&gt;official references&lt;/STRONG&gt; and describe the &lt;STRONG&gt;technical capabilities&lt;/STRONG&gt; enabling portability and exit,&lt;/LI&gt;
&lt;LI&gt;while &lt;STRONG&gt;customers remain responsible&lt;/STRONG&gt; for defining, documenting, testing, and periodically validating their &lt;STRONG&gt;exit procedures&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Microsoft’s DORA resources: where to pull “regulatory artifacts” from&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;A key part of the Support Request response was pointing to Microsoft’s formal compliance resources:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Microsoft publishes DORA-related guidance and operational resilience materials via the &lt;STRONG&gt;Microsoft Trust Center &lt;/STRONG&gt;and&amp;nbsp;makes compliance documentation available via the &lt;STRONG&gt;Service Trust Portal&lt;/STRONG&gt; for supervisory/audit processes.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Microsoft also maintains a &lt;STRONG&gt;DORA compliance hub&lt;/STRONG&gt; in the Trust Center aimed at helping financial institutions meet DORA requirements.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Microsoft Learn provides an overview of DORA, scope, and key areas for customer consideration. &lt;A href="https://www.microsoft.com/en-us/trust-center/compliance/dora-compliance" target="_blank"&gt;Navigating DORA compliance | Microsoft Trust Center&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Practical takeaway:&lt;/STRONG&gt; For DORA evidence packs, align your narrative to the regulator’s questions, and use &lt;STRONG&gt;Trust Center / Service Trust Portal&lt;/STRONG&gt; materials as the “Microsoft-published” backbone, then attach your &lt;STRONG&gt;customer-owned exit runbooks and test evidence&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Data ownership and portability: the foundation of an exit plan&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In the ticket’s advisory, Microsoft Support emphasized:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Database is built on the SQL Server engine, and &lt;STRONG&gt;customers retain ownership of their data&lt;/STRONG&gt;.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;The service supports portability through &lt;STRONG&gt;SQL Server–compatible schemas, T‑SQL&lt;/STRONG&gt;, and &lt;STRONG&gt;documented export/restore mechanisms&lt;/STRONG&gt;, reducing dependency on proprietary formats.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;How to use this in a DORA exit narrative:&lt;/STRONG&gt;&lt;BR /&gt;Frame “reversibility” as &lt;STRONG&gt;standards-based data and schema portability&lt;/STRONG&gt; (SQL/T‑SQL + documented export/import). That’s exactly the type of substitutability narrative many regulators want to see.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Supported exit strategy building blocks (Azure SQL Database → on‑prem SQL Server)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The Support Request response described the exit approach at a&amp;nbsp;&lt;STRONG&gt;high level&lt;/STRONG&gt;, using supported, documented capabilities:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Exporting database schema and data&lt;/STRONG&gt; using SQL Server–compatible formats&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Restoring or importing&lt;/STRONG&gt; into an on‑prem SQL Server environment with functional equivalence&lt;/LI&gt;
&lt;LI&gt;Maintaining &lt;STRONG&gt;security controls&lt;/STRONG&gt; (auth, encryption in transit/at rest, integrity protections) during transition&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Validating restored data and application functionality&lt;/STRONG&gt; as part of exit testing&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;One concrete, Microsoft-documented portability method for Azure SQL Database is &lt;STRONG&gt;exporting to a BACPAC&lt;/STRONG&gt; (schema + data), which can later be imported into &lt;STRONG&gt;SQL Server&lt;/STRONG&gt;.&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;BACPAC: what Microsoft documentation explicitly calls out (and why it matters for “exit planning”)&lt;/H3&gt;
&lt;P&gt;Microsoft Learn documents:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A &lt;STRONG&gt;BACPAC&lt;/STRONG&gt; contains &lt;EM&gt;metadata and data&lt;/EM&gt; and can be stored in Azure Blob storage or local storage and later imported into Azure SQL Database, Azure SQL Managed Instance, or &lt;STRONG&gt;SQL Server&lt;/STRONG&gt;. &lt;A href="https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/concepts/data-tier-applications/export-bacpac-file?view=sql-server-ver17" target="_blank"&gt;Export a BACPAC File - SQL Server | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;For transactional consistency, ensure no write activity during export or export from a transactionally consistent copy.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Blob-storage exports have a &lt;STRONG&gt;maximum BACPAC size of 200 GB&lt;/STRONG&gt;; larger exports should go to &lt;STRONG&gt;local storage using SqlPackage&lt;/STRONG&gt;.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;BACPAC is &lt;STRONG&gt;not intended as a backup/restore&lt;/STRONG&gt; mechanism; Azure SQL has built-in automated backups.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;DORA relevance:&lt;/STRONG&gt; BACPAC is a strong “portability evidence” artifact because it is explicitly positioned for “archiving” or “moving to another platform,” including SQL Server. &lt;A href="https://github.com/MicrosoftDocs/sql-docs/blob/live/azure-sql/database/database-export.md" target="_blank"&gt;sql-docs/azure-sql/database/database-export.md at live · MicrosoftDocs/sql-docs&lt;/A&gt;&lt;/P&gt;
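&lt;P&gt;As an illustrative sketch only (the server name, database name, credentials, and output path below are placeholders), a BACPAC export to local storage with SqlPackage looks like this:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Export schema + data to a local .bacpac file (all values are placeholders)
SqlPackage /Action:Export ^
  /SourceServerName:"yourserver.database.windows.net" ^
  /SourceDatabaseName:"YourDatabase" ^
  /SourceUser:"youradmin" ^
  /SourcePassword:"YourPassword" ^
  /TargetFile:"C:\exports\YourDatabase.bacpac"&lt;/LI-CODE&gt;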
&lt;P&gt;&lt;STRONG&gt;Large databases: why “one-button export” may not be your plan&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The Support Request thread highlighted a “large database” scenario and referenced that Microsoft documentation describes &lt;STRONG&gt;high-level migration patterns&lt;/STRONG&gt; such as &lt;STRONG&gt;offline export/import&lt;/STRONG&gt; and &lt;STRONG&gt;staged validation&lt;/STRONG&gt; for large databases.&lt;/P&gt;
&lt;P&gt;In practice, if your database is far beyond BACPAC’s typical constraints (for example, BACPAC export to blob capped at 200 GB), your exit plan should explicitly describe:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;a &lt;STRONG&gt;staged approach&lt;/STRONG&gt; (e.g., dry-run validation environment, phased cutover planning),&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;capacity planning&lt;/STRONG&gt; (network bandwidth, validation windows),&lt;/LI&gt;
&lt;LI&gt;and a testing cadence that produces regulator-friendly evidence.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The ticket response also emphasized that customers should plan for &lt;STRONG&gt;sufficient time and capacity for transfer and validation&lt;/STRONG&gt; (especially for large databases).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Customer responsibilities under DORA (the part regulators care about most)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;A key statement from the Support Request advisory is worth repeating as general guidance:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Microsoft provides the technical capabilities enabling data portability and exit, but &lt;STRONG&gt;customers remain responsible&lt;/STRONG&gt; for defining, documenting, testing, and periodically validating exit procedures—including planning timelines, allocating sufficient capacity, executing test exits, and maintaining evidence for regulatory review.&amp;nbsp;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This aligns well with DORA’s intent and Microsoft’s broader DORA guidance narrative: DORA requires operational resilience outcomes, and organizations must integrate cloud capabilities into their governance and controls.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;A simple DORA-ready “exit plan checklist” you can adapt&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Below is a&amp;nbsp;&lt;STRONG&gt;general&lt;/STRONG&gt; checklist you can use to structure your exit plan documentation and evidence pack—aligned with what was emphasized in the Support Request:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Scope &amp;amp; dependencies&lt;/STRONG&gt;&lt;BR /&gt;Identify the Azure SQL Database workloads, dependent applications, and data flows to be included in the exit plan. &lt;EM&gt;(Customer-owned documentation and evidence)&lt;/EM&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Portability mechanism(s)&lt;/STRONG&gt;&lt;BR /&gt;Reference documented portability options such as schema+data export mechanisms (e.g., BACPAC) where applicable.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Security controls during transition&lt;/STRONG&gt;&lt;BR /&gt;Document how auth and encryption controls are maintained during transfer and restoration validation.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Validation plan&lt;/STRONG&gt;&lt;BR /&gt;Define how you will validate data integrity and application functionality in the target environment.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Scale planning (large DBs)&lt;/STRONG&gt;&lt;BR /&gt;Document transfer capacity planning, timelines, and staged validation where needed.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Evidence &amp;amp; audit trail&lt;/STRONG&gt;&lt;BR /&gt;Store test outputs, run logs, and references to Microsoft Trust Center / Service Trust Portal materials used in submissions.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;From the SR’s formal advisory perspective, the message is consistent:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Database supports &lt;STRONG&gt;data portability&lt;/STRONG&gt; and &lt;STRONG&gt;exit planning&lt;/STRONG&gt; via &lt;STRONG&gt;SQL Server–compatible&lt;/STRONG&gt; design and &lt;STRONG&gt;documented export/import mechanisms&lt;/STRONG&gt;,&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Microsoft provides DORA-oriented compliance materials via the &lt;STRONG&gt;Trust Center&lt;/STRONG&gt; and &lt;STRONG&gt;Service Trust Portal.&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;and customers should own the &lt;STRONG&gt;exit runbook&lt;/STRONG&gt;, &lt;STRONG&gt;testing&lt;/STRONG&gt;, and &lt;STRONG&gt;evidence&lt;/STRONG&gt; required for regulatory review.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Sat, 28 Feb 2026 01:12:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/dora-exit-planning-for-azure-sql-database-a-practical-general/ba-p/4497992</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-02-28T01:12:22Z</dc:date>
    </item>
    <item>
      <title>Offline vs. Immutable Backups for Azure SQL Database (General Guidance)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/offline-vs-immutable-backups-for-azure-sql-database-general/ba-p/4497990</link>
      <description>&lt;P&gt;Security and compliance teams often ask for&amp;nbsp;&lt;STRONG&gt;“offline backups”&lt;/STRONG&gt; and &lt;STRONG&gt;“immutable backups”&lt;/STRONG&gt; as part of ransomware resilience, audit readiness, and recovery strategy. In Azure SQL Database, these terms map to &lt;STRONG&gt;different capabilities and operating models&lt;/STRONG&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Immutable backups (WORM)&lt;/STRONG&gt; are supported &lt;STRONG&gt;natively for Long-Term Retention (LTR) backups&lt;/STRONG&gt; in Azure SQL Database, using &lt;STRONG&gt;time-based&lt;/STRONG&gt; and &lt;STRONG&gt;legal hold&lt;/STRONG&gt; immutability modes.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Offline backups&lt;/STRONG&gt; (customer-controlled copies stored under your own governance) typically require a &lt;STRONG&gt;customer-managed export/copy process&lt;/STRONG&gt;, because platform-managed backups (including LTR) are retained within Azure’s managed backup system.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This post explains the difference, what’s supported today, and practical design considerations to help you choose the right approach.&lt;/P&gt;
&lt;H2&gt;1) Start by clarifying the requirement: “offline” vs. “immutable”&lt;/H2&gt;
&lt;H3&gt;What “immutable backup” means in practice&lt;/H3&gt;
&lt;P&gt;An immutable backup is stored in a &lt;STRONG&gt;Write Once, Read Many (WORM)&lt;/STRONG&gt; state—&lt;STRONG&gt;non-modifiable and non-erasable&lt;/STRONG&gt; for a user-defined retention period, providing protection against accidental or malicious deletion or modification (including by privileged administrators).&lt;/P&gt;
&lt;P&gt;Azure SQL Database supports&amp;nbsp;&lt;STRONG&gt;backup immutability for LTR backups&lt;/STRONG&gt;, written to &lt;STRONG&gt;Azure immutable storage&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H3&gt;What “offline backup” usually implies&lt;/H3&gt;
&lt;P&gt;“Offline” is commonly used to mean &lt;STRONG&gt;customer-controlled copies&lt;/STRONG&gt; stored separately (often under separate access controls and retention governance). Azure SQL Database’s built-in backups are platform-managed; if your policy explicitly requires “offline/customer-controlled” artifacts, you typically implement an &lt;STRONG&gt;export/copy&lt;/STRONG&gt; process in addition to platform-managed backups.&lt;/P&gt;
&lt;H2&gt;2) What Azure SQL Database supports natively: LTR + immutability (WORM)&lt;/H2&gt;
&lt;H3&gt;Long-Term Retention (LTR) basics&lt;/H3&gt;
&lt;P&gt;LTR exists to retain backups beyond the short-term retention window. It relies on the &lt;STRONG&gt;full database backups automatically created by the Azure SQL service&lt;/STRONG&gt; and stores specified full backups in &lt;STRONG&gt;redundant Azure Blob storage&lt;/STRONG&gt; with a retention policy of &lt;STRONG&gt;up to 10 years&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;If an LTR policy is configured, automated backups are&amp;nbsp;&lt;STRONG&gt;copied to different blobs for long-term storage&lt;/STRONG&gt;. The copy process is a &lt;STRONG&gt;background job that has no performance impact&lt;/STRONG&gt; on the database workload.&lt;/P&gt;
&lt;H3&gt;Immutability modes for LTR backups&lt;/H3&gt;
&lt;P&gt;Azure SQL Database LTR backups support both:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Time-based immutability&lt;/STRONG&gt; (policy-driven) — enabled at the LTR policy level; once &lt;STRONG&gt;enabled and locked&lt;/STRONG&gt;, new LTR backups taken from that point forward inherit the settings and remain immutable until the retention period ends.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Legal hold immutability&lt;/STRONG&gt; — enabled/disabled on a &lt;STRONG&gt;specific existing backup&lt;/STRONG&gt;, independent of time-based settings; backups remain immutable until legal hold is explicitly removed.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;3) Key operational constraint to plan for (important!)&lt;/H2&gt;
&lt;P&gt;Microsoft documentation notes that when you configure immutability for an Azure SQL Database with an LTR policy, the associated &lt;STRONG&gt;logical server can be blocked from deletion&lt;/STRONG&gt; until all immutable backups are removed (time-based and/or legal hold).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is a &lt;STRONG&gt;big lifecycle-management consideration&lt;/STRONG&gt; (especially for non-prod environments or automation that deletes/recreates servers).&lt;/P&gt;
&lt;H2&gt;4) Cost model: what changes when you enable immutability?&lt;/H2&gt;
&lt;P&gt;According to the documentation, there’s &lt;STRONG&gt;no additional cost&lt;/STRONG&gt; for enabling immutability on LTR backups; however, &lt;STRONG&gt;backup storage charges continue to accrue as long as the immutable backup file exists&lt;/STRONG&gt;, even if it’s past the configured LTR expiration date (while it remains immutable).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For broader spend governance, Microsoft recommends using &lt;STRONG&gt;Cost Management features&lt;/STRONG&gt; to set budgets, monitor costs, and review forecasted costs and trends.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Practical interpretation (design guidance): immutability changes the&amp;nbsp;&lt;STRONG&gt;delete/modify behavior&lt;/STRONG&gt; and therefore can affect &lt;STRONG&gt;how long backups remain stored&lt;/STRONG&gt;, which can influence total backup storage cost over time. This aligns with the “charges continue to accrue as long as the immutable backup file exists” note.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;5) How to enable LTR and immutability (high-level, supported steps)&lt;/H2&gt;
&lt;H3&gt;Configure LTR retention (Azure portal)&lt;/H3&gt;
&lt;P&gt;Microsoft’s guidance for LTR policy management is to navigate to your &lt;STRONG&gt;server → Backups → Retention policies&lt;/STRONG&gt;, select the database(s), and set weekly/monthly/yearly retention periods (or 0 for none).&amp;nbsp;&lt;/P&gt;
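&lt;P&gt;The same retention policy can also be scripted, for example with Azure CLI (the resource group, server, and database names below are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Keep weekly backups for 4 weeks, monthly for 12 months, yearly for 5 years
az sql db ltr-policy set \
  --resource-group YourResourceGroup \
  --server yourserver \
  --database yourdb \
  --weekly-retention P4W \
  --monthly-retention P12M \
  --yearly-retention P5Y \
  --week-of-year 1&lt;/LI-CODE&gt;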
&lt;H3&gt;Enable time-based immutability (Azure portal)&lt;/H3&gt;
&lt;P&gt;The time-based immutability article describes enabling it through the server’s &lt;STRONG&gt;Backups → Retention Policies → Configure Policies&lt;/STRONG&gt;, then checking &lt;STRONG&gt;Enable time-based immutability policy&lt;/STRONG&gt; and locking the time-based immutable policy (backups aren’t immutable until the policy is locked).&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Note: The documentation also highlights that only backups taken&amp;nbsp;&lt;STRONG&gt;after enabling and locking&lt;/STRONG&gt; time-based immutability become immutable; for existing backups, use &lt;STRONG&gt;legal hold&lt;/STRONG&gt; immutability instead.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;6) If you truly need “offline” (customer-controlled) backups&lt;/H2&gt;
&lt;P&gt;Azure SQL Database platform-managed backups (including LTR) are retained within Azure’s managed backup system. If your requirement explicitly mandates customer-controlled “offline” copies, you typically implement a &lt;STRONG&gt;customer-managed export/backup approach&lt;/STRONG&gt; and store exported artifacts under your own governance and retention controls.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is commonly positioned as &lt;STRONG&gt;complementary&lt;/STRONG&gt; to platform-managed backups—use platform features for operational recovery plus LTR immutability for tamper-proof retention and add customer-managed exports only if “offline custody” is a strict requirement.&lt;/P&gt;
&lt;H2&gt;7) Compliance notes you can cite internally&lt;/H2&gt;
&lt;P&gt;The immutability documentation states Azure SQL Database backup immutability helps meet stringent regulatory requirements (examples listed include &lt;STRONG&gt;SEC Rule 17a-4(f), FINRA Rule 4511(c), and CFTC Rule 1.31(c)-(d)&lt;/STRONG&gt;). It also notes that the Cohasset report is available in the Microsoft Service Trust Center, and you can request a letter of attestation via Azure Support.&lt;/P&gt;
&lt;H2&gt;8) Decision checklist (quick guidance)&lt;/H2&gt;
&lt;P&gt;Use this to decide what to implement:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Choose native LTR immutability if you need:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Tamper-proof backups (WORM) for compliance/audit/ransomware resilience.&lt;/LI&gt;
&lt;LI&gt;A solution integrated into Azure SQL Database backup retention (LTR).&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Add customer-managed exports if you need:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Customer-controlled “offline” custody of backup artifacts under your own storage governance/retention controls.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Plan carefully for lifecycle automation if you enable immutability:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Logical server deletion can be blocked while immutable backups exist.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;References (Microsoft documentation)&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Backup immutability for LTR backups: &lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/backup-immutability?view=azuresql" target="_blank"&gt;Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Time-based immutability configuration: &lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/backup-immutability-time-based?view=azuresql&amp;amp;tabs=azure-portal" target="_blank"&gt;Configure Time Based Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Legal hold immutability configuration: &lt;A href="https://www.sqlshack.com/managing-retention-period-of-azure-sql-database-backup/" target="_blank"&gt;Managing Retention period of Azure SQL Database Backup&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;LTR overview (how it works, background copy/no perf impact): &lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/long-term-retention-overview?view=azuresql" target="_blank"&gt;Long-Term Retention Backups - Azure SQL Database &amp;amp; Azure SQL Managed Instance | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Manage LTR retention policies: &lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/long-term-backup-retention-configure?view=azuresql&amp;amp;tabs=portal" target="_blank"&gt;Azure SQL Database: Manage long-term backup retention - Azure SQL Database | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Plan and manage Azure SQL Database costs: &lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/cost-management?view=azuresql" target="_blank"&gt;Plan and Manage Costs - Azure SQL Database | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Sat, 28 Feb 2026 00:48:26 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/offline-vs-immutable-backups-for-azure-sql-database-general/ba-p/4497990</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-02-28T00:48:26Z</dc:date>
    </item>
    <item>
      <title>Dashboards with Grafana - Now in Azure Portal for PostgreSQL</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/dashboards-with-grafana-now-in-azure-portal-for-postgresql/ba-p/4497607</link>
      <description>&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;Monitoring Azure Database for PostgreSQL just got significantly simpler. With the new&lt;STRONG&gt; &lt;/STRONG&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/azure-monitor/visualize/visualize-use-grafana-dashboards" target="_blank" rel="noopener"&gt;Azure Monitor Dashboards with Grafana&lt;/A&gt;, you can visualize key metrics and logs directly inside Azure Portal - no Grafana servers to deploy, no configuration to manage, and no additional cost&lt;/P&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;In this post, we’ll walk through how these built-in Grafana dashboards help you troubleshoot faster, understand database behavior at a glance, and decide when you might still want Azure Managed Grafana for advanced scenarios.&lt;/P&gt;
&lt;H1&gt;Native Grafana Dashboards — No Setup, No Hosting, No Cost&lt;/H1&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;We are thrilled to announce that &lt;A class="lia-external-url" href="https://azure.microsoft.com/products/postgresql" target="_blank" rel="noopener"&gt;Azure Database for PostgreSQL&lt;/A&gt; users can now access &lt;STRONG&gt;prebuilt Grafana dashboards directly within the Azure portal&lt;/STRONG&gt; - with &lt;STRONG&gt;no additional cost or configuration required&lt;/STRONG&gt;. This integration eliminates the complexity of deploying and administering self-hosted or managed Grafana instances. Grafana’s powerful visualization capabilities are now embedded directly in the Azure experience&lt;BR /&gt;&lt;BR /&gt;From the moment you open the Azure Portal you have immediate access to dashboards for PostgreSQL. Simply navigate to Azure Database for PostgreSQL server in the Azure Portal and select “&lt;STRONG&gt;Dashboards with Grafana&lt;/STRONG&gt;” and choose &lt;EM&gt;Featured dashboards&lt;/EM&gt;. Within seconds, you have a rich, real-time view of your database server’s health - no custom queries or manual wiring required.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;SPAN class="lia-text-color-19"&gt;&lt;EM&gt;Figure 1: Azure Portal showing the “Dashboards with Grafana” blade , featuring the prebuilt monitoring dashboard tile.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;Comprehensive PostgreSQL Metrics at a Glance&lt;/H3&gt;
&lt;img /&gt;
&lt;P class="lia-align-center" style="font-size: 18px; line-height: 1.7;"&gt;&lt;SPAN class="lia-text-color-19"&gt;&lt;EM&gt;Figure 2: Azure PostgreSQL Grafana dashboard showing resource utilization, performance metrics, and server configuration.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;As shown above, the new Grafana dashboard provides &lt;STRONG&gt;at-a-glance visibility&lt;/STRONG&gt; into all the key metrics that matter for Azure Database for PostgreSQL. These dashboards are purpose-built to surface the health and performance of your database server, so you can immediately spot trends or issues.&lt;/P&gt;
&lt;H3&gt;Quick Configuration Snapshot&lt;/H3&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;SPAN class="lia-text-color-19"&gt;&lt;EM&gt;Figure 3: PostgreSQL server details, showing version, region, compute size, availability, and resource usage gauges.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;Every monitoring session starts with instant answers to critical questions:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Is the server up?&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Is High Availability configured?&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;How much storage is available?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;The summary panel provides:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Instance status (Up/Down)&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;High Availability and replica configuration&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Azure region, PostgreSQL version, and SKU&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Live resource usage (CPU, memory, storage)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;No extra clicks. No custom queries. Just clarity.&lt;/P&gt;
&lt;H2&gt;Metrics Coverage&lt;/H2&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;The prebuilt dashboards visualize telemetry emitted by Azure Database for PostgreSQL, including:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;&lt;STRONG&gt;Server availability&lt;/STRONG&gt; and status&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;&lt;STRONG&gt;Active connections&lt;/STRONG&gt; and connection failures&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;&lt;STRONG&gt;CPU&lt;/STRONG&gt; and &lt;STRONG&gt;memory&lt;/STRONG&gt; utilization&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;&lt;STRONG&gt;Storage usage&lt;/STRONG&gt; and WAL consumption&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;&lt;STRONG&gt;Disk I/O&lt;/STRONG&gt; (IOPS and throughput)&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;&lt;STRONG&gt;Network&lt;/STRONG&gt; ingress and egress&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;&lt;STRONG&gt;Transaction&lt;/STRONG&gt; rates, commits, and rollbacks&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;These metrics are collected via Azure Monitor platform metrics and refreshed at near-real-time intervals (depending on metric type). For a complete list, see the&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/monitor/concepts-monitoring" target="_blank" rel="noopener"&gt;Azure Database for PostgreSQL monitoring&lt;/A&gt; documentation.&lt;/P&gt;
&lt;H1&gt;Metrics and Logs—Together&lt;/H1&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;Ever struggled to trace a spike in CPU to the actual query behind it? With PostgreSQL logs and metrics visualized side-by-side, you can now correlate the &lt;STRONG&gt;exact timestamp&lt;/STRONG&gt; of a CPU surge with detailed &lt;STRONG&gt;query logs&lt;/STRONG&gt; in seconds.&lt;/P&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;SPAN class="lia-text-color-19"&gt;&lt;EM&gt;Figure 4:&amp;nbsp;&lt;/EM&gt;&lt;EM data-start="1045" data-end="1202"&gt;CPU usage metrics co-relation with PostgreSQL log entries in Azure Monitor, highlighting slow query detection and log integration.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;💡Note&lt;/STRONG&gt;: To view logs in Grafana, make sure diagnostic settings are enabled to send PostgreSQL logs to Azure Monitor Logs. You can configure this in the Azure Portal under your PostgreSQL resource &amp;gt; Monitoring &amp;gt; Diagnostic settings.&lt;/SPAN&gt;&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/monitor/how-to-configure-and-access-logs" target="_blank" rel="noopener"&gt;Learn how&lt;/A&gt;.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
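&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;If you automate this step, a diagnostic setting is itself an ARM resource. The fragment below is a minimal illustrative sketch only; the resource name is a placeholder, and the apiVersion and categoryGroup values are assumptions to verify against the current schema:&lt;/P&gt;

```json
{
  "type": "Microsoft.Insights/diagnosticSettings",
  "apiVersion": "2021-05-01-preview",
  "name": "send-pg-logs",
  "properties": {
    "workspaceId": "(resource ID of your Log Analytics workspace)",
    "logs": [ { "categoryGroup": "allLogs", "enabled": true } ]
  }
}
```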
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;In the example above, high CPU usage (&lt;STRONG&gt;73.2%&lt;/STRONG&gt;) aligns precisely with poor-running queries against a large salesorderdetail_big table. This helps engineers instantly validate and pinpoint slow queries without jumping between tools.&lt;/P&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;The unified Metric + Logs view, you can:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Plot query errors over time&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Correlate failed logins with resource spikes&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Investigate locking or memory pressure using timestamps&lt;/LI&gt;
&lt;/UL&gt;
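&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;To make the timestamp-based correlation concrete, here is a minimal, self-contained Python sketch of the idea. The metric samples, log lines, and threshold are all invented; in practice Grafana performs this join for you against Azure Monitor data:&lt;/P&gt;

```python
from datetime import datetime, timedelta

# Invented CPU samples, mimicking Azure Monitor platform metrics at 1-minute grain.
cpu_metrics = [
    (datetime(2026, 2, 25, 10, 0), 22.0),
    (datetime(2026, 2, 25, 10, 1), 73.2),
    (datetime(2026, 2, 25, 10, 2), 24.5),
]

# Invented PostgreSQL log records as (timestamp, message) pairs.
pg_logs = [
    (datetime(2026, 2, 25, 9, 58), "connection received"),
    (datetime(2026, 2, 25, 10, 1), "duration: 52013 ms  statement: SELECT ... FROM salesorderdetail_big"),
    (datetime(2026, 2, 25, 10, 5), "checkpoint complete"),
]

def logs_near_spike(metrics, logs, threshold=70.0, window_minutes=2):
    """Return log lines recorded within window_minutes of any metric sample above threshold."""
    spikes = [ts for ts, value in metrics if value > threshold]
    window = timedelta(minutes=window_minutes)
    return [(ts, msg) for ts, msg in logs
            if any(window >= abs(ts - spike) for spike in spikes)]
```

&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;Running logs_near_spike(cpu_metrics, pg_logs) singles out the long-running statement logged in the same minute as the 73.2% spike.&lt;/P&gt;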
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;Grafana &lt;STRONG&gt;Explore mode&lt;/STRONG&gt; is also available for deep-dive troubleshooting without altering dashboards.&lt;/P&gt;
&lt;H1&gt;First-Class Azure Integration&lt;/H1&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;This is not just embedded Grafana - it is &lt;STRONG&gt;first-class&lt;/STRONG&gt; Azure-native:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Dashboards are &lt;STRONG&gt;Azure resources&lt;/STRONG&gt;, scoped to subscriptions and resource groups&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Access is controlled using &lt;STRONG&gt;Azure RBAC&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Dashboards can be exported and deployed via &lt;STRONG&gt;ARM&lt;/STRONG&gt; templates&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Easy sharing and migration across environments&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;You get the flexibility of open-source Grafana with Azure’s enterprise-grade governance.&lt;/P&gt;
&lt;H1&gt;Getting Started&lt;/H1&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;To use the pre-built dashboard&lt;/P&gt;
&lt;OL class="lia-align-justify"&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Open the Azure portal&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Navigate to &lt;STRONG&gt;Azure Database for PostgreSQL&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Select &lt;STRONG&gt;Dashboards with Grafana&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Open the PostgreSQL featured dashboard&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;To customize a dashboard:&lt;/P&gt;
&lt;OL class="lia-align-justify"&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Open the prebuilt PostgreSQL dashboard&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Select &lt;STRONG&gt;Save As&lt;/STRONG&gt; to create a copy&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Modify panels or add new visualizations&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Connect additional data sources (metrics or logs)&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Save and share with your team&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;For advanced customization, refer to the &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/azure-monitor/visualize/visualize-use-grafana-dashboards" target="_blank" rel="noopener"&gt;Azure Monitor + Grafana Learn documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H1&gt;When to Use Azure Managed Grafana?&lt;/H1&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;Dashboards with Grafana in the Azure portal cover most common PostgreSQL monitoring scenarios. &lt;A class="lia-external-url" href="https://azure.microsoft.com/products/managed-grafana" target="_blank" rel="noopener"&gt;Azure Managed Grafana&lt;/A&gt; is still the right choice when you need:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Extended plugin support (community and OSS plugins)&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Advanced authentication and provisioning APIs&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Fine-grained, multi-tenant access control&lt;/LI&gt;
&lt;LI style="font-size: 18px; line-height: 1.7;"&gt;Multi-cloud or hybrid data source connectivity&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;See the &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/azure-monitor/visualize/visualize-grafana-overview#solution-comparison" target="_blank" rel="noopener"&gt;detailed comparison&lt;/A&gt; to choose the right option.&lt;/P&gt;
&lt;H1&gt;Learn More&lt;/H1&gt;
&lt;UL&gt;
&lt;LI class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/monitor/concepts-monitoring" target="_blank" rel="noopener"&gt;Azure PostgreSQL Monitoring Overview&lt;/A&gt;&lt;/LI&gt;
&lt;LI class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/azure-monitor/visualize/visualize-grafana-overview" target="_blank" rel="noopener"&gt;Visualize Azure Monitor Data with Grafana&lt;/A&gt;&lt;/LI&gt;
&lt;LI class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/azureobservabilityblog/announcing-general-availability-azure-monitor-dashboards-with-grafana/4468972" target="_blank" rel="noopener" data-lia-auto-title="GA Blog: Azure Monitor Dashboard with Grafana" data-lia-auto-title-active="0"&gt;GA Blog: Azure Monitor Dashboard with Grafana&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify" style="font-size: 18px; line-height: 1.7;"&gt;Start visualizing your &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/" target="_blank" rel="noopener"&gt;Azure PostgreSQL&lt;/A&gt; data instantly—right where you already work.&lt;/P&gt;</description>
      <pubDate>Thu, 26 Feb 2026 19:19:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/dashboards-with-grafana-now-in-azure-portal-for-postgresql/ba-p/4497607</guid>
      <dc:creator>varun-dhawan</dc:creator>
      <dc:date>2026-02-26T19:19:45Z</dc:date>
    </item>
    <item>
      <title>Renaming an Azure SQL DB encrypted with DB level-CMK can render it Inaccessible</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/renaming-an-azure-sql-db-encrypted-with-db-level-cmk-can-render/ba-p/4497165</link>
      <description>&lt;H2&gt;Issue&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;A customer recently brought an issue to our attention in which their database had become inaccessible. On further investigation, we concluded that the database was encrypted with a database-level CMK and had been renamed, which made it inaccessible. Here’s the error message you may see:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&lt;SPAN data-contrast="none"&gt;Azure Portal shows&amp;nbsp;the Error:&lt;/SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&lt;SPAN data-contrast="none"&gt;“&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-8"&gt;&lt;EM&gt;Access to Azure Key Vault has been lost for this database. Existing data will be inaccessible until this issue is resolved&lt;/EM&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.”&lt;/SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&lt;SPAN data-contrast="none"&gt;Attempting Key revalidation in the portal produces:&lt;/SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&lt;SPAN data-contrast="none"&gt;"&lt;SPAN class="lia-text-color-8"&gt;&lt;EM&gt;AADSTS1000901: The provided certificate cannot be used for requesting tokens. The value of token_not_after on the certificate should be greater than the current time&lt;/EM&gt;&lt;/SPAN&gt;.&lt;/SPAN&gt; "&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Mitigation&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Please start by validating the following:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;The Key Vault key is active, enabled, RSA 2048, with no expiration.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;The managed identity exists and has the correct RBAC role.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;Firewall and private endpoint settings are unchanged.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;As mentioned in our public documentation, &lt;STRONG&gt;after renaming an Azure SQL database, the identities on the database must be reassigned.&lt;/STRONG&gt; To reassign the identity, &lt;STRONG&gt;set the managed identity to None after your resource name (the Azure SQL database name) changes, and then apply the same user-assigned managed identity to it.&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN data-contrast="none"&gt;Additional&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Questions:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;After the primary DB is online, do I have to fix the Geo-DR or any replica DBs?&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;As mentioned in the public documentation: "Once the database is back online, previously configured server-level settings, including failover group configurations, tags, and database-level settings such as elastic pool configurations, read scale, auto pause, point-in-time restore history, long-term retention policy, and others are lost. Hence, it's recommended that customers implement a notification system to detect the loss of encryption key access &lt;STRONG&gt;within 30 minutes&lt;/STRONG&gt;. After the 30-minute window has expired, &lt;STRONG&gt;we advise validating all server and database level settings on the recovered database&lt;/STRONG&gt;." To reestablish the primary-secondary link after the 30-minute window, customers have to delete the failover group, create the geo-replica (which creates the secondary database), recreate the failover group, and add the database to it.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&lt;SPAN data-contrast="none"&gt;Deleting the Failover Group:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Creating the Geo-Replica:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Now Secondary DB is created successfully:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Now Recreate the Failover Group:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Add the DB to the Failover Group:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;DB added to the Failover Group&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;2. &lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Does the requirement to reassign the managed identity apply exclusively to Azure SQL Database and Azure SQL Managed Instance, or does this behavior also affect other managed database services offered in Azure (e.g., PostgreSQL Flexible Server, MySQL, etc.)?&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;This issue is specific to “&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Database Level CMK in Azure SQL&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="none"&gt;”&amp;nbsp;which is our new and recent offering. Since the identities are assigned to a database resource, if you change the resource the customer owning the identity needs to make sure that the new resource has the new identity. &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(Azure SQL Managed Instance does not support database level CMK)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;This is NOT a problem/requirement for &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Server level CMK in Azure SQL.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Other Azure managed database services (for example, Azure Database for PostgreSQL Flexible Server, MySQL, or MariaDB) use different encryption and&amp;nbsp;keymanagement&amp;nbsp;implementations and do not&amp;nbsp;exhibit&amp;nbsp;this same&amp;nbsp;managedidentity&amp;nbsp;reassignment behavior after&amp;nbsp;rename&amp;nbsp;operations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:256}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;3. &lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Can an end customer receive a notification on loss of DB-level CMK access?&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Not exactly, but please check the public documentation for how to monitor key status. You may also subscribe to various Key Vault alerts, which will notify you about key expiry or any changes in key permissions.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:256}"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:240}"&gt;References:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/transparent-data-encryption-byok-database-level-overview?view=azuresql&amp;amp;tabs=azurekeyvault#additional-considerations" target="_blank"&gt;Transparent data encryption (TDE) with database level customer-managed keys - Azure SQL Database | Microsoft Learn&lt;/A&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity?view=azuresql" target="_blank"&gt;Managed Identity in Microsoft Entra for Azure SQL - Azure SQL Database &amp;amp; Azure SQL Managed Instance | Microsoft Learn&lt;/A&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/key-vault/general/alert" target="_blank"&gt;Configure Azure Key Vault alerts | Microsoft Learn&lt;/A&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 25 Feb 2026 21:06:03 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/renaming-an-azure-sql-db-encrypted-with-db-level-cmk-can-render/ba-p/4497165</guid>
      <dc:creator>Tancy</dc:creator>
      <dc:date>2026-02-25T21:06:03Z</dc:date>
    </item>
    <item>
      <title>Web activity failure due to Invoking endpoint failed with HttpStatusCode - 403 -- help?</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-factory/web-activity-failure-due-to-invoking-endpoint-failed-with/m-p/4497130#M957</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;I have an Azure Data Factory (ADF) instance that I am using to create a Pipeline to ingest external (cloud based) 3rd party data into my Azure SQL Server database. I am a novice with ADF and have only used it to ingest some external SQL data into my SQL database - it did work.&lt;BR /&gt;The external source I'm attempting to extract from uses an OAuth 2.0 API and an API is something I've not used before.&lt;BR /&gt;&lt;BR /&gt;Using Postman (never used this software before this attempt), I have passed the external source's base_url, client_id, and client_secret, and in return successfully received an access token. This tells me that the base_url, client_id, and client_secret values I passed are correct and accepted by the target source/application.&lt;BR /&gt;&lt;BR /&gt;Feeling encourage to implement the same values into ADF, I first created a Linked Service which with a successful test connection returned - see below. This Linked Service uses the same values as the Postman entry which granted an access token.&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I then created a Pipeline with a Web activity object within it. The General and User Properties don't have any configuration, only the Settings tab does which shown below. Again, the URL, Client ID and Client Secret configured here are the same as those used in Postman (and the Linked Service).&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I execute the Web object and it returns with a failure - see below.&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The error states the endpoint refused the request (for an access token). Is this accurate as I was able to receive an access token via Postman when using the same credentials?&amp;nbsp; I don't understand why via Postman I can received an access token but via ADF it errors. 
I'm wondering if I've completed the ADF parts incorrectly, or if there is more needed just to received an access token, or if it's something else?&lt;BR /&gt;Are you able to advise what's taking place here?&lt;BR /&gt;Thanks.&lt;/P&gt;</description>
      <pubDate>Wed, 25 Feb 2026 17:21:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-factory/web-activity-failure-due-to-invoking-endpoint-failed-with/m-p/4497130#M957</guid>
      <dc:creator>AzureNewbie1</dc:creator>
      <dc:date>2026-02-25T17:21:42Z</dc:date>
    </item>
    <item>
      <title>Nasdaq builds thoughtfully designed AI for board governance with PostgreSQL on Azure</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/nasdaq-builds-thoughtfully-designed-ai-for-board-governance-with/ba-p/4493078</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Authored by:&lt;STRONG&gt; &lt;/STRONG&gt;Charles Federssen, Partner Director of Product Management for PostgreSQL at Microsoft and Mohsin Shafqat, Senior Manager, Software Engineering at Nasdaq&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When people think of Nasdaq, they usually think of markets, trading floors, and financial data moving at extraordinary speed. But behind the scenes, Nasdaq also plays an equally critical role in how boards of directors govern, deliberate, and make decisions.&lt;/P&gt;
&lt;P&gt;Nasdaq Boardvantage® is the company’s governance platform, used by more than 4,400 organizations worldwide—including nearly half of the Fortune 100. It’s where directors review board books, collaborate in an environment designed with robust security, and prepare for meetings that often involve some of the most sensitive information a company has.&lt;/P&gt;
&lt;P&gt;In recent years, Nasdaq set out to modernize Nasdaq Boardvantage with AI, without compromising security and reliability. That journey was featured in a Microsoft Ignite session, “&lt;A class="lia-external-url" href="https://www.youtube.com/watch?v=BkOcPQntsk4" target="_blank" rel="noopener"&gt;Nasdaq Boardvantage: AI-Driven Governance on PostgreSQL and Foundry&lt;/A&gt;.” It offers a practical look at how Azure Database for PostgreSQL can support AI-driven applications where precision, isolation, and data control are non-negotiable.&lt;/P&gt;
&lt;H2&gt;Introducing AI where trust is everything&lt;/H2&gt;
&lt;P&gt;Board governance isn’t a typical productivity workload. Board packets can run 400 to 600 pages, meeting minutes are legal records, and any AI-generated insight must be confined to a customer’s own data.&lt;/P&gt;
&lt;P&gt;“Our customers trust us with some of their most strategic, sensitive data,” said Mohsin Shafqat, Senior Manager of Software Development at Nasdaq. That trust meant tackling several core challenges upfront, including:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;How do you minimize AI hallucinations in a governance context?&lt;/LI&gt;
&lt;LI&gt;How do you guarantee tenant isolation at scale?&lt;/LI&gt;
&lt;LI&gt;How do you keep data regional across a global customer base?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;A cloud foundation built for governance&lt;/H2&gt;
&lt;P&gt;Before adding intelligence, Nasdaq decided to re-architect Nasdaq Boardvantage on Microsoft Azure, using Azure Kubernetes Service (AKS) to run containerized, multi-tenant workloads with strong isolation boundaries. Microsoft Foundry provides the managed foundation for deploying, governing, and operating AI models across this architecture, adding consistency, security, and control as intelligence is introduced.&lt;/P&gt;
&lt;P&gt;At the data layer, Azure Database for PostgreSQL and Azure Database for MySQL became the backbone for governance data. PostgreSQL, in particular, plays a central role in managing structured governance information alongside vector embeddings that support AI-driven features. Together, these services give Nasdaq the performance, security, and operational control required for a highly regulated, multi-tenant environment, while still moving quickly.&lt;/P&gt;
&lt;P&gt;Key architectural choices included:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Tenant isolation by design, with separate databases and storage&lt;/LI&gt;
&lt;LI&gt;Regional deployments to align with data residency requirements&lt;/LI&gt;
&lt;LI&gt;High availability and managed operations, so teams could focus on product innovation instead of infrastructure maintenance&lt;/LI&gt;
&lt;/UL&gt;
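&lt;P&gt;The "separate databases" choice is typically enforced in the data-access layer as well. The toy Python sketch below (all tenant names and connection strings are invented) shows the fail-closed routing such a design implies; it illustrates the pattern, not Nasdaq's actual implementation:&lt;/P&gt;

```python
# Toy illustration of database-per-tenant routing; every name here is invented.
TENANT_DATABASES = {
    "acme": "postgresql://pg-eu-1/acme_governance",
    "globex": "postgresql://pg-us-2/globex_governance",
}

def connection_string_for(tenant_id):
    """Fail closed: an unknown tenant gets an error, never a shared default database."""
    try:
        return TENANT_DATABASES[tenant_id]
    except KeyError:
        raise PermissionError("unknown tenant: " + tenant_id)
```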
&lt;H2&gt;PostgreSQL and pgvector: Powering context-aware AI&lt;/H2&gt;
&lt;P&gt;With that foundation in place, Nasdaq was ready to carefully introduce AI. One of the first AI capabilities was intelligent document summarization. Board materials that once took hours to review could now be condensed into concise, contextually accurate summaries.&lt;/P&gt;
&lt;P&gt;Under the hood, this required more than just calling an LLM. Nasdaq uses pgvector, natively supported in Azure Database for PostgreSQL, to store and query embeddings generated from board documents. This allows the platform to perform hybrid searches that combine traditional SQL queries with vector similarity to retrieve the most relevant context before sending anything to a language model.&lt;/P&gt;
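&lt;P&gt;The hybrid-retrieval idea can be sketched in a few lines: apply an ordinary relational predicate first (for example, tenant scoping), then rank the surviving rows by vector similarity. This is an illustrative in-memory sketch with invented data; in Azure Database for PostgreSQL the same pattern is a single SQL query that combines WHERE clauses with pgvector's distance operators.&lt;/P&gt;

```python
# In-memory sketch of hybrid retrieval: relational filter, then vector
# ranking. In production this is one SQL query against pgvector; the
# rows and embeddings here are toy values for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def hybrid_search(rows, tenant_id, query_embedding, top_k=2):
    """Filter by tenant first, then rank by similarity to the query."""
    scoped = [r for r in rows if r["tenant_id"] == tenant_id]
    ranked = sorted(
        scoped,
        key=lambda r: cosine_similarity(r["embedding"], query_embedding),
        reverse=True,
    )
    return [r["doc"] for r in ranked[:top_k]]
```

&lt;P&gt;Filtering before ranking matters: it keeps retrieval scoped to the caller's data, so only context that tenant is allowed to see can ever reach the language model.&lt;/P&gt;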
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Instead of treating AI as a black box, the team built a pipeline where:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Documents are processed with Azure Document Intelligence to preserve structure and meaning&lt;/LI&gt;
&lt;LI&gt;Content is chunked and embedded&lt;/LI&gt;
&lt;LI&gt;Embeddings are stored in PostgreSQL with pgvector&lt;/LI&gt;
&lt;LI&gt;Vector similarity searches retrieve precise context for each AI task&lt;/LI&gt;
&lt;/UL&gt;
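&lt;P&gt;The "chunked and embedded" step above can be sketched as fixed-size chunks with overlap, so that context spanning a chunk boundary appears in two chunks and remains retrievable. The sizes below are toy values chosen for illustration; a real pipeline would tune them, embed each chunk, and write the vectors to pgvector.&lt;/P&gt;

```python
# Minimal sketch of the chunking step: fixed-size word windows with
# overlap. Overlap means text near a boundary lands in two chunks, so a
# vector search can still surface it. Sizes here are illustrative only.

def chunk_words(text, chunk_size=50, overlap=10):
    words = text.split()
    step = chunk_size - overlap  # advance by less than a full chunk
    chunks = []
    for start in range(0, max(len(words), 1), step):
        piece = words[start:start + chunk_size]
        if piece:
            chunks.append(" ".join(piece))
    return chunks
```

&lt;P&gt;Each chunk would then be passed to an embedding model, and the resulting vector stored alongside the chunk text and its document metadata in a pgvector column.&lt;/P&gt;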
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Because this runs inside PostgreSQL, the same database benefits from Azure’s built-in high availability, security controls, and operational tooling. The results are tangible: a 25% reduction in overall board preparation time, and internal testing shows 91–97% accuracy for AI-generated summaries and meeting minutes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;From summaries to an AI Board Assistant&lt;/H2&gt;
&lt;P&gt;With summarization working in production, Nasdaq expanded further. The team is now building an AI-powered Board Assistant that will help directors prepare for upcoming meetings by surfacing trends, risks, and insights from prior discussions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This introduces a new level of scale. Years of board data across thousands of customers translate into millions of embeddings. PostgreSQL continues to anchor this architecture, storing vectors for semantic retrieval while MySQL supports complementary non-vector workloads. Across Nasdaq Boardvantage, users are advised to always review AI outputs, and no customer data is shared or used to train external models. “We designed AI for governance, not the other way around,” Shafqat said.&lt;/P&gt;
&lt;P&gt;More importantly, customers trust the system because security, isolation, and data control were engineered in from day one.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Looking ahead&lt;/H2&gt;
&lt;P&gt;Nasdaq’s work shows how Azure Database for PostgreSQL can support AI workloads that demand both intelligence and integrity. With PostgreSQL at the core, Nasdaq has built a governance platform that scales globally, respects regulatory boundaries, and introduces AI in a way that feels dependable rather than experimental.&lt;/P&gt;
&lt;P&gt;What started as a modernization of Nasdaq Boardvantage is now influencing how Nasdaq approaches AI across the enterprise.&lt;/P&gt;
&lt;P&gt;To dive deeper into the architecture and hear directly from the engineers behind it, watch the Ignite session and check out these resources:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://www.youtube.com/watch?v=BkOcPQntsk4" target="_blank" rel="noopener"&gt;Watch the Ignite breakout session&lt;/A&gt; for a technical walkthrough of how Nasdaq Boardvantage is built, including PostgreSQL on Azure, pgvector, and Microsoft Foundry in production.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.microsoft.com/en/customers/story/25682-nasdaq-azure" target="_blank" rel="noopener"&gt;Read the case study&lt;/A&gt; to see how Nasdaq introduced AI into board governance and what changed for directors, administrators, and decision-making.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://ignite.microsoft.com/en-US/sessions/STUDIO34?source=sessions" target="_blank" rel="noopener"&gt;Watch the Ignite broadcast&lt;/A&gt; for a candid discussion on Azure Database for PostgreSQL, Azure HorizonDB, and what it takes to scale AI-driven governance.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 25 Feb 2026 05:34:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/nasdaq-builds-thoughtfully-designed-ai-for-board-governance-with/ba-p/4493078</guid>
      <dc:creator>charlesfeddersenMS</dc:creator>
      <dc:date>2026-02-25T05:34:02Z</dc:date>
    </item>
  </channel>
</rss>

