<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>rss.livelink.threads-in-node</title>
    <link>https://techcommunity.microsoft.com/t5/azure-data/ct-p/AzureDatabases</link>
    <description>rss.livelink.threads-in-node</description>
    <pubDate>Thu, 16 Apr 2026 20:16:23 GMT</pubDate>
    <dc:creator>AzureDatabases</dc:creator>
    <dc:date>2026-04-16T20:16:23Z</dc:date>
    <item>
      <title>Understanding action_id discrepancies in Azure SQL Database Audit Logs (BCM vs AL / CR / DR)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-action-id-discrepancies-in-azure-sql-database/ba-p/4509932</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Overview&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;While working with Azure SQL Database auditing in enterprise environments, you may encounter an inconsistency in how the action_id field is captured across different PaaS SQL servers.&lt;/P&gt;
&lt;P&gt;In one such scenario, a customer observed:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;PaaS Server&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;action_id observed for similar DDL statements&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Server A&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;AL, CR, DR&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Server B&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BCM only&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;This inconsistency impacted downstream compliance pipelines, as the audit data was expected to be captured and interpreted uniformly across all servers.&lt;/P&gt;
&lt;P&gt;This article explains:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;How Azure SQL Auditing works by default&lt;/LI&gt;
&lt;LI&gt;What causes BCM to appear instead of AL/CR/DR&lt;/LI&gt;
&lt;LI&gt;How to standardize audit logs across PaaS servers&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;How Azure SQL Database Auditing Works&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure SQL Database auditing uses a managed and fixed audit policy at the service level.&lt;/P&gt;
&lt;P&gt;When auditing is enabled at the server level, the default auditing policy includes the following action groups:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;BATCH_COMPLETED_GROUP&lt;/LI&gt;
&lt;LI&gt;SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP&lt;/LI&gt;
&lt;LI&gt;FAILED_DATABASE_AUTHENTICATION_GROUP&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These groups audit:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;All query execution activity&lt;/LI&gt;
&lt;LI&gt;Successful authentication attempts&lt;/LI&gt;
&lt;LI&gt;Failed authentication attempts&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As a result, SQL batches, including DDL statements such as CREATE, ALTER, or DROP on database objects, are captured under BATCH_COMPLETED_GROUP and appear with action_id = BCM.&lt;/P&gt;
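&lt;P&gt;If your audit logs are written to Blob Storage, you can confirm this behavior with a quick query against &lt;EM&gt;sys.fn_get_audit_file&lt;/EM&gt; (a minimal sketch; the storage path is a placeholder for your audit log container):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Placeholder path: point this at your server's audit log container
SELECT event_time, action_id, statement
FROM sys.fn_get_audit_file(
    'https://&amp;lt;storage-account&amp;gt;.blob.core.windows.net/sqldbauditlogs/', DEFAULT, DEFAULT)
WHERE action_id = 'BCM'
ORDER BY event_time DESC;&lt;/LI-CODE&gt;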
&lt;P&gt;&lt;STRONG&gt;Why AL, CR, and DR Are Not Captured by Default&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Audit action IDs such as AL, CR and DR are considered Security / DDL-level audit events.&lt;/P&gt;
&lt;P&gt;These events are not included in the default Azure SQL auditing policy.&lt;/P&gt;
&lt;P&gt;Instead, they are generated only when the corresponding Security-related AuditActionGroups are explicitly enabled.&lt;/P&gt;
&lt;P&gt;For example:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AuditActionGroup&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Captures&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;DATABASE_OBJECT_CHANGE_GROUP&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;CREATE / ALTER / DROP on database objects&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;DATABASE_PRINCIPAL_CHANGE_GROUP&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;User / role changes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;DATABASE_ROLE_MEMBER_CHANGE_GROUP&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Role membership updates&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;DDL operations such as CREATE / ALTER / DROP on database objects are captured under action groups like DATABASE_OBJECT_CHANGE_GROUP.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Observed Behavior in a Newly Created Test Server&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Running the following PowerShell command on a newly provisioned logical server showed only the default audit action groups enabled.&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;(Get-AzSqlServerAudit -ResourceGroupName "RGName" -ServerName "ServerName").AuditActionGroup&lt;/LI-CODE&gt;
&lt;P&gt;Therefore, DDL statements were audited but recorded with action_id = BCM.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Enabling AL / CR / DR Action IDs&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To capture DDL operations under their respective audit action IDs, configure the required security audit action groups at the SQL Server level.&lt;/P&gt;
&lt;P&gt;For example, in this customer scenario, the following command was executed:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-AzSqlServerAudit 
 -ResourceGroupName "RGName" 
 -ServerName "ServerName" 
 -AuditActionGroup 
 "DATABASE_PRINCIPAL_CHANGE_GROUP",
 "DATABASE_ROLE_MEMBER_CHANGE_GROUP",
 "DATABASE_OBJECT_CHANGE_GROUP"&lt;/LI-CODE&gt;
&lt;P&gt;After applying this configuration:&lt;/P&gt;
&lt;P&gt;DDL operations were captured in the audit logs with action_id = CR, AL, and DR instead of BCM.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Ensuring Consistent Compliance Across PaaS Servers&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To standardize audit logging behavior across environments:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1: Compare AuditActionGroups&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Run the following command on all servers:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;(Get-AzSqlServerAudit -ResourceGroupName "&amp;lt;RG&amp;gt;" -ServerName "&amp;lt;ServerName&amp;gt;").AuditActionGroup&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2: Align AuditActionGroups&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Configure all servers with the same AuditActionGroup values. In this case, the following values were used:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-AzSqlServerAudit 
 -ResourceGroupName "&amp;lt;RG&amp;gt;" 
 -ServerName "&amp;lt;ServerName&amp;gt;" 
 -AuditActionGroup `
 "DATABASE_PRINCIPAL_CHANGE_GROUP",
 "DATABASE_ROLE_MEMBER_CHANGE_GROUP",
 "DATABASE_OBJECT_CHANGE_GROUP"&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3: Validate&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once aligned, similar SQL statements across all PaaS servers should now generate consistent action_id values in audit logs.&lt;/P&gt;
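&lt;P&gt;One way to validate is to run the same DDL statement (for example, creating and dropping a test table) on each server, then check the recorded action_id values. For example (the storage path is a placeholder):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Placeholder path: point this at each server's audit log container
SELECT event_time, server_principal_name, action_id, statement
FROM sys.fn_get_audit_file(
    'https://&amp;lt;storage-account&amp;gt;.blob.core.windows.net/sqldbauditlogs/', DEFAULT, DEFAULT)
WHERE action_id IN ('CR', 'AL', 'DR')
ORDER BY event_time DESC;&lt;/LI-CODE&gt;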
&lt;P&gt;&lt;STRONG&gt;See the accepted values for &lt;A href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.commands.sql.auditing.model.auditactiongroups" target="_blank" rel="noopener"&gt;AuditActionGroups&lt;/A&gt;, and ensure the appropriate groups are enabled based on your organization’s compliance needs.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Accepted values:&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BATCH_STARTED_GROUP, BATCH_COMPLETED_GROUP, APPLICATION_ROLE_CHANGE_PASSWORD_GROUP, BACKUP_RESTORE_GROUP, DATABASE_LOGOUT_GROUP, DATABASE_OBJECT_CHANGE_GROUP, DATABASE_OBJECT_OWNERSHIP_CHANGE_GROUP, DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, DATABASE_OPERATION_GROUP, DATABASE_PERMISSION_CHANGE_GROUP, DATABASE_PRINCIPAL_CHANGE_GROUP, DATABASE_PRINCIPAL_IMPERSONATION_GROUP, DATABASE_ROLE_MEMBER_CHANGE_GROUP, FAILED_DATABASE_AUTHENTICATION_GROUP, SCHEMA_OBJECT_ACCESS_GROUP, SCHEMA_OBJECT_CHANGE_GROUP, SCHEMA_OBJECT_OWNERSHIP_CHANGE_GROUP, SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP, SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP, USER_CHANGE_PASSWORD_GROUP, LEDGER_OPERATION_GROUP, DBCC_GROUP, DATABASE_OWNERSHIP_CHANGE_GROUP, DATABASE_CHANGE_GROUP&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Links:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/powershell/module/az.sql/get-azsqlserveraudit?view=azps-15.4.0" target="_blank" rel="noopener"&gt;Get-AzSqlServerAudit (Az.Sql) | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Furldefense.com%2Fv3%2F__https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fpowershell%2Fmodule%2Faz.sql%2Fset-azsqlserveraudit%3Fview%3Dazps-15.4.0__%3B!!Nyu6ZXf5!v9WEcJe8GgZGJM30tzome-bmFQE-ncl9S0bmX2T264tPWUaZfMA-3MKTVCRyEKkCO5c2v7ldTdpKoyceg0A%24&amp;amp;data=05%7C02%7Csuagarwal%40microsoft.com%7C92497c36fc774c3a8f9c08de93b5231b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C639110603427328075%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=vCQkjF8z2J1AZZBn7tM6t4eL1HeqGinFdAoAK0XhMCI%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Set-AzSqlServerAudit (Az.Sql) | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 16 Apr 2026 11:41:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-action-id-discrepancies-in-azure-sql-database/ba-p/4509932</guid>
      <dc:creator>Sunaina_Agarwal</dc:creator>
      <dc:date>2026-04-16T11:41:59Z</dc:date>
    </item>
    <item>
      <title>March 2026 Recap: Azure Database for PostgreSQL</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/march-2026-recap-azure-database-for-postgresql/ba-p/4511432</link>
      <description>&lt;P&gt;Hello Azure community,&lt;/P&gt;
&lt;P&gt;March was packed with major feature announcements for &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/" target="_blank" rel="noopener"&gt;Azure Database for PostgreSQL&lt;/A&gt;. From the general availability of SSDv2 and cascading read replicas to online migration improvements and new monitoring capabilities for logical replication slots, this update brings a range of improvements to performance, scale, and reliability.&lt;/P&gt;
&lt;H1&gt;Features&lt;/H1&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-ssdv2" target="_self" rel="noopener"&gt;SSDv2 - Generally Available&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-replica" target="_self" rel="noopener"&gt;Cascading Read replica - Generally Available&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-migrate" target="_self" rel="noopener"&gt;Online migration using PgOutput plugin - Generally Available&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-alloydb" target="_self" rel="noopener"&gt;Google AlloyDB as a migration source - Generally Available&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-edb" target="_self" rel="noopener"&gt;EDB Extended Server as a migration source - Generally Available&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-repslot" target="_self" rel="noopener"&gt;Logical replication slot synchronization metrics - Preview&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-defender" target="_self" rel="noopener"&gt;Defender Security Assessments - Preview&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-vscode" target="_self" rel="noopener"&gt;New enhancements in the PostgreSQL VS Code Extension&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-minorversion" target="_self" rel="noopener"&gt;Latest PostgreSQL minor versions: 18.3, 17.9, 16.13, 15.17, 14.22&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-extension" target="_self" rel="noopener"&gt;New extension support for PostgreSQL 18 on Azure Database for PostgreSQL&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="#community--1-guide" target="_self" rel="noopener"&gt;Guide on PostgreSQL Buffer Cache Analysis, query rewriting and elastic clusters&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2 id="ssdv2"&gt;SSDv2 - Generally Available&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Premium SSD v2 is now generally available for Azure Database for PostgreSQL Flexible Server&lt;/STRONG&gt;, delivering significant performance and cost-efficiency improvements for I/O‑intensive workloads. It offers up to &lt;STRONG&gt;4× higher IOPS&lt;/STRONG&gt;, lower latency, and improved price‑performance.&lt;/P&gt;
&lt;P&gt;With &lt;STRONG&gt;independent scaling of storage and performance&lt;/STRONG&gt;, you only pay for what you need. Premium SSD v2 supports storage scaling up to &lt;STRONG&gt;64&lt;/STRONG&gt;&lt;STRONG&gt; TiB&lt;/STRONG&gt;, with performance reaching &lt;STRONG&gt;80,000 IOPS&lt;/STRONG&gt; and &lt;STRONG&gt;1,200&lt;/STRONG&gt;&lt;STRONG&gt; MiB/s throughput&lt;/STRONG&gt;, without tying performance to disk size. IOPS and throughput can be adjusted instantly, with no downtime.&lt;/P&gt;
&lt;P&gt;Additionally, built‑in baseline performance at no additional cost ensures consistent performance even for smaller deployments, making Premium SSD v2 a strong choice for modern, high‑demand PostgreSQL applications.&lt;/P&gt;
&lt;P&gt;For details about the Premium SSD v2 release, see the &lt;A href="https://techcommunity.microsoft.com/blog/adforpostgresql/premium-ssd-v2-is-now-generally-available-for-azure-database-for-postgresql/4508445" target="_blank" rel="noopener"&gt;GA Announcement Blog&lt;/A&gt; and the &lt;A href="https://learn.microsoft.com/azure/postgresql/compute-storage/concepts-storage-premium-ssd-v2" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H2 id="replica"&gt;Cascading read replica - Generally available&lt;/H2&gt;
&lt;P&gt;Cascading read replicas are now generally available, giving customers greater flexibility to create read replicas from &lt;STRONG&gt;existing read replicas&lt;/STRONG&gt;. This capability supports up to &lt;STRONG&gt;two levels of replication&lt;/STRONG&gt; and up to &lt;STRONG&gt;30 read replicas&lt;/STRONG&gt; in total, with each read replica able to host up to &lt;STRONG&gt;five cascading replicas&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;With cascading read replicas, you can more effectively distribute read traffic across multiple replicas, deploy regional or hierarchical read replicas closer to end users, reduce read latency, and improve overall query performance for read‑heavy workloads. In addition, we’ve rolled out switchover support for both intermediate and cascading read replicas, making it easier to manage replica topologies. Learn more about cascading read replicas through&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/read-replica/concepts-read-replicas#create-cascading-read-replicas-preview" target="_blank" rel="noopener"&gt;our documentation&lt;/A&gt; and a &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/adforpostgresql/cascading-read-replicas-now-generally-available/4510610" target="_blank" rel="noopener" data-lia-auto-title="detailed blog walkthrough." data-lia-auto-title-active="0"&gt;detailed blog walkthrough.&lt;/A&gt;&lt;/P&gt;
&lt;H2 id="migrate"&gt;Online migration using PgOutput plugin - Generally Available&lt;/H2&gt;
&lt;P&gt;The new addition of the &lt;STRONG&gt;PgOutput plugin&lt;/STRONG&gt; helps make your online migration to Azure more &lt;STRONG&gt;robust and seamless&lt;/STRONG&gt;. The native, out-of-the-box support that PgOutput offers makes it better suited for online production migrations than other logical decoding plugins. PgOutput delivers &lt;STRONG&gt;higher throughput&lt;/STRONG&gt; and &lt;STRONG&gt;superior performance&lt;/STRONG&gt; compared to other logical decoding plugins, ensuring your online migration has minimal downtime. PgOutput also offers fine-grained filtering using publications, letting you migrate specific tables and filter by specific operations.&lt;/P&gt;
&lt;P&gt;For more details about this update, see the &lt;A href="https://learn.microsoft.com/azure/postgresql/migrate/migration-service/concepts-required-user-permissions#online-migration-using-pgoutput---required-publication-permissions" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H2 id="alloydb"&gt;Google AlloyDB as a migration source - Generally Available&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Google AlloyDB&lt;/STRONG&gt; is now supported as a source in Azure Database for PostgreSQL Migration Service. You can use this capability to migrate your AlloyDB workloads directly to Azure Database for PostgreSQL, using either&lt;STRONG&gt; offline&lt;/STRONG&gt; or &lt;STRONG&gt;online &lt;/STRONG&gt;migration options. This support helps you move your PostgreSQL databases to Azure with confidence, while taking advantage of Azure’s flexibility and scalability.&lt;/P&gt;
&lt;P&gt;To know more about this feature, visit our &lt;A href="https://learn.microsoft.com/azure/postgresql/migrate/migration-service/tutorial-migration-service-alloy-db-offline?tabs=portal" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H2 id="edb"&gt;EDB Extended Server as a migration source - Generally Available&lt;/H2&gt;
&lt;P&gt;Azure Database for PostgreSQL Migration Service now supports &lt;STRONG&gt;EDB&lt;/STRONG&gt; Extended Server as a migration source. This enables you to migrate EDB Extended Server workloads to Azure Database for PostgreSQL using both offline and online migration methods. With this addition, you can transition PostgreSQL databases to Azure smoothly and benefit from the scale and flexibility of the Azure platform.&lt;/P&gt;
&lt;P&gt;For more details about this update, see the &lt;A href="https://learn.microsoft.com/azure/postgresql/migrate/migration-service/tutorial-migration-service-enterprise-db-extended-server-offline?tabs=portal" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H2 id="repslot"&gt;Logical replication slot sync status metric - Preview&lt;/H2&gt;
&lt;P&gt;You can now monitor whether your logical replication slots are &lt;STRONG&gt;failover‑ready&lt;/STRONG&gt; using the new &lt;STRONG&gt;logical_replication_slot_sync_status&lt;/STRONG&gt; metric, now in preview. This metric provides a simple binary signal indicating whether logical replication slots are &lt;STRONG&gt;synchronized&lt;/STRONG&gt; across high availability (HA) &lt;STRONG&gt;primary and standby nodes&lt;/STRONG&gt;. It helps you quickly assess failover readiness without digging into replication internals, which is especially valuable for CDC pipelines such as Debezium and Kafka, where data continuity during failover is critical.&lt;/P&gt;
&lt;P&gt;Learn more about &lt;A class="lia-external-url" href="https://aka.ms/pg-flex-replication-metrics" target="_blank" rel="noopener"&gt;logical replication metrics in the documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H2 id="defender"&gt;Defender Security Assessments - Preview&lt;/H2&gt;
&lt;P&gt;In March, we introduced two new Microsoft Defender for Cloud CSPM security recommendations for Azure Database for PostgreSQL Flexible Server, now available in &lt;A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/release-notes-recommendations-alerts#recommendations-alerts-and-incidents-updates" target="_blank" rel="noopener"&gt;public preview&lt;/A&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Geo-redundant backups should be enabled&lt;/STRONG&gt; for PostgreSQL Servers&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;require_secure_transport should be set to "on"&lt;/STRONG&gt; for PostgreSQL Servers&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These integrated assessments continuously evaluate database configuration settings against security best practices, helping customers proactively identify and manage security posture risks for their Azure PostgreSQL servers while maintaining alignment with internal and industry standards.&lt;/P&gt;
&lt;P&gt;Additional security posture assessments for Azure PostgreSQL will be introduced as they become available.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To learn more, refer to the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/recommendations-reference-data#geo-redundant-backups-should-be-enabled-for-postgresql-servers" target="_blank" rel="noopener"&gt;reference table for all data security recommendations in Microsoft Defender for Cloud.&lt;/A&gt;&lt;/P&gt;
&lt;H2 id="vscode"&gt;New enhancements in the PostgreSQL VS Code Extension&lt;/H2&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://github.com/microsoft/vscode-pgsql/blob/main/CHANGELOG.md" target="_blank" rel="noopener"&gt;The March release (v1.20)&lt;/A&gt; of the &lt;A class="lia-external-url" href="https://marketplace.visualstudio.com/items?itemName=ms-ossdata.vscode-pgsql" target="_blank" rel="noopener"&gt;PostgreSQL VS Code extension&lt;/A&gt; delivers new server management capabilities, enhanced query plan analysis, visual improvements, and a batch of bug fixes.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Clone Server: &lt;/STRONG&gt;You can now clone an Azure PostgreSQL Flexible Server directly from within the extension. The clone operation is available from the server management UI, allowing you to duplicate a server configuration including region, SKU, and settings without leaving VS Code.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Entra ID Authentication for AI-Powered Schema Conversion: &lt;/STRONG&gt;The Oracle-to-PostgreSQL migration experience now supports Microsoft Entra ID authentication for Azure OpenAI connectivity, replacing API key–based authentication. This enables enterprise-grade identity management and access control for AI-powered schema conversion workflows.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Query Plan Visualization Improvements: &lt;/STRONG&gt;The Copilot-powered “Analyze with Copilot” feature for query plans has been improved with more relevant optimization recommendations and smoother SQL attachment handling during plan analysis.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Apache AGE Graph Visualizer Enhancements: &lt;/STRONG&gt;The graph visualizer received a visual refresh with modernized edge rendering, a color-coded legend, and a new properties pane for exploring element details.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Object Explorer Deep Refresh: &lt;/STRONG&gt;The Object Explorer now supports refreshing expanded nodes in place, so newly created tables and objects appear immediately without needing to disconnect and reconnect.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Settings Management: &lt;/STRONG&gt;The extension now supports both global user settings and local .vscode/settings.json, providing more robust connection settings management across configuration sources.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Bug Fixes: &lt;/STRONG&gt;This release includes numerous bug fixes across script generation (DDL for triggers, materialized views, and functions), IntelliSense (foreign table support), JSON data export, query execution, and server connectivity.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 id="minorversion"&gt;Latest PostgreSQL minor versions: 18.3, 17.9, 16.13, 15.17, 14.22&lt;/H2&gt;
&lt;P&gt;Azure PostgreSQL now supports the latest PostgreSQL minor versions:&amp;nbsp;&lt;STRONG&gt;18.3, 17.9, 16.13, 15.17, and 14.22&lt;/STRONG&gt;. These updates are applied automatically during planned maintenance windows, ensuring your databases stay up to date with critical fixes and reliability improvements, with no manual action required. This is an out-of-cycle release that addresses regressions identified in the previous update. The release includes fixes across replication, JSON functions, query correctness, indexing, and extensions like &lt;EM&gt;pg_trgm&lt;/EM&gt;, improving overall stability and correctness of database operations.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;For details about the minor release, see the &lt;A href="https://www.postgresql.org/about/news/postgresql-183-179-1613-1517-and-1422-released-3246/" target="_blank" rel="noopener"&gt;PostgreSQL announcement&lt;/A&gt;.&lt;/P&gt;
&lt;H2 id="extension"&gt;New extension support for PostgreSQL 18 on Azure Database for PostgreSQL&lt;/H2&gt;
&lt;P&gt;Azure Database for PostgreSQL running PostgreSQL 18 now supports extensions that enable&lt;STRONG&gt; graph querying,&lt;/STRONG&gt; &lt;STRONG&gt;in&lt;/STRONG&gt;‑&lt;STRONG&gt;database AI integration&lt;/STRONG&gt;, &lt;STRONG&gt;external storage access&lt;/STRONG&gt;, and &lt;STRONG&gt;scalable vector similarity search&lt;/STRONG&gt;, expanding the types of workloads that can be handled directly within PostgreSQL.&lt;/P&gt;
&lt;P&gt;Newly supported extensions include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;AGE (Apache AGE v1.7.0): &lt;/STRONG&gt;Adds native graph data modeling and querying capabilities to PostgreSQL using openCypher, enabling hybrid relational–graph workloads within the same database.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;azure_ai: &lt;/STRONG&gt;Enables direct invocation of Microsoft Foundry models from PostgreSQL using SQL, allowing AI inference and embedding generation to be integrated into database workflows.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;azure_storage: &lt;/STRONG&gt;Provides native integration with Azure Blob Storage, enabling PostgreSQL to read from and write to external storage for data ingestion, export, and hybrid data architectures.&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/extensions/how-to-use-pgdiskann" target="_blank" rel="noopener"&gt;pg_diskann&lt;/A&gt;&lt;STRONG&gt;: &lt;/STRONG&gt;Introduces disk‑based approximate nearest neighbor (ANN) indexing for high-performance vector similarity search at scale, optimized for large vector datasets with constrained memory.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Together, these extensions allow PostgreSQL on Azure to support multi-model, AI‑assisted, and data‑intensive workloads while preserving compatibility with the open‑source PostgreSQL ecosystem.&lt;/P&gt;
&lt;H2 id="guide"&gt;Guide on PostgreSQL buffer cache analysis, query rewriting&lt;/H2&gt;
&lt;P&gt;We have rolled out two new blogs: one on PostgreSQL buffer cache analysis and one on PostgreSQL query rewriting and subqueries. These blogs help you better understand how PostgreSQL behaves under the hood and how to apply practical performance optimizations, whether you’re diagnosing memory usage, reducing unnecessary disk I/O, or reshaping queries to get more efficient execution plans as your workloads scale.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;PostgreSQL Buffer Cache Analysis&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This blog focuses on &lt;A href="https://techcommunity.microsoft.com/blog/adforpostgresql/postgresql-buffer-cache-analysis/4501264" target="_blank" rel="noopener" data-lia-auto-title="understanding PostgreSQL memory behavior through shared_buffers" data-lia-auto-title-active="0"&gt;understanding PostgreSQL memory behavior through shared_buffers&lt;/A&gt;, the database’s primary buffer cache. Using native statistics and the pg_buffercache extension, it provides a data‑driven approach to evaluate cache efficiency, identify when critical tables and indexes are served from memory, and detect cases where disk I/O may be limiting performance. The guide offers a repeatable methodology to support informed tuning decisions as workloads scale.&lt;/P&gt;
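&lt;P&gt;To give a flavor of the approach, the following query uses pg_buffercache to list the relations occupying the most shared buffers (a minimal sketch; results depend entirely on your workload):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top 10 relations in the current database by cached 8 KB buffers
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached_size
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;&lt;/LI-CODE&gt;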
&lt;P&gt;&lt;STRONG&gt;PostgreSQL Query Rewriting and Subqueries&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This blog explores &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/adforpostgresql/postgresql-query-rewriting-and-subqueries/4499819" target="_blank" rel="noopener" data-lia-auto-title="how query structure directly impacts PostgreSQL execution plans and performance" data-lia-auto-title-active="0"&gt;how query structure directly impacts PostgreSQL execution plans and performance&lt;/A&gt;. It walks through common anti‑patterns and practical rewrites such as replacing correlated subqueries with set‑based joins, using semi‑joins, and pre‑aggregating large tables to reduce unnecessary work and enable more efficient execution paths. Each scenario includes clear explanations, example rewrites, and self‑contained test scripts you can run.&lt;/P&gt;
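&lt;P&gt;As a small illustration of the pattern (using hypothetical customers and orders tables), a per-row correlated subquery can often be rewritten as a single set-based join:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Correlated subquery: the inner count runs once per customer row
SELECT c.customer_id,
       (SELECT count(*) FROM orders o
        WHERE o.customer_id = c.customer_id) AS order_count
FROM customers c;

-- Set-based rewrite: aggregate once, then join
SELECT c.customer_id, coalesce(o.order_count, 0) AS order_count
FROM customers c
LEFT JOIN (
    SELECT customer_id, count(*) AS order_count
    FROM orders
    GROUP BY customer_id
) o ON o.customer_id = c.customer_id;&lt;/LI-CODE&gt;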
&lt;H1&gt;Azure Postgres Learning Bytes 🎓&lt;/H1&gt;
&lt;H4&gt;How to create and store vector embeddings in Azure Database for PostgreSQL&lt;/H4&gt;
&lt;P&gt;Vector embeddings sit at the core of many modern AI applications, from semantic search and recommendations to RAG‑based experiences. But an important question follows: how do you generate and store them in your existing database server?&lt;/P&gt;
&lt;P&gt;With Azure Database for PostgreSQL, you can generate and store vector embeddings directly alongside your application data. By using the &lt;EM&gt;azure_ai&lt;/EM&gt; extension, PostgreSQL can seamlessly integrate with Azure OpenAI to create embeddings and store them in your database. This learning byte walks you through generating and storing vector embeddings in Azure Database for PostgreSQL, step by step.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1: Enable the Azure AI extension&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Database for PostgreSQL supports the &lt;STRONG&gt;azure_ai&lt;/STRONG&gt; extension, which allows you to call the &lt;STRONG&gt;Azure OpenAI&lt;/STRONG&gt; service.&lt;/P&gt;
&lt;P&gt;Connect to your database and run:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE EXTENSION IF NOT EXISTS azure_ai;&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2: Create (or use an existing) Azure OpenAI resource&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;You need an &lt;STRONG&gt;Azure OpenAI&lt;/STRONG&gt; resource in your subscription with an embedding model deployed.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;In the Azure portal, create an &lt;STRONG&gt;Azure OpenAI&lt;/STRONG&gt; resource.&lt;/LI&gt;
&lt;LI&gt;Deploy an embedding model (for example, text-embedding-3-small).&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Azure OpenAI provides the &lt;STRONG&gt;endpoint URL&lt;/STRONG&gt; and &lt;STRONG&gt;API key&lt;/STRONG&gt; you'll need in the next steps.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3: Get endpoint and API key&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Go to your &lt;STRONG&gt;Azure OpenAI resource&lt;/STRONG&gt; in the Azure portal.&lt;/LI&gt;
&lt;LI&gt;Select &lt;STRONG&gt;Keys and Endpoint&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Copy:
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Endpoint&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;API Key (Key 1 or Key 2)&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4: Configure Azure AI extension with OpenAI details&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Store the endpoint and key securely inside PostgreSQL:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT 
  azure_ai.set_setting(
    'azure_openai.endpoint', 'https://&amp;lt;your-endpoint&amp;gt;.openai.azure.com'
  );
SELECT 
  azure_ai.set_setting(
    'azure_openai.subscription_key', 
    '&amp;lt;your-api-key&amp;gt;'
  );
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5: Generate an embedding&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT 
  LEFT(
    azure_openai.create_embeddings(
      'text-embedding-3-small', 'Sample text for PostgreSQL Lab'
    ):: text, 
    100
  ) AS vector_preview;
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 6: Add a vector column&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Add a vector column to store embeddings (the example uses 1536‑dimensional vectors, matching text-embedding-3-small). The &lt;EM&gt;VECTOR&lt;/EM&gt; type comes from the &lt;EM&gt;vector&lt;/EM&gt; (pgvector) extension, so enable it first if you haven't already:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- the VECTOR type is provided by the pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

ALTER TABLE &amp;lt;table-name&amp;gt;
ADD COLUMN embedding VECTOR(1536);&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 7: Store the embedding&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Update your table with the generated embedding:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;UPDATE 
  &amp;lt; table - name &amp;gt; 
SET 
  embedding = azure_openai.create_embeddings(
    'text-embedding-3-small', content
  );
&lt;/LI-CODE&gt;
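&lt;P&gt;With the embeddings stored, you can retrieve similar rows using pgvector's cosine‑distance operator (for example, &lt;EM&gt;ORDER BY embedding &amp;lt;=&amp;gt; query_vector LIMIT 5&lt;/EM&gt;). The sketch below mirrors that ranking in plain Python: the row names and toy 3‑dimensional vectors are invented for illustration (real text-embedding-3-small vectors have 1536 dimensions), but the ordering logic is the same one the operator applies.&lt;/P&gt;

```python
import math

def cosine_distance(a, b):
    # pgvector's cosine-distance operator computes 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for 1536-dimensional ones
rows = {
    "postgres tuning guide": [0.9, 0.1, 0.0],
    "vector search primer":  [0.8, 0.2, 0.1],
    "cooking with basil":    [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]

# Equivalent of ordering rows by distance to the query vector and taking the top 2
nearest = sorted(rows, key=lambda name: cosine_distance(rows[name], query))[:2]
```

&lt;P&gt;The two database-related rows rank closest to the database-flavored query vector, while the unrelated row falls to the bottom, which is exactly the behavior a similarity query over your new column gives you.&lt;/P&gt;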
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;That’s a wrap for our March 2026 recap. This month brought a set of meaningful updates focused on making Azure Database for PostgreSQL more performant, reliable, and scalable, whether you’re modernizing workloads, scaling globally, or strengthening your security posture.&lt;/P&gt;
&lt;P&gt;We’ll be back soon with more exciting announcements and key feature enhancements for Azure Database for PostgreSQL, so stay tuned! Your feedback is important to us. Have suggestions, ideas, or questions? We’d love to hear from you:&amp;nbsp;&lt;A href="https://aka.ms/pgfeedback" target="_blank" rel="noopener"&gt;https://aka.ms/pgfeedback&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Follow us here for the latest announcements, feature releases, and best practices:&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/category/azuredatabases/blog/adforpostgresql" target="_blank" rel="noopener" data-lia-auto-title="Microsoft Blog for PostgreSQL" data-lia-auto-title-active="0"&gt;Microsoft Blog for PostgreSQL&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 18:58:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/march-2026-recap-azure-database-for-postgresql/ba-p/4511432</guid>
      <dc:creator>gauri-kasar</dc:creator>
      <dc:date>2026-04-15T18:58:45Z</dc:date>
    </item>
    <item>
      <title>Tutorial: Building AI Agents That Talk to Your Azure Database for MySQL</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/tutorial-building-ai-agents-that-talk-to-your-azure-database-for/ba-p/4504995</link>
      <description>&lt;P&gt;What if you could ask your database a question in plain English and get the answer instantly, without writing a single line of SQL?&lt;/P&gt;
&lt;P&gt;In this tutorial, you'll build a Python-based AI agent that connects to an Azure Database for MySQL server and uses OpenAI's function calling to translate natural language questions into SQL queries, execute them, and return human-readable answers. The agent can explore your schema, answer business questions, and even self-correct when it writes invalid SQL.&lt;/P&gt;
&lt;P&gt;What you'll build:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;An Azure Database for MySQL server with sample data&lt;/LI&gt;
&lt;LI&gt;A Python AI agent with three &lt;STRONG&gt;tools&lt;/STRONG&gt;: &lt;EM&gt;list_tables&lt;/EM&gt;, &lt;EM&gt;describe_table&lt;/EM&gt;, and &lt;EM&gt;run_sql_query&lt;/EM&gt;
&lt;UL&gt;
&lt;LI&gt;In the context of AI agents, &lt;STRONG&gt;tools&lt;/STRONG&gt; are functions the agent can call to interact with external systems: querying a database, fetching a file, or calling an API. Here, our agent has three tools that let it explore and query your MySQL database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;An interactive chat interface where you ask questions and the agent auto-generates and runs SQL&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;&lt;STRONG&gt;Prerequisites&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Before you begin, make sure you have:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;An Azure account — &lt;A href="https://azure.microsoft.com/free" target="_blank" rel="noopener"&gt;Sign up for free&lt;/A&gt; (includes 12 months of free MySQL hosting)&lt;/LI&gt;
&lt;LI&gt;An OpenAI API key — &lt;A href="https://platform.openai.com/api-keys" target="_blank" rel="noopener"&gt;Get one here&lt;/A&gt; (you'll need a few dollars of credit)&lt;/LI&gt;
&lt;LI&gt;Python 3.10+ — &lt;A href="https://www.python.org/downloads/" target="_blank" rel="noopener"&gt;Download here&lt;/A&gt; (check "Add to PATH" during install)&lt;/LI&gt;
&lt;LI&gt;A code editor — &lt;A href="https://code.visualstudio.com/" target="_blank" rel="noopener"&gt;VS Code&lt;/A&gt; recommended&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Optional:&lt;/STRONG&gt; You can download the complete project from &lt;A class="lia-external-url" href="https://github.com/FarahAbdo/mysql-ai-agent" target="_blank" rel="noopener"&gt;this GitHub repository&lt;/A&gt;, or follow the step‑by‑step instructions below to build it from scratch.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;STRONG&gt;Step 1 — Create the Azure Database for MySQL server&lt;/STRONG&gt;&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Go to the &lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Search for "Azure Database for MySQL server" and click + Create&lt;/LI&gt;
&lt;LI&gt;Configure the following settings:&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Setting&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Resource group&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;EM&gt;rg-mysql-ai-agent&lt;/EM&gt; (create new)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Server name&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;EM&gt;mysql-ai-agent&lt;/EM&gt; (or any unique name)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Region&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Your nearest region&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;MySQL version&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;8.4&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Workload type&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Dev/Test (Burstable B1ms — free for 12 months)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Admin username&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;EM&gt;mysqladmin&lt;/EM&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Password&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;A strong password — save it!&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 4. ✅ Check "Add firewall rule for current IP address"&lt;/P&gt;
&lt;P&gt;⚠️ Important: If you skip the firewall settings, you won't be able to connect from Cloud Shell or your local machine.&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 5. Click Review + create → Create and wait 3–5 minutes&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once deployment finishes, navigate to your server and note the hostname from the Connection details:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;mysql-ai-agent.mysql.database.azure.com&lt;/STRONG&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H3&gt;&lt;STRONG&gt;Step 2 — Load Sample Data&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Open Azure Cloud Shell by clicking the &lt;STRONG&gt;&amp;gt;_&lt;/STRONG&gt; icon in the portal's top toolbar. Select Bash if prompted.&lt;/P&gt;
&lt;P&gt;Connect to your MySQL server. You can copy the exact connection command from the "Connect from browser or locally" section on your server's overview page in the Azure portal:&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;mysql -h mysql-ai-agent.mysql.database.azure.com -u mysqladmin -p&lt;/LI-CODE&gt;
&lt;P&gt;&lt;BR /&gt;Enter your password when prompted (the cursor won't move — just type and press Enter).&lt;/P&gt;
&lt;P&gt;Now paste the following SQL to create a sample sales database:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE DATABASE demo_sales;
USE demo_sales;
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE TABLE customers (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    city VARCHAR(50),
    signup_date DATE
);

CREATE TABLE orders (
    id INT AUTO_INCREMENT PRIMARY KEY,
    customer_id INT,
    product VARCHAR(100),
    amount DECIMAL(10,2),
    order_date DATE,
    FOREIGN KEY (customer_id) REFERENCES customers(id)
);

INSERT INTO customers (name, email, city, signup_date) VALUES
('Sara Ahmed', 'sara@example.com', 'Cairo', '2024-06-15'),
('John Smith', 'john@example.com', 'London', '2024-08-22'),
('Priya Patel', 'priya@example.com', 'Mumbai', '2025-01-10');

INSERT INTO orders (customer_id, product, amount, order_date) VALUES
(1, 'Azure Certification Voucher', 150.00, '2025-03-01'),
(2, 'MySQL Workbench Pro License', 99.00, '2025-03-10'),
(1, 'Power BI Dashboard Template', 45.00, '2025-04-05'),
(3, 'Data Analysis Course', 200.00, '2025-05-20');&lt;/LI-CODE&gt;
&lt;P&gt;Verify the data:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT * FROM customers;
SELECT * FROM orders;&lt;/LI-CODE&gt;
&lt;P&gt;Type &lt;EM&gt;&lt;STRONG&gt;exit&lt;/STRONG&gt;&lt;/EM&gt; to leave MySQL.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Step 3 — Set Up the Python Project&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Open a terminal on your local machine and create the project:&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;mkdir mysql-ai-agent
cd mysql-ai-agent
python -m venv venv&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Activate the virtual environment:&lt;/P&gt;
&lt;P&gt;Windows (PowerShell):&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;venv\Scripts\Activate.ps1&lt;/LI-CODE&gt;
&lt;P&gt;macOS/Linux:&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;source venv/bin/activate&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Install the required packages:&lt;/P&gt;
&lt;LI-CODE lang="shell"&gt;pip install openai mysql-connector-python python-dotenv&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Step 4 — Configure Environment Variables&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Create a file named .env in your project folder:&lt;/P&gt;
&lt;LI-CODE lang="editorconfig"&gt;OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxx
MYSQL_HOST=mysql-ai-agent.mysql.database.azure.com
MYSQL_USER=mysqladmin
MYSQL_PASSWORD=YourPasswordHere
MYSQL_DATABASE=demo_sales&lt;/LI-CODE&gt;
&lt;P&gt;🔒 Security: Never commit this file to Git. Add &lt;EM&gt;&lt;STRONG&gt;.env&lt;/STRONG&gt;&lt;/EM&gt; to your &lt;EM&gt;&lt;STRONG&gt;.gitignore&lt;/STRONG&gt;&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Step 5 — Build the Agent&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Open VS Code, create a new file called &lt;STRONG&gt;mysql_agent.py&lt;/STRONG&gt; in your mysql-ai-agent folder, and paste the following code. Let's walk through each section.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;5.1 — Imports and Database Connection&lt;/STRONG&gt;&lt;/H4&gt;
&lt;LI-CODE lang="python"&gt;import os
import json
import mysql.connector
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

def get_db_connection():
    return mysql.connector.connect(
        host=os.getenv("MYSQL_HOST"),
        user=os.getenv("MYSQL_USER"),
        password=os.getenv("MYSQL_PASSWORD"),
        database=os.getenv("MYSQL_DATABASE"),
        ssl_disabled=False
    )&lt;/LI-CODE&gt;
&lt;P&gt;This loads your secrets from &lt;EM&gt;&lt;STRONG&gt;.env &lt;/STRONG&gt;&lt;/EM&gt;and creates a reusable MySQL connection function with SSL encryption.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;5.2 — Define the Three Tools&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;These are the functions the AI agent can call:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;def list_tables():
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute("SHOW TABLES")
    tables = [row[0] for row in cursor.fetchall()]

    cursor.close()
    conn.close()

    return json.dumps({"tables": tables})


def describe_table(table_name):
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute(f"DESCRIBE `{table_name}`")
    columns = []

    for row in cursor.fetchall():
        columns.append({
            "field": row[0],
            "type": row[1],
            "null": row[2],
            "key": row[3]
        })

    cursor.close()
    conn.close()

    return json.dumps({"table": table_name, "columns": columns})


def run_sql_query(query):
    if not query.strip().upper().startswith("SELECT"):
        return json.dumps({"error": "Only SELECT queries are allowed."})

    conn = get_db_connection()
    cursor = conn.cursor()

    try:
        cursor.execute(query)
        columns = [desc[0] for desc in cursor.description]
        rows = cursor.fetchall()
        results = []

        for row in rows:
            results.append(dict(zip(columns, row)))
        return json.dumps({"columns": columns, "rows": results}, default=str)
    except mysql.connector.Error as e:
        return json.dumps({"error": str(e)})
    finally:
        cursor.close()
        conn.close()&lt;/LI-CODE&gt;
&lt;P&gt;A few things to note:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;run_sql_query&lt;/STRONG&gt; only allows SELECT statements — this is a safety guardrail that prevents the AI from modifying data&lt;/LI&gt;
&lt;LI&gt;The &lt;STRONG&gt;try/except &lt;/STRONG&gt;block is critical — if the AI generates invalid SQL (e.g., a bad &lt;STRONG&gt;GROUP BY&lt;/STRONG&gt;), the error message is returned to OpenAI, and the model automatically corrects its query and retries. Without this, the script would crash.&lt;/LI&gt;
&lt;/UL&gt;
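&lt;P&gt;The read-only guardrail is deliberately simple: a prefix test on the query string. Before extending the agent, it's worth seeing its edges. In the sketch below (&lt;EM&gt;is_select_only&lt;/EM&gt; is a hypothetical helper name, not part of the tutorial code), note that a read-only &lt;STRONG&gt;WITH&lt;/STRONG&gt; query is also rejected, which is one reason pairing the check with a SELECT-only database user is a good idea:&lt;/P&gt;

```python
def is_select_only(query: str) -> bool:
    # Same prefix check the agent's run_sql_query tool applies
    return query.strip().upper().startswith("SELECT")

examples = {
    "SELECT * FROM orders": True,
    "  select name FROM customers": True,          # leading whitespace and case are handled
    "DELETE FROM orders": False,                   # blocked, as intended
    "DROP TABLE customers": False,                 # blocked, as intended
    "WITH t AS (SELECT 1) SELECT * FROM t": False, # read-only, but still rejected by the prefix check
}
results = {q: is_select_only(q) for q in examples}
```

&lt;P&gt;A stricter guardrail would parse the statement rather than inspect its prefix, but for a tutorial agent the prefix check plus least-privilege credentials is a reasonable baseline.&lt;/P&gt;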
&lt;H4&gt;&lt;STRONG&gt;5.3 — Register Tools with OpenAI&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;This tells OpenAI what tools the agent has access to:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;tools = [
    {
        "type": "function",
        "function": {
            "name": "list_tables",
            "description": "List all tables in the connected MySQL database.",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    },
    {
        "type": "function",
        "function": {
            "name": "describe_table",
            "description": "Get the schema (columns and types) of a specific table.",
            "parameters": {
                "type": "object",
                "properties": {
                    "table_name": {"type": "string", "description": "Name of the table to describe"}
                },
                "required": ["table_name"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "run_sql_query",
            "description": "Execute a read-only SQL SELECT query and return results.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "The SQL SELECT query to execute"}
                },
                "required": ["query"]
            }
        }
    }
]


def call_tool(name, args):
    if name == "list_tables":
        return list_tables()
    elif name == "describe_table":
        return describe_table(args["table_name"])
    elif name == "run_sql_query":
        return run_sql_query(args["query"])
    else:
        return json.dumps({"error": f"Unknown tool: {name}"})&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;5.4 — The Agent Loop&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;This is the core logic. It sends the user's message to OpenAI, processes any tool calls, and loops until the model produces a final text response:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;def chat(user_message, conversation_history):
    client = OpenAI()
    conversation_history.append({"role": "user", "content": user_message})
    print(f"\n{'='*60}")
    print(f"🧑 You: {user_message}")
    print(f"{'='*60}")

    while True:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=conversation_history,
            tools=tools,
            tool_choice="auto"
        )

        assistant_message = response.choices[0].message
        if assistant_message.tool_calls:
            conversation_history.append(assistant_message)
            for tool_call in assistant_message.tool_calls:
                fn_name = tool_call.function.name
                fn_args = json.loads(tool_call.function.arguments)
                print(f"  🔧 Calling tool: {fn_name}({json.dumps(fn_args)})")
                result = call_tool(fn_name, fn_args)
                print(f"  ✅ Tool returned: {result[:200]}...")

                conversation_history.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": result
                })
        else:
            final_answer = assistant_message.content
            conversation_history.append({"role": "assistant", "content": final_answer})
            print(f"\n🤖 Agent:\n{final_answer}")
            return conversation_history&lt;/LI-CODE&gt;
&lt;P&gt;The &lt;STRONG&gt;while True&lt;/STRONG&gt; loop is what makes self-correction possible. When a tool returns an error, the model sees it in the conversation and generates a corrected tool call in the next iteration.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;5.5 — Main Entry Point&lt;/STRONG&gt;&lt;/H4&gt;
&lt;LI-CODE lang="python"&gt;if __name__ == "__main__":
    print("\n" + "=" * 60)
    print("  🤖 MySQL AI Agent")
    print("  Powered by OpenAI + Azure Database for MySQL")
    print("  Type 'quit' to exit")
    print("=" * 60)

    system_message = {
        "role": "system",
        "content": (
            "You are a helpful data analyst agent connected to an Azure Database for MySQL. "
            "You have 3 tools: list_tables, describe_table, and run_sql_query. "
            "ALWAYS start by listing tables and describing their schema before writing queries. "
            "Only generate SELECT statements. Never write INSERT, UPDATE, DELETE, or DROP. "
            "Present query results in clean, readable tables. "
            "If the user asks a question, figure out the right SQL to answer it."
        )
    }
    conversation_history = [system_message]
    while True:
        user_input = input("\n🧑 You: ").strip()
        if user_input.lower() in ("quit", "exit", "q"):
            print("\n👋 Goodbye!")
            break

        if not user_input:
            continue

        conversation_history = chat(user_input, conversation_history)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Your final project folder should contain &lt;EM&gt;venv/&lt;/EM&gt;, &lt;EM&gt;.env&lt;/EM&gt;, and &lt;EM&gt;mysql_agent.py&lt;/EM&gt;.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Step 6 — Run and Test the Agent&lt;/STRONG&gt;&lt;/H3&gt;
&lt;LI-CODE lang="shell"&gt;python mysql_agent.py&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Test:&amp;nbsp;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;Prompt: Which product generated the most revenue and who bought it?&lt;/STRONG&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;How Self-Correction Works&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;One of the most powerful aspects of this agent is its ability to recover from SQL errors automatically. Azure Database for MySQL has &lt;EM&gt;&lt;STRONG&gt;sql_mode=only_full_group_by &lt;/STRONG&gt;&lt;/EM&gt;enabled by default, which rejects queries where non-aggregated columns aren't in the GROUP BY clause.&lt;/P&gt;
&lt;P&gt;When the AI generates an invalid query, here's what happens:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;The &lt;STRONG&gt;run_sql_query&lt;/STRONG&gt; function catches the MySQL error&lt;/LI&gt;
&lt;LI&gt;It returns the error message as the tool result&lt;/LI&gt;
&lt;LI&gt;OpenAI sees the error in the conversation context&lt;/LI&gt;
&lt;LI&gt;The model generates a corrected query automatically&lt;/LI&gt;
&lt;LI&gt;The agent retries — and succeeds&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Without the &lt;STRONG&gt;try/except&lt;/STRONG&gt; error handling, the entire script would crash. This is a key design pattern for production AI agents.&lt;/P&gt;
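&lt;P&gt;You can watch the retry pattern in isolation with a stubbed model and tool. This toy simulation (all names hypothetical; no OpenAI call involved) plays out the five steps above: the first query fails, the error lands in the conversation history, and the corrected second attempt succeeds:&lt;/P&gt;

```python
import json

def fake_run_sql_query(query):
    # Stand-in for the real tool: the flawed first query triggers a MySQL-style error
    if "bad_column" in query:
        return json.dumps({"error": "Unknown column 'bad_column' in 'field list'"})
    return json.dumps({"rows": [{"product": "Data Analysis Course", "total": "200.00"}]})

def fake_model(history):
    # Stand-in for the LLM: if an earlier tool message contains an error,
    # "read" it and emit a corrected query; otherwise emit the flawed first attempt
    if any("error" in m["content"] for m in history if m["role"] == "tool"):
        return "SELECT product, SUM(amount) AS total FROM orders GROUP BY product"
    return "SELECT bad_column FROM orders"

history, attempts = [], 0
while True:
    attempts += 1
    result = fake_run_sql_query(fake_model(history))
    history.append({"role": "tool", "content": result})
    if "error" not in json.loads(result):
        break
```

&lt;P&gt;Two iterations, no crash: the error message becomes ordinary conversation context, and the loop exits only once a tool call returns clean results, which is exactly what the real agent does against gpt-4o-mini.&lt;/P&gt;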
&lt;H3&gt;&lt;STRONG&gt;Security Best Practices&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;When building AI agents that interact with databases, security is critical:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Read-only enforcement — The &lt;STRONG&gt;run_sql_query&lt;/STRONG&gt; function rejects anything that isn't a &lt;STRONG&gt;SELECT &lt;/STRONG&gt;statement&lt;/LI&gt;
&lt;LI&gt;SSL encryption — All connections use &lt;STRONG&gt;ssl_disabled=False&lt;/STRONG&gt;, ensuring data in transit is encrypted&lt;/LI&gt;
&lt;LI&gt;Environment variables — Credentials are stored in &lt;STRONG&gt;.env&lt;/STRONG&gt;, never hardcoded&lt;/LI&gt;
&lt;LI&gt;Principle of least privilege — For production, create a dedicated MySQL user with &lt;STRONG&gt;SELECT&lt;/STRONG&gt;-only permissions:&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;CREATE USER 'ai_agent'@'%' IDENTIFIED BY 'AgentPass123!';
GRANT SELECT ON demo_sales.* TO 'ai_agent'@'%';
FLUSH PRIVILEGES;&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;Network isolation — For production workloads, consider using Azure Private Link instead of public access.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;In this tutorial, you built a Python AI agent that connects to Azure Database for MySQL and answers natural language questions by auto-generating SQL, complete with self-correction and security guardrails. Clone the&amp;nbsp;&lt;A class="lia-external-url" href="https://github.com/FarahAbdo/mysql-ai-agent" target="_blank" rel="noopener"&gt;GitHub repo&lt;/A&gt;, spin up your own server, and start experimenting!&lt;/P&gt;
&lt;P&gt;If you'd like to connect to Azure Database for MySQL using the Model Context Protocol (MCP), see &lt;A href="https://techcommunity.microsoft.com/blog/adformysql/unlocking-ai-driven-data-access-azure-database-for-mysql-support-via-the-azure-m/4454623" target="_blank" rel="noopener"&gt;Unlocking AI-Driven Data Access: Azure Database for MySQL Support via the Azure MCP Server&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;If you have any feedback or questions about the information provided above, please leave a comment below.&amp;nbsp;Thank you!&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 13:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/tutorial-building-ai-agents-that-talk-to-your-azure-database-for/ba-p/4504995</guid>
      <dc:creator>FarahAbdou</dc:creator>
      <dc:date>2026-04-15T13:00:00Z</dc:date>
    </item>
    <item>
      <title>Azure SQL Migration Starts in SSMS—All in One Flow</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/azure-sql-migration-starts-in-ssms-all-in-one-flow/ba-p/4511586</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Migrate SQL Server Using SQL Server Management Studio (SSMS)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Migrating SQL Server workloads to Azure doesn’t have to be complex or fragmented. SQL Server Management Studio (SSMS) provides a guided, end-to-end migration experience that works seamlessly for both Azure Arc–enabled and standalone SQL Server instances.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll walk through how SSMS helps you &lt;STRONG&gt;assess, choose, and migrate&lt;/STRONG&gt; your SQL Server workloads using the right Azure target and migration method.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;One Tool, Multiple Migration Paths&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;SSMS acts as the central starting point for SQL Server migration:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Run readiness and compatibility assessments&lt;/LI&gt;
&lt;LI&gt;Discover recommended Azure targets&lt;/LI&gt;
&lt;LI&gt;Launch guided migrations using the right method for your scenario&lt;/LI&gt;
&lt;LI&gt;Seamlessly connect to the Azure portal when needed&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The experience adapts based on whether your SQL Server is Azure Arc–enabled or not.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Migrating Non‑Arc SQL Server (Local or On‑Premises)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If your SQL Server is not Arc-enabled, SSMS still gives you everything you need—starting locally and moving to Azure when appropriate.&lt;/P&gt;
&lt;P&gt;Step 1: Run SQL Server Assessment in SSMS&lt;/P&gt;
&lt;P&gt;SSMS performs a local assessment to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Evaluate database readiness&lt;/LI&gt;
&lt;LI&gt;Identify potential blocking issues&lt;/LI&gt;
&lt;LI&gt;Recommend supported Azure targets&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Step 2: Provision the Target via Azure SQL Hub&lt;/P&gt;
&lt;P&gt;From SSMS, you’re guided to Azure SQL Hub in the Azure portal to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Configure your migration target&lt;/LI&gt;
&lt;LI&gt;Select the right Azure SQL deployment option&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Step 3: Choose the Right Migration Method&lt;/P&gt;
&lt;P&gt;SSMS then walks you through guided migration experiences, tailored to your target:&lt;/P&gt;
&lt;P&gt;🟦 Azure SQL Managed Instance&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;MI Link – Near real-time migration with minimal downtime&lt;/LI&gt;
&lt;LI&gt;Backup &amp;amp; Restore – Simple, deterministic migrations&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;🟦 Azure SQL Database / Azure SQL VM / Other Targets&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Database Migration Service (DMS)&lt;BR /&gt;A guided, end-to-end migration experience supporting multiple Azure targets&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;SSMS helps ensure you’re choosing the right migration method for the right workload, without needing to switch tools manually.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Migrating Azure Arc–Enabled SQL Server&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If your SQL Server is connected to Azure via Azure Arc, SSMS provides a streamlined experience that bridges on-premises management and Azure services.&lt;/P&gt;
&lt;P&gt;What happens in SSMS?&lt;/P&gt;
&lt;P&gt;When you start migration from SSMS for an Arc-enabled SQL Server:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SSMS detects the Arc connection&lt;/LI&gt;
&lt;LI&gt;You’re redirected to the Azure portal for the migration flow&lt;/LI&gt;
&lt;LI&gt;Azure provides a centralized, cloud-based migration experience with deeper integration&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This model is ideal if your organization is already using Azure Arc for governance, inventory, and centralized management.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Upgrade Readiness and Compatibility Checks&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Planning to upgrade SQL Server or ensure compatibility with newer versions and Azure targets?&lt;/P&gt;
&lt;P&gt;SSMS includes Upgrade Assessment, which helps you:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Identify deprecated or breaking changes&lt;/LI&gt;
&lt;LI&gt;Understand feature compatibility&lt;/LI&gt;
&lt;LI&gt;Reduce risks before upgrading or migrating&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This is especially useful when modernizing older SQL Server versions or preparing for cloud migration.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Putting It All Together&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With SSMS, SQL Server migration becomes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Guided – No guesswork on tools or methods&lt;/LI&gt;
&lt;LI&gt;Flexible – Supports Arc and non‑Arc environments&lt;/LI&gt;
&lt;LI&gt;Integrated – Local assessment + Azure portal workflows&lt;/LI&gt;
&lt;LI&gt;Modern – Designed for hybrid and cloud-first strategies&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Whether you’re migrating a single database or planning a large-scale modernization, SSMS provides a unified starting point to get you there with confidence.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What’s Next?&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Start with Assessment in SSMS&lt;/LI&gt;
&lt;LI&gt;Choose your Azure target&lt;/LI&gt;
&lt;LI&gt;Migrate using MI Link, Backup/Restore, or DMS&lt;/LI&gt;
&lt;LI&gt;Validate compatibility using Upgrade Assessment&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;SSMS isn’t just a management tool anymore—it’s your launchpad for SQL Server modernization.&lt;/P&gt;
&lt;P&gt;Learn More:&amp;nbsp;&lt;A href="https://learn.microsoft.com/ssms/migrate/migrate-sql-server-azure-sql?tabs=sql-standard" target="_blank"&gt;Migrate SQL Server to Azure SQL | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 12:26:12 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/azure-sql-migration-starts-in-ssms-all-in-one-flow/ba-p/4511586</guid>
      <dc:creator>neelball</dc:creator>
      <dc:date>2026-04-15T12:26:12Z</dc:date>
    </item>
    <item>
      <title>Combining pgvector and Apache AGE - knowledge graph &amp; semantic intelligence in a single engine</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/combining-pgvector-and-apache-age-knowledge-graph-semantic/ba-p/4508781</link>
      <description>&lt;P&gt;&lt;EM&gt;Inspired by &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/adforpostgresql/graphrag-and-postgresql-integration-in-docker-with-cypher-query-and-ai-agents-ve/4503586" target="_blank" rel="noopener" data-lia-auto-title="GraphRAG and PostgreSQL Integration in Docker with Cypher Query and AI Agents" data-lia-auto-title-active="0"&gt;GraphRAG and PostgreSQL Integration in Docker with Cypher Query and AI Agents&lt;/A&gt;, which demonstrated how Apache AGE brings Cypher based graph querying into PostgreSQL for GraphRAG pipelines. This post takes that idea further combining AGE's graph traversal with pgvector's semantic search to build a unified analytical engine where vectors and graphs reinforce each other in a single PostgreSQL instance.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;This post targets workloads where entity types, relationship semantics, and schema cardinality are known before ingestion. Embeddings are generated from structured attribute fields; graph edges are typed and written by deterministic ETL. No LLM is involved at any stage. You should use this approach when you have structured data and need operational query performance, and deterministic, auditable, sub-millisecond retrieval.&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;The problem nobody talks about: the multi-database / multi-hop tax&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;If you run technology for a large enterprise, you already know the data problem. It is not that you do not have enough data. It is that your data lives in too many places, connected by too many fragile pipelines, serving too many conflicting views of the same reality.&lt;/P&gt;
&lt;P&gt;Here is a pattern that repeats across industries. One team needs to find entities "similar to" a reference item — not by exact attribute match, but by semantic meaning derived from unstructured text like descriptions, reviews, or specifications. That is a vector similarity problem.&lt;/P&gt;
&lt;P&gt;Another team needs to traverse relationships: trace dependency chains, map exposure paths, or answer questions like "if this node is removed, what downstream nodes are affected?" That is a graph traversal problem.&lt;/P&gt;
&lt;P&gt;Meanwhile, the authoritative master data (IDs, attributes, pricing, transactional history) already lives in Postgres.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Now you are operating three databases. Three bills. Three sets of credentials. Three backup strategies. A fragile ETL layer stitching entity IDs across systems, breaking silently whenever someone adds a new attribute to the master table. And worst of all, nobody can ask a question that spans all three systems without custom application code.&lt;/P&gt;
&lt;P&gt;An Azure PostgreSQL database can already do all three jobs. Two extensions&amp;nbsp;&lt;STRONG&gt;pgvector&lt;/STRONG&gt;&amp;nbsp;for vector similarity search and&amp;nbsp;&lt;STRONG&gt;Apache AGE&lt;/STRONG&gt;&amp;nbsp;for graph traversal bring these capabilities natively into the database. No new infrastructure. No sync pipelines. No multi-database tax!&lt;/P&gt;
&lt;P&gt;This post walks through exactly how to combine them, why each piece matters at scale, and what kinds of queries become possible when you stop treating vectors and graphs as separate concerns.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;The architecture: Two extensions, One engine&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;pgvector&amp;nbsp;adds a native&amp;nbsp;vector&amp;nbsp;data type and distance operators (&amp;lt;=&amp;gt;,&amp;nbsp;&amp;lt;-&amp;gt;,&amp;nbsp;&amp;lt;#&amp;gt;) with HNSW and IVFFlat index support.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://aka.ms/pg-diskann-blog" target="_blank" rel="noopener"&gt;pg_diskann&lt;/A&gt;&amp;nbsp;adds a third index type that keeps the index on disk instead of in memory, enabling large scale vector search without proportional RAM.&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;&lt;EM&gt;example 1&lt;/EM&gt;&lt;/STRONG&gt;&lt;/U&gt;&lt;EM&gt; - a product similarity query such as the one below, which correlates related products sold across multiple markets (cosine similarity).&lt;/EM&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The LIMIT clause in the subquery restricts the similarity search to the single closest product recommendation&lt;/LI&gt;
&lt;LI&gt;A high similarity threshold of &amp;gt; 0.75 (i.e. at least 75% similarity in embeddings) filters out weak matches&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;-- Table DDL - for illuatration purposes only 
CREATE TABLE IF NOT EXISTS products (
    id              SERIAL PRIMARY KEY,
    sku             TEXT UNIQUE NOT NULL,
    name            TEXT NOT NULL,
    brand           TEXT NOT NULL,
    category        TEXT NOT NULL,
    subcategory     TEXT,
    market          TEXT NOT NULL,
    region          TEXT,
    description     TEXT,
    ingredients     TEXT,
    avg_rating      FLOAT DEFAULT 0.0,
    review_count    INT DEFAULT 0,
    price_usd       FLOAT,
    launch_year     INT,
    status          TEXT DEFAULT 'active',         
    embedding       vector(384)               
);&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT     us.name                                                    AS us_product,
           us.brand                                                   AS us_brand,
           in_p.name                                                  AS india_match,
           in_p.brand                                                 AS india_brand,
           Round((1 - (us.embedding &amp;lt;=&amp;gt; in_p.embedding))::NUMERIC, 4) AS similarity
FROM       products us
cross join lateral
           (
                    SELECT   name,
                             brand,
                             embedding
                    FROM     products
                    WHERE    market = 'India'
                    AND      category = us.category
                    ORDER BY embedding &amp;lt;=&amp;gt; us.embedding limit 1 ) in_p
WHERE      us.market = 'US'
AND        us.category = 'Skincare'
AND        us.avg_rating &amp;gt;= 4.0
AND        round((1 - (us.embedding &amp;lt;=&amp;gt; in_p.embedding))::NUMERIC, 4)&amp;gt; 0.75
ORDER BY   similarity DESC limit 20;&lt;/LI-CODE&gt;
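The LATERAL pattern above amounts to: for each US product, pick its single nearest India product, then keep the pair only if similarity clears the threshold. A rough Python equivalent of that shape, using made-up product names and embeddings:

```python
import math

# Toy stand-in for the CROSS JOIN LATERAL + threshold logic in the SQL above.
# Product names and 2-D embeddings are made up purely for illustration.
us_products = [("US Lotion A", [1.0, 0.0]), ("US Serum B", [0.0, 1.0])]
india_products = [("IN Lotion X", [0.9, 0.1]), ("IN Serum Y", [0.1, 0.9])]

def similarity(a, b):
    # cosine similarity; the SQL computes this as 1 minus the cosine distance
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

pairs = []
for us_name, us_emb in us_products:
    # the LATERAL subquery: nearest India product for this US row (LIMIT 1)
    best_name, best_emb = max(india_products, key=lambda p: similarity(us_emb, p[1]))
    score = similarity(us_emb, best_emb)
    if score > 0.75:  # the similarity threshold in the WHERE clause
        pairs.append((us_name, best_name, round(score, 4)))

print(pairs)
```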
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;AGE adds a&amp;nbsp;cypher()&amp;nbsp;function that executes Cypher queries against a labeled property graph stored in the database, managed and maintained under the&amp;nbsp;ag_catalog&amp;nbsp;schema. Vertices and edges become first-class PostgreSQL rows with&amp;nbsp;agtype properties.&lt;/P&gt;
&lt;P&gt;The AGE extension supports&amp;nbsp;MATCH,&amp;nbsp;CREATE,&amp;nbsp;MERGE,&amp;nbsp;WITH, and aggregations.&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;&lt;EM&gt;example 2&lt;/EM&gt;&lt;/STRONG&gt;&lt;/U&gt;&lt;EM&gt; - a graph traversal query such as the one below, which returns products sold via multiple retail channels.&lt;/EM&gt;&lt;/P&gt;
&lt;LI-CODE lang="cypher"&gt;SET search_path = ag_catalog, "$user", public;

SELECT * FROM cypher('cpg_graph', $$
    MATCH (p:Product)-[:SOLD_AT]-&amp;gt;(walmart:RetailChannel {name: 'Walmart'})
    MATCH (p)-[:SOLD_AT]-&amp;gt;(target:RetailChannel {name: 'Target'})
    MATCH (b:Brand)-[:MANUFACTURES]-&amp;gt;(p)
    RETURN b.name     AS brand,
           p.name     AS product,
           p.category AS category,
           p.market   AS market,
           p.price_usd AS price
    ORDER BY p.category, b.name
$$) AS (brand agtype, product agtype, category agtype,
        market agtype, price agtype);
&lt;/LI-CODE&gt;
&lt;P&gt;The critical takeaway here is that both extensions participate in the same query planner and executor. A CTE that calls pgvector's &amp;lt;=&amp;gt; operator can feed results into a cypher() call in the next CTE, all within a single transaction, with full access to the planner, executor, and transactional controls the database has to offer.&lt;/P&gt;
&lt;P&gt;In practice, the initial setup is just a few statements:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS age;

SET search_path = ag_catalog, "$user", public;
SELECT create_graph('cpg_graph');
&lt;/LI-CODE&gt;
&lt;H2&gt;&lt;STRONG&gt;The bridge: pgvector → Apache AGE&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;This is the architectural centrepiece: the mechanism that turns vector similarity scores into traversable graph edges. Without this “bridge”, pgvector and AGE are two isolated extensions.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Why bridge at all?&lt;/STRONG&gt;&lt;/H4&gt;
&lt;LI-CODE lang="sql"&gt;pgvector answers: "What is similar to X?" 
AGE answers: "What is connected to Y, and how?" 
&lt;/LI-CODE&gt;
&lt;P&gt;These are fundamentally different questions operating on fundamentally different data structures. pgvector works on a flat vector space and every query is a distance calculation against an ANN index.&lt;/P&gt;
&lt;P&gt;AGE works on a labelled property graph where every query is a pattern match across typed nodes and edges.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;What if now the question is – What is like X and connected to Y and how?&lt;/LI-CODE&gt;
&lt;P&gt;This is where the bridge comes to life.&lt;/P&gt;
&lt;P&gt;The bridge takes cosine similarity scores from pgvector and writes them as&amp;nbsp;&lt;EM&gt;SIMILAR_TO&lt;/EM&gt;&amp;nbsp;edges in the AGE property graph, turning a distance computation into a traversable relationship.&lt;/P&gt;
&lt;P&gt;Once similarity is an edge, Cypher queries can combine it with structural edges in a single declarative pattern.&lt;/P&gt;
&lt;LI-CODE lang="cypher"&gt;for ind_prod_id, us_prod_id, similarity in pairs:
    execute_cypher(cur, f"""
        MATCH (a:Product {{product_id: { ind_prod_id }}}),
              (b:Product {{product_id: { us_prod_id }}})
        CREATE (a)-[:SIMILAR_TO {{score: {score:.4f},
                                  method: 'pgvector_cosine'}}]-&amp;gt;(b)
        CREATE (b)-[:SIMILAR_TO {{score: {score:.4f},
                                  method: 'pgvector_cosine'}}]-&amp;gt;(a)
    """)
&lt;/LI-CODE&gt;
&lt;P&gt;The cypher()&amp;nbsp;function translates Cypher into DML against&amp;nbsp;ag_catalog&amp;nbsp;tables; under the hood, these are plain PostgreSQL heap inserts, just like any other row.&lt;/P&gt;
&lt;P&gt;The score property is the edge weight on the SIMILAR_TO relationship. Its value is the similarity score computed from pgvector using cosine similarity, so a higher score means the two products are more semantically similar.&lt;/P&gt;
&lt;P&gt;The method property is metadata on that same edge. It records how the score was produced. In this case, pgvector_cosine is just a string label indicating that the relationship was derived using pgvector based cosine similarity.&lt;/P&gt;
&lt;P&gt;Cosine similarity is symmetric, but property graph traversal is directional i.e. MATCH (a)-[:SIMILAR_TO]-&amp;gt;(b) won't find the reverse path unless both directional edges exist.&lt;/P&gt;
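To see why both directions are needed, here is a tiny directed-edge sketch in Python (hypothetical data, not AGE itself):

```python
# Minimal directed-edge sketch (hypothetical data, not AGE itself):
# a directed pattern only matches in the direction the edge was written.
edges = [("A", "SIMILAR_TO", "B")]  # one direction only

def matches(src, rel, dst):
    return (src, rel, dst) in edges

print(matches("A", "SIMILAR_TO", "B"))  # True
print(matches("B", "SIMILAR_TO", "A"))  # False: the reverse path is missing

# Writing both directions, as the bridge loop does, restores symmetry.
edges.append(("B", "SIMILAR_TO", "A"))
print(matches("B", "SIMILAR_TO", "A"))  # True
```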
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Why this combination matters&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;One backup strategy. One monitoring stack. One connection pool. One failover target. One set of credentials. One set of restore considerations. For teams already running Azure PostgreSQL databases in production, this adds capabilities without adding any net new infrastructure.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Unified cost model&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;The planner assigns cost estimates to index scans for both execution engines using the same cost framework it uses for B-tree lookups and sequential scans. It can decide whether to use the HNSW index or fall back to a sequential scan based on table statistics and server parameters.&lt;/P&gt;
&lt;P&gt;As shown so far, there is no separate storage layer or database engine to learn.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Bringing all this knowledge together&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Examples 1 and 2 showed native vector search and native graph search, respectively, in a classic product catalog scenario. Now, let’s bring this to life: &lt;EM&gt;what if the question is – what is like X and connected to Y, and how?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;In this use case - pgvector finds the cross market matches (as shown in example 1), then Cypher checks which of those matches are sold at both Walmart and Target:&lt;/EM&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SET search_path = ag_catalog, "$user", public;

-- Cross-market matching (pgvector) → Retail channel overlap (graph)
WITH cross_market AS (
    SELECT us.id    AS us_id,
           us.name  AS us_product,
           us.brand AS us_brand,
           in_p.id    AS india_id,
           in_p.name  AS india_match,
           in_p.brand AS india_brand,
           ROUND((1 - (us.embedding &amp;lt;=&amp;gt; in_p.embedding))::numeric, 4) AS similarity
    FROM products us
    CROSS JOIN LATERAL (
        SELECT id, name, brand, embedding
        FROM products
        WHERE market = 'India'
          AND category = us.category
        ORDER BY embedding &amp;lt;=&amp;gt; us.embedding
        LIMIT 1
    ) in_p
    WHERE us.market = 'US'
      AND us.category = 'Skincare'
      AND us.avg_rating &amp;gt;= 4.0
      AND ROUND((1 - (us.embedding &amp;lt;=&amp;gt; in_p.embedding))::numeric, 4) &amp;gt; 0.75
),
dual_channel AS (
    SELECT (pid::text)::int AS product_id,
           brand::text       AS brand
    FROM cypher('cpg_graph', $$
        MATCH (p:Product)-[:SOLD_AT]-&amp;gt;(w:RetailChannel {name: 'Walmart'})
        MATCH (p)-[:SOLD_AT]-&amp;gt;(t:RetailChannel {name: 'Target'})
        MATCH (b:Brand)-[:MANUFACTURES]-&amp;gt;(p)
        RETURN p.product_id AS pid,
               b.name       AS brand
    $$) AS (pid agtype, brand agtype)
)
SELECT cm.us_product,
       cm.us_brand,
       cm.india_match,
       cm.india_brand,
       cm.similarity,
       CASE WHEN dc.product_id IS NOT NULL
            THEN 'Yes' ELSE 'No'
       END AS india_match_at_walmart_and_target
FROM cross_market cm
LEFT JOIN dual_channel dc ON dc.product_id = cm.india_id
ORDER BY cm.similarity DESC
LIMIT 20;
&lt;/LI-CODE&gt;
&lt;H2&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;The Azure PostgreSQL database ecosystem has quietly assembled the components for a unified semantic + structural analytics engine in the form of extensions.&lt;/P&gt;
&lt;P&gt;pgvector with pg_diskann delivers production-grade approximate nearest-neighbour (ANN) search.&lt;/P&gt;
&lt;P&gt;Apache AGE delivers Cypher-based property graph traversal. Together with the “bridge,” they enable query patterns that are impossible in either system alone, and they do it within the ACID guarantees, operational tooling, and SQL vocabulary you already have.&lt;/P&gt;
&lt;P&gt;Stop paying for three databases when one will do!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 11:59:07 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/combining-pgvector-and-apache-age-knowledge-graph-semantic/ba-p/4508781</guid>
      <dc:creator>Raunak</dc:creator>
      <dc:date>2026-04-15T11:59:07Z</dc:date>
    </item>
    <item>
      <title>Monitoring Azure SQL Data Sync Errors Using PowerShell</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/monitoring-azure-sql-data-sync-errors-using-powershell/ba-p/4511206</link>
      <description>&lt;P&gt;Azure SQL Data Sync is a powerful service that enables data synchronization between multiple databases across Azure SQL Database and on‑premises SQL Server environments. It supports hybrid architectures and distributed applications by allowing selected data to synchronize bi‑directionally between hub and member databases using a hub‑and‑spoke topology.&lt;/P&gt;
&lt;P&gt;However, one of the most common operational challenges faced by support engineers and customers using Azure SQL Data Sync is:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;❗ Lack of proactive monitoring for sync failures or errors&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;By default, Azure SQL Data Sync does not provide native alerting mechanisms that notify administrators when synchronization operations fail or encounter issues. This can result in silent data drift or synchronization delays that may go unnoticed in production environments.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll walk through how to monitor Azure SQL Data Sync activity and detect synchronization errors using Azure PowerShell commands.&lt;/P&gt;
&lt;H2&gt;Why Monitoring Azure SQL Data Sync Matters&lt;/H2&gt;
&lt;P&gt;Azure SQL Data Sync works by synchronizing data between:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Hub Database (must be Azure SQL Database)&lt;/LI&gt;
&lt;LI&gt;Member Databases (Azure SQL Database or SQL Server)&lt;/LI&gt;
&lt;LI&gt;Sync Metadata Database (stores sync configuration and logs)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;All synchronization activity—including errors, failures, and successes—is logged internally within the Sync Metadata Database and exposed through Azure SQL Sync Group logs.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Monitoring these logs enables:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Detection of sync failures&lt;/LI&gt;
&lt;LI&gt;Identification of schema mismatches&lt;/LI&gt;
&lt;LI&gt;Validation of sync completion&lt;/LI&gt;
&lt;LI&gt;Troubleshooting of sync group issues&lt;/LI&gt;
&lt;LI&gt;Verification of last successful sync activity&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Prerequisites&lt;/H2&gt;
&lt;P&gt;Before monitoring Azure SQL Data Sync activity, ensure the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure PowerShell module (&lt;STRONG&gt;Az.Sql&lt;/STRONG&gt;) is installed&lt;/LI&gt;
&lt;LI&gt;You have access to the Azure SQL Data Sync resources&lt;/LI&gt;
&lt;LI&gt;Proper authentication and subscription context are configured&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Install and import the required module if not already available:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Install Azure PowerShell module if not already installed
Install-Module -Name Az -Repository PSGallery -Force

# Import the SQL module
Import-Module Az.Sql&lt;/LI-CODE&gt;
&lt;P&gt;Authenticate to Azure:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Login to Azure
Connect-AzAccount -TenantId "&amp;lt;tenant-id&amp;gt;"

# Set subscription context
Set-AzContext -SubscriptionId "&amp;lt;subscription-id&amp;gt;"&lt;/LI-CODE&gt;
&lt;P&gt;These commands enable access to Azure SQL Sync Group monitoring operations.&lt;/P&gt;
&lt;H2&gt;Monitoring Sync Group Status&lt;/H2&gt;
&lt;P&gt;To retrieve Sync Group details, define the required variables:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Define variables
$resourceGroup = "rg-datasync-demo"
$serverName = "&amp;lt;hub-server-name&amp;gt;"
$databaseName = "HubDatabase"
$syncGroupName = "SampleSyncGroup"

# Get sync group details
Get-AzSqlSyncGroup -ResourceGroupName $resourceGroup `
-ServerName $serverName `
-DatabaseName $databaseName `
-SyncGroupName $syncGroupName | Format-List&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Note:&lt;BR /&gt;The LastSyncTime property returned by Get-AzSqlSyncGroup may sometimes display a value such as &lt;STRONG&gt;1/1/0001&lt;/STRONG&gt;, even when synchronization operations are completing successfully.&lt;BR /&gt;To obtain accurate synchronization timestamps, it is recommended to use Sync Group Logs instead.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Monitoring Sync Activity Using Logs (Recommended)&lt;/H2&gt;
&lt;P&gt;To monitor synchronization activity and retrieve detailed sync status, use:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Get sync logs for the last 24 hours
$startTime = (Get-Date).AddHours(-24).ToString("yyyy-MM-ddTHH:mm:ssZ")
$endTime = (Get-Date).ToString("yyyy-MM-ddTHH:mm:ssZ")

Get-AzSqlSyncGroupLog -ResourceGroupName $resourceGroup `
-ServerName $serverName `
-DatabaseName $databaseName `
-SyncGroupName $syncGroupName `
-StartTime $startTime `
-EndTime $endTime&lt;/LI-CODE&gt;
&lt;P&gt;This command retrieves:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sync operation timestamps&lt;/LI&gt;
&lt;LI&gt;Sync status&lt;/LI&gt;
&lt;LI&gt;Error messages&lt;/LI&gt;
&lt;LI&gt;Activity details&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Sync Group Logs provide more reliable monitoring information than the Sync Group status output alone.&lt;/P&gt;
&lt;H2&gt;Retrieving the Last Successful Sync Time&lt;/H2&gt;
&lt;P&gt;To determine the most recent successful synchronization operation:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Get the most recent successful sync timestamp
$startTime = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")
$endTime = (Get-Date).ToString("yyyy-MM-ddTHH:mm:ssZ")

Get-AzSqlSyncGroupLog -ResourceGroupName $resourceGroup `
-ServerName $serverName `
-DatabaseName $databaseName `
-SyncGroupName $syncGroupName `
-StartTime $startTime `
-EndTime $endTime |
Where-Object { $_.Details -like "*completed*" -or $_.Type -eq "Success" } |
Select-Object -First 1 Timestamp, Type, Details&lt;/LI-CODE&gt;
&lt;P&gt;This helps administrators validate whether synchronization is occurring as expected across the sync topology.&lt;/P&gt;
&lt;H2&gt;Filtering for Synchronization Errors&lt;/H2&gt;
&lt;P&gt;To identify failed or problematic sync operations:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Get only error logs
Get-AzSqlSyncGroupLog -ResourceGroupName $resourceGroup `
-ServerName $serverName `
-DatabaseName $databaseName `
-SyncGroupName $syncGroupName `
-StartTime $startTime `
-EndTime $endTime |
Where-Object { $_.LogLevel -eq "Error" }&lt;/LI-CODE&gt;
&lt;P&gt;Filtering logs by error type allows for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Rapid identification of failed sync attempts&lt;/LI&gt;
&lt;LI&gt;Analysis of failure causes&lt;/LI&gt;
&lt;LI&gt;Early detection of data consistency risks&lt;/LI&gt;
&lt;/UL&gt;
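Because Data Sync has no native alerting, the error filter above can feed a simple notification step. A minimal sketch in Python over hypothetical log records (the LogLevel and Details field names mirror the PowerShell output but are assumptions here):

```python
# Hypothetical log records shaped like Get-AzSqlSyncGroupLog output.
# The LogLevel / Details field names are assumptions for illustration.
logs = [
    {"LogLevel": "Success", "Details": "Sync completed"},
    {"LogLevel": "Error", "Details": "Schema mismatch on table dbo.Orders"},
]

# Keep only error records, mirroring the Where-Object filter above.
errors = [rec for rec in logs if rec["LogLevel"] == "Error"]

if errors:
    details = "; ".join(rec["Details"] for rec in errors)
    alert = f"{len(errors)} Data Sync error(s) detected: {details}"
    print(alert)  # hook a real notification channel (email, webhook) here
```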
&lt;H2&gt;Key Takeaways&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Data Sync does not provide native alerting for sync failures&lt;/LI&gt;
&lt;LI&gt;Sync Group Logs offer detailed monitoring of sync operations&lt;/LI&gt;
&lt;LI&gt;Get-AzSqlSyncGroupLog provides accurate timestamps and status&lt;/LI&gt;
&lt;LI&gt;Monitoring logs enables detection of silent sync failures&lt;/LI&gt;
&lt;LI&gt;PowerShell can be used to proactively monitor synchronization health&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/mhaywardhill/azure-sql-datasync-error-monitoring/tree/main/monitoring" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Azure SQL Data Sync Error Monitoring GitHub Repository&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-data-sync-data-sql-server-sql-database?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;What is SQL Data Sync for Azure?&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 14 Apr 2026 13:27:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/monitoring-azure-sql-data-sync-errors-using-powershell/ba-p/4511206</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-14T13:27:59Z</dc:date>
    </item>
    <item>
      <title>Cascading Read Replicas Now Generally Available!</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/cascading-read-replicas-now-generally-available/ba-p/4510610</link>
      <description>&lt;P&gt;We’re excited to announce the &lt;STRONG&gt;General Availability of &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/read-replica/concepts-read-replicas#create-cascading-read-replicas-preview" target="_blank"&gt;cascading read replicas in Azure Database for PostgreSQL&lt;/A&gt;&lt;/STRONG&gt;. This capability allows you to create read replicas for your Azure Database for PostgreSQL instance not only from a primary server, but also from existing read replicas, enabling &lt;STRONG&gt;multi‑level replication chains&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Coordinating read‑heavy database workloads across multiple regions can be challenging, especially when you’re trying to deliver low‑latency read response experiences to users spread across different geographic locations. One effective way to address this is by placing read replicas closer to where your users are, allowing applications to serve read requests with significantly reduced latency and improved performance.&lt;/P&gt;
&lt;H2&gt;What are cascading read replicas?&lt;/H2&gt;
&lt;P&gt;With cascading read replicas, you can scale read‑intensive workloads more effectively, distribute read traffic efficiently, and support advanced deployment topologies such as globally distributed applications. Each read replica can act as a source for additional replicas, forming a &lt;STRONG&gt;tree‑like replication structure&lt;/STRONG&gt;. For example, if your primary server is deployed in one region, you can create direct replicas in nearby regions and then cascade additional replicas to more distant locations. This approach helps spread read traffic evenly while minimizing latency for users around the world. This feature supports up to two levels of replication: level 1 consists of the direct read replicas, and level 2 consists of the cascading read replicas.&lt;/P&gt;
&lt;H2&gt;Why use cascading read replicas?&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Improved scalability&lt;/STRONG&gt;&lt;BR /&gt;Cascading read replicas support multi‑level replication, making it easier to handle high volumes of read traffic without overloading a single instance by scaling up to 30 read replicas.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Geographic distribution&lt;/STRONG&gt;&lt;BR /&gt;By placing replicas closer to your global user base, you can significantly reduce read latency and deliver faster, more responsive application experiences.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Efficient read traffic distribution&lt;/STRONG&gt;&lt;BR /&gt;Distributing read workloads across multiple replicas helps balance load, improving overall performance and reliability.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Additionally, cascading read replicas offer operational flexibility. If you observe replication lag, you can &lt;STRONG&gt;perform a switchover operation between a cascading read replica and its source or intermediate replica&lt;/STRONG&gt;, helping you maintain optimal performance and availability for your replicas.&lt;/P&gt;
&lt;H2&gt;How does replication work with cascading read replicas?&lt;/H2&gt;
&lt;P&gt;The primary server acts as a source for the read replica. Data is asynchronously replicated to these replicas. When we add cascading replicas, the previous replicas act as a data source for replication.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In the diagram above, “primary-production-server” is the primary server with three read replicas. One of these replicas, “readreplica01”, serves as the source for another read replica, “readreplica11” which is a cascading read replica.&lt;/P&gt;
&lt;P&gt;With cascading read replicas, you can add up to five read replicas per source and replicate data across two levels, as shown in the diagram. This allows you to create up to 30 read replicas in total: five read replicas directly from the primary server, and up to 25 additional replicas at the second level (each first-level replica can have up to five read replicas of its own).&lt;/P&gt;
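The replica limits can be sanity-checked with simple arithmetic (assuming the stated limits of five replicas per source and two levels of replication):

```python
# Sanity-check the replica limits described above (assumed limits:
# five replicas per source server, two levels of replication).
per_source = 5
level1 = per_source              # replicas created directly off the primary
level2 = level1 * per_source     # cascading replicas off each level-1 replica
total = level1 + level2
print(level1, level2, total)  # 5 25 30
```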
&lt;P&gt;If you notice replication lag between an intermediate read replica and a cascading read replica, you can use a switchover operation to swap “readreplica01” and “readreplica11”, helping reduce the impact of lag.&lt;/P&gt;
&lt;P&gt;To learn more about cascading read replicas, please refer to our documentation: &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/read-replica/concepts-read-replicas#create-cascading-read-replicas" target="_blank"&gt;Cascading read replicas&lt;/A&gt;&lt;/P&gt;
&lt;H2&gt;Deploying cascading read replicas on Azure portal&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;Navigate to the “Replication” tab and then click on “Create replica” highlighted in red as shown below:&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;After creating a read replica, the screenshot below shows one read replica attached to the primary instance.&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Click on the created replica and navigate to the Replication tab. The source server is “read-replica-01”, and we will create a cascading read replica under it.&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Once the cascading read replica is created, you can see that the role of “read-replica-01” has changed to Source, Replica. You can perform a switchover operation by clicking the promote button for the cascading read replica.&lt;img /&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Deploy cascading read replica with terraform:&lt;/H2&gt;
&lt;P&gt;Before you start, make sure you have:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;An existing &lt;STRONG&gt;primary PostgreSQL Flexible Server&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;At least &lt;STRONG&gt;one read replica&lt;/STRONG&gt; already created from the primary&lt;/LI&gt;
&lt;LI&gt;The latest version of the AzureRM provider&lt;/LI&gt;
&lt;LI&gt;Proper permissions on the Azure subscription and resource group&lt;/LI&gt;
&lt;/UL&gt;
&lt;OL&gt;
&lt;LI&gt;Configure the AzureRM Provider: Start by configuring the AzureRM provider in your Terraform project.&lt;LI-CODE lang="shell-session"&gt;terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~&amp;gt; 3.80"
    }
  }
}

provider "azurerm" {
  features {}
}&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Reference the existing read replica server using the data block to reference the replica server.&lt;/P&gt;
&lt;LI-CODE lang="shell-session"&gt;data "azurerm_postgresql_flexible_server" "source_replica" {
  name                = "my-read-replica-1"
  resource_group_name = "my-resource-group"
}&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Now create a new PostgreSQL Flexible Server and point it to the replica using source_server_id.&lt;/P&gt;
&lt;LI-CODE lang="shell-session"&gt;resource "azurerm_postgresql_flexible_server" "cascading_replica" {
  name                   = "my-cascading-replica"
  resource_group_name    = "my-resource-group"
  location               = data.azurerm_postgresql_flexible_server.source_replica.location
  version                = data.azurerm_postgresql_flexible_server.source_replica.version

  delegated_subnet_id    = data.azurerm_postgresql_flexible_server.source_replica.delegated_subnet_id
  private_dns_zone_id    = data.azurerm_postgresql_flexible_server.source_replica.private_dns_zone_id

  create_mode            = "Replica"
  source_server_id       = data.azurerm_postgresql_flexible_server.source_replica.id

  storage_mb             = 32768
  sku_name               = "Standard_D4s_v3"

  depends_on = [
    data.azurerm_postgresql_flexible_server.source_replica
  ]
}&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Apply the Terraform Configuration&lt;/P&gt;
&lt;LI-CODE lang="shell-session"&gt;terraform init
terraform plan
terraform apply&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Key Considerations&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;Cascading read replicas allow up to five read replicas and two levels of replication.&lt;/LI&gt;
&lt;LI&gt;Creating cascading read replicas is supported in PostgreSQL version 14 and above.&lt;/LI&gt;
&lt;LI&gt;Promote operation is not supported for intermediate read replicas with cascading read replicas.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Cascading read replicas in Azure Database for PostgreSQL offer a scalable way to distribute read traffic within the same region or across regions, reducing the read workload on the primary database. For globally distributed applications, this can improve read latency, resilience, and performance. The design supports horizontal scaling as application demand grows, so you can serve a high volume of read requests without compromising speed. Get started with this feature today and scale your read workloads.&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2026 17:23:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/cascading-read-replicas-now-generally-available/ba-p/4510610</guid>
      <dc:creator>gauri-kasar</dc:creator>
      <dc:date>2026-04-13T17:23:08Z</dc:date>
    </item>
    <item>
      <title>Fixing “There is not enough space on the disk” during Azure Data Sync initial sync (On‑prem ➜ Azure)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/fixing-there-is-not-enough-space-on-the-disk-during-azure-data/ba-p/4510866</link>
      <description>&lt;P&gt;When you run an&amp;nbsp;&lt;STRONG&gt;initial&lt;/STRONG&gt; (first-time) sync from an on‑premises SQL Server database to &lt;STRONG&gt;Azure SQL Database&lt;/STRONG&gt; using &lt;STRONG&gt;SQL Data Sync&lt;/STRONG&gt;, the local agent may fail with a disk-space error—even when the disk “looks” like it has free space. The reason is that the initial sync can generate &lt;STRONG&gt;large temporary files&lt;/STRONG&gt; in the Windows &lt;STRONG&gt;TEMP&lt;/STRONG&gt; location used by the Data Sync Agent.&lt;/P&gt;
&lt;P&gt;This post explains the symptom, what’s happening under the hood, and the most practical mitigation:&amp;nbsp;&lt;STRONG&gt;move the Data Sync Agent’s TEMP/TMP to a drive with sufficient space&lt;/STRONG&gt; and restart the service.&lt;/P&gt;
&lt;H2&gt;Symptom&lt;/H2&gt;
&lt;P&gt;During an initial sync (commonly on-premises ➜ Azure), the sync fails while applying a batch file.&lt;/P&gt;
&lt;H2&gt;Error&lt;/H2&gt;
&lt;P&gt;You may see an error similar to:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Sync failed with the exception:&lt;BR /&gt;“An unexpected error occurred when applying batch file … .batch. See the inner exception for more details. Inner exception: &lt;STRONG&gt;There is not enough space on the disk&lt;/STRONG&gt; …”&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Microsoft Learn also calls out “disk insufficient space” scenarios for SQL Data Sync and points to the&amp;nbsp;&lt;STRONG&gt;%TEMP%&lt;/STRONG&gt; directory as the key location to check.&lt;/P&gt;
&lt;H2&gt;What’s actually happening (Root Cause)&lt;/H2&gt;
&lt;H3&gt;1) Initial sync uses temp files on the agent machine&lt;/H3&gt;
&lt;P&gt;During initialization, the local agent can &lt;STRONG&gt;load data and store it as temp files&lt;/STRONG&gt; in the system temp folder. This is explicitly called out in the Azure SQL Data Sync scalability guidance.&lt;/P&gt;
&lt;H3&gt;2) The agent can generate&amp;nbsp;&lt;EM&gt;more&lt;/EM&gt; than “just the batch files”&lt;/H3&gt;
&lt;P&gt;In practice, you’ll often see:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Batch files&lt;/STRONG&gt; (e.g., sync_*.batch)&lt;/LI&gt;
&lt;LI&gt;Extra temp files under folders like &lt;STRONG&gt;MAT_ / MATS_&lt;/STRONG&gt; that are used for internal processing (commonly described as “sorting”/intermediate work).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Internal field experience shared in the Data Sync support channel highlights that the MAT/MATS files can be &lt;STRONG&gt;much larger&lt;/STRONG&gt; than the batch files—sometimes &lt;STRONG&gt;8–10× larger&lt;/STRONG&gt; than the data being synced for that table (especially during initialization).&lt;/P&gt;
&lt;H3&gt;3) Why “I still have free disk space” can be misleading&lt;/H3&gt;
&lt;P&gt;If your Data Sync Agent’s TEMP points to a system drive (often C:), it can fill quickly with temp batches + MAT/MATS files during the first sync—particularly for large tables or many tables being initialized. The Azure SQL Data Sync “large scale” guidance recommends ensuring the temp folder has enough space &lt;EM&gt;before&lt;/EM&gt; starting initialization and notes you &lt;STRONG&gt;can move TEMP/TMP&lt;/STRONG&gt; to another drive.&lt;/P&gt;
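&lt;P&gt;A rough back-of-the-envelope check before starting initialization can save a failed sync. The helper below is a planning sketch only: the 8-10x multiplier is the field heuristic mentioned above, not a documented formula, and the 20% safety margin is an arbitrary assumption.&lt;/P&gt;

```python
# Rough planning sketch: estimate the TEMP space an initial sync may need,
# using the field observation above that MAT_/MATS_ intermediate files can be
# 8-10x the size of the data being synced per table. Both the multiplier and
# the safety margin are heuristics for this example, not official guidance.

def estimate_temp_gb(table_sizes_gb, overhead_factor=10, safety_margin=1.2):
    """Worst-case TEMP footprint in GB for the tables being initialized."""
    batch_files = sum(table_sizes_gb)          # the sync_*.batch files themselves
    mat_files = batch_files * overhead_factor  # MAT_/MATS_ intermediate files
    return round((batch_files + mat_files) * safety_margin, 1)
```

&lt;P&gt;For instance, initializing two tables of 5 GB and 12 GB suggests budgeting roughly 224 GB of free space on the TEMP drive under these assumptions.&lt;/P&gt;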
&lt;H2&gt;Mitigation (Recommended)&lt;/H2&gt;
&lt;H3&gt;Option A — Move TEMP/TMP to a larger drive (recommended)&lt;/H3&gt;
&lt;P&gt;The &lt;STRONG&gt;Microsoft Azure Blog&lt;/STRONG&gt; guidance for large-scale initialization is clear: move the temp folder by setting TEMP and TMP environment variables and &lt;STRONG&gt;restart the sync service&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H4&gt;Key point: change the variables for the&amp;nbsp;&lt;STRONG&gt;same account running the Data Sync Agent service&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Environment variables exist at &lt;STRONG&gt;user scope&lt;/STRONG&gt; and &lt;STRONG&gt;machine scope&lt;/STRONG&gt;, and the effective TEMP location depends on which account the agent service runs under.&lt;/P&gt;
&lt;P&gt;A simple PowerShell approach (run elevated) is to read and set the variables at the appropriate scope. (Example shown below uses the standard .NET environment APIs.)&lt;/P&gt;
&lt;LI-CODE lang=""&gt;# Run in Administrator mode
# Get current values
[Environment]::GetEnvironmentVariable("TEMP","User")
[Environment]::GetEnvironmentVariable("TEMP","Machine")
# Set new values (examples)
[Environment]::SetEnvironmentVariable("TEMP","D:\TempUser","User")
[Environment]::SetEnvironmentVariable("TMP" ,"D:\TempUser","User")
# or machine scope:
[Environment]::SetEnvironmentVariable("TEMP","D:\TempMachine","Machine")
[Environment]::SetEnvironmentVariable("TMP" ,"D:\TempMachine","Machine")&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt; After updating TEMP/TMP, &lt;STRONG&gt;restart the SQL Data Sync agent service&lt;/STRONG&gt; so it picks up the new environment settings.&lt;/P&gt;
&lt;H3&gt;Option B — If you can’t log in as the service account: update TEMP/TMP in the registry for that account&lt;/H3&gt;
&lt;P&gt;If you need to change TEMP/TMP for a specific account without interactive logon, you can update the &lt;STRONG&gt;user environment variables&lt;/STRONG&gt; stored in the registry.&lt;/P&gt;
&lt;P&gt;General Windows guidance indicates:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;User environment variables live under HKEY_CURRENT_USER\Environment (and for other users, under that user’s SID hive loaded under HKEY_USERS).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A common approach is:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Identify the service account's SID (for example, with &lt;EM&gt;wmic useraccount get name,sid&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;Open Registry Editor.&lt;/LI&gt;
&lt;LI&gt;Navigate to:&lt;BR /&gt;HKEY_USERS\&amp;lt;SID&amp;gt;\Environment&lt;/LI&gt;
&lt;LI&gt;Update TEMP and TMP to a path on a drive with sufficient space.&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Restart the Data Sync service.&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;Option C — Clean up leftover sync temp files (when sync is NOT running)&lt;/H3&gt;
&lt;P&gt;In some cases, the “disk out of space” condition is caused by &lt;STRONG&gt;leftover sync files&lt;/STRONG&gt; that were not removed (for example, if something had files open during deletion). Microsoft Learn suggests manually deleting sync files from %temp% and cleaning subdirectories &lt;STRONG&gt;only when sync is not in progress&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Validation checklist (after the change)&lt;/H2&gt;
&lt;P&gt;After moving TEMP/TMP and restarting the service, confirm:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;New temp path is being used&lt;/STRONG&gt;&lt;BR /&gt;Initiate sync and check that new sync_*.batch / temp artifacts appear under the new folder.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sufficient free space exists for initialization&lt;/STRONG&gt;&lt;BR /&gt;Especially for large tables, ensure the chosen drive can accommodate temp growth during the first sync.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Rerun the initial sync&lt;/STRONG&gt;&lt;BR /&gt;Trigger the initial sync again and confirm it completes without the disk-space error.&lt;/LI&gt;
&lt;/OL&gt;
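&lt;P&gt;As a quick sanity check after restarting the service, you can confirm which directory the account resolves as TEMP and whether the volume behind it has headroom. This is a generic Python sketch (Python's tempfile module honors the TEMP/TMP environment variables); the 2x headroom multiplier is an illustrative assumption, not official guidance.&lt;/P&gt;

```python
# Sanity check after moving TEMP/TMP: report the effective temp directory for
# the current account and whether its volume has headroom for the initial sync.
# The 2x multiplier over the expected data size is an illustrative assumption.
import shutil
import tempfile

def temp_has_room(expected_data_gb):
    temp_dir = tempfile.gettempdir()  # resolves TEMP/TMP for this account
    free_gb = shutil.disk_usage(temp_dir).free / (1024 ** 3)
    needed_gb = expected_data_gb * 2
    # free_gb must be at least needed_gb
    enough = min(free_gb, needed_gb) == needed_gb
    return temp_dir, round(free_gb, 1), enough
```

&lt;P&gt;Run this under the same account as the Data Sync Agent service; if the reported path is still on the old drive, the service has not picked up the new environment variables yet.&lt;/P&gt;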
&lt;H2&gt;Classification&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Symptom type:&lt;/STRONG&gt; Agent side / initialization failure&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Primary root cause:&lt;/STRONG&gt; Insufficient disk space on the &lt;STRONG&gt;TEMP&lt;/STRONG&gt; location used by the Data Sync Agent during initial sync temp-file generation&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Fix type:&lt;/STRONG&gt; Configuration / operational (move TEMP/TMP to a larger drive + restart agent service)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Helpful references&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-data-sync-troubleshoot?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Troubleshoot SQL Data Sync (Microsoft Learn)&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/sync-sql-data-in-large-scale-using-azure-sql-data-sync/" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Sync SQL data in large scale using Azure SQL Data Sync (Microsoft Azure Blog)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-data-sync-agent-overview?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Data Sync Agent for SQL Data Sync (Microsoft Learn)&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 13 Apr 2026 14:31:07 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/fixing-there-is-not-enough-space-on-the-disk-during-azure-data/ba-p/4510866</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-13T14:31:07Z</dc:date>
    </item>
    <item>
      <title>Unlock real-time intelligence with Azure Managed Redis</title>
      <link>https://techcommunity.microsoft.com/t5/azure-managed-redis/unlock-real-time-intelligence-with-azure-managed-redis/ba-p/4503075</link>
      <description>&lt;P&gt;Modern applications are no longer just transactional—they’re intelligent, conversational, and agent-driven. As organizations build copilots and autonomous agents, infrastructure requirements evolve. Applications must reason over context, retrieve knowledge instantly, and operate in real time.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A href="https://azure.microsoft.com/products/managed-redis/" target="_blank" rel="noopener"&gt;Azure Managed Redis&lt;/A&gt;&lt;/STRONG&gt; is a fully managed, enterprise-grade, in-memory data platform powered by Redis Enterprise and operated by Microsoft. It combines sub-millisecond performance, high availability, and advanced capabilities such as vector similarity search and multimodal data support.&lt;/P&gt;
&lt;P&gt;In this post, we’ll explore four key industry use cases for Azure Managed Redis—starting with building AI agents on modern agent frameworks.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;1) Building AI agents on agent frameworks&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Agentic applications require fast memory, contextual retrieval, and low-latency orchestration. Azure Managed Redis provides the real-time data foundation for AI agents built on &lt;A href="https://techcommunity.microsoft.com/blog/azure-managed-redis/supercharging-ai-agents-with-memory-on-azure-managed-redis/4457407" target="_blank" rel="noopener"&gt;Microsoft Agent Framework&lt;/A&gt;, as well as LangChain and Azure AI services like Azure AI Foundry.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why Azure Managed Redis for AI agents?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;AI agents rely on:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Vector similarity search&lt;/STRONG&gt; for retrieval-augmented generation (RAG)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Semantic caching&lt;/STRONG&gt; to avoid repeated LLM calls&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Short-term memory&lt;/STRONG&gt; for conversation state&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Long-term memory&lt;/STRONG&gt; for preferences&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Real-time context updates&lt;/STRONG&gt; from tools and APIs&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Azure Managed Redis enables all of these within a single in-memory data engine.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Common Agent Scenarios&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Customer support agents&lt;/STRONG&gt; retrieving knowledge base articles instantly&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Marketing copilots&lt;/STRONG&gt; generating campaign insights using cached data&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Developer assistants&lt;/STRONG&gt; accessing documentation with vector search&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Operational agents&lt;/STRONG&gt; responding to real-time telemetry&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;With vector indexing and JSON support, Redis can store embeddings alongside structured application data. Agents can retrieve relevant context in milliseconds, dramatically reducing latency and cost while improving response quality.&lt;/P&gt;
&lt;P&gt;For organizations adopting Microsoft’s AI ecosystem, Azure Managed Redis integrates seamlessly with Azure networking, identity, and compute services—making it a natural fit for production-grade agent architectures.&lt;/P&gt;
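&lt;P&gt;To make the retrieval step concrete, the sketch below shows the idea behind vector similarity search in plain Python: score stored embeddings against a query embedding by cosine similarity and return the best match. Azure Managed Redis performs this server-side through its vector index; the tiny three-dimensional embeddings here are illustrative only.&lt;/P&gt;

```python
# Conceptual sketch of the retrieval step behind RAG: given a query embedding,
# return the stored document whose embedding is most similar by cosine
# similarity. In production this search runs server-side in Azure Managed
# Redis; the toy 3-dimensional embeddings below are purely illustrative.
import math

def cosine(a, b):
    """Cosine similarity of two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query_vec, docs):
    """docs maps document text to its embedding; returns the best match."""
    return max(docs, key=lambda d: cosine(query_vec, docs[d]))
```

&lt;P&gt;A knowledge base entry like "reset your password" stored with an embedding close to the query's embedding is returned in one pass, which is exactly the context-retrieval pattern agents rely on.&lt;/P&gt;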
&lt;P&gt;&lt;STRONG&gt;2) High-performance caching for modern applications&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Caching remains one of the most powerful patterns for improving performance and scalability.&lt;/P&gt;
&lt;P&gt;Applications backed by systems such as Azure SQL Database or Azure Database for PostgreSQL benefit from a high-speed caching layer to reduce repeated reads and minimize backend load.&lt;/P&gt;
&lt;P&gt;Azure Managed Redis enables:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Microsecond data retrieval&lt;/LI&gt;
&lt;LI&gt;Reduced database pressure&lt;/LI&gt;
&lt;LI&gt;Improved application throughput&lt;/LI&gt;
&lt;LI&gt;Lower infrastructure costs&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Industry examples&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Retail &amp;amp; e-commerce&lt;/STRONG&gt;: Cache product catalogs and pricing during peak events&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Financial services&lt;/STRONG&gt;: Cache market data and pricing models&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Gaming&lt;/STRONG&gt;: Store leaderboards and player states&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By inserting Redis between your application and your system of record, you gain immediate responsiveness without sacrificing durability.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;3) Real-time analytics and streaming workloads&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For organizations that depend on real-time insight, in-memory processing is critical.&lt;/P&gt;
&lt;P&gt;Azure Managed Redis supports data structures such as Streams, Sorted Sets, JSON, and TimeSeries to enable near real-time analytics.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Scenarios&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Fraud detection&lt;/STRONG&gt; in financial services&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;IoT telemetry ingestion&lt;/STRONG&gt; and analysis&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Live dashboards&lt;/STRONG&gt; for logistics and operations&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Dynamic pricing engines&lt;/STRONG&gt; in retail&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Because processing happens in memory, latency remains predictable—even at scale.&lt;/P&gt;
&lt;P&gt;Streaming data can be ingested into Redis, enriched, scored, and surfaced to applications or dashboards immediately—without waiting for batch systems.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;4) Session management at scale&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;As applications scale horizontally, managing session state becomes increasingly complex.&lt;/P&gt;
&lt;P&gt;Azure Managed Redis provides a centralized, highly available session store that enables:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Shared state across distributed app instances&lt;/LI&gt;
&lt;LI&gt;Automatic expiration with TTL&lt;/LI&gt;
&lt;LI&gt;Seamless failover and high availability&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Common Scenarios&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;E-commerce platforms&lt;/STRONG&gt; preserving shopping carts&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Media platforms&lt;/STRONG&gt; tracking user preferences&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enterprise SaaS applications&lt;/STRONG&gt; managing authentication tokens&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By externalizing session state to Redis, applications become stateless and cloud-native—ready for autoscaling and global distribution.&lt;/P&gt;
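&lt;P&gt;The TTL behavior described above can be sketched in a few lines. This is a conceptual stand-in for what Redis provides natively (comparable to SET with an EX expiry); the injectable clock exists only to keep the example deterministic.&lt;/P&gt;

```python
# Minimal sketch of the TTL semantics a Redis session store provides
# (comparable to SET key value EX seconds). The injectable clock is only
# there to keep the example deterministic; real code would use Redis itself.
import time

class SessionStore:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._data = {}

    def set(self, key, value, ttl_seconds):
        self._data[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        remaining = max(expires_at - self._clock(), 0)
        if remaining:                 # still within its TTL
            return value
        del self._data[key]           # expired: evict, as Redis would
        return None
```

&lt;P&gt;A shopping cart stored with a 30-minute TTL is returned on every request until it expires, after which it simply vanishes; no cleanup job in the application is required.&lt;/P&gt;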
&lt;P&gt;&lt;STRONG&gt;Enterprise-grade foundation for intelligent applications&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Managed Redis delivers:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;99.999% availability SLA&lt;/LI&gt;
&lt;LI&gt;Zone redundancy by default&lt;/LI&gt;
&lt;LI&gt;Automatic patching and updates&lt;/LI&gt;
&lt;LI&gt;Advanced security integration such as &lt;A href="https://learn.microsoft.com/en-us/azure/redis/entra-for-authentication" target="_blank" rel="noopener"&gt;Entra ID authentication&lt;/A&gt; and private endpoints&lt;/LI&gt;
&lt;LI&gt;Native compatibility with modern Azure architectures&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Whether you're building AI agents, modernizing transactional applications, or enabling real-time analytics, Azure Managed Redis provides the in-memory data foundation required for intelligent, responsive systems.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Getting started&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Ready to start building real-time and agent-driven applications with Azure Managed Redis? Explore the resources below to learn more, deploy your first instance, and experiment with AI-powered scenarios.&lt;/P&gt;
&lt;P&gt;Learn more:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/redis/" target="_blank" rel="noopener"&gt;Azure Managed Redis Documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/ai-foundry/" target="_blank" rel="noopener"&gt;Azure AI Foundry Overview&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/redis/" target="_blank" rel="noopener"&gt;Redis Enterprise Capabilities on Azure&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Try it yourself:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://portal.azure.com" target="_blank" rel="noopener"&gt;Create an Azure Managed Redis instance&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/redis/quickstart-create-redis" target="_blank" rel="noopener"&gt;Redis Quickstart (Azure)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/ai-services/openai/" target="_blank" rel="noopener"&gt;Build AI apps with Azure OpenAI + vector search&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Explore samples and architectures:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/redis-developer/langchain-redis" target="_blank" rel="noopener"&gt;Redis + LangChain RAG Sample&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/architecture/" target="_blank" rel="noopener"&gt;Azure AI + Redis Vector Search Reference Architecture&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://redis.io/docs/interact/search-and-query/vector-search/" target="_blank" rel="noopener"&gt;Redis Vector Similarity Search Examples&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/azure-ai-samples" target="_blank" rel="noopener"&gt;Azure AI Foundry Samples&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Connect with the community:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-database-support-blog/bg-p/AzureDatabaseSupportBlog" target="_blank" rel="noopener"&gt;Azure Tech Community – Azure Databases&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/redis/redis" target="_blank" rel="noopener"&gt;Azure Managed Redis GitHub Discussions&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 08 Apr 2026 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-managed-redis/unlock-real-time-intelligence-with-azure-managed-redis/ba-p/4503075</guid>
      <dc:creator>Matthew_Burrows</dc:creator>
      <dc:date>2026-04-08T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Troubleshooting Azure SQL Data Sync Failure: SQL Error 8106 During Bulk Insert</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-azure-sql-data-sync-failure-sql-error-8106/ba-p/4509665</link>
      <description>&lt;P&gt;Azure SQL Data Sync is widely used to maintain consistency across distributed databases in hub–member topologies. However, synchronization may occasionally fail due to schema mismatches between participating databases — even when everything appears correctly configured at first glance.&lt;/P&gt;
&lt;P&gt;In this post, we’ll walk through a real-world troubleshooting scenario involving a Data Sync failure caused by a schema inconsistency related to an &lt;STRONG&gt;IDENTITY column&lt;/STRONG&gt;, and how it was mitigated.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Sample Error:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;&lt;EM&gt;sync_7726d6cb22124c0f901192c434f49106bd618f8ab16343b2adc03250f8367ff4\3953fb7d-1dba-4656-8150-83153d5d019b.batch. See the inner exception for more details. Inner exception: Failed to execute the command 'BulkInsertCommand' for table 'schema.table_name'; the transaction was rolled back. Ensure that the command syntax is correct. Inner exception: SqlException ID: e19b3677-d67e-4c8e-bc49-13d3df61ad0e, Error Code: -2146232060 - SqlError Number:8106, Message: SQL error with code 8106 For more information, provide tracing ID ‘92e76130-f80a-4372-9a48-ec0ede8b0288’ to customer support."&lt;/EM&gt;&amp;nbsp; &amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;Scenario Overview&lt;/H2&gt;
&lt;P&gt;A synchronization operation began failing for a specific table within an Azure SQL Data Sync group. The failure was observed during the sync process when applying changes using a batch file.&lt;/P&gt;
&lt;P&gt;The error surfaced as part of a failed &lt;STRONG&gt;&lt;EM&gt;BulkInsertCommand&lt;/EM&gt;&lt;/STRONG&gt; execution on a synced table, causing the transaction to roll back.&lt;/P&gt;
&lt;P&gt;Further investigation revealed the following SQL exception:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;SqlError Number: 8106&lt;/STRONG&gt;&lt;BR /&gt;&lt;EM&gt;Table does not have the identity property. Cannot perform SET operation.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Initial Troubleshooting Steps&lt;/H2&gt;
&lt;P&gt;Before identifying the root cause, the following actions were taken:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The affected table was removed from the sync group.&lt;/LI&gt;
&lt;LI&gt;A sync operation was triggered.&lt;/LI&gt;
&lt;LI&gt;The table was re-added to the sync group.&lt;/LI&gt;
&lt;LI&gt;Sync was triggered again.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Despite performing these steps, the issue persisted with the same error.&lt;/P&gt;
&lt;P&gt;This indicated that the failure was not related to sync metadata or temporary configuration inconsistencies.&lt;/P&gt;
&lt;H2&gt;Root Cause Analysis&lt;/H2&gt;
&lt;P&gt;After reviewing the table definitions across the sync topology, it was discovered that:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;The synchronized table had an&amp;nbsp;&lt;STRONG&gt;IDENTITY column defined on one side of the topology (Hub or Member)&lt;/STRONG&gt; but &lt;STRONG&gt;not on the other&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;This schema mismatch led to the sync service attempting to apply SET IDENTITY_INSERT operations during the bulk insert phase — which failed on the database where the column lacked the identity property.&lt;/P&gt;
&lt;P&gt;Azure SQL Data Sync relies on consistent schema definitions across all participating databases. Any deviation — particularly involving identity columns — can interrupt data movement operations.&lt;/P&gt;
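&lt;P&gt;A lightweight pre-onboarding check can catch this class of mismatch before it breaks sync. The sketch below compares column metadata between hub and member; in practice the metadata would be read from catalog views such as sys.columns and sys.identity_columns, and the tuple shape used here is an assumption made for the example.&lt;/P&gt;

```python
# Sketch of a schema-consistency check between hub and member before adding
# a table to a sync group. In practice the metadata would come from catalog
# views such as sys.columns and sys.identity_columns; the tuple shape
# (data_type, is_identity, is_nullable) is an assumption for this example.

def schema_mismatches(hub_cols, member_cols):
    """Each argument maps column name to (data_type, is_identity, is_nullable)."""
    problems = []
    for col in sorted(set(hub_cols) | set(member_cols)):
        hub = hub_cols.get(col)       # None if the column is missing on one side
        member = member_cols.get(col)
        if hub != member:
            problems.append((col, hub, member))
    return problems
```

&lt;P&gt;An identity flag that differs between the two sides, exactly the root cause above, surfaces immediately as a mismatch instead of as a failed BulkInsertCommand mid-sync.&lt;/P&gt;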
&lt;H2&gt;Mitigation Approach&lt;/H2&gt;
&lt;P&gt;To resolve the issue, the following corrective steps were applied:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Remove the affected table from the sync group and save the configuration.&lt;/LI&gt;
&lt;LI&gt;Refresh the sync schema.&lt;/LI&gt;
&lt;LI&gt;Recreate the table to include the appropriate IDENTITY property.&lt;/LI&gt;
&lt;LI&gt;Add the corrected table back to the sync group.&lt;/LI&gt;
&lt;LI&gt;Trigger a new sync operation.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;These steps ensured that the table definitions were aligned across all sync participants, allowing the synchronization process to proceed successfully.&lt;/P&gt;
&lt;H2&gt;Best Practices to Avoid Similar Issues&lt;/H2&gt;
&lt;P&gt;To prevent identity-related sync failures in Azure SQL Data Sync:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;✅ Ensure table schemas are &lt;STRONG&gt;identical&lt;/STRONG&gt; across all participating databases before onboarding them into a sync group.&lt;/LI&gt;
&lt;LI&gt;✅ Pay special attention to:
&lt;UL&gt;
&lt;LI&gt;IDENTITY properties&lt;/LI&gt;
&lt;LI&gt;Primary keys&lt;/LI&gt;
&lt;LI&gt;Data types&lt;/LI&gt;
&lt;LI&gt;Nullable constraints&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;✅ Always validate schema consistency when:
&lt;UL&gt;
&lt;LI&gt;Adding new tables to a sync group&lt;/LI&gt;
&lt;LI&gt;Modifying existing table definitions&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Final Thoughts&lt;/H2&gt;
&lt;P&gt;Schema mismatches — especially those involving identity columns — are a common but often overlooked cause of Data Sync failures. By ensuring consistent table definitions across your hub and member databases, you can significantly reduce the risk of synchronization errors and maintain reliable data movement across regions.&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2026 15:55:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-azure-sql-data-sync-failure-sql-error-8106/ba-p/4509665</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-08T15:55:41Z</dc:date>
    </item>
    <item>
      <title>Premium SSD v2 Is Now Generally Available for Azure Database for PostgreSQL</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/premium-ssd-v2-is-now-generally-available-for-azure-database-for/ba-p/4508445</link>
      <description>&lt;P class="lia-align-left"&gt;We are excited to announce the General Availability (GA) of &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/compute-storage/concepts-storage-premium-ssd-v2" target="_blank" rel="noopener"&gt;Premium SSD v2&lt;/A&gt; for Azure Database for PostgreSQL flexible server. With Premium SSD v2, you can achieve&amp;nbsp;&lt;STRONG&gt;up to 4× higher IOPS, significantly lower latency, and better price-performance&lt;/STRONG&gt; for I/O-intensive PostgreSQL workloads. With independent scaling of storage and performance, you can now eliminate overprovisioning and unlock predictable, high-performance PostgreSQL at scale.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;This release is especially impactful for &lt;STRONG&gt;OLTP, SaaS, and high‑concurrency applications&lt;/STRONG&gt; that require consistent performance and reliable scaling under load.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;In this post, we will cover:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-left"&gt;
&lt;LI&gt;&lt;STRONG&gt;Why Premium SSD v2&lt;/STRONG&gt;: Core capabilities such as flexible disk sizing, higher performance, and independent scaling of capacity and I/O.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Premium SSD v2 vs. Premium SSD:&lt;/STRONG&gt; A side‑by‑side overview of what’s new and what’s improved.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Pricing&lt;/STRONG&gt;: Pricing estimates.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance&lt;/STRONG&gt;: Benchmarking results across two workload scenarios.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Migration options:&lt;/STRONG&gt; How to move from Premium SSD to Premium SSD v2 using restore and read‑replica approaches.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Availability and support&lt;/STRONG&gt;: Regional availability, supported features, current limitations, and how to get started.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-15"&gt;Why Premium SSD v2?&lt;/SPAN&gt;&lt;/H1&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL class="lia-align-left"&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-21"&gt;&lt;STRONG&gt;Flexible Disk Size&lt;/STRONG&gt; - &lt;/SPAN&gt;Storage can be provisioned from&lt;STRONG&gt; 32 GiB to 64 TiB&lt;/STRONG&gt; in &lt;STRONG&gt;1 GiB&lt;/STRONG&gt; increments, allowing you to pay only for required capacity without scaling disk size for performance.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;High Performance&lt;/STRONG&gt; -Achieve up to&lt;STRONG&gt; 80,000 IOPS &lt;/STRONG&gt;and &lt;STRONG&gt;1,200 MiB/s &lt;/STRONG&gt;throughput on a single disk, enabling high-throughput OLTP and mixed workloads.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Adapt instantly to workload changes:&amp;nbsp;&lt;/STRONG&gt; With Premium SSD v2, performance is no longer tied to disk size. Independently tune IOPS and throughput without downtime, ensuring your database keeps up with real-time demand.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Free baseline performance:&lt;/STRONG&gt; Premium SSD v2 includes built-in baseline performance at no additional cost. Disks up to&amp;nbsp;&lt;STRONG&gt;399&lt;/STRONG&gt;&lt;STRONG&gt; GiB&lt;/STRONG&gt; automatically include &lt;STRONG&gt;3,000 IOPS and 125&lt;/STRONG&gt;&lt;STRONG&gt; MiB/s&lt;/STRONG&gt;, while disks sized &lt;STRONG&gt;400&lt;/STRONG&gt;&lt;STRONG&gt;&amp;nbsp;GiB and larger&lt;/STRONG&gt; include &lt;STRONG&gt;up to 12,000 IOPS and 500&lt;/STRONG&gt;&lt;STRONG&gt;&amp;nbsp;MiB/s&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
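&lt;P&gt;The free baseline tiers above reduce to a simple lookup. The thresholds in the sketch below are taken directly from the figures in this post; note that the larger tier's figures are the included maximums.&lt;/P&gt;

```python
# The free baseline figures above as a lookup helper. The thresholds
# (3,000 IOPS / 125 MiB/s up to 399 GiB; up to 12,000 IOPS / 500 MiB/s from
# 400 GiB) are taken directly from this post; valid sizes start at 32 GiB.

def baseline_performance(size_gib):
    """Returns (included IOPS, included throughput in MiB/s) for a disk size."""
    if size_gib in range(32, 400):    # disks of 32-399 GiB
        return 3000, 125
    return 12000, 500                 # disks of 400 GiB and larger
```

&lt;P&gt;For example, a 350 GiB disk (the database size used in the benchmark below) falls in the lower tier, while a 400 GiB disk already includes the higher baseline.&lt;/P&gt;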
&lt;DIV class="lia-align-left"&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-15"&gt;Premium SSD v2 vs. Premium SSD: What’s new?&lt;/SPAN&gt;&lt;/H1&gt;
&lt;/DIV&gt;
&lt;P class="lia-clear-both lia-align-left"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-15"&gt;Pricing&lt;/SPAN&gt;&lt;/H1&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Pricing for Premium SSD v2 is similar to Premium SSD, but will vary depending on the storage, IOPS, and bandwidth configuration set for a Premium SSD v2 disk. Pricing information is available on the&amp;nbsp;&lt;A href="https://azure.microsoft.com/pricing/details/postgresql/flexible-server/?msockid=0627d05ab15a6c403105c639b0d06d2c" target="_blank" rel="noopener"&gt;pricing page&lt;/A&gt; or &lt;A href="https://azure.microsoft.com/pricing/calculator/?msockid=0627d05ab15a6c403105c639b0d06d2c" target="_blank" rel="noopener"&gt;pricing calculator&lt;/A&gt;.&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-15"&gt;Performance&lt;/SPAN&gt;&lt;/H1&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Premium SSD v2 is designed for IO‑intensive workloads that require sub‑millisecond disk latencies, high IOPS, and high throughput at a lower cost. To demonstrate the performance impact, we ran &lt;STRONG&gt;pgbench&lt;/STRONG&gt; on Azure Database for PostgreSQL using the test profile below.&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H4&gt;&lt;SPAN class="lia-text-color-15"&gt;Test Setup&lt;/SPAN&gt;&lt;/H4&gt;
&lt;/DIV&gt;
&lt;P class="lia-align-left"&gt;To minimize external variability and ensure a fair comparison:&lt;/P&gt;
&lt;UL class="lia-align-left"&gt;
&lt;LI&gt;Client virtual machines and the database server were deployed in the same availability zone in the East US region.&lt;/LI&gt;
&lt;LI&gt;Compute, region, and availability zones were kept identical.&lt;/LI&gt;
&lt;LI&gt;The only variable changed was the storage tier.&lt;/LI&gt;
&lt;LI&gt;TPC-B benchmark using pgbench with a database size of 350 GiB.&lt;/LI&gt;
&lt;/UL&gt;
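&lt;P class="lia-align-left"&gt;A run matching this profile can be scripted as sketched below. This is an illustrative wrapper, assuming pgbench is installed and the standard PGHOST/PGUSER/PGDATABASE environment variables point at the Flexible Server instance.&lt;/P&gt;

```python
# Illustrative wrapper for one timed, TPC-B-like pgbench run. Assumes
# pgbench is on PATH and connection settings come from PG* env vars.
import subprocess

def pgbench_args(clients, threads, seconds):
    """Build the argument list for a timed pgbench run."""
    return ["pgbench",
            "-c", str(clients),   # number of concurrent clients
            "-j", str(threads),   # number of worker threads
            "-T", str(seconds),   # run duration in seconds
            "-P", "60"]           # progress report every 60 seconds

def run_pgbench(clients, threads, seconds=1200):
    """Run pgbench and return its text report (TPS, latency)."""
    out = subprocess.run(pgbench_args(clients, threads, seconds),
                         capture_output=True, text=True, check=True)
    return out.stdout

# Each 20-minute test phase in this post corresponds to a call like:
# report = run_pgbench(clients=32, threads=32)
```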
&lt;P class="lia-align-left"&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Test Scenario 1: Breaking the IOPS Ceiling with Premium SSD v2&lt;/SPAN&gt;&lt;/H2&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Premium SSD v2 eliminates the traditional storage bottleneck by scaling linearly up to &lt;STRONG&gt;80,000 IOPS&lt;/STRONG&gt;, while Premium SSD plateaus early due to fixed performance limits. To demonstrate this, we configured each storage tier with its maximum supported IOPS and throughput while keeping all other variables constant. Premium SSD v2 achieves up to &lt;STRONG&gt;4x higher IOPS at nearly half the cost&lt;/STRONG&gt;, without requiring large disk sizes.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN lia-align-left"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp; &amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;Note:&lt;/EM&gt; &lt;/STRONG&gt;&lt;EM&gt;Premium SSD requires a 32 TiB disk to reach 20K IOPS, while SSD v2 achieves 80K IOPS even on a 160 GiB disk though we used 1 TiB disk in this test for a bigger scaling factor for pgbench test&lt;/EM&gt;&lt;STRONG&gt;&lt;EM&gt;.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;We ran pgbench across five workload profiles, ranging from 32 to 256 concurrent clients, with each test running for 20 minutes. The results go beyond incremental improvements and highlight a material shift in how applications scale with Premium SSD v2.&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H3&gt;&lt;SPAN class="lia-text-color-15"&gt;Throughput Scaling&lt;/SPAN&gt;&lt;/H3&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;EM&gt;As concurrency increases, Premium SSD quickly reaches its IOPS limits while Premium SSD v2 continues to scale.&lt;/EM&gt;&lt;/P&gt;
&lt;UL class="lia-align-left"&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;EM&gt;At 32 clients&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;: Premium SSD v2 achieved &lt;STRONG&gt;10,562 TPS&lt;/STRONG&gt; vs &lt;STRONG&gt;4,123 TPS&lt;/STRONG&gt; on Premium SSD, a &lt;STRONG&gt;156%&lt;/STRONG&gt; performance improvement.&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;EM&gt;At 256 clients&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;: Premium SSD v2 achieved over &lt;STRONG&gt;43,000 TPS&lt;/STRONG&gt;, a &lt;STRONG&gt;279% improvement&lt;/STRONG&gt; over the &lt;STRONG&gt;11,465 TPS&lt;/STRONG&gt; observed on Premium SSD.&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
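&lt;P class="lia-align-left"&gt;These gains follow directly from the raw TPS numbers; the quick check below recomputes them (using the rounded 43,000 TPS figure, which is why the second result lands slightly below the quoted 279%).&lt;/P&gt;

```python
# Recompute the relative throughput gains quoted above from the raw TPS
# numbers (a sanity check on the arithmetic, not new benchmark data).

def pct_improvement(new, old):
    """Percentage improvement of new over old."""
    return (new - old) / old * 100

# 32 clients: 10,562 TPS (Premium SSD v2) vs 4,123 TPS (Premium SSD)
print(round(pct_improvement(10_562, 4_123)))   # prints 156

# 256 clients, using the rounded 43,000 TPS figure vs 11,465 TPS;
# the quoted 279% comes from the exact measured throughput
print(round(pct_improvement(43_000, 11_465)))  # prints 275
```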
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/STRONG&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H3&gt;&lt;SPAN class="lia-text-color-15"&gt;Latency Stability&lt;/SPAN&gt;&lt;/H3&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Throughput is an indication of how much work is done while latency reflects how quickly users experience it. Premium SSD v2 maintains consistently low latency even as workload increases.&lt;/P&gt;
&lt;UL class="lia-align-left"&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;EM&gt;Reduced Wait Times&lt;/EM&gt;&lt;/STRONG&gt;: &lt;STRONG&gt;61–74%&lt;/STRONG&gt; lower latency across all test phases.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;EM&gt;Consistency under Load&lt;/EM&gt;&lt;/STRONG&gt;: Premium SSD latency increased to &lt;STRONG&gt;22.3 ms&lt;/STRONG&gt;, while Premium SSD v2 maintained a latency of &lt;STRONG&gt;5.8 ms&lt;/STRONG&gt;, remaining stable even under peak load.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H3&gt;&lt;SPAN class="lia-text-color-15"&gt;IOPS Behavior&lt;/SPAN&gt;&lt;/H3&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;The table below illustrates the IOPS behavior observed during benchmarking for both storage tiers.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN lia-align-left"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN lia-align-left"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Dimension&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Premium SSD&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Premium SSD v2&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-center" rowspan="2"&gt;
&lt;P&gt;&lt;STRONG&gt;IOPS &lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;Lower baseline performance; hits limits early&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;~2× higher IOPS&lt;/STRONG&gt; at low concurrency&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Up to 4× higher IOPS at peak load&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;IOPS Plateau&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Throughput stalls at &lt;STRONG&gt;~20k IOPS&lt;/STRONG&gt; from 64 to 256 clients&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Scales from &lt;STRONG&gt;~29k IOPS (32 clients)&lt;/STRONG&gt; to &lt;STRONG&gt;~80k IOPS (256 clients)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Additional Clients&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Adding clients does not increase throughput&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Additional clients continue to drive higher throughput&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Primary Bottleneck&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Storage becomes the bottleneck&lt;/STRONG&gt; early&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No single bottleneck observed&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Scaling Behavior&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Stops scaling early&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;True linear scaling&lt;/STRONG&gt; with workload demand&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Resource Utilization&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Disk saturation leaves CPU and memory underutilized&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Balanced utilization&lt;/STRONG&gt; across IOPS, CPU, and memory&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Key Takeaway&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Storage limits performance before compute is fully used&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Unlocks higher throughput and lower latency&lt;/STRONG&gt; by fully utilizing compute resources&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Test Scenario 2: Better P&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-15"&gt;erformance at same price&lt;/SPAN&gt;&lt;/H2&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;At the same price point, Premium SSD v2 delivers higher throughput and lower latency than Premium SSD without requiring any application changes. To demonstrate this, we ran multiple pgbench tests using two workload configurations 8 clients / 8 threads and 32 clients / 32 threads with each run lasting 20 minutes. Results were consistent across all runs, with Premium SSD v2 consistently outperforming Premium SSD. &lt;EM&gt;Both configurations cost &lt;STRONG&gt;$578/month,&lt;/STRONG&gt; the only difference is storage performance.&lt;/EM&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN lia-align-left"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;img /&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H6&gt;&amp;nbsp;&lt;/H6&gt;
&lt;/DIV&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H6&gt;&amp;nbsp;&lt;/H6&gt;
&lt;/DIV&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H6&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;EM&gt;&lt;STRONG&gt;Results&lt;/STRONG&gt;:&lt;/EM&gt;&lt;/SPAN&gt;&lt;/H6&gt;
&lt;/DIV&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;Moderate concurrency (8 clients)&lt;/STRONG&gt;&lt;BR /&gt;Premium SSD v2 delivered approximately &lt;STRONG&gt;154% higher throughput (Transactions Per Second)&lt;/STRONG&gt; than Premium SSD (&lt;STRONG&gt;1,813 &lt;EM&gt;TPS &lt;/EM&gt;&lt;/STRONG&gt;vs. &lt;STRONG&gt;715 TPS&lt;/STRONG&gt;), while average latency decreased by about &lt;STRONG&gt;60%&lt;/STRONG&gt; (from &lt;STRONG&gt;~11.1 ms&lt;/STRONG&gt; to &lt;STRONG&gt;~4.4 ms&lt;/STRONG&gt;).&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;High concurrency (32 clients)&lt;/STRONG&gt;&lt;BR /&gt;The performance gap increases as concurrency grows, Premium SSD v2 delivered about &lt;STRONG&gt;169% higher throughput&lt;/STRONG&gt; than Premium SSD (&lt;STRONG&gt;3,643 TPS&lt;/STRONG&gt; vs. &lt;STRONG&gt;~1,352 TPS&lt;/STRONG&gt;) and reduced average latency by around &lt;STRONG&gt;67%&lt;/STRONG&gt; (from &lt;STRONG&gt;~26.3 ms&lt;/STRONG&gt; to &lt;STRONG&gt;~8.7 ms&lt;/STRONG&gt;).&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;/DIV&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;/DIV&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H3&gt;&lt;SPAN class="lia-text-color-15"&gt;IOPS Behavior&lt;/SPAN&gt;&lt;/H3&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL class="lia-align-left"&gt;
&lt;LI&gt;In the 8‑client, 8‑thread test, Premium SSD reached its IOPS ceiling early, operating at 100% utilization, while Premium SSD v2 retained approximately 30% headroom under the same workload, delivering &lt;STRONG&gt;8,037 IOPS vs 3,761&lt;/STRONG&gt; IOPS with Premium SSD.&lt;/LI&gt;
&lt;LI&gt;When the workload increased to 32 clients and 32 threads, both tiers approached their IOPS limits; however, Premium SSD v2 sustained a significantly higher performance ceiling, delivering approximately &lt;STRONG&gt;2.75x higher IOPS (13,620 vs. 4,968)&lt;/STRONG&gt; under load.&lt;/LI&gt;
&lt;/UL&gt;
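&lt;P class="lia-align-left"&gt;As with Scenario 1, the percentages and ratios above can be re-derived from the raw measurements:&lt;/P&gt;

```python
# Re-derive the Scenario 2 figures from the raw numbers quoted in this
# post (a sanity check on the arithmetic, not new benchmark data).

def improvement_pct(new, old):
    """Percentage improvement of new over old."""
    return (new - old) / old * 100

# Throughput: 1,813 vs 715 TPS at 8 clients; 3,643 vs 1,352 TPS at 32
print(round(improvement_pct(1_813, 715)))    # prints 154
print(round(improvement_pct(3_643, 1_352)))  # prints 169

# Latency reduction: 11.1 ms down to 4.4 ms, and 26.3 ms down to 8.7 ms
print(round((11.1 - 4.4) / 11.1 * 100))      # prints 60
print(round((26.3 - 8.7) / 26.3 * 100))      # prints 67

# IOPS ratio at 32 clients: 13,620 vs 4,968 (quoted as ~2.75x)
print(round(13_620 / 4_968, 2))              # prints 2.74
```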
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Key Takeaway:&lt;/STRONG&gt; With Premium SSD v2, you do not need to choose between cost and performance you get both. At the same price, applications&amp;nbsp; run faster, scale further, and maintain lower latency without any code changes.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Migrate from Premium SSD to Premium SSD v2&lt;/SPAN&gt;&lt;/H2&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Migrating is simple and fast. You can migrate from Premium SSD to Premium SSD v2 using the two strategies below with minimal downtime. These methods are generally quicker than logical migration strategies, such as exporting and restoring data using pg_dump and pg_restore.&lt;/P&gt;
&lt;UL class="lia-align-left"&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/compute-storage/concepts-storage-migrate-ssd-to-ssd-v2?tabs=portal-restore-custom-point" target="_blank" rel="noopener"&gt;Restore from Premium SSD to Premium SSD v2&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="http://learn.microsoft.com/azure/postgresql/compute-storage/concepts-storage-replicate-ssd-to-ssd-v2" target="_blank" rel="noopener"&gt;Migrate using Read Replicas&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-left"&gt;When migrating from Premium SSD to Premium SSD v2, using &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/read-replica/concepts-read-replicas-virtual-endpoints#using-virtual-endpoints-for-consistent-hostname-during-point-in-time-recovery-pitr-or-snapshot-restore" target="_blank" rel="noopener"&gt;a virtual endpoint&lt;/A&gt; helps keep downtime to a minimum and allows applications to continue operating without requiring configuration changes after the migration.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;After the migration completes, you can stop the original server until your backup requirements are met. Once the required backup retention period has elapsed and all new backups are available on the new server, the original server can be safely deleted.&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-15"&gt;Region Availability &amp;amp; Features Supported&lt;/SPAN&gt;&lt;/H1&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Premium SSD v2 is available in &lt;STRONG&gt;48 regions&lt;/STRONG&gt; worldwide for Azure Database for PostgreSQL – Flexible Server. For the most up‑to‑date information on regional availability, supported features, and current limitations, refer to the official Premium SSD v2 &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/compute-storage/concepts-storage-premium-ssd-v2#supported-features" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.&lt;/P&gt;
&lt;DIV class="lia-align-left"&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Getting Started:&lt;/SPAN&gt;&lt;/H2&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;To learn more, review the official &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/postgresql/compute-storage/concepts-storage" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt; for storage configuration available with Azure Database for PostgreSQL. Your feedback is important to us, have suggestions, ideas, or questions? We would love to hear from you:&amp;nbsp;&lt;A href="https://aka.ms/pgfeedback" target="_blank" rel="noopener"&gt;https://aka.ms/pgfeedback&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2026 17:29:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/premium-ssd-v2-is-now-generally-available-for-azure-database-for/ba-p/4508445</guid>
      <dc:creator>kabharati</dc:creator>
      <dc:date>2026-04-07T17:29:16Z</dc:date>
    </item>
    <item>
      <title>Azure SQL is Deprecating the “No Minimum TLS” (MinTLS None) Configuration</title>
      <link>https://techcommunity.microsoft.com/t5/azure-sql-blog/azure-sql-is-deprecating-the-no-minimum-tls-mintls-none/ba-p/4508933</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;As part of &lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/blog/azuresqlblog/preparing-for-the-deprecation-of-tls-1-0-and-1-1-in-azure-databases/4426512" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;the retirement of&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;lower&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;TLS&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;versions&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;1.0 and 1.1&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and&amp;nbsp;the&amp;nbsp;enforcement of 1.2&amp;nbsp;as the&amp;nbsp;new&amp;nbsp;default&amp;nbsp;minimum&amp;nbsp;TLS version,&amp;nbsp;we&amp;nbsp;will be&amp;nbsp;removing&amp;nbsp;the&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; &lt;STRONG&gt;No Minimum TLS (MinTLS = “None” or "0")&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&amp;nbsp;option&lt;/STRONG&gt;&amp;nbsp;and updating&amp;nbsp;these&amp;nbsp;configurations to TLS 1.2.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;No Minimum TLS &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;allowed Azure SQL Database and Azure SQL Managed Instance resources to accept client connections using any TLS protocol version&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;and&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;unencrypted connections.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Over the past year, Azure has retired TLS 1.0 and 1.1 for all Azure databases, due to known security vulnerabilities in these older protocols. As of August 31, 2025,&amp;nbsp;creating servers configured with versions 1.0 and 1.1 was&amp;nbsp;disallowed&amp;nbsp;and&amp;nbsp;migration&amp;nbsp;to 1.2&amp;nbsp;began. With&amp;nbsp;legacy TLS&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;versions being&amp;nbsp;deprecated&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;TLS 1.2&amp;nbsp;will become the&amp;nbsp;secure default&amp;nbsp;minimum&amp;nbsp;TLS version&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; for new Azure SQL DB and MI configurations and for all client-server connections, rendering the MinTLS = None setting obsolete. As a result, the MinTLS = None configuration option will be deprecated for new servers, and existing servers configured with No Minimum TLS will be upgraded to 1.2. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;What is changing?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:360}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;After July 15, 2026, we will disallow minimum TLS value "None"&lt;/STRONG&gt;,&amp;nbsp;for the creation of new SQL DB and MI resources using PowerShell, Azure CLI, &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;and any other REST based interface&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;This configuration&amp;nbsp;option&amp;nbsp;has&amp;nbsp;already&amp;nbsp;been removed&amp;nbsp;from&amp;nbsp;the Portal&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;during the deprecation of TLS versions 1.0 and 1.1.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559739&amp;quot;:360}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Creating new Azure SQL Database and Managed Instance servers with&lt;STRONG&gt; M&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;inTLS = None &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;(which was previously considered the default)&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&amp;nbsp;will no longer be a supported configuration&lt;/STRONG&gt;.&amp;nbsp;&amp;nbsp;If the server parameter value for the&amp;nbsp;minimum&amp;nbsp;TLS is left&amp;nbsp;blank,&amp;nbsp;it will default to&amp;nbsp;minimum&amp;nbsp;TLS version&amp;nbsp;1.2.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559739&amp;quot;:360}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Attempts to create an Azure SQL server with MinTLS = None will fail&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;with an “Invalid operation” error&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;downgrades to None will be disallowed&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;While&lt;STRONG&gt;&amp;nbsp;attempts&amp;nbsp;to connect with&amp;nbsp;TLS 1.0, 1.1 or&amp;nbsp;unencrypted&amp;nbsp;connections will fail&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;with&amp;nbsp;“Error: 47072/171 on Gateway.”&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Effective date (retirement milestone)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;MinTLS = None (0)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;MinTLS&amp;nbsp;left blank&amp;nbsp;(defaults to supported&amp;nbsp;minimum)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Before 8/31/25&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Any + Unencrypted&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Any + Unencrypted&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;After 8/31/25&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;1.2 +&amp;nbsp;Unencrypted&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;1.2&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;After July 15, 2026&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Invalid operation error&amp;nbsp;(for&amp;nbsp;new server&amp;nbsp;creates)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Downgrades will be disallowed&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;TLS error: 47072/171 (for&amp;nbsp;unencrypted&amp;nbsp;connections)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;1.2&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In summary,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;STRONG&gt;after&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;July 15, 2026&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, Azure SQL Database and Azure SQL Managed Instance&amp;nbsp;will require&amp;nbsp;all client connections to use TLS 1.2 or higher&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;unencrypted connections will be denied&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;.&lt;/STRONG&gt; The minimum TLS version setting will no longer accept the value &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;"None" for new or existing servers&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and servers currently configured with this value will&amp;nbsp;be upgraded to&amp;nbsp;explicitly enforce&amp;nbsp;TLS 1.2.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:360,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Who is&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;impacted&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;For most Azure SQL customers,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;there is no action&amp;nbsp;required&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;.&lt;/STRONG&gt; Most clients already use TLS 1.2 or higher. After July 15, 2026, if your Azure SQL Database or Managed Instance is still configured with No Minimum TLS &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and using 1.0, 1.1 or unencrypted connections&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, it will automatically update to TLS 1.2 to reflect the current minimum protocol enforcement in client-server connectivity.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We do recommend you&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;verify your client applications&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;– especially any older or third-party client drivers – to ensure they can communicate with TLS 1.2 or above. In&amp;nbsp;some&amp;nbsp;rare cases,&amp;nbsp;very old&amp;nbsp;applications, such&amp;nbsp;as an outdated JDBC driver or older .NET framework version,&amp;nbsp;may need an&amp;nbsp;update or need to enable TLS 1.2.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Conclusion&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This deprecation is part of Azure’s broader security strategy to &lt;STRONG&gt;ensure&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;encrypted connections are&amp;nbsp;secure by modern encryption standards&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;. TLS&amp;nbsp;version&amp;nbsp;1.2 is&amp;nbsp;more&amp;nbsp;secure than older versions and is now the industry standard (required by regulations like PCI DSS and HIPAA).&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;This change&amp;nbsp;eliminates&amp;nbsp;the use of&amp;nbsp;unencrypted connections&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;which ensure all database connections meet current security standards.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559739&amp;quot;:360}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;If you’ve already migrated to TLS 1.2 (as most customers have), you will most likely not notice any change, except that the No Minimum TLS option will disappear from configurations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Apr 2026 20:27:03 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-sql-blog/azure-sql-is-deprecating-the-no-minimum-tls-mintls-none/ba-p/4508933</guid>
      <dc:creator>talawren</dc:creator>
      <dc:date>2026-04-06T20:27:03Z</dc:date>
    </item>
    <item>
      <title>Azure SQL Audit Columns missing? Understanding AzureDiagnostics 500-Column Limit and Spill</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-audit-columns-missing-understanding-azurediagnostics/ba-p/4508825</link>
      <description>&lt;H3&gt;👋 Introduction&lt;/H3&gt;
&lt;P&gt;Have you ever written a perfectly valid KQL query against the AzureDiagnostics table — only to find that one or more columns return &lt;STRONG&gt;empty results or don't exist at all&lt;/STRONG&gt;? You double-check the column name, verify the resource filter, and everything looks right — yet the data simply isn't there.&lt;/P&gt;
&lt;P&gt;This is one of the more &lt;STRONG&gt;silent and confusing behaviors&lt;/STRONG&gt; of the AzureDiagnostics table in Azure Monitor Log Analytics, and it doesn't just affect one specific column — &lt;STRONG&gt;it can affect any column&lt;/STRONG&gt;, including commonly queried ones like data_sensitivity_information_s, deadlock_xml_s, query_hash_s, error_message_s, and others.&lt;/P&gt;
&lt;P&gt;In this post, I'll explain exactly why this happens, how to detect it, and how to write resilient KQL queries that reliably retrieve your data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;🔍 The Scenario: Two Queries, Two Different Outcomes&lt;/H3&gt;
&lt;P&gt;Here's a real-world example from a customer case involving Azure SQL audit logs and data sensitivity classification.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Query 1 — The "Non-Working" Query:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics
| where Category == 'SQLSecurityAuditEvents'
| where ResourceId == '/SUBSCRIPTIONS/&amp;lt;SubscriptionID&amp;gt;/RESOURCEGROUPS/&amp;lt;RG&amp;gt;/PROVIDERS/MICROSOFT.SQL/SERVERS/&amp;lt;ServerName&amp;gt;/DATABASES/&amp;lt;DBName&amp;gt;'
| project
    event_time_t,
    statement_s,
    succeeded_s,
    affected_rows_d,
    server_principal_name_s,
    client_ip_s,
    application_name_s,
    additional_information_s,
    data_sensitivity_information_s
| order by event_time_t desc
| take 100&lt;/LI-CODE&gt;
&lt;P&gt;data_sensitivity_information_s returns &lt;STRONG&gt;empty&lt;/STRONG&gt; — or the column doesn't exist at all in the schema.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;As you can see, this is not restricted to one column: another column, affected_rows_d, also failed to resolve.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Query 2 — The "Working" Query&lt;/STRONG&gt;:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| where ResourceId == "/SUBSCRIPTIONS/&amp;lt;SubscriptionID&amp;gt;/RESOURCEGROUPS/&amp;lt;RG&amp;gt;/PROVIDERS/MICROSOFT.SQL/SERVERS/&amp;lt;ServerName&amp;gt;/DATABASES/&amp;lt;DBName&amp;gt;"
| extend DataSensitivityInfo = tostring(parse_json(AdditionalFields).data_sensitivity_information)
| where isnotempty(DataSensitivityInfo)
| project event_time_t, DataSensitivityInfo
| take 100&lt;/LI-CODE&gt;
&lt;P&gt;This one&amp;nbsp;&lt;STRONG&gt;works perfectly&lt;/STRONG&gt;. The difference? It reads from AdditionalFields instead of directly from the column.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;💡 &lt;STRONG&gt;Important:&lt;/STRONG&gt; This is NOT a bug with data_sensitivity_information_s specifically. This behavior can happen to &lt;STRONG&gt;any column&lt;/STRONG&gt; in the AzureDiagnostics table — and understanding why is key to writing reliable audit queries.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;🧩 Root Cause: The AzureDiagnostics 500-Column Limit — A Default Platform Behavior&lt;/H3&gt;
&lt;P&gt;The &lt;STRONG&gt;AzureDiagnostics&lt;/STRONG&gt; table&amp;nbsp;in Azure Monitor Log Analytics is a &lt;STRONG&gt;shared, multi-resource table&lt;/STRONG&gt;. It collects diagnostic data from every Azure resource type that sends logs to your workspace — Azure SQL, Key Vault, App Service, Storage Accounts, Firewalls, and many more. Each resource type contributes its own set of columns, all landing in this single table.&lt;/P&gt;
&lt;P&gt;Because of this design, the AzureDiagnostics table enforces a &lt;STRONG&gt;hard limit of 500 columns per workspace&lt;/STRONG&gt;. This is a &lt;STRONG&gt;default, platform-wide behavior&lt;/STRONG&gt;, not specific to any one Azure service or column. Rather than dropping data once the number of active columns exceeds 500, the platform redirects the overflow, so no data is lost.&lt;/P&gt;
&lt;P&gt;Here's what happens when that limit is reached:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;th&gt;Workspace Column Count&lt;/th&gt;&lt;th&gt;Behavior&lt;/th&gt;&lt;/tr&gt;&lt;tr&gt;&lt;th&gt;&lt;STRONG&gt;&amp;lt; 500 columns&lt;/STRONG&gt;&lt;/th&gt;&lt;td&gt;New columns are created normally (e.g., data_sensitivity_information_s)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;th&gt;&lt;STRONG&gt;≥ 500 columns&lt;/STRONG&gt;&lt;/th&gt;&lt;td&gt;&lt;STRONG&gt;Any new or overflow column data is silently redirected&lt;/STRONG&gt; into a dynamic JSON property bag called AdditionalFields&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;⚠️&amp;nbsp;&lt;STRONG&gt;This redirection is completely silent.&lt;/STRONG&gt; No error is thrown. No warning appears in your query results. The data is &lt;STRONG&gt;NOT lost&lt;/STRONG&gt; — it's simply stored differently, inside AdditionalFields as a JSON key-value pair.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This means that in a busy Log Analytics workspace shared across many Azure resource types, &lt;STRONG&gt;virtually any column&lt;/STRONG&gt; that exceeds the 500-column threshold will have its data moved to AdditionalFields — whether it's data_sensitivity_information_s, deadlock_xml_s, query_hash_s, error_message_s, or any other field your resource type emits.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;⚠️ &lt;EM&gt;"Because AzureDiagnostics is a single shared table, &lt;STRONG&gt;all Azure resource types&lt;/STRONG&gt; sending logs to your workspace — Azure Firewall, Key Vault, App Service, AKS, and others — contribute their columns to the &lt;STRONG&gt;same 500-column pool&lt;/STRONG&gt;. As Microsoft's documentation notes, this is precisely why AzureDiagnostics is&lt;/EM&gt; &lt;EM&gt;&lt;STRONG&gt;'much more susceptible to exceeding the 500-column limit'&lt;/STRONG&gt;&lt;/EM&gt; &lt;EM&gt;compared to other tables — your SQL columns can spill even if your SQL workload alone would never breach the limit."&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
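&lt;P&gt;Before diagnosing, you can see the spill first-hand by inspecting the raw property bag for a few recent audit events (a sketch; adjust the Category filter to your resource type):&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;// Peek at the JSON property bag that receives overflow columns
AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| where isnotempty(AdditionalFields)
| project TimeGenerated, AdditionalFields
| take 5&lt;/LI-CODE&gt;
&lt;P&gt;Each spilled field appears as a key-value pair inside AdditionalFields rather than as its own column.&lt;/P&gt;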
&lt;H3&gt;🔎 Step 1: Diagnose — Has Your Workspace Hit the 500-Column Limit?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Run this query in your Log Analytics workspace to check the current column count:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;// Check total column count in AzureDiagnostics
AzureDiagnostics
| getschema
| summarize
    TotalColumns = count(),
    RemainingCapacity = 500 - count(),
    LimitReached = iff(count() &amp;gt;= 500, "⚠️ YES - Data is going to AdditionalFields", "✅ NO - Under the limit")&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Then check whether a specific column you're looking for exists in the schema at all:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;// Replace the your_column_name with any column you're investigating
AzureDiagnostics 
| getschema 
| where ColumnName == "your_column_name"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If this returns&amp;nbsp;&lt;STRONG&gt;no rows&lt;/STRONG&gt;, that column's data has been redirected to AdditionalFields. The same check applies to &lt;STRONG&gt;any column&lt;/STRONG&gt; you suspect is missing.&lt;/P&gt;
&lt;P&gt;Use this query to &lt;STRONG&gt;discover all fields&lt;/STRONG&gt; currently spilled to AdditionalFields for SQL audit events:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics 
| where Category == "SQLSecurityAuditEvents" 
| where isnotempty(AdditionalFields) 
| extend ParsedAF = parse_json(AdditionalFields) 
| extend AF_Keys = bag_keys(ParsedAF) 
| mv-expand AF_Keys to typeof(string) 
| summarize SpilledColumns = make_set(AF_Keys), Count = count() by Category 
| project Category, Count, SpilledColumns&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;✅ Step 2: Fix — Use AdditionalFields to Retrieve Any Spilled Column&lt;/H3&gt;
&lt;P&gt;Once you've confirmed the spill, update your queries to extract the missing field from AdditionalFields using parse_json. This approach works for &lt;STRONG&gt;any column&lt;/STRONG&gt; that has been redirected.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Recommended Pattern — Resilient to Both States:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| extend 
    // column_ifexists() keeps the query valid even if the direct column was
    // never created in this workspace's schema; coalesce() then falls back
    // to AdditionalFields when the direct value is empty
    DataSensitivityInfo = coalesce(column_ifexists("data_sensitivity_information_s", ""), tostring(parse_json(AdditionalFields).data_sensitivity_information)),
    QueryHash           = coalesce(column_ifexists("query_hash_s", ""),                   tostring(parse_json(AdditionalFields).query_hash)),
    AdditionalInfo      = coalesce(column_ifexists("additional_information_s", ""),       tostring(parse_json(AdditionalFields).additional_information))
| project
    event_time_t,
    statement_s,
    succeeded_s,
    server_principal_name_s,
    client_ip_s,
    application_name_s,
    DataSensitivityInfo,
    QueryHash,
    AdditionalInfo
| order by event_time_t desc
| take 100&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;A shorter version:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureDiagnostics 
| where Category == "SQLSecurityAuditEvents" 
| where ResourceId == "/SUBSCRIPTIONS/&amp;lt;SubscriptionID&amp;gt;/RESOURCEGROUPS/&amp;lt;RG&amp;gt;/PROVIDERS/MICROSOFT.SQL/SERVERS/&amp;lt;ServerName&amp;gt;/DATABASES/&amp;lt;DBName&amp;gt;" 
| extend DataSensitivityInfo = tostring(parse_json(AdditionalFields).data_sensitivity_information) 
| where isnotempty(DataSensitivityInfo) 
| project event_time_t, DataSensitivityInfo 
| take 100&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;💡&amp;nbsp;&lt;STRONG&gt;coalesce()&lt;/STRONG&gt; tries the direct column first. If null, it falls back to AdditionalFields. This makes your query &lt;STRONG&gt;portable across workspaces&lt;/STRONG&gt; — whether at the limit or not.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;📚 References&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/azurediagnostics" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Azure Monitor Logs — AzureDiagnostics table reference&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/sqlsecurityauditevents" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;SQLSecurityAuditEvents table reference&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/auditing-analyze-audit-logs?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Analyze Azure SQL audit logs in Log Analytics&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/platform/resource-logs?tabs=log-analytics#send-to-log-analytics-workspace" target="_blank"&gt;Resource logs in Azure Monitor - Azure Monitor | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
</description>
      <pubDate>Mon, 06 Apr 2026 07:11:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-audit-columns-missing-understanding-azurediagnostics/ba-p/4508825</guid>
      <dc:creator>Ashriti_Jamwal</dc:creator>
      <dc:date>2026-04-06T07:11:20Z</dc:date>
    </item>
    <item>
      <title>Expanding Azure Arc SQL Migration with a New Target: SQL Server on Azure Virtual Machines</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/expanding-azure-arc-sql-migration-with-a-new-target-sql-server/ba-p/4508564</link>
      <description>&lt;P&gt;Modernizing a SQL Server estate is rarely a single-step effort. It typically involves multiple phases, from discovery and assessment to migration and optimization, often spanning on-premises, hybrid, and cloud environments. SQL Server enabled by Azure Arc simplifies this process by bringing all migration steps into a single, cohesive experience in the Azure portal.&lt;/P&gt;
&lt;P&gt;With the March 2026 release, this integrated experience is extended by adding &lt;STRONG&gt;SQL Server on Azure Virtual Machines &lt;/STRONG&gt;as a new migration target in Azure Arc. Arc-enabled SQL Server instances can now be migrated not only to Azure SQL Managed Instance, but also to SQL Server running on Azure infrastructure, using the same unified workflow.&lt;/P&gt;
&lt;H3&gt;Expanding Choice Without Adding Complexity&lt;/H3&gt;
&lt;P&gt;By introducing SQL Server on Azure Virtual Machines as a migration target, Azure Arc now supports a broader range of migration strategies while preserving a single operational model. It becomes possible to choose between Azure SQL Managed Instance and SQL Server on Azure VMs without fragmenting migration tooling or processes.&lt;/P&gt;
&lt;P&gt;The result is a flexible, scalable, and consistent migration experience that supports hybrid environments, reduces operational overhead, and enables modernization at a controlled and predictable pace.&lt;/P&gt;
&lt;H3&gt;One Integrated Migration Journey&lt;/H3&gt;
&lt;P&gt;A core value of SQL Server migration in Azure Arc is that the entire migration lifecycle is managed from one place. Once a SQL Server instance is enabled by Azure Arc, readiness can be assessed, a migration target selected, a migration method chosen, progress monitored, and cutover completed directly in the Azure portal.&lt;/P&gt;
&lt;P&gt;This approach removes the need for disconnected tools or custom orchestration. The only prerequisite remains unchanged: the source SQL Server needs to be enabled by Azure Arc. From there, migration is fully integrated into the Azure Arc SQL experience.&lt;/P&gt;
&lt;H3&gt;A Consistent Experience Across Migration Targets&lt;/H3&gt;
&lt;P&gt;The migration experience for SQL Server on Azure Virtual Machines follows the same model already available for Azure SQL Managed Instance migrations in Azure Arc. The same guided workflow, migration dashboard, and monitoring capabilities are used regardless of the selected target.&lt;/P&gt;
&lt;P&gt;This consistency is intentional. It allows teams to choose the destination that best fits their technical, operational, or regulatory requirements without having to learn a new migration process. Whether migrating to a fully managed PaaS service or to SQL Server on Azure infrastructure, the experience remains predictable and familiar.&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;Backup Log Shipping Migration to SQL Server in Azure VM&lt;/H3&gt;
&lt;P&gt;Migration to SQL Server on Azure Virtual Machines is based on &lt;STRONG&gt;backup and restore&lt;/STRONG&gt;, specifically using &lt;STRONG&gt;log shipping mechanism&lt;/STRONG&gt;. This is a well-established approach for online migrations that minimizes downtime while maintaining control over the cutover window.&lt;/P&gt;
&lt;P&gt;In this model, database backups need to be uploaded from the source SQL Server to Azure Blob Storage. The migration engine restores the initial full backup, followed by ongoing transaction log and differential backups. Azure Blob Storage acts as the intermediary staging location between the source and the target.&lt;/P&gt;
&lt;P&gt;The Azure Blob Storage account and the target SQL Server running on an Azure Virtual Machine must be co-located in the same Azure region&lt;STRONG&gt;.&lt;/STRONG&gt; This regional alignment is required to ensure efficient data transfer, reliable restore operations, and predictable migration performance.&lt;/P&gt;
&lt;P&gt;Within the Azure Arc migration experience, a simple, guided UX is used to select the Azure Blob Storage container that holds the backup files.&lt;/P&gt;
&lt;P&gt;Once the migration job is started, Azure Arc automatically restores the backup files to SQL Server on the Azure VM. As new log backups are uploaded to Blob Storage, they are continuously detected and applied to the target database, keeping it closely synchronized with the source.&lt;/P&gt;
&lt;H3&gt;Controlled Cutover on Your Terms&lt;/H3&gt;
&lt;P&gt;This automated restore process continues until the final cutover is initiated. When the cutover command is issued, Azure Arc applies the final backup to the target SQL Server on the Azure Virtual Machine and completes the migration.&lt;/P&gt;
&lt;P&gt;The target database is then brought online, and applications can be redirected to the new environment. This controlled cutover model allows downtime to be planned precisely, rather than being dictated by long-running restore operations.&lt;/P&gt;
&lt;H3&gt;Getting started&lt;/H3&gt;
&lt;P&gt;To get started, &lt;A href="https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/deployment-options" target="_blank" rel="noopener"&gt;Arc-enable your SQL Server&lt;/A&gt;. Then, in the Azure portal, navigate to your Arc-enabled SQL Server and select Database migration under the Migration menu on the left. For more information, see the &lt;A href="https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/migration-overview" target="_blank" rel="noopener"&gt;SQL Server migration in Azure Arc&lt;/A&gt; documentation.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2026 22:55:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/expanding-azure-arc-sql-migration-with-a-new-target-sql-server/ba-p/4508564</guid>
      <dc:creator>danimir</dc:creator>
      <dc:date>2026-04-03T22:55:58Z</dc:date>
    </item>
    <item>
      <title>Connect to Azure SQL Database using a custom domain name with Microsoft Entra ID authentication</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/connect-to-azure-sql-database-using-a-custom-domain-name-with/ba-p/4507679</link>
      <description>&lt;P&gt;&lt;SPAN data-olk-copy-source="MailCompose"&gt;Many of us might prefer to connect to Azure SQL Server using a custom domain name (like &lt;STRONG&gt;&lt;EM&gt;devsqlserver.mycompany.com&lt;/EM&gt;&lt;/STRONG&gt;) rather than the default fully qualified domain name (&lt;STRONG&gt;&lt;EM&gt;devsqlserver.database.windows.net&lt;/EM&gt;&lt;/STRONG&gt;), often because of application-specific or compliance reasons. This article details how you can accomplish this when logging in with &lt;STRONG&gt;Microsoft Entra ID&lt;/STRONG&gt;&amp;nbsp;(for example, &lt;EM&gt;&lt;U&gt;user@mycompany.com&lt;/U&gt;&lt;/EM&gt;) in Azure SQL Database specific environment. Frequently, users encounter errors similar to the one described below during this process.&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG data-olk-copy-source="MailCompose"&gt;Before you start&lt;/STRONG&gt;: If you use &lt;STRONG&gt;SQL authentication&lt;/STRONG&gt; (SQL username/password), the steps are different. Refer the following article for that scenario:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/blog/azuredbsupport/how-to-use-different-domain-name-to-connect-to-azure-sql-db-server/1217805" target="_blank" rel="noopener"&gt;&lt;U&gt;How to use different domain name to connect to Azure SQL DB Server | Microsoft Community Hub&lt;/U&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;With SQL authentication, you can include the server name in the login (for example, &lt;EM&gt;username@&lt;/EM&gt;&lt;STRONG&gt;&lt;EM&gt;servername&lt;/EM&gt;&lt;/STRONG&gt;). With Microsoft Entra ID authentication, you don’t do that—so your custom DNS name must follow one important rule.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key requirement for Microsoft Entra ID authentication&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In an Azure SQL Database (PaaS) environment, the platform relies on the &lt;STRONG&gt;server name portion &lt;/STRONG&gt;of the Fully Qualified Domain Name (FQDN) to correctly route incoming connection requests to the appropriate logical server.&lt;/P&gt;
&lt;P&gt;When you use a custom DNS name, it is important that the name &lt;STRONG&gt;starts with the exact Azure SQL server name&lt;/STRONG&gt; (the part before &lt;EM&gt;.database.windows.net&lt;/EM&gt;).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why this is required:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Database is a &lt;STRONG&gt;multi-tenant PaaS service&lt;/STRONG&gt;, where multiple logical servers are hosted behind shared infrastructure.&lt;/LI&gt;
&lt;LI&gt;During the connection process (especially with &lt;STRONG&gt;Microsoft Entra ID authentication&lt;/STRONG&gt;), Azure SQL uses the server name extracted from the FQDN to:
&lt;UL&gt;
&lt;LI&gt;Identify the correct logical server&lt;/LI&gt;
&lt;LI&gt;Route the connection internally within the platform&lt;/LI&gt;
&lt;LI&gt;Validate the authentication context&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This behavior aligns with how Azure SQL endpoints are designed and resolved within Microsoft’s managed infrastructure.&lt;/P&gt;
&lt;P&gt;If your custom DNS name doesn’t start with the Azure SQL server name, Azure can’t route the connection to the correct server. Sign-in may fail and you might see error &lt;STRONG&gt;40532&lt;/STRONG&gt; (&lt;EM&gt;as shown above&lt;/EM&gt;). To fix this, change the custom DNS name so it starts with your Azure SQL server name.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Example:&lt;/EM&gt;&amp;nbsp;if your server is &lt;EM&gt;devsqlserver.database.windows.net&lt;/EM&gt;, your custom name must start with &lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;, for example:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;.mycompany.com&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;.contoso.com&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;.mydomain.com&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step-by-step: set up and connect&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL data-editing-info="{&amp;quot;orderedStyleType&amp;quot;:1}"&gt;
&lt;LI&gt;&lt;STRONG&gt;Pick the custom name.&lt;/STRONG&gt;&amp;nbsp;It must start with your server name. Example: use &lt;STRONG&gt;&lt;EM&gt;devsqlserver&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;.mycompany.com&lt;/EM&gt;&amp;nbsp;(not &lt;EM&gt;othername.mycompany.com&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Create DNS records for the custom name.&lt;/STRONG&gt; Create a CNAME or DNS alias to point the custom name to your Azure SQL server endpoint (public) or to the private endpoint IP (private) as per the blog mentioned above.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Check DNS from your computer.&lt;/STRONG&gt;&amp;nbsp;Make sure &lt;EM&gt;devsqlserver.mycompany.com&lt;/EM&gt;&amp;nbsp;resolves to the right address before you try to connect.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Connect with Microsoft Entra ID.&lt;/STRONG&gt;&amp;nbsp;In SSMS/Azure Data Studio, set &lt;EM&gt;Server&lt;/EM&gt;&amp;nbsp;to your custom server name and select a Microsoft Entra ID authentication option (for example, &lt;EM&gt;Universal with MFA&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sign in and connect.&lt;/STRONG&gt;&amp;nbsp;Use your Entra ID (for example, &lt;A href="mailto:user@mycompany.com" target="_blank" rel="noopener"&gt;&lt;U&gt;&lt;EM&gt;user@mycompany.com&lt;/EM&gt;&lt;/U&gt;&lt;/A&gt;).&lt;/LI&gt;
&lt;/OL&gt;
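&lt;P&gt;As an illustrative sketch of step 2 (hostnames are hypothetical), the public-endpoint mapping is a plain CNAME record pointing the custom name at the default endpoint:&lt;/P&gt;
&lt;LI-CODE&gt;; DNS zone for mycompany.com (illustrative)
devsqlserver.mycompany.com.    CNAME    devsqlserver.database.windows.net.&lt;/LI-CODE&gt;
&lt;P&gt;For the private-endpoint scenario, the custom name instead resolves to the private endpoint IP, as covered in the linked article.&lt;/P&gt;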
&lt;P&gt;&lt;EM&gt;Example:&lt;/EM&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Also, when you connect to Azure SQL Database using a custom domain name, you might see the following error:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;“The target principal name is incorrect”&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Example&lt;/EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;This happens because Azure SQL’s SSL/TLS certificate is issued for the default server name (for example,&amp;nbsp;&lt;EM&gt;servername.database.windows.net&lt;/EM&gt;), not for your custom DNS name.&lt;/P&gt;
&lt;P&gt;During the secure connection process, the client validates that the server name you are connecting to matches the name in the certificate. Since the custom domain does not match the certificate, this validation fails, resulting in the error.&lt;/P&gt;
&lt;P&gt;This is expected behavior and is part of standard security checks to prevent connecting to an untrusted or impersonated server.&lt;/P&gt;
&lt;P&gt;To proceed with the connection, you can configure the client to trust the server certificate by:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Setting &lt;STRONG&gt;Trust Server Certificate = True&lt;/STRONG&gt; in the client settings, or&lt;/LI&gt;
&lt;LI&gt;Adding TrustServerCertificate=True in the connection string&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This bypasses the strict name validation and allows the connection to succeed.&lt;/P&gt;
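&lt;P&gt;For illustration only (server, database, and domain names are hypothetical, not from a specific driver sample), a Microsoft.Data.SqlClient-style connection string combining the custom domain name, Microsoft Entra ID authentication, and this setting might look like:&lt;/P&gt;
&lt;LI-CODE&gt;Server=tcp:devsqlserver.mycompany.com,1433;Database=&amp;lt;DBName&amp;gt;;Authentication=Active Directory Interactive;Encrypt=True;TrustServerCertificate=True;&lt;/LI-CODE&gt;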
&lt;P&gt;&lt;EM&gt;Note: &lt;/EM&gt;Please use the latest client drivers (ODBC/JDBC/.NET, etc.). In some old driver versions, the '&lt;EM&gt;TrustServerCertificate&lt;/EM&gt;' setting may not work properly, and you may still face connection issues with the same '&lt;EM&gt;target principal name is incorrect&lt;/EM&gt;' error. So, it is always better to keep drivers updated for smooth connectivity with Azure SQL.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Applies to both public and private endpoints&lt;/STRONG&gt;: This naming requirement and approach work whether you connect over the public endpoint or through a private endpoint for Azure SQL Database scenario, as long as DNS resolution for the custom name is set up correctly for your network.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2026 11:02:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/connect-to-azure-sql-database-using-a-custom-domain-name-with/ba-p/4507679</guid>
      <dc:creator>Sunil-Nair</dc:creator>
      <dc:date>2026-04-03T11:02:36Z</dc:date>
    </item>
    <item>
      <title>Handling Unique Constraint Conflicts in Logical Replication</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/handling-unique-constraint-conflicts-in-logical-replication/ba-p/4507066</link>
      <description>&lt;P&gt;&lt;EM&gt;Authors: Ashutosh Sharma, Senior Software Engineer, and Gauri Kasar, Product Manager&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Logical replication can keep your PostgreSQL environments in sync, helping replicate selected tables with minimal impact on the primary workload. But what happens when your subscriber hits a duplicate key error and replication grinds to a halt? If you’ve seen a &lt;A class="lia-external-url" href="https://www.postgresql.org/docs/current/logical-replication-conflicts.html" target="_blank" rel="noopener"&gt;unique‑constraint violation&lt;/A&gt; while replicating between Azure Database for PostgreSQL servers, you’re not alone. This blog covers common causes, prevention tips, and practical recovery options.&lt;/P&gt;
&lt;P&gt;In PostgreSQL logical replication, the subscriber can fail with a unique-constraint error when it tries to apply a change that would create a duplicate key.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;duplicate key value violates unique constraint&lt;/LI-CODE&gt;
&lt;H2&gt;Understanding why this happens&lt;/H2&gt;
&lt;P&gt;A unique-constraint conflict occurs when an INSERT or UPDATE would create a value that already exists in a column (or set of columns) protected by a UNIQUE constraint, including a PRIMARY KEY. In logical replication, this most commonly happens because of local writes on the subscriber or because the table is subscribed from multiple publishers. These conflicts must be resolved on the subscriber side.&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Local writes on the subscriber&lt;/STRONG&gt;: a row with the same primary key/unique key is inserted on the subscriber before the apply worker processes the corresponding change from the publisher.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Multi-origin / multi-master without conflict-free keys&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;: two origins generate (or replicate) the same unique key. &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Initial data synchronization issues&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;: the subscriber already contains data when the subscription is created with initial copy enabled, resulting in duplicate inserts during the initial table sync.&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;How to avoid this?&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Avoid local writes on subscribed tables (treat the subscriber as read-only for replicated relations).&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Avoid subscribing to the same table from multiple publishers unless you have explicit conflict handling and a conflict-free key design.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
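&lt;P&gt;As one way to enforce the first point (table and role names are placeholders, not from the original setup), you can revoke write privileges on replicated tables from application roles on the subscriber:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- On the subscriber: block local application writes to a replicated table.
-- The revoke applies only to &amp;lt;app_role&amp;gt;; the subscription owner that the
-- apply worker runs as is unaffected, so replication continues normally.
REVOKE INSERT, UPDATE, DELETE ON &amp;lt;replicated_table&amp;gt; FROM &amp;lt;app_role&amp;gt;;&lt;/LI-CODE&gt;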
&lt;P&gt;Enabling server logs can help you identify and troubleshoot unique‑constraint conflicts more effectively. Refer to the official documentation &lt;A href="https://learn.microsoft.com/azure/postgresql/monitor/how-to-configure-and-access-logs?source=recommendations" target="_blank" rel="noopener"&gt;to configure and access PostgreSQL logs&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;How to handle conflicts (recovery options)&amp;nbsp;&lt;/H2&gt;
&lt;H4&gt;Option 1: Delete the conflicting row on the subscriber&amp;nbsp;&lt;/H4&gt;
&lt;P&gt;Use the subscriber logs to identify the key (or row) causing the conflict, then delete the row on the subscriber with a DELETE statement. Resume apply and repeat if more conflicts appear.&amp;nbsp;&lt;/P&gt;
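&lt;P&gt;A minimal sketch of this option (table, key column, and value are placeholders you infer from the log message):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- On the subscriber: remove the locally written row that blocks the apply worker
DELETE FROM &amp;lt;conflicting_table&amp;gt; WHERE &amp;lt;key_column&amp;gt; = &amp;lt;conflicting_value&amp;gt;;&lt;/LI-CODE&gt;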
&lt;H4&gt;Option 2: Use conflict logs and skip the conflicting transaction (PostgreSQL 17+)&amp;nbsp;&lt;/H4&gt;
&lt;P&gt;Starting with PostgreSQL 17, logical replication provides &lt;STRONG&gt;detailed conflict logging&lt;/STRONG&gt; on the subscriber, making it easier to understand &lt;EM&gt;why&lt;/EM&gt; replication stopped and &lt;EM&gt;which transaction&lt;/EM&gt; caused the failure. When a replicated INSERT would violate a &lt;STRONG&gt;non‑deferrable unique constraint&lt;/STRONG&gt; on the subscriber (for example, when a row with the same key already exists), the apply worker detects this as an insert_exists conflict and stops replication. In this case, PostgreSQL logs the conflict along with the &lt;STRONG&gt;transaction’s finish LSN&lt;/STRONG&gt;, which uniquely identifies the failing transaction.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;ERROR: conflict detected on relation "public.t2": conflict=insert_exists
... in transaction 754, finished at 0/034F4090
ALTER SUBSCRIPTION &amp;lt;subscription_name&amp;gt; SKIP (lsn = '0/034F4090');
&lt;/LI-CODE&gt;
&lt;H4&gt;Option 3: Rebuild (re-sync) the table&lt;/H4&gt;
&lt;P&gt;Rebuilding (re‑syncing) a table is the &lt;STRONG&gt;safest and most deterministic way&lt;/STRONG&gt; to resolve logical replication conflicts caused by &lt;STRONG&gt;pre‑existing data differences&lt;/STRONG&gt; or &lt;STRONG&gt;local writes on the subscriber&lt;/STRONG&gt;. This approach is especially useful when a table repeatedly fails with unique‑constraint violations and it is unclear which rows are out of sync.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1 (subscriber):&lt;/STRONG&gt; Disable the subscription.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;ALTER SUBSCRIPTION &amp;lt;subscription_name&amp;gt; DISABLE;&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2 (subscriber):&lt;/STRONG&gt;&amp;nbsp;Remove the local copy of the table so it can be re-copied.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;TRUNCATE TABLE &amp;lt;conflicting_table&amp;gt;;&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3 (publisher):&lt;/STRONG&gt;&amp;nbsp;Ensure the publication will (re)send the table (one approach is to recreate the publication entry for that table).&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;ALTER PUBLICATION &amp;lt;pub_with_conflicting_table&amp;gt; DROP TABLE &amp;lt;conflicting_table&amp;gt;;
CREATE PUBLICATION &amp;lt;pub_with_conflicting_table_rebuild&amp;gt; FOR TABLE &amp;lt;conflicting_table&amp;gt;;
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4 (subscriber):&lt;/STRONG&gt;&amp;nbsp;Create a new subscription (or refresh the existing one) to re-copy the table.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE SUBSCRIPTION &amp;lt;sub_rebuild&amp;gt;
    CONNECTION '&amp;lt;connection_string&amp;gt;'
    PUBLICATION &amp;lt;pub_with_conflicting_table_rebuild&amp;gt;;
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5 (subscriber):&lt;/STRONG&gt;&amp;nbsp;Re-enable the original subscription (if applicable).&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;ALTER SUBSCRIPTION &amp;lt;subscription_name&amp;gt; ENABLE;&lt;/LI-CODE&gt;
&lt;H3&gt;Conclusion&lt;/H3&gt;
&lt;P&gt;In most cases, these conflicts occur due to local changes on the subscriber or differences in data that existed before logical replication was fully synchronized. It is recommended to avoid direct modifications on subscribed tables and ensure that the replication setup is properly planned, especially when working with tables that have unique constraints.&lt;/P&gt;</description>
      <pubDate>Thu, 02 Apr 2026 16:33:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/handling-unique-constraint-conflicts-in-logical-replication/ba-p/4507066</guid>
      <dc:creator>gauri-kasar</dc:creator>
      <dc:date>2026-04-02T16:33:58Z</dc:date>
    </item>
    <item>
      <title>Script to Export All Azure SQL Whitelisted Public IPs Within Your Subscription</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/script-to-export-all-azure-sql-whitelisted-public-ips-within/ba-p/4504024</link>
      <description>&lt;P&gt;Recently, I worked on an interesting customer case where they had a requirement to export all the IP addresses that are whitelisted on the Azure SQL Servers within their subscription.&lt;/P&gt;
&lt;P&gt;Below is the script that helped the customer export all server-level (public) firewall rules (whitelisted IP ranges) for every Azure SQL logical server in a subscription into a single CSV file.&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-AzContext -SubscriptionId "sub_ID"
 
# Ensure context exists
$context = Get-AzContext
if (-not $context) {
    throw "No Azure context found. Please run Connect-AzAccount."
}
 
$subId = $context.Subscription.Id
 
# Get servers safely
$servers = Get-AzSqlServer
if (-not $servers) {
    Write-Output "No SQL Servers found in this subscription."
    return
}
 
$results = New-Object System.Collections.Generic.List[object]
 
foreach ($server in $servers) {
 
    # Skip invalid entries (prevents prompt issue)
    if (-not $server.ServerName -or -not $server.ResourceGroupName) {
        Write-Warning "Skipping invalid server object"
        continue
    }
 
    try {
        Write-Output "Processing: $($server.ServerName)"
 
        $firewallRules = Get-AzSqlServerFirewallRule `
            -ResourceGroupName $server.ResourceGroupName `
            -ServerName $server.ServerName `
            -ErrorAction Stop
 
        foreach ($rule in $firewallRules) {
            $results.Add([PSCustomObject]@{
                SubscriptionId   = $subId
                ResourceGroup    = $server.ResourceGroupName
                SqlServerName    = $server.ServerName
                FirewallRuleName = $rule.FirewallRuleName
                StartIP          = $rule.StartIpAddress
                EndIP            = $rule.EndIpAddress
            })
        }
    }
    catch {
        Write-Warning "Failed for server $($server.ServerName): $($_.Exception.Message)"
 
        $results.Add([PSCustomObject]@{
            SubscriptionId   = $subId
            ResourceGroup    = $server.ResourceGroupName
            SqlServerName    = $server.ServerName
            FirewallRuleName = "ERROR_READING_RULES"
            StartIP          = ""
            EndIP            = ""
        })
    }
}
 
# Export
$path = "./AzureSqlServer_PublicFirewallIPs.csv"
$results | Export-Csv -Path $path -NoTypeInformation
 
Write-Output "Export completed: $path"
&lt;/LI-CODE&gt;
&lt;P&gt;Once the script executes successfully, follow the steps below to download the CSV file.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Click Download in Cloud Shell.&lt;/LI&gt;
&lt;LI&gt;Provide the path as ./AzureSqlServer_PublicFirewallIPs.csv and initiate the download.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Here is an overview of what this PowerShell script does:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;It first sets the target subscription and captures the subscription ID.&lt;/LI&gt;
&lt;LI&gt;It retrieves all Azure SQL logical servers in that subscription.&lt;/LI&gt;
&lt;LI&gt;For each server, it fetches the server‑level firewall rules (whitelisted IP addresses and ranges).&lt;/LI&gt;
&lt;LI&gt;Each firewall rule is recorded with details such as subscription ID, resource group, server name, rule name, start IP, and end IP.&lt;/LI&gt;
&lt;LI&gt;If any server fails during retrieval (for example, due to permission issues), the script logs a warning and continues processing the remaining servers.&lt;/LI&gt;
&lt;LI&gt;Finally, all collected data is exported to a CSV file (AzureSqlServer_PublicFirewallIPs.csv) in the Cloud Shell directory.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This allows you to centrally audit and review all whitelisted IPs across Azure SQL Servers in a subscription without stopping execution due to individual server errors.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2026 10:12:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/script-to-export-all-azure-sql-whitelisted-public-ips-within/ba-p/4504024</guid>
      <dc:creator>Anuradha_A</dc:creator>
      <dc:date>2026-04-03T10:12:19Z</dc:date>
    </item>
    <item>
      <title>Understanding and Monitoring Class 2 Transactions in Azure SQL Database</title>
      <link>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-and-monitoring-class-2-transactions-in-azure-sql/ba-p/4507671</link>
      <description>&lt;P&gt;During a recent customer engagement, we investigated sustained transaction log growth in &lt;STRONG&gt;Azure SQL Database&lt;/STRONG&gt; without obvious large user transactions. The customer was familiar with PostgreSQL diagnostics and wanted to understand how similar insights can be obtained in Azure SQL Database—especially around &lt;STRONG&gt;Class 2 (system) transactions&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;This post summarizes what we discussed, explains &lt;STRONG&gt;why Azure SQL behaves differently&lt;/STRONG&gt;, and walks through &lt;STRONG&gt;practical DMV‑based monitoring patterns&lt;/STRONG&gt; you can use today.&lt;/P&gt;
&lt;H2&gt;Azure SQL Database vs. PostgreSQL: Diagnostic Model Differences&lt;/H2&gt;
&lt;P&gt;One of the first clarifications we made is that &lt;STRONG&gt;Azure SQL Database does not expose diagnostic settings equivalent to PostgreSQL’s system‑level log diagnostics&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Azure SQL Database is a &lt;STRONG&gt;fully managed PaaS service&lt;/STRONG&gt;, and many internal operations—such as checkpoints, version store cleanup, and background maintenance—are abstracted from direct control. Instead of low‑level engine logs, Azure SQL provides &lt;STRONG&gt;cumulative Dynamic Management Views (DMVs)&lt;/STRONG&gt; that expose the &lt;EM&gt;effects&lt;/EM&gt; of system activity rather than the internal implementation.&lt;/P&gt;
&lt;H2&gt;What Are Class 2 Transactions?&lt;/H2&gt;
&lt;P&gt;In Azure SQL Database, &lt;STRONG&gt;Class 2 transactions&lt;/STRONG&gt; generally refer to &lt;STRONG&gt;system‑generated transactions&lt;/STRONG&gt;, not directly initiated by user workloads. These commonly include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Checkpoint operations&lt;/LI&gt;
&lt;LI&gt;Version store cleanup&lt;/LI&gt;
&lt;LI&gt;Ghost record cleanup&lt;/LI&gt;
&lt;LI&gt;Background metadata maintenance&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Although they are not user‑driven, these transactions still generate &lt;STRONG&gt;transaction log activity&lt;/STRONG&gt;, which can be surprising when log usage grows steadily without large user transactions.&lt;/P&gt;
&lt;H2&gt;Key DMVs to Monitor Class 2 Activity&lt;/H2&gt;
&lt;H3&gt;1. Transaction Log Usage&lt;/H3&gt;
&lt;LI-CODE lang="sql"&gt;SELECT *
FROM sys.dm_db_log_space_usage;&lt;/LI-CODE&gt;
&lt;P&gt;This DMV provides:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Total log size&lt;/LI&gt;
&lt;LI&gt;Used log space&lt;/LI&gt;
&lt;LI&gt;Used log percentage&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If log usage grows steadily &lt;STRONG&gt;without&lt;/STRONG&gt; large user transactions, it is often a signal that &lt;STRONG&gt;background system activity&lt;/STRONG&gt; (Class 2 transactions) is responsible.&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;&lt;STRONG&gt;Checkpoint Activity&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang="sql"&gt;SELECT *
FROM sys.dm_exec_requests
WHERE command = 'CHECKPOINT';&lt;/LI-CODE&gt;
&lt;P&gt;Frequent checkpoints result in:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;More frequent log flushes&lt;/LI&gt;
&lt;LI&gt;Increased system log writes&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In Azure SQL Database, &lt;STRONG&gt;checkpoint frequency is system‑managed&lt;/STRONG&gt; and cannot be tuned through configuration or diagnostic settings.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;&lt;STRONG&gt;Version Store Usage (Common Class 2 Contributor)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang="sql"&gt;SELECT *
FROM sys.dm_tran_version_store_space_usage;&lt;/LI-CODE&gt;
&lt;P&gt;High version store usage often leads to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Background cleanup tasks&lt;/LI&gt;
&lt;LI&gt;Increased system transactions&lt;/LI&gt;
&lt;LI&gt;Additional transaction log generation&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This is especially common in workloads using:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Snapshot Isolation&lt;/LI&gt;
&lt;LI&gt;Read Committed Snapshot Isolation (RCSI)&lt;/LI&gt;
&lt;LI&gt;Long‑running transactions or readers&lt;/LI&gt;
&lt;/UL&gt;
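&lt;P&gt;Long‑running snapshot transactions are the usual reason version store cleanup falls behind. One way to spot them is the following DMV query, which lists active snapshot transactions ordered by age:&lt;/P&gt;

```sql
-- Active snapshot transactions, oldest first; long-lived entries here
-- prevent version store cleanup and drive background (Class 2) activity.
SELECT transaction_id, session_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;
```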
&lt;H2&gt;Automating Monitoring with Azure Elastic Jobs&lt;/H2&gt;
&lt;P&gt;Because these DMVs are cumulative, &lt;STRONG&gt;capturing them over time is key&lt;/STRONG&gt;. During the call, we discussed automating data collection using &lt;STRONG&gt;Azure Elastic Jobs&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Elastic Jobs allow you to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Schedule DMV snapshots&lt;/LI&gt;
&lt;LI&gt;Store historical trends&lt;/LI&gt;
&lt;LI&gt;Correlate spikes with workload patterns&lt;/LI&gt;
&lt;/UL&gt;
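&lt;P&gt;As a sketch of the collection step, a job could write one row per run into a history table (dbo.LogSpaceHistory is a hypothetical name):&lt;/P&gt;

```sql
-- Hypothetical history table for periodic log-space snapshots
CREATE TABLE dbo.LogSpaceHistory (
    captured_at        datetime2     NOT NULL DEFAULT sysutcdatetime(),
    total_log_size_mb  decimal(18,2) NOT NULL,
    used_log_space_mb  decimal(18,2) NOT NULL,
    used_log_space_pct decimal(5,2)  NOT NULL
);

-- Elastic Job step body: capture one snapshot per scheduled run
INSERT INTO dbo.LogSpaceHistory (total_log_size_mb, used_log_space_mb, used_log_space_pct)
SELECT total_log_size_in_bytes / 1048576.0,
       used_log_space_in_bytes / 1048576.0,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```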
&lt;P&gt;Microsoft provides full guidance on creating and managing Elastic Jobs using T‑SQL here:&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-jobs-tsql-create-manage?view=azuresql" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}" target="_blank"&gt;Create and manage Elastic Jobs using T‑SQL&lt;/A&gt;&lt;/P&gt;
&lt;H2&gt;Index Management and Class 2 Impact&lt;/H2&gt;
&lt;P&gt;Index maintenance can indirectly increase Class 2 activity by:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Increasing version store usage&lt;/LI&gt;
&lt;LI&gt;Triggering additional background cleanup&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Instead of manual index tuning, we recommended enabling &lt;STRONG&gt;Query Performance Insight – Index recommendations&lt;/STRONG&gt; in the Azure Portal. This allows Azure SQL Database to automatically:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Suggest index creation&lt;/LI&gt;
&lt;LI&gt;Suggest index removal based on real workload patterns&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Why Checkpoints Cannot Be Tuned&lt;/H2&gt;
&lt;P&gt;A common question is whether checkpoint frequency can be reduced to lower system log activity.&lt;/P&gt;
&lt;P&gt;In Azure SQL Database:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Checkpoints are &lt;STRONG&gt;engine‑managed&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;There is &lt;STRONG&gt;no diagnostic or configuration setting&lt;/STRONG&gt; to control their frequency&lt;/LI&gt;
&lt;LI&gt;This design ensures platform stability and predictable recovery behavior&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As a result, monitoring—not tuning—is the correct approach.&lt;/P&gt;
&lt;H2&gt;Practical Takeaways&lt;/H2&gt;
&lt;P&gt;From this case, the key lessons are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Not all transaction log growth is user‑driven&lt;/LI&gt;
&lt;LI&gt;Class 2 transactions are a normal part of Azure SQL Database&lt;/LI&gt;
&lt;LI&gt;DMVs provide the best visibility into system behavior&lt;/LI&gt;
&lt;LI&gt;Trend‑based monitoring is more valuable than point‑in‑time checks&lt;/LI&gt;
&lt;LI&gt;Automation via Elastic Jobs is essential for long‑term analysis&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Class 2 transactions are often misunderstood because they operate quietly in the background. By using the right DMVs and collecting data over time, you can clearly distinguish &lt;STRONG&gt;expected system behavior&lt;/STRONG&gt; from genuine workload issues.&lt;/P&gt;
&lt;P&gt;If you’re coming from PostgreSQL or on‑prem SQL Server, the key mindset shift is this:&lt;BR /&gt;&lt;STRONG&gt;Azure SQL Database exposes outcomes, not internals—and that’s by design.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 11:51:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-database-support-blog/understanding-and-monitoring-class-2-transactions-in-azure-sql/ba-p/4507671</guid>
      <dc:creator>Mohamed_Baioumy_MSFT</dc:creator>
      <dc:date>2026-04-01T11:51:59Z</dc:date>
    </item>
    <item>
      <title>No code left behind: How AI streamlines Oracle-to-PostgreSQL migration</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/no-code-left-behind-how-ai-streamlines-oracle-to-postgresql/ba-p/4506107</link>
      <description>&lt;H5&gt;&lt;EM&gt;Coauthored by Jonathon Frost, Aditya Duvuri and Shriram Muthukrishnan&lt;/EM&gt;&lt;/H5&gt;
&lt;P&gt;More and more organizations are choosing PostgreSQL over proprietary database platforms such as Oracle, and for good reasons. It’s fully open source and community supported with a steady pace of innovation. It’s also preferred by developers for its extensibility and flexibility, often being used for vector data along with relational data to support modern applications and agents. Still, organizations considering a shift from Oracle to PostgreSQL may hesitate due to the complexity that often accompanies an enterprise-scale migration project. Challenges such as incompatible data types, language mismatches, and the risk of breaking critical applications are hard to ignore.&lt;/P&gt;
&lt;P&gt;Recently, the &lt;A href="https://www.youtube.com/watch?v=LdCExagKS4Y" target="_blank" rel="noopener"&gt;Azure Postgres team released a new, free tool for migrations from Oracle to PostgreSQL&lt;/A&gt; that was designed to address these challenges, making the decision to migrate a lot less risky. The new AI-assisted Oracle-to-PostgreSQL migration tool, available in public preview via the &lt;A href="https://marketplace.visualstudio.com/items?itemName=ms-ossdata.vscode-pgsql" target="_blank" rel="noopener"&gt;PostgreSQL extension for Visual Studio Code&lt;/A&gt;, brings automation, validation, and AI-powered migration assistance into a single, user-friendly interface.&lt;/P&gt;
&lt;H2&gt;Meet your new migration assistant&lt;/H2&gt;
&lt;P&gt;The AI-assisted Oracle to PostgreSQL migration tool dramatically simplifies moving off Oracle databases. Accessible through VS Code, the tool uses intelligent automation, powered by GitHub Copilot, to convert Oracle database schemas and PL/SQL code into PostgreSQL-compatible formats. It can analyze an Oracle schema and automatically translate table definitions, data types, and even stored procedures/triggers into PostgreSQL equivalents, speeding up migrations that once took months of manual effort.&lt;/P&gt;
&lt;P&gt;By handling the heavy lifting of schema and code conversion, this tool allows teams to focus on higher-level testing and optimization rather than tedious code rewrites. Users are already reporting that migrations are now faster, safer, and more transparent. The tool is simple, free, and ready for you to use today. Let’s take a look at how it works by covering the following:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Creating the migration project&lt;/LI&gt;
&lt;LI&gt;Setting up the connections&lt;/LI&gt;
&lt;LI&gt;AI-assisted schema migration&lt;/LI&gt;
&lt;LI&gt;Reviewing schema migration report&lt;/LI&gt;
&lt;LI&gt;AI-assisted application migration&lt;/LI&gt;
&lt;LI&gt;Reviewing application migration report&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Step by step with the AI-assisted Oracle-to-PostgreSQL migration tool&lt;/H2&gt;
&lt;H3&gt;Step 1 – Create the project in VS Code&lt;/H3&gt;
&lt;P&gt;Start by installing or updating the PostgreSQL extension for VS Code from the marketplace. Open the PostgreSQL extension panel and click “Create Migration Project.” You’ll name your project, which will create a folder to store all migration artifacts. This folder will house extracted and converted files, organized for version control and collaboration.&lt;/P&gt;
&lt;H3&gt;Step 2 - Connect to your databases and AI model&lt;/H3&gt;
&lt;P&gt;Before beginning the migration, you’ll need to connect to the Oracle databases and select an OpenAI model to leverage during the process. Enter the connection details for your source Oracle database, credentials, and the schema to migrate. Then, select a PostgreSQL scratch database. This temporary environment is used to validate converted DDL in real time. Next, you will be prompted to select an OpenAI model.&lt;/P&gt;
&lt;H3&gt;Step 3 – Begin schema migration&lt;/H3&gt;
&lt;P&gt;Once you’ve set up your connections, click the button to start the schema migration. The tool performs an extraction of all relevant Oracle database objects: tables, views, packages, procedures, and more. The extracted DDL is saved as files in your project folder. This file-based approach functions like a software project, enabling change tracking, collaboration, and source control.&lt;/P&gt;
&lt;H4&gt;Enter - AI assistance&lt;/H4&gt;
&lt;P&gt;This is where the AI takes over. The tool breaks the extracted schema into manageable chunks, and each chunk is processed by a multi-agent orchestration system:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The Migration Specialist Agent converts Oracle DDL to PostgreSQL.&lt;/LI&gt;
&lt;LI&gt;The Migration Critic Agent validates the conversion by executing it in the PostgreSQL scratch database.&lt;/LI&gt;
&lt;LI&gt;The Documentation Agent captures follow up review tasks, metadata, and coding notes for later integration with the application code migration process.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Each chunk is converted, validated, and deployed. If validation fails, the agents auto correct and retry. This self-healing loop ensures high conversion accuracy. Essentially, the tool conducts compile-time validation against a live PostgreSQL instance to catch issues early and reduce downstream surprises.&lt;/P&gt;
&lt;H3&gt;Checkpoint - review the schema migration report&lt;/H3&gt;
&lt;P&gt;Some complex objects, like Oracle packages with intricate PL/SQL, may not convert cleanly on the first pass. These are flagged as “review tasks.” You can invoke GitHub Copilot’s agent mode directly from VS Code to assist. The tool constructs a composite prompt with the original Oracle DDL, the partially converted PostgreSQL version, and any validation errors. This context-rich prompt enables Copilot to generate more accurate fixes.&lt;/P&gt;
&lt;P&gt;With the schema fully converted, you can compare the original Oracle and new PostgreSQL versions side by side. Right-click any object in the project folder and select “Compare File Pair.” You can also use the “Visualize Schema” feature to see a graphical representation of the converted schema. This is ideal for verifying tables, relationships, and constraints.&lt;/P&gt;
&lt;P&gt;Once the schema migration is complete, the tool generates a detailed report that includes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Total number of objects converted&lt;/LI&gt;
&lt;LI&gt;Conversion success rate&lt;/LI&gt;
&lt;LI&gt;PostgreSQL version and extensions used&lt;/LI&gt;
&lt;LI&gt;List of converted objects by type&lt;/LI&gt;
&lt;LI&gt;Any flagged review tasks&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This report serves as both a validation summary and an audit artifact. It helps confirm success and identify any follow-up actions. If you have compliance or change management requirements you need to meet, this documentation is essential.&lt;/P&gt;
&lt;H3&gt;Step 4 – Begin application migration&lt;/H3&gt;
&lt;P&gt;The next phase that the tool supports is updating the application code that interacts with the schema. Migrations often stall when code is overlooked or when traditional tools treat SQL statements as simple strings rather than part of a cohesive system. The AI-assisted Oracle-to-PostgreSQL migration tool’s application conversion feature takes a more holistic, context-aware approach.&lt;/P&gt;
&lt;P&gt;Before starting, you’ll need to configure GitHub Copilot Agent Mode with a capable AI model. Then, navigate to the ‘application_code’ directory typically found in &lt;EM&gt;.github/postgres-migration/&amp;lt;project_name&amp;gt;/application_code&lt;/EM&gt;, and copy your source code into this directory. Keeping your application and converted schema together provides the AI with the structural context it needs to refactor your code accurately. To start the app migration, this time you’d select the "Migrate Application" button. Then select the folder containing your source code and the converted schema.&lt;/P&gt;
&lt;H4&gt;Enter - AI assistance&lt;/H4&gt;
&lt;P&gt;The AI orchestrator will analyze your application’s database interactions against the new Postgres schema and generate a series of transformation tasks. These tasks address SQL dialect changes, data access modifications, and library updates. This process goes beyond a simple search-and-replace operation. The AI queries your migrated PostgreSQL database to gain grounded context of your converted schema, and ensures that things like function signatures, data types, and ORM models are migrated correctly in the application code.&lt;/P&gt;
&lt;H3&gt;Checkpoint - review the app migration report&lt;/H3&gt;
&lt;P&gt;When the AI finishes converting your application, it produces a detailed summary. The report lists which files were migrated, notes any unresolved tasks, and outlines how the changes map to the database schema. This audit-ready document can help DBAs and developers collaborate effectively on follow-up actions and integration testing.&lt;/P&gt;
&lt;P&gt;You can use VS Code’s built-in diff viewer to compare each migrated file with its original. Right-click on a migrated file and select "Compare App Migration File Pairs" to open a side-by-side view. This comparison highlights differences in SQL queries, driver imports, and other code changes, allowing you to verify the updates.&lt;/P&gt;
&lt;H3&gt;Wrapping up the migration project&lt;/H3&gt;
&lt;P&gt;During schema migration, the tool created detailed coding notes summarizing data-type mappings, constraints, and package transformations. These notes are essential for understanding why specific changes were made and for guiding the application conversion. Use them as reference points when validating and refining the AI-generated application code.&lt;/P&gt;
&lt;H3&gt;Destination - PostgreSQL on Azure&lt;/H3&gt;
&lt;P&gt;The AI-assisted Oracle-to-PostgreSQL migration tool brings together automation, validation, and AI to make Oracle-to-PostgreSQL migrations faster, safer, and more transparent. With schema extraction, multi-agent orchestration, app conversion, real-time validation, and detailed reporting, it provides a clear, confident path to modernization so you can start taking advantage of the benefits of open-source Postgres.&lt;/P&gt;
&lt;H4&gt;What’s in store&lt;/H4&gt;
&lt;P&gt;On the other side of a successful migration project to PostgreSQL on Azure, you get:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;First-class support in Azure&lt;/LI&gt;
&lt;LI&gt;Significantly lower total cost of ownership from eliminating license fees and reducing vendor lock-in&lt;/LI&gt;
&lt;LI&gt;Unmatched extensibility, with support for custom data types, procedural languages and powerful extensions like PostGIS, TimescaleDB, pgvector, Azure AI, and DiskANN&lt;/LI&gt;
&lt;LI&gt;Frequent updates and cutting-edge features delivered via a vibrant open-source community&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Whether you’re migrating a single schema or leading a broader replatforming initiative, the AI-assisted Oracle-to-PostgreSQL migration tool helps you move forward with confidence without sacrificing control or visibility.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/postgresql/migrate/oracle-schema-conversions/schema-conversions-overview" target="_blank" rel="noopener"&gt;Learn more about starting your own migration project.&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 31 Mar 2026 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/no-code-left-behind-how-ai-streamlines-oracle-to-postgresql/ba-p/4506107</guid>
      <dc:creator>TeneilLawrence</dc:creator>
      <dc:date>2026-03-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>PostgreSQL Buffer Cache Analysis</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/postgresql-buffer-cache-analysis/ba-p/4501264</link>
      <description>&lt;P&gt;PostgreSQL performance is often dictated not just by query design or indexing strategy, but by how effectively the database leverages memory. At the heart of this memory usage lies shared_buffers—PostgreSQL’s primary buffer cache. Understanding how well this cache is utilized can make the difference between a system that scales smoothly and one that struggles under load.&lt;/P&gt;
&lt;P&gt;In this post, we’ll walk you through a practical, data-driven approach to analyzing PostgreSQL buffer cache behavior using native statistics and the pg_buffercache extension. The goal is to answer a few critical questions:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Is the current shared_buffers configuration sufficient?&lt;/LI&gt;
&lt;LI&gt;Are high-value tables and indexes actually being served from memory?&lt;/LI&gt;
&lt;LI&gt;Is PostgreSQL spending too much time going to disk when it shouldn’t?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By the end, you’ll have a repeatable methodology to assess cache efficiency and make informed tuning decisions.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why Buffer Cache Analysis Matters&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;PostgreSQL relies heavily on its buffer cache to minimize disk I/O. Every time a query needs a data or index page, PostgreSQL first checks whether that page already exists in shared_buffers. If it does, the page is served directly from memory—fast and efficient. If not, PostgreSQL must fetch it from disk (or the OS page cache), which is significantly slower.&lt;/P&gt;
&lt;P&gt;While metrics like query latency and IOPS can tell you that performance is degraded, buffer cache analysis helps explain why. It allows you to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Validate whether frequently accessed objects stay hot in cache&lt;/LI&gt;
&lt;LI&gt;Identify cache pollution caused by large, low-value tables&lt;/LI&gt;
&lt;LI&gt;Determine whether increasing shared_buffers would provide real benefits or just waste memory&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Inspecting Shared Buffers with pg_buffercache&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The pg_buffercache extension provides a real-time view into PostgreSQL’s shared buffers. Unlike cumulative statistics, it shows what is in memory right now—which relations are cached, how many blocks they occupy, and how frequently those buffers are reused.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Enabling the Extension&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;pg_buffercache is not enabled by default and requires superuser privileges:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE EXTENSION pg_buffercache;&lt;/LI-CODE&gt;
&lt;P&gt;Once enabled, you can directly query the contents of shared buffers across databases, tables, and indexes.&lt;/P&gt;
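&lt;P&gt;On PostgreSQL 16 and later, a quick first check is the built-in summary function, which reports overall buffer usage without listing every buffer entry:&lt;/P&gt;

```sql
-- Overall shared-buffer usage at a glance (PostgreSQL 16+)
SELECT buffers_used, buffers_unused, buffers_dirty, usagecount_avg
FROM pg_buffercache_summary();
```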
&lt;P&gt;&lt;STRONG&gt;Analyzing Cache Distribution&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Understanding where your shared buffers are being consumed is the first step toward meaningful tuning.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Database-Level Cache Distribution&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This query shows how shared buffers are distributed across databases in the server:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT CASE
  WHEN c.reldatabase IS NULL THEN ''
  WHEN c.reldatabase = 0 THEN ''
  ELSE d.datname
END AS database,
count(*) AS cached_blocks
FROM pg_buffercache AS c
LEFT JOIN pg_database AS d ON c.reldatabase = d.oid
WHERE datname NOT LIKE 'template%'
GROUP BY d.datname, c.reldatabase
ORDER BY d.datname, c.reldatabase;&lt;/LI-CODE&gt;
&lt;P&gt;This is particularly useful in multi-database environments where one workload may be evicting cache pages needed by another.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Table and Index-Level Cache Consumption&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To understand which relations dominate the cache, the following query breaks buffer usage down by tables and indexes:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT c.relname, c.relkind, count(*)
FROM pg_database AS a, pg_buffercache AS b, pg_class AS c
WHERE c.relfilenode = b.relfilenode
AND b.reldatabase = a.oid
GROUP BY 1, 2
ORDER BY 3 DESC, 1;&lt;/LI-CODE&gt;
&lt;P&gt;This helps answer an important question: Are your most business-critical tables and indexes actually resident in memory, or are they constantly being evicted?&lt;/P&gt;
&lt;P&gt;If large, rarely used tables consume a disproportionate share of buffers, it may indicate cache churn or the need for workload isolation.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Understanding Buffer Usage Count (Hot vs Cold Data)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Each buffer in shared memory carries a usage count, which reflects how frequently it has been accessed before eviction. Higher values indicate hotter data.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT c.relname, c.relkind, usagecount, count(*) AS buffers
FROM pg_database AS a, pg_buffercache AS b, pg_class AS c
WHERE c.relfilenode = b.relfilenode
AND b.reldatabase = a.oid
AND a.datname = current_database()
GROUP BY 1, 2, 3
ORDER BY 3 DESC, 1;&lt;/LI-CODE&gt;
&lt;P&gt;A healthy system typically shows a meaningful number of buffers with higher usage counts (for example, 4–5), indicating frequently reused data that benefits from caching.&lt;/P&gt;
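&lt;P&gt;A server-wide histogram of usage counts offers a quick complementary view (usagecount is NULL for unused buffers, and PostgreSQL caps it at 5):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT usagecount, count(*) AS buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;&lt;/LI-CODE&gt;
&lt;P&gt;A distribution skewed toward 0–1 suggests heavy churn, while a solid population at 4–5 indicates a stable hot working set.&lt;/P&gt;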
&lt;P&gt;&lt;STRONG&gt;Buffer Cache Percentages: Putting Numbers in Context&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Raw buffer counts are useful, but percentages make interpretation easier. The following query shows:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;How much of shared_buffers each relation occupies&lt;/LI&gt;
&lt;LI&gt;What percentage of the relation itself is cached&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;SELECT c.relname,
pg_size_pretty(count(*) * 8192) AS buffered,
round(100.0 * count(*) / (SELECT setting FROM pg_settings WHERE name='shared_buffers')::integer, 1) AS buffers_percent,
round(100.0 * count(*) * 8192 / pg_relation_size(c.oid), 1) AS percent_of_relation
FROM pg_class c
JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
JOIN pg_database d ON b.reldatabase = d.oid AND d.datname = current_database()
GROUP BY c.oid, c.relname
ORDER BY 3 DESC
LIMIT 10;&lt;/LI-CODE&gt;
&lt;P&gt;This view is especially powerful when validating whether performance-critical objects are adequately cached relative to their size.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Complementing Cache Views with I/O Statistics&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;While pg_buffercache shows the current state of memory, I/O statistics reveal long-term trends. PostgreSQL exposes these via pg_statio_user_tables and pg_statio_user_indexes.&lt;/P&gt;
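&lt;P&gt;Keep in mind that these counters are cumulative since the last statistics reset, so they blend old and recent activity. To measure a specific workload window, you can reset the counters for the current database first and then sample the views after the workload runs:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT pg_stat_reset();&lt;/LI-CODE&gt;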
&lt;P&gt;&lt;STRONG&gt;Table Heap Hit Ratios&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT relname,
heap_blks_hit::numeric / (heap_blks_hit + heap_blks_read) AS hit_pct,
heap_blks_hit,
heap_blks_read
FROM pg_catalog.pg_statio_user_tables
WHERE (heap_blks_hit + heap_blks_read) &amp;gt; 0
ORDER BY hit_pct;&lt;/LI-CODE&gt;
&lt;P&gt;Hit ratios close to 1 indicate that table data is largely served from memory rather than disk.&lt;/P&gt;
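&lt;P&gt;For a single headline number, pg_stat_database exposes the same hit and read counters aggregated across the whole database:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT datname,
round(100.0 * blks_hit / (blks_hit + blks_read), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database()
AND (blks_hit + blks_read) &amp;gt; 0;&lt;/LI-CODE&gt;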
&lt;P&gt;&lt;STRONG&gt;Index Hit Ratios&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT relname,
idx_blks_hit::numeric / (idx_blks_hit + idx_blks_read) AS hit_pct,
idx_blks_hit,
idx_blks_read
FROM pg_catalog.pg_statio_user_tables
WHERE (idx_blks_hit + idx_blks_read) &amp;gt; 0
ORDER BY hit_pct;&lt;/LI-CODE&gt;
&lt;P&gt;Poor index hit ratios often point to insufficient cache or inefficient query patterns that bypass indexes.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Including TOAST and Index Reads&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For large objects, TOAST activity can significantly impact I/O. This query provides a more holistic view:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT *,
(heap_blks_read + toast_blks_read + tidx_blks_read) AS total_blocks_read,
(heap_blks_hit + toast_blks_hit + tidx_blks_hit) AS total_blocks_hit
FROM pg_catalog.pg_statio_user_tables;&lt;/LI-CODE&gt;
&lt;P&gt;This helps identify tables whose TOAST data or TOAST indexes are frequently read from disk and may benefit from better caching or query rewrites.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How to Interpret the Results&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;When reviewing buffer cache and I/O metrics, keep the following guidelines in mind:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Validate cache residency of critical objects&lt;/STRONG&gt;: If business-critical tables and indexes occupy a meaningful share of shared_buffers, your cache sizing is likely reasonable.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Correlate buffer data with hit ratios:&lt;/STRONG&gt; High hit ratios in pg_statio_user_tables and pg_statio_user_indexes confirm effective caching. Persistently low ratios may justify increasing shared_buffers.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Analyze usage count distribution:&lt;/STRONG&gt; A healthy number of buffers with higher usage counts indicates hot data benefiting from cache reuse.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Avoid over-tuning&lt;/STRONG&gt;: If most buffers have low usage counts but hit ratios remain high, increasing shared_buffers further may not yield measurable gains.&lt;/LI&gt;
&lt;/UL&gt;
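&lt;P&gt;Before changing anything, it also helps to confirm the current memory-related settings so the percentages above have a baseline to compare against:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size');&lt;/LI-CODE&gt;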
&lt;P&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Buffer cache analysis bridges the gap between theory and reality in PostgreSQL performance tuning. By combining real-time cache inspection with long-term I/O statistics, you gain a clear picture of how memory is actually used—and whether changes to shared_buffers will deliver tangible benefits.&lt;/P&gt;
&lt;P&gt;Rather than tuning memory blindly, this approach lets you optimize with confidence, grounded in data that reflects your real workload.&lt;/P&gt;</description>
      <pubDate>Tue, 31 Mar 2026 12:51:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-blog-for-postgresql/postgresql-buffer-cache-analysis/ba-p/4501264</guid>
      <dc:creator>Gayathri_Paderla</dc:creator>
      <dc:date>2026-03-31T12:51:22Z</dc:date>
    </item>
  </channel>
</rss>

