Replication
Effectively troubleshoot latency in SQL Server Transactional replication: Part 2
Are you struggling with latency in SQL Server Transactional replication? This guide provides clear, step-by-step instructions to troubleshoot and resolve latency issues, along with proven techniques and best practices to improve your SQL Server's performance and keep data replication running smoothly.
Announcing Mirroring for Azure Database for PostgreSQL in Microsoft Fabric for Public Preview
At the first European Microsoft Fabric Community Conference in September 2024, we announced the Private Preview program for Mirroring for Azure Database for PostgreSQL in Microsoft Fabric. Today, in conjunction with the 2025 edition of the Microsoft Fabric Community Conference in Las Vegas, we're thrilled to announce the Public Preview milestone. Customers can now take advantage of friction-free, near-real-time replication from Azure Database for PostgreSQL flexible server to Fabric OneLake in Delta tables, providing a solid foundation for reporting, advanced analytics, AI, and data science on operational data with minimal effort and minimal impact on transactional workloads.

Mirroring is set up from the Fabric Data Warehousing experience: you provide the Azure Database for PostgreSQL flexible server and database connection details, then select what should be mirrored into Fabric, either all data or a chosen set of eligible tables. And, just like that, mirroring is ready to go. Mirroring an Azure Database for PostgreSQL flexible server creates an initial snapshot in Fabric OneLake, after which data is kept in sync in near-real time with every transaction.

How mirroring to Fabric works in Azure Database for PostgreSQL flexible server
Fabric mirroring in Azure Database for PostgreSQL flexible server is based on logical replication and the Change Data Capture (CDC) design pattern. Once Fabric mirroring is established for a database, a background process creates an initial snapshot of the selected tables and ships it to a Fabric OneLake landing zone in Parquet format. A process running in Fabric, known as the replicator, takes these initial snapshot files and creates tables in Delta format in the Mirrored database artifact. Subsequent changes to the selected tables are captured in the source database and shipped to the OneLake landing zone in batches, and those batches are then applied to the corresponding Delta tables in the Mirrored database artifact.

For Fabric mirroring, the CDC pattern is implemented in a proprietary PostgreSQL extension called azure_cdc, which is installed and registered in source databases during the Fabric mirroring enablement workflow. This guided process has a new dedicated page in the Azure portal; it sets up all required prerequisites and offers a simplified experience in which you just select the databases you want to replicate to Fabric OneLake (by default, up to 3). You can read additional details about the server enablement process and other configuration and monitoring options on a dedicated page in the Azure Database for PostgreSQL flexible server product documentation.
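Since Fabric mirroring builds on logical replication, a quick sanity check from psql can confirm that a source database is in the expected state. A minimal sketch, assuming the mirroring enablement workflow has already run (the azure_cdc extension is managed by that workflow, not installed by hand):

    -- Logical decoding must be enabled for CDC-style change capture.
    SHOW wal_level;   -- expected: logical

    -- After enablement, the azure_cdc extension is registered in the source database.
    SELECT extname, extversion FROM pg_extension WHERE extname = 'azure_cdc';

    -- Active replication slots indicate that change streaming is running.
    SELECT slot_name, plugin, active FROM pg_replication_slots;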
Explore advanced analytics and data engineering for PostgreSQL in Microsoft Fabric
Once data is in OneLake, mirrored data in Delta format is ready for immediate consumption across all Fabric experiences and features, such as Power BI with the new Direct Lake mode, Data Warehouse, Data Engineering, Lakehouse, KQL Database, Notebooks, and Copilot, which work instantly. Direct Lake mode is a fast path to load data from the lake, with groundbreaking semantic model capability for analyzing very large data volumes in Power BI. Because Direct Lake mode reads Delta tables directly from OneLake, the Mirrored PostgreSQL database is Power BI ready, along with Copilot capabilities.

Data across any mirrored database (Azure Database for PostgreSQL, Azure SQL DB, Azure Cosmos DB, or Snowflake) can be cross-joined as well, enabling queries across any database, warehouse, or Lakehouse (including shortcuts to AWS S3, ADLS Gen2, and so on). With the same approach, you can also mirror multiple PostgreSQL databases from multiple servers to OneLake, as in a typical SaaS provider scenario where each database belongs to a different tenant, and execute cross-database queries to aggregate and analyze critical business metrics. Data scientists and data engineers can work with the mirrored Azure Database for PostgreSQL data joined with other sources (see this example with Cosmos DB data) that are created as shortcuts in Lakehouse. Read about the possibilities of loading operational databases into OneLake and Microsoft Fabric in the related section of our product documentation.
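For illustration, a cross-database query against two mirrored items from the Fabric SQL analytics endpoint might look like the following sketch. The item, schema, and table names are hypothetical; three-part naming works across items in the same workspace:

    -- Hypothetical mirrored items: pg_sales (PostgreSQL) and cosmos_profiles (Cosmos DB).
    SELECT o.customer_id,
           p.segment,
           SUM(o.order_total) AS revenue
    FROM pg_sales.dbo.orders AS o
    JOIN cosmos_profiles.dbo.customers AS p
        ON p.customer_id = o.customer_id
    GROUP BY o.customer_id, p.segment
    ORDER BY revenue DESC;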
Getting started with Mirroring for Azure Database for PostgreSQL in Fabric
To summarize, Mirroring Azure Database for PostgreSQL in Microsoft Fabric plays a crucial role in enabling analytics and driving insights from operational data by ensuring that the most recent data is available for analysis. Businesses can make decisions based on the current situation rather than relying on outdated information, and keeping the replicated data consistent with the source reduces the risk of discrepancies, leading to more accurate analytics and reliable insights. Fresh data is also essential for predictive analytics and AI models, which need the most recent data to make accurate predictions and decisions. To get started and learn more about Mirroring Azure Database for PostgreSQL flexible server in Microsoft Fabric, including prerequisites, setup, FAQs, current limitations, and a tutorial, click here to read all about it, and stay tuned for more updates and new features coming soon. For more updates on overall Mirroring capabilities in Fabric, read this other blog post, where you will find the latest news.

Lesson Learned #110 Azure SQL Managed Instance and Azure DNS
I got a recent case where a customer wanted to create transactional replication from Azure SQL Managed Instance to another SQL Server inside the same virtual network but in a different subnet, all within Azure. Communication on an Azure network uses Azure DNS to resolve the names of resources inside the network unless a custom DNS is configured. However, there is something very important in the documentation at https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances#azure-provided-name-resolution:

"Azure-provided name resolution
Along with resolution of public DNS names, Azure provides internal name resolution for VMs and role instances that reside within the same virtual network or cloud service. VMs and instances in a cloud service share the same DNS suffix, so the host name alone is sufficient. But in virtual networks deployed using the classic deployment model, different cloud services have different DNS suffixes. In this situation, you need the FQDN to resolve names between different cloud services. In virtual networks deployed using the Azure Resource Manager deployment model, the DNS suffix is consistent across the virtual network, so the FQDN is not needed. DNS names can be assigned to both VMs and network interfaces. Although Azure-provided name resolution does not require any configuration, it is not the appropriate choice for all deployment scenarios, as detailed in the previous table."

In an Azure Resource Manager deployment, then, the FQDN should not be necessary. However, because of the way Azure SQL Managed Instance is deployed, it does not share the same DNS suffix, and we can see connectivity errors. There are some workarounds: use the private IP or the FQDN of the SQL VM. The default DNS suffix is documented in the same article (https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances#name-resolution-that-uses-your-own-dns-server):

"When you are using Azure-provided name resolution, Azure Dynamic Host Configuration Protocol (DHCP) provides an internal DNS suffix (.internal.cloudapp.net) to each VM. This suffix enables host name resolution because the host name records are in the internal.cloudapp.net zone. When you are using your own name resolution solution, this suffix is not supplied to VMs because it interferes with other DNS architectures (like domain-joined scenarios). Instead, Azure provides a non-functioning placeholder (reddog.microsoft.com)."

For this test I created a linked server from the SQL Managed Instance to the SQL VM: connecting to the IP and to VM.internal.cloudapp.net succeeded, but the simple VM name failed. This behavior is also documented at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-custom-dns:

"An Azure SQL Database Managed Instance must be deployed within an Azure virtual network (VNet). There are a few scenarios (for example, db mail, linked servers to other SQL instances in your cloud or hybrid environment) that require private host names to be resolved from the Managed Instance. In this case, you need to configure a custom DNS inside Azure. Since Managed Instance uses the same DNS for its inner workings, you need to configure the custom DNS server so that it can resolve public domain names.
Important
Always use fully-qualified domain names (FQDN) for the mail servers, SQL Servers, and other services even if they are within your private DNS zone.
For example use smtp.contoso.com for mail server because simple smtp will not be properly resolved."
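To illustrate the workaround, here is a minimal T-SQL sketch of a linked server definition that uses the Azure-provided FQDN rather than the bare host name. The VM name and login are hypothetical; with @srvproduct = N'SQL Server', the @server value itself is used as the data source:

    -- Register the linked server under its internal FQDN.
    EXEC sp_addlinkedserver
        @server = N'sqlvm01.internal.cloudapp.net',
        @srvproduct = N'SQL Server';

    -- Map a remote SQL login for the connection.
    EXEC sp_addlinkedsrvlogin
        @rmtsrvname = N'sqlvm01.internal.cloudapp.net',
        @useself = N'false',
        @locallogin = NULL,
        @rmtuser = N'replAdmin',
        @rmtpassword = N'<strong password>';

    -- Quick connectivity test; the simple host name (sqlvm01) would fail here.
    EXEC ('SELECT @@SERVERNAME') AT [sqlvm01.internal.cloudapp.net];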
August 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community, August was an exciting month for Azure Database for PostgreSQL! We have introduced updates that make your experience smarter and more secure. From simplified Entra ID group login to integrations with LangChain and LangGraph, these updates improve access control and enable seamless integration for your AI agents and applications. Stay tuned as we dive deeper into each of these feature updates.

Feature Highlights
- Enhanced Performance recommendations for Azure Advisor - Generally Available
- Entra-ID group login using user credentials - Public Preview
- New Region Buildout: Austria East
- LangChain and LangGraph connector
- Active-Active Replication Guide

Enhanced Performance recommendations for Azure Advisor - Generally Available
Azure Advisor now offers enhanced recommendations to further optimize PostgreSQL server performance, security, and resource management. The key updates are:
- Index Scan Insights: detection of disabled index and index-only scans, with recommendations to improve query efficiency.
- Audit Logging Review: identification of excessive logging via the pgaudit.log parameter, with guidance to reduce overhead.
- Statistics Monitoring: alerts on server statistics resets and suggestions to restore accurate performance tracking.
- Storage Optimization: analysis of storage usage, with recommendations to enable the Storage Autogrow feature for seamless scaling.
- Connection Management: evaluation of workloads for short-lived connections and frequent connectivity errors, with recommendations to implement PgBouncer for efficient connection pooling.
These enhancements provide deeper operational insights and support proactive performance tuning for PostgreSQL workloads. For more details, read the Performance recommendations documentation.

Entra-ID group login using user credentials - Public Preview
The public preview of Entra-ID group login using user credentials is now available. This feature simplifies user management and improves security within Azure Database for PostgreSQL. Administrators and users benefit from a more streamlined process:
- Changes in Entra-ID group memberships are synchronized on a periodic 30-minute basis. This scheduled syncing keeps access controls up to date, simplifying user management and maintaining current permissions.
- Users can log in with their own credentials, streamlining authentication and improving auditing and access management for PostgreSQL environments.
As organizations continue to adopt cloud-native identity solutions, this update represents a major improvement in operational efficiency and security for PostgreSQL database environments. For more details, read the documentation on Entra-ID group login.
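As a rough server-side sketch of how a group principal is typically created on a flexible server (run while connected as a Microsoft Entra administrator; the group name below is hypothetical):

    -- Map an Entra ID group to a PostgreSQL role (arguments: role name, isAdmin, isMfa).
    SELECT * FROM pgaadauth_create_principal('Analytics Readers', false, false);

    -- Verify which non-admin Entra principals are registered on the server.
    SELECT * FROM pgaadauth_list_principals(false);

Members of the group can then sign in with their own identity, using the group name as the role and their own Entra access token as the password.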
New Region Buildout: Austria East
New region rollout! Azure Database for PostgreSQL flexible server is now available in Austria East, giving customers in and around the region lower latency and data residency options. This continues our mission to bring Azure PostgreSQL closer to where you build and run your apps. For the full list of regions, visit: Azure Database for PostgreSQL Regions.

LangChain and LangGraph connector
We are excited to announce that native LangChain and LangGraph support is now available for Azure Database for PostgreSQL! This integration brings native support for Azure Database for PostgreSQL into LangChain and LangGraph workflows, enabling developers to use Azure PostgreSQL as a secure, high-performance vector store and memory store for their AI agents and applications. Specifically, the package adds support for:
- Microsoft Entra ID (formerly Azure AD) authentication when connecting to your Azure Database for PostgreSQL instances, and
- the DiskANN indexing algorithm when indexing your (semantic) vectors.
This package makes it easy to connect LangChain to your Azure-hosted PostgreSQL instances, whether you're building intelligent agents, semantic search, or retrieval-augmented generation (RAG) systems. Read more at https://aka.ms/azpg-agent-frameworks

Active-Active Replication Guide
We have published a new blog article that guides you through setting up active-active replication in Azure Database for PostgreSQL using the pglogical extension. The walkthrough covers the fundamentals of active-active replication, the key prerequisites for enabling bi-directional replication, and step-by-step demo scripts for the setup. It also compares the native and pglogical approaches, helping you choose the right strategy for high availability and multi-region resilience in production environments. Read more about the active-active replication guide on this blog.
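For a flavor of the pglogical setup, here is a minimal sketch of one direction of the pair. Node names and connection strings are hypothetical, the blog's demo scripts are the authoritative version, and a production bi-directional setup also needs conflict-handling and loop-avoidance settings that this sketch omits; the reverse direction repeats the same pattern with the roles swapped:

    -- On server A: register this server as a pglogical node.
    SELECT pglogical.create_node(
        node_name := 'node_a',
        dsn       := 'host=server-a.postgres.database.azure.com dbname=appdb user=repl_admin'
    );

    -- Publish all tables in the public schema via the default replication set.
    SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

    -- On server B (also registered as a node): subscribe to server A.
    SELECT pglogical.create_subscription(
        subscription_name := 'sub_b_from_a',
        provider_dsn      := 'host=server-a.postgres.database.azure.com dbname=appdb user=repl_admin'
    );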
Azure Postgres Learning Bytes 🎓
Enabling Zone-Redundant High Availability for Azure Database for PostgreSQL Flexible Server using APIs. High availability (HA) is essential for ensuring business continuity and minimizing downtime in production workloads. With Zone-Redundant HA, Azure Database for PostgreSQL Flexible Server automatically provisions a standby replica in a different availability zone, providing stronger fault tolerance against zone-level failures. This section shows how to enable Zone-Redundant HA using REST APIs. Using REST APIs gives you clear visibility into the exact requests and responses, making it easier to debug issues and validate configurations as you go. You can use any REST API client tool of your choice, including Postman, Thunder Client (VS Code extension), or curl, to send requests and inspect the results.

Before enabling Zone-Redundant HA, make sure your server is on the General Purpose or Memory Optimized tier and deployed in a region that supports it. If your server is currently using Same-Zone HA, you must first disable it before switching to Zone-Redundant.

Steps to Enable Zone-Redundant HA (a consolidated shell sketch of the whole flow follows these steps):
1. Get an ARM Bearer token. Run this in a terminal where Azure CLI is signed in (or use Azure Cloud Shell):
   az account get-access-token --resource https://management.azure.com --query accessToken -o tsv
2. Paste the token into your API client tool as a header: Authorization: Bearer <token>
3. Inspect the server (GET) using the following URL:
   https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroup}}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{{serverName}}?api-version={{apiVersion}}
   In the JSON response, note:
   - sku.tier → must be 'GeneralPurpose' or 'MemoryOptimized'
   - properties.availabilityZone → '1', '2', or '3' (the availability zone specified when the primary server was created; it is selected by the system if no zone was specified)
   - properties.highAvailability.mode → 'Disabled', 'SameZone', or 'ZoneRedundant'
   - properties.highAvailability.state → e.g. 'NotEnabled', 'CreatingStandby', 'Healthy'
4. If HA is currently SameZone, disable it first with a PATCH to the same URL as step 3, using this request body:
   { "properties": { "highAvailability": { "mode": "Disabled" } } }
5. Enable Zone-Redundant HA with a PATCH to the same URL as step 3, using this request body:
   { "properties": { "highAvailability": { "mode": "ZoneRedundant" } } }
6. Monitor until HA is healthy: re-run the GET from step 3 every 30-60 seconds until you see:
   "highAvailability": { "mode": "ZoneRedundant", "state": "Healthy" }
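As promised above, here is the same flow as a single shell sketch using curl, grounded in the URL and request bodies from the steps. The subscription, resource group, server name, and API version are placeholders you substitute; the grep-based JSON inspection is a convenience for a quick check, not robust parsing:

    #!/usr/bin/env bash
    # Placeholders: fill in your own values.
    SUB="<subscriptionId>"; RG="<resourceGroup>"; SRV="<serverName>"; API="<apiVersion>"
    URL="https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.DBforPostgreSQL/flexibleServers/$SRV?api-version=$API"
    TOKEN=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv)

    # Step 3: inspect the current HA mode and state.
    curl -s -H "Authorization: Bearer $TOKEN" "$URL" | grep -o '"highAvailability":[^}]*}'

    # Step 5: request Zone-Redundant HA.
    curl -s -X PATCH -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
         -d '{"properties":{"highAvailability":{"mode":"ZoneRedundant"}}}' "$URL"

    # Step 6: poll every 30 seconds until the standby reports healthy.
    until curl -s -H "Authorization: Bearer $TOKEN" "$URL" | grep -q '"state": *"Healthy"'; do
        sleep 30
    done
    echo "Zone-Redundant HA is healthy."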
Conclusion
That's all for our August 2025 feature updates! We're committed to making Azure Database for PostgreSQL better with every release, and your feedback plays a key role in shaping what's next.
💬 Have ideas, questions, or suggestions? Share them with us: https://aka.ms/pgfeedback
📢 Want to stay informed about the latest features and best practices? Follow us here for the latest announcements, feature releases, and best practices: Azure Database for PostgreSQL Blog
More exciting improvements are on the way, so stay tuned for what's coming next!

Model Context Protocol (MCP) Server for Azure Database for MySQL
We are excited to introduce a new MCP Server for integrating your AI models with data hosted in Azure Database for MySQL. By using this server, you can effortlessly connect any AI application that supports MCP to your MySQL flexible server (using either MySQL password-based authentication or Microsoft Entra authentication), enabling you to provide your business data as meaningful context in a standardized and secure manner.
SQL Server Replication Management Pack SCOM
Environment:
- Windows Server 2019
- SQL Server Standard 2019 CU15
- SCOM 2019

This server has four SQL instances configured. Three of the SQL instances have replication installed and configured. To monitor the OS, SQL, and SQL replication, we have SCOM configured with the appropriate MPs. Monitoring for SQL and SQL replication is configured for low-level monitoring by following the guide in the Microsoft docs for the MP. On the instance where we have the problem, under Replication\Local Publications I can see two configured bidirectional replications, and under Local Subscriptions there are two subscriptions.

After setting up replication monitoring, I get two errors in SCOM, one for monitoring and the other for discovery:

"Module: Microsoft.SQLServer.Replication.Windows.Module.Discovery.Discoveries.PublicationDiscovery Version: 7.0.28.0 Error(s) was(were) occurred: Message: An error occurred during discovery. ---------- Exception: ---------- Exception Type: System.Data.SqlClient.SqlException Message: Column names in each table must be unique. Column name 'allow_drop' in table '#TranPublTemp_PublicationDiscovery' is specified more than once."

and

"Module: Microsoft.SQLServer.Replication.Windows.Module.Monitoring.Monitors.MonitorDistributorSnapshotFreeSpace Version: 7.0.28.0"
"Error(s): An error occurred while parallel querying SQL: One or more errors occurred. at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification) at Microsoft.SQLServer.Module.Helper.Sql.SqlParallelQueryHelper`1.<>c__DisplayClass7_0.<FillCollectionParallel>b__1(IGrouping`2 parallelQueryParameterGroup) mscorlib"

When I collected the exceptions and captured the statements, it looks like for each database that is part of a replication the MP tries to run "ALTER TABLE #TranPublTemp_GetPublisherInfoQuery ADD allow_drop int;". Since that column is added for the first database, when the MP tries to add it for the second database it fails with the error above. Is this a bug in the monitoring MP? Could it be something in our configuration of the replication? I don't know whether this affects the second alert or not, but I haven't found anything for that error in the logs.
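For what it's worth, the failure pattern described above can be reproduced in a few lines of T-SQL. This is a hypothetical sketch of the pattern, not the MP's actual code: if the discovery query reuses one session-scoped temp table across databases and unconditionally adds the column on every iteration, the second ALTER fails exactly as in the alert:

    -- A session-scoped temp table survives across a per-database loop.
    CREATE TABLE #TranPublTemp_GetPublisherInfoQuery (publication sysname);

    -- First database iteration: succeeds.
    ALTER TABLE #TranPublTemp_GetPublisherInfoQuery ADD allow_drop int;

    -- Second database iteration: fails with
    -- "Column names in each table must be unique. Column name 'allow_drop' ... is specified more than once."
    ALTER TABLE #TranPublTemp_GetPublisherInfoQuery ADD allow_drop int;

    -- A guarded version avoids the error by checking for the column first.
    IF COL_LENGTH('tempdb..#TranPublTemp_GetPublisherInfoQuery', 'allow_drop') IS NULL
        ALTER TABLE #TranPublTemp_GetPublisherInfoQuery ADD allow_drop int;

That the error only appears once more than one replicated database is in scope is consistent with this, which points toward an MP bug rather than a replication configuration issue.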