Azure Database for PostgreSQL Flexible Server
PostgreSQL for the enterprise: scale, secure, simplify
This week at Microsoft Ignite, along with unveiling the new Azure HorizonDB cloud native database service, we're announcing multiple improvements to our fully managed open-source Azure Database for PostgreSQL service, delivering significant advances in performance, analytics, security, and AI-assisted migration. Let's walk through nine of the top Azure Database for PostgreSQL features and improvements we're announcing at Microsoft Ignite 2025.

Feature Highlights

- New Intel and AMD v6-series SKUs (Preview)
- Scale to multiple nodes with Elastic Clusters (GA)
- PostgreSQL 18 (GA)
- Real-time analytics with Fabric Mirroring (GA)
- Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)
- Adding Parquet to the azure_storage extension (GA)
- Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)
- Integrated identity with Entra token-refresh libraries for Python
- AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Performance and scale

New Intel and AMD v6-series SKUs (Preview)

You can run your most demanding Postgres workloads on new Intel and AMD v6 General Purpose and Memory Optimized hardware SKUs, now available in preview. These SKUs deliver massive scale for high-performance OLTP, analytics, and complex queries, with improved price performance and higher memory ceilings. AMD Confidential Compute v6 SKUs are also in public preview, enabling enhanced security for sensitive workloads while leveraging AMD's advanced hardware capabilities.

Here's what you need to know:

- Processors: Powered by 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) and AMD 4th Generation EPYC™ 9004 processors
- Scale: VM size options scale up to 192 vCores and 1.8 TiB of memory
- IO: Using the NVMe protocol for data disk access, IO is parallelized across CPU cores and processed more efficiently, offering significant IO improvements
- Compute tier: Available in our General Purpose and Memory Optimized tiers. You can scale up to these new compute SKUs as needed with minimal downtime.

Here's a quick summary of the v6 SKUs we're launching, with links to more information:

Processor | SKU     | Max vCores | Max memory
Intel     | Ddsv6   | 192        | 768 GiB
Intel     | Edsv6   | 192        | 1.8 TiB
AMD       | Dadsv6  | 96         | 384 GiB
AMD       | Eadsv6  | 96         | 672 GiB
AMD       | DCadsv6 | 96         | 386 GiB
AMD       | ECadsv6 | 96         | 672 GiB

Scale to multiple nodes with Elastic Clusters (GA)

Elastic clusters are now generally available in Azure Database for PostgreSQL. Built on Citus open-source technology, elastic clusters bring the horizontal scaling of a distributed database to the enterprise features of Azure Database for PostgreSQL. Elastic clusters enable horizontal scaling of databases running across multiple server nodes in a "shared nothing" architecture. This is ideal for workloads with high-throughput and storage-intensive demands such as multi-tenant SaaS and IoT-based workloads. Elastic clusters come with all the enterprise-level capabilities that organizations rely upon in Azure Database for PostgreSQL, including high availability, read replicas, private networking, integrated security, and connection pooling. Built-in sharding support at both row and schema level enables you to distribute your data across a cluster of compute resources and run queries in parallel, dramatically increasing throughput and capacity.
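To make the row-level sharding model concrete, here is a minimal hedged sketch using the open-source Citus API that elastic clusters build on; the table, columns, and shard key below are hypothetical, and create_distributed_table is the standard Citus function for distributing a table.

-- Minimal sketch of row-level sharding with the Citus API (hypothetical schema).
CREATE TABLE events (
    tenant_id bigint NOT NULL,
    event_id  bigint NOT NULL,
    payload   jsonb,
    PRIMARY KEY (tenant_id, event_id)
);

-- Distribute the table by tenant: single-tenant queries route to one node,
-- while cross-tenant analytical queries fan out across the cluster in parallel.
SELECT create_distributed_table('events', 'tenant_id');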
Learn more: Elastic clusters in Azure Database for PostgreSQL

PostgreSQL 18 (GA)

When PostgreSQL 18 was released in September, we made a preview available on Azure on the same day. Now we're announcing that PostgreSQL 18 is generally available on Azure Database for PostgreSQL, with full Major Version Upgrade (MVU) support, marking our fastest-ever turnaround from open-source release to managed service general availability. This release reinforces our commitment to delivering the latest PostgreSQL community innovations to Azure customers, so you can adopt the latest features, performance improvements, and security enhancements on a fully managed, production-ready platform without delay.

Note: MVU to PG18 is currently available in the North Central US and West Central US regions, with additional regions being enabled over the next few weeks.

Now you can:

- Deploy PostgreSQL 18 in all public Azure regions.
- Perform in-place major version upgrades to PG18 with no endpoint or connection string changes.
- Use Microsoft Entra ID authentication for secure, centralized identity management in all PG versions.
- Enable Query Store and Index Tuning for built-in performance insights and automated optimization.
- Leverage the 90+ Postgres extensions supported by Azure Database for PostgreSQL.

PostgreSQL 18 also delivers major improvements under the hood, ranging from asynchronous I/O and enhanced vacuuming to improved indexing and partitioning, ensuring Azure continues to lead as the most performant, secure, and developer-friendly managed PostgreSQL service in the cloud.

Learn more:
PostgreSQL 18 open-source release announcement
Supported versions of PostgreSQL in Azure Database for PostgreSQL

Analytics

Real-time analytics with Fabric Mirroring (GA)

With Fabric mirroring in Azure Database for PostgreSQL, now generally available, you can run your Microsoft Fabric analytical workloads and capabilities on near-real-time replicated data, without impacting the performance of your production PostgreSQL databases, and at no extra cost. Mirroring in Fabric connects your operational and analytical platforms with continuous data replication from PostgreSQL to Fabric. Transactions are mirrored to Fabric in near real time, enabling advanced analytics, machine learning, and reporting on live data sets without waiting for traditional batch ETL processes to complete. This approach eliminates the overhead of custom integrations or data pipelines. Production PostgreSQL servers can run mission-critical transactional workloads without being affected by surges in analytical queries and reporting. With this GA announcement, Fabric mirroring is ready for production workloads, with secure networking (VNET integration and Private Endpoints supported), Entra ID authentication for centralized identity management, and support for high-availability-enabled servers, ensuring business continuity for mirroring sessions.

Learn more: Mirroring Azure Database for PostgreSQL flexible server

Adding Parquet support to the azure_storage extension (GA)

In addition to mirroring data directly to Microsoft Fabric, there are many other scenarios that require moving operational data into data lakes for analytics or archival. Building and maintaining ETL pipelines for this can be expensive and time-consuming. Azure Database for PostgreSQL now natively supports Parquet via the azure_storage extension, enabling direct SQL-based read/write access to Parquet files in Azure Storage. This makes it easy to import and export data in Postgres without external tools or scripts. Parquet is a popular columnar storage format often used in big data and analytics environments (like Spark and Azure Data Lake) because of its efficient compression and query performance for large datasets. Now you can use the azure_storage extension to skip an entire step: just issue a SQL command to write to and query from a Parquet file in Azure Blob Storage.
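As a rough sketch of what that single SQL command can look like (hedged: the storage account, container, file, and column list below are hypothetical, and the exact Parquet-related options are worth confirming in the extension documentation):

-- Hedged sketch: read a Parquet blob directly from SQL with the azure_storage
-- extension. blob_get is the extension's read entry point; names are hypothetical.
SELECT *
FROM azure_storage.blob_get(
        'mystorageaccount',        -- storage account (hypothetical)
        'datalake',                -- container (hypothetical)
        'sales/orders.parquet'     -- Parquet blob path (hypothetical)
     ) AS orders (order_id int, customer_id int, total numeric);

-- The matching export path is azure_storage.blob_put, which writes query results
-- back to a blob; check the documentation for Parquet-specific options.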
Learn more: Azure storage extension in Azure Database for PostgreSQL

Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)

DuckDB's columnar engine excels at high-performance scans, aggregations, and joins over large tables, making it particularly well-suited for analytical queries. The pg_duckdb extension, now available in preview for Azure Database for PostgreSQL, combines PostgreSQL's transactional performance and reliability with DuckDB's analytical speed for large datasets. Together, pg_duckdb and PostgreSQL are an ideal combination for hybrid OLTP + OLAP environments where you need to run analytical queries directly in PostgreSQL without sacrificing performance.

To see the pg_duckdb extension in action, check out this demo video: https://aka.ms/pg_duckdb
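As a hedged sketch of the hybrid pattern: duckdb.force_execution is the pg_duckdb setting that routes a query through DuckDB's engine, while the table and columns below are hypothetical.

-- Route an analytical query through DuckDB's columnar engine.
SET duckdb.force_execution = true;

-- A typical aggregation over a large (hypothetical) orders table.
SELECT customer_id,
       date_trunc('month', ordered_at) AS month,
       sum(total) AS revenue
FROM orders
GROUP BY customer_id, date_trunc('month', ordered_at)
ORDER BY revenue DESC
LIMIT 10;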
Learn more: pg_duckdb – PostgreSQL extension for DuckDB

Security

Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)

Operating in a regulated industry such as finance, healthcare, or government means meeting compliance requirements like HIPAA, PCI-DSS, and GDPR, which include protections for personal data as well as rules for password complexity, expiration, and reuse. This week the anon extension, previously in preview, becomes generally available for Azure Database for PostgreSQL, adding support for dynamic and static masking, anonymized exports, randomization, and many other advanced masking techniques. We've also added GA support for the credcheck extension, which provides credential checks for usernames and password complexity, including during user creation, password change, and user renaming. This is particularly useful if your application is not using Entra ID and needs to rely on native PostgreSQL users and passwords. If you need to store and query IP ranges for scenarios like auditing, compliance, access control lists, intrusion detection, and threat intelligence, another useful extension announced this week is the ip4r extension, which provides a set of data types for IPv4 and IPv6 network addresses.

Learn more:
PostgreSQL Anonymizer
credcheck – PostgreSQL username/password checks
IP4R - IPv4/v6 and IPv4/v6 range index type for PostgreSQL

The Azure team maintains an active pipeline of new PostgreSQL extensions to onboard and upgrade in Azure Database for PostgreSQL. For example, another important extension upgraded this week is pg_squeeze, which removes unused space from a table. The updated 1.9.1 version adds important stability improvements.

Learn more: List of extensions and modules by name

Integrated identity with Entra token-refresh libraries for Python

In a modern cloud-connected enterprise, identity becomes the most important security perimeter. Azure Database for PostgreSQL is the only managed PostgreSQL service with full Entra integration, but coding applications to take care of Entra token refresh can be complex. This week we're announcing a new Python library to simplify Entra token refresh. The library automatically refreshes authentication tokens before they expire, eliminating manual token handling and reducing connection failures. The new python_azure_pg_auth library provides seamless Microsoft Entra ID authentication and supports the latest psycopg and SQLAlchemy drivers with automatic token acquisition, validation, and refresh. Built-in connection pooling is available for both synchronous and asynchronous workloads. Designed for cross-platform use (Windows, Linux, macOS), the package features clean architecture and flexible installation options for different driver combinations. This is our first milestone in a roadmap to add token refresh for additional programming languages and frameworks.

Learn more, with code samples to get started: https://aka.ms/python-azure-pg-auth

Migration

AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Database migration is a challenging and time-consuming process, with multiple manual steps requiring schema- and application-specific knowledge. The growing popularity, maturity, and low cost of PostgreSQL have led to healthy demand for migration tooling that simplifies these steps. The new AI-assisted Oracle Migration Tool preview announced this week greatly simplifies moving from Oracle databases to Azure Database for PostgreSQL. Available in the VS Code PostgreSQL extension, the new migration tool combines GitHub Copilot, Azure OpenAI, and custom Language Model Tools to convert Oracle schemas, database code, and client applications into PostgreSQL-compatible formats. Unlike traditional migration tools that rely on static rules, Azure's approach leverages Large Language Models (LLMs) and validates every change against a running Azure Database for PostgreSQL instance. The system not only translates syntax but also detects and fixes errors through iterative re-compilation, flagging any items that require human review. Application codebases built on Spring Boot and other popular frameworks are refactored and converted. The system also understands context by querying the target Postgres instance for its version and installed extensions, and it can even invoke capabilities from other VS Code extensions to validate the converted code. The new AI-assisted workflow reduces risk, eliminates significant manual effort, and enables faster modernization while lowering costs.

Learn more: https://aka.ms/pg-migration-tooling

Be sure to follow the Microsoft Blog for PostgreSQL for regular updates from the Postgres on Azure team at Microsoft. We publish monthly recaps about new features in Azure Database for PostgreSQL, as well as an annual blog about what's new in Postgres at Microsoft.

Upgrade Azure Database for PostgreSQL with Minimal Downtime Using Logical Replication
Azure Database for PostgreSQL provides a seamless Major Version Upgrade (MVU) experience for your servers, which is important for security, performance, and feature enhancements. For production workloads, minimizing downtime during this upgrade is essential to maintain business continuity. This blog explores a practical approach to performing an MVU with minimal downtime and maximum reliability using logical replication and virtual endpoints. Upgrading without disrupting your applications is critical. With this method, you can choose between two approaches:

- Approach 1: Configure two servers where the publisher runs on the lower version and the subscriber on the higher version, perform the MVU, and then switch over using virtual endpoints. The time taken to restore the server depends on the workload on your primary server.
- Approach 2: Maintain two servers on different versions, use pg_dump and pg_restore to populate the new production server with data, and perform a seamless switchover using virtual endpoints.

To enable logical replication on a table, it must have one of the following:

- A primary key, or
- A unique index

Comparing the two approaches:

- Approach 1 restores the instance on the same version; Approach 2 creates and restores the instance on a higher version.
- Approach 1 gives a faster restore with Point-in-Time Recovery (PITR) but requires a few additional steps; Approach 2 takes longer to restore because it uses the pg_dump and pg_restore commands, but enables the version upgrade during the restore itself.
- Approach 1 is best suited when restore speed is the priority; Approach 2 is best suited when you want to restore directly to a higher version, and it avoids downtime for the MVU operation.

Approach 1

Setup: two servers, one for testing and one for production. Here are the steps to follow:

1. Create a virtual endpoint on the production server.
2. Perform a Point-in-Time Restore (PITR) from the first server (production) to create your test server.
3. Add a virtual endpoint for the test server.
4. Establish logical replication between the two servers.
5. Perform the Major Version Upgrade (MVU) on the test server.
6. Validate data on the test server.
7. Update virtual endpoints: remove the endpoints from both servers, then assign the original production endpoint to the test server.

Step-by-Step Guide

Environment Setup

Two servers are involved:

- Server 1: current production server (publisher)
- Server 2: restored server for MVU (subscriber)

Create a virtual endpoint for the production server.

Configure Logical Replication & Grant Permissions

Enable replication parameters on the publisher:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Grant the replication role to the user:

ALTER ROLE <user> WITH REPLICATION;
GRANT azure_pg_admin TO <user>;

Create tables and insert data:

CREATE TABLE basic (id INTEGER NOT NULL PRIMARY KEY, a TEXT);
INSERT INTO basic VALUES (1, 'apple'), (2, 'banana');

Set Up Logical Replication on the Production Server

Create a publication and replication slot on the production server:

CREATE PUBLICATION <publisher-name>;
ALTER PUBLICATION <publisher-name> ADD TABLE <table>;
SELECT pg_create_logical_replication_slot('<publisher-name>', 'pgoutput');

Choose Restore Point: determine the latest possible point in time (PIT) to restore the data from the source server. This point in time must, necessarily, be after you created the replication slot in the previous step.

Provision Target Server via PITR: use the Azure portal or Azure CLI to trigger a Point-in-Time Restore. This creates the test server based on the production backup.
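While the restore runs, you can confirm on the production server that the slot exists and note where replication will resume; a small hedged helper using standard catalog views and functions:

-- Confirm the logical replication slot and its restart LSN on the publisher.
SELECT slot_name, plugin, restart_lsn
FROM pg_replication_slots;

-- Current WAL write position on the publisher, for reference.
SELECT pg_current_wal_lsn();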
You are provisioning the test server from the production server's backup, so it will initially be a copy of the production server's data state at a specific point in time.

Configure Server Parameters on the Test Server

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Create the target server as subscriber & advance the replication origin: this is the crucial step that connects the test server (subscriber) to the production server (publisher) and manually tells the target where in the WAL stream to begin reading changes, skipping the data already restored.

Create Subscription: creates a logical replication subscription on the test server, linking it to the source and specifying connection details, publication, and replication slot without copying existing data.

CREATE SUBSCRIPTION <subscriber-name>
CONNECTION 'host=<host-name>.postgres.database.azure.com port=5432 dbname=postgres user=<username> password=<password>'
PUBLICATION <publisher-name>
WITH (
  copy_data = false,
  create_slot = false,
  enabled = false,
  slot_name = '<publisher-name>'
);

Retrieve the replication origin identifier and name on the test server, which are needed to advance the replication position:

SELECT roident, roname FROM pg_replication_origin;

Execute this query on the production server to fetch the replication slot name and the restart LSN, indicating where replication should resume:

SELECT slot_name, restart_lsn FROM pg_replication_slots WHERE slot_name = '<publisher-name>';

On the test server, manually advance the replication origin to skip already-restored data and start replication from the correct WAL position:

SELECT pg_replication_origin_advance('<roname>', '<restart_lsn>');

Enable the target server as a subscriber of the source server. With the target server populated and the replication origin advanced, you can start the synchronization:

ALTER SUBSCRIPTION <subscriber-name> ENABLE;

The target server now starts consuming the WAL entries from the source, rapidly closing the gap on all transactions that occurred between the slot creation and the completion of the PITR.

Test That Replication Works

Create a virtual endpoint for the test server, and validate the data on the test server. Confirm that the synchronization is working by inserting a record on the production server and immediately verifying its presence on the test server.

Perform Major Version Upgrade (MVU)

Upgrade your test server, and validate all the new extensions and features by using the virtual endpoint for the test server.

Manage Virtual Endpoints

Once the data and all the new extensions are validated, drop the virtual endpoint on the production server and recreate the same virtual endpoint on the test server.

Key Considerations:

- The test server initially handles read traffic; writes remain on the production server to avoid conflicts.
- Virtual endpoint creation takes roughly 1–2 minutes per endpoint.
- The time taken for the Point-in-Time Restore depends on the workload on the production server.

Approach 2

This approach enables a Major Version Upgrade (MVU) by combining logical replication with an initial dump and restore. It minimizes downtime while ensuring data consistency.

Create a new Azure Database for PostgreSQL Flexible Server instance using your desired target major version (e.g., PostgreSQL 17). Ensure the new server's configuration (SKU, storage size, and location) is suitable for your eventual production load.
This approach delivers the core benefit of a side-by-side migration: running two distinct database versions concurrently. The existing application remains connected to the source environment, minimizing risk and allowing the new target to be fully configured offline.

Configure Role Privileges on the Source and Target Servers

ALTER ROLE <replication_user> WITH REPLICATION;
GRANT azure_pg_admin TO <replication_user>;

Check Prerequisites for Logical Replication

Set these server parameters on both the source and target servers to at least the minimum recommended values shown below to enable and support the features required for logical replication:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Ensure tables are ready: each table to be replicated must have a primary key or unique identifier.

Create Publication and Replication Slot on the Source

CREATE PUBLICATION <publisher-name>;
ALTER PUBLICATION <publisher-name> ADD TABLE <table>;
SELECT pg_create_logical_replication_slot('<publisher-name>', 'pgoutput');

This slot tracks all changes from this point onward.

Generate Schema and Initial Data Dump

Run pg_dump after creating the replication slot to capture a static starting point. Using an Azure VM is recommended for optimal network performance.

pg_dump -U demo -W -h <hostname>.postgres.database.azure.com -p 5432 -Fc -v -f dump.bak postgres -N pg_catalog -N cron -N information_schema

Restore Data into the Target (recommended: Azure VM): this populates the target server with the initial dataset.

pg_restore -U demo -W -h <hostname>.postgres.database.azure.com -p 5432 --no-owner -Fc -v -d postgres dump.bak --no-acl

Catch-Up Mechanism: while the restore is ongoing, new transactions on the source are safely recorded by the replication slot. It is critical to have sufficient storage on the source to hold the WAL files during this initial period until replication is fully active.

Create Subscription and Advance Replication Origin on the Target: this step connects the target server (subscriber) to the production server (source) and manually tells the target where in the WAL stream to begin reading changes, skipping the data already restored.

Create the subscription, linking the target to the source and specifying connection details, publication, and replication slot without copying existing data:

CREATE SUBSCRIPTION <subscription-name>
CONNECTION 'host=<hostname>.postgres.database.azure.com port=5432 dbname=postgres user=<username> password=<password>'
PUBLICATION <publisher-name>
WITH (
  copy_data = false,
  create_slot = false,
  enabled = false,
  slot_name = '<publisher-name>'
);

Retrieve the replication origin identifier and name on the target server, which are needed to advance the replication position:

SELECT roident, roname FROM pg_replication_origin;

Fetch the replication slot name and the restart LSN from the source server, indicating where replication should resume:

SELECT slot_name, restart_lsn FROM pg_replication_slots WHERE slot_name = '<publisher-name>';

Manually advance the replication origin on the target server to skip already-restored data and start replication from the correct WAL position:

SELECT pg_replication_origin_advance('<roname>', '<restart_lsn>');

Enable Subscription: with the target server populated and the replication origin advanced, you can start the synchronization.

ALTER SUBSCRIPTION <subscription-name> ENABLE;

Result: the target server now starts consuming the WAL entries from the source, rapidly closing the gap on all transactions that occurred during the dump and restore process.

Validate Replication: insert a record on the source and confirm it appears on the target, as in the sketch below.
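A minimal hedged check, reusing the basic table created earlier in this walkthrough:

-- On the source (publisher):
INSERT INTO basic VALUES (3, 'cherry');

-- On the target (subscriber), a moment later the row should be visible:
SELECT * FROM basic WHERE id = 3;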
Perform Cutover

1. Stop application traffic to the production database.
2. Wait for the target database to confirm zero replication lag (see the lag queries below).
3. Disable the subscription: ALTER SUBSCRIPTION <subscription-name> DISABLE;
4. Connect the application to the new Azure Database for PostgreSQL instance.

Utilize virtual endpoints or a CNAME DNS record for your database connection string. By simply pointing the endpoint/CNAME to the new server, you can switch your application stack without changing hundreds of individual configuration files, making the final cutover near-instantaneous.
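For step 2, a hedged way to check the remaining lag from both sides using the standard monitoring views:

-- On the source: bytes of WAL the subscriber still has to replay (0 = caught up).
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM pg_stat_replication;

-- On the target: per-subscription apply progress.
SELECT subname, received_lsn, latest_end_lsn
FROM pg_stat_subscription;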
Conclusion

This MVU strategy using logical replication and virtual endpoints provides a safe, efficient way to upgrade PostgreSQL servers without disrupting workloads. By combining replication, endpoint management, and automation, you can achieve a smooth transition to newer versions while maintaining high availability. For an alternative approach, check out our blog on using the Migration Service for MVU: Hacking the migration service in Azure Database for PostgreSQL

Exciting things on the horizon for PostgreSQL fans @ Ignite 2025

If you're passionate about PostgreSQL or just curious about what's new, you'll want to join us at Microsoft Ignite 2025. We have a packed lineup, including sessions exploring cutting-edge features and exclusive giveaways at the PostgreSQL on Azure booth. Haven't registered yet? Now's the time: sign up for Microsoft Ignite and start building your schedule. Below are the must-see PostgreSQL on Azure activities, with highlights of what you'll learn at each. Add these to your agenda today; sessions can fill up fast!

Theater sessions: get a first look, fast

I know from experience that attention spans can start to wane after hours-long keynotes, content-rich sessions, and conference socializing. Luckily, we have a couple of theater sessions that offer snackable but substantial information in less time than it will take to grab lunch. And they're located conveniently on the main conference floor.

- PostgreSQL on Azure: Your launchpad for intelligent apps and agents (THR705) - See how we're making PostgreSQL AI-aware for developers to drive app and agent innovation. Includes a demo of vector similarity search, semantic operators baked into Postgres, and more!
- Simplifying scale-out of PostgreSQL for performant multi-tenant apps (THR706) - Discover a smarter, simpler way to scale PostgreSQL using the new Elastic Clusters feature. If your app or service is growing fast (or you want it to!), add this session to learn how Azure makes it easier to scale Postgres and keep it reliable.

These talks are a great way to sample what's new and decide where to dive deeper. Plus, they're fun and demo-heavy, and who doesn't love a good demo?

Breakout sessions: a deep dive into Postgres innovations

Led by Azure product leaders and executives from organizations driving innovation backed by PostgreSQL, these breakout sessions will dive into the coolest new capabilities and real-world use cases. If you want rich, technical content and more live demos, these are for you.

- Build mission-critical apps that scale with PostgreSQL on Azure (BRK127) - Get a closer look at the next generation of PostgreSQL on Azure. Add this session if you're curious about how we're taking Postgres to the next level to support your mission-critical AI workloads.
- Modern data, modern apps: Innovation with Microsoft Databases (BRK134) - Gain insider knowledge on the latest innovations across open-source, SQL, and NoSQL databases, and understand how Microsoft's integrated database portfolio supports next-gen innovation.
- Nasdaq Boardvantage: AI-driven governance on PostgreSQL and AI Foundry (BRK137) - Discover how a Fortune 100 company merges trust with cutting-edge AI, leveraging Azure's AI-enriched and enterprise-ready solutions, including Azure Database for PostgreSQL, Azure Database for MySQL, Azure AI Foundry, Azure Kubernetes Service (AKS), and API Management.
- AI-assisted migration: The path to powerful performance on PostgreSQL (BRK123) - A before-and-after migration journey from Oracle to Azure Database for PostgreSQL. See how the new AI-assisted migration experience delivers conversion in a few clicks with minimal downtime.
- The blueprint for intelligent AI agents backed by PostgreSQL (BRK130) - If you're into AI development, this session will spark ideas on bridging the gap between raw data and AI reasoning. You'll leave with practical tips to turbocharge your AI agents with PostgreSQL.

Each breakout session is 45 minutes with live demos and Q&A, so you'll get plenty of detail and interaction with Postgres experts.
Hands-on lab: experience coding with Azure superpowers

Do you learn best by doing? Then our guided workshop, Build advanced AI agents with PostgreSQL (Lab515), is for you. In each 75-minute session, you'll create a fully functional AI-powered application backed by PostgreSQL on Azure, with step-by-step guidance and expert insight on the latest innovations enabling intelligent app development. All the tools and instructions you'll need are provided. Labs have limited capacity, so be sure to reserve your seat for any of the four labs in advance. This lab is a great way to understand how all the pieces come together on Azure. And you'll gain practical skills you can apply to your own projects, whether it's customer support bots, intelligent search in your app, or any scenario where PostgreSQL + AI collide.

Expert meet-up booth: meet the team, grab some swag

If you still want more Postgres (or a little Postgres souvenir), stop by the PostgreSQL on Azure Expert Meetup booth in the Ignite Hub. This will be our home base on the show floor, where you can:

- Meet the team: I'll be there in person, along with engineers, program managers, cloud solution architects, and advocates from our team. Whether you have a burning technical question, want to share feedback, or need guidance for your specific use case, come chat with us.
- Get a quick demo re-run: Sometimes a 5-minute demo is worth a thousand words, especially after you've sat through all those words already in a keynote. The booth will have a monitor and a live environment so we can walk you through select use cases if you have questions; no appointment needed.
- Swag and giveaways: Ah yes, the goodies! We know conference swag is part of the fun, so we've got some special PostgreSQL-themed giveaways at the booth. I won't spoil all the surprises, but rumor has it there are some limited-edition items up for grabs.
- Network with peers: The expert meet-up area is also a magnet for PostgreSQL enthusiasts. You might bump into other attendees at the booth who are tackling similar projects or challenges. Ignite is about community as much as content, so come by and spark up a conversation.

Meet you there?

Ignite is our largest event of the year. We love sharing what we've been working on and, most of all, hearing from you, the community. So, on behalf of the Azure for PostgreSQL team, thank you for your interest and support. We can't wait to show you what's new and to help you continue to succeed with Postgres. See you in San Francisco!

September 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community,

We are back with another round of updates for Azure Database for PostgreSQL! September is packed with powerful enhancements, from the public preview of PostgreSQL 18 to the general availability of Azure Confidential Computing, plus several new capabilities designed to boost performance, security, and developer experience. Stay tuned as we dive deeper into each of these feature updates. Before we get to the feature highlights, let's take a look at PGConf NYC 2025.

PGConf NYC 2025 Highlights

Our Postgres team was glad to be part of PGConf NYC 2025! As a Platinum sponsor, Microsoft joined the global PostgreSQL community for three days of sessions covering performance, extensibility, cloud, and AI, highlighted by Claire Giordano's keynote, "What Microsoft is Building for Postgres—2025 in Review," along with deep dives from core contributors and engineers. If you missed it, you can catch up here:

- Keynote slides: What Microsoft is Building for Postgres—2025 in Review by Claire Giordano at PGConf NYC 2025
- Day 3 wrap-up: key takeaways, highlights, and insights from the Azure Database for PostgreSQL team.

Feature Highlights

- Near Zero Downtime scaling for High Availability (HA) enabled servers - Generally Available
- Azure Confidential Computing for Azure Database for PostgreSQL - Generally Available
- PostgreSQL 18 on Azure Database for PostgreSQL - Public Preview
- PostgreSQL Discovery & Assessment in Azure Migrate - Public Preview
- LlamaIndex Integration with Azure Postgres
- Latest Minor Versions
- GitHub Samples: Entra ID Token Refresh for PostgreSQL
- VS Code Extension for PostgreSQL enhancements

Near Zero Downtime scaling for High Availability (HA) enabled servers - Generally Available

Scaling compute for high availability (HA) enabled Azure Database for PostgreSQL servers just got faster. With Near Zero Downtime (NZD) scaling, compute changes such as vCore or tier modifications now complete with minimal interruption, typically under 30 seconds, using an HA failover that preserves the connection string. The service provisions a new primary and standby instance with the updated configuration, synchronizes them with the existing setup, and performs a quick failover. This significantly reduces downtime compared to traditional scaling (which could take 2–10 minutes), improving overall availability. Visit our documentation for full details on how Near Zero Downtime scaling works.

Azure Confidential Computing for Azure Database for PostgreSQL - Generally Available

Azure Confidential Computing (ACC) Confidential Virtual Machines (CVMs) are now generally available for Azure Database for PostgreSQL. This capability brings hardware-based protection for data in use, ensuring your most sensitive information remains secure, even while being processed. With CVMs, your PostgreSQL flexible server instance runs inside a Trusted Execution Environment (TEE), a secure, hardware-backed enclave that encrypts memory and isolates it from the host OS, hypervisor, and even Azure operators. This means your data enjoys end-to-end protection: at rest, in transit, and in use.

Key benefits:

- End-to-end security: data protected at rest, in transit, and in use
- Enhanced privacy: blocks unauthorized access during processing
- Compliance ready: meets strict security standards for regulated workloads
- Confidence in cloud: hardware-backed isolation for critical data

To discover how Azure Confidential Computing enhances PostgreSQL, check out the blog announcement.
PostgreSQL 18 on Azure Database for PostgreSQL - Public Preview

PostgreSQL 18 is now available in public preview on Azure Database for PostgreSQL, launched the same day as the PostgreSQL community release. PostgreSQL 18 introduces new performance, scalability, and developer productivity improvements, and with this preview you get early access to the latest community release on a fully managed Azure service. By running PostgreSQL 18 on flexible server, you can test application compatibility, explore new SQL and performance features, and prepare for upgrades well before general availability. This preview release gives you the opportunity to validate your workloads, extensions, and development pipelines in a dedicated preview environment while taking advantage of the security, high availability, and management capabilities in Azure. With PostgreSQL 18 in preview, you are among the first to experience the next generation of PostgreSQL on Azure, ensuring your applications are ready to adopt it when it reaches general availability. To learn more about the preview, read https://aka.ms/pg18

PostgreSQL Discovery & Assessment in Azure Migrate - Public Preview

The PostgreSQL Discovery & Assessment feature is now available in public preview in Azure Migrate, making it easier to plan your migration journey to Azure. Migrating PostgreSQL workloads can be challenging without clear visibility into your existing environment. This feature solves that problem by delivering deep insights into on-premises PostgreSQL deployments, making migration planning easier and more informed. With this feature, you can discover PostgreSQL instances across your infrastructure, assess migration readiness and identify potential blockers, receive configuration-based SKU recommendations for Azure Database for PostgreSQL, and estimate costs for running your workloads in Azure, all in one unified experience.

Key benefits:

- Comprehensive visibility: understand your on-premises PostgreSQL landscape
- Risk reduction: identify blockers before migration
- Optimized planning: get tailored SKU and cost insights
- Faster migration: streamlined assessment for a smooth transition

Learn more in our blog: PostgreSQL Discovery and Assessment in Azure Migrate

LlamaIndex Integration with Azure Postgres

Support for native LlamaIndex integration is now available for Azure Database for PostgreSQL! This enhancement brings seamless connectivity between Azure Database for PostgreSQL and LlamaIndex, allowing developers to leverage Azure PostgreSQL as a secure and high-performance vector store for their AI agents and applications. Specifically, this package adds support for:

- Microsoft Entra ID (formerly Azure AD) authentication when connecting to your Azure Database for PostgreSQL instances, and
- the DiskANN indexing algorithm when indexing your (semantic) vectors.

This package makes it easy to connect LlamaIndex to your Azure PostgreSQL instances, whether you're building intelligent agents, semantic search, or retrieval-augmented generation (RAG) systems. Explore the full guide here: https://aka.ms/azpg-llamaindex

Latest Postgres minor versions: 17.6, 16.9, 15.13, 14.18 and 13.21

PostgreSQL minor versions 17.6, 16.9, 15.13, 14.18 and 13.21 are now supported by Azure Database for PostgreSQL. These minor version upgrades are automatically performed as part of the monthly planned maintenance in Azure Database for PostgreSQL. The upgrade automation ensures that your databases are always running the latest optimized versions without requiring manual intervention.
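After a maintenance window completes, a quick way to confirm which version a server is running (standard PostgreSQL commands):

-- Check the running server version after planned maintenance.
SELECT version();
SHOW server_version;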
This release fixes 3 security vulnerabilities and more than 55 bugs reported over the last several months. PostgreSQL minor versions are backward-compatible, so updates won't affect your applications. For details about the release, see the PostgreSQL community announcement.

GitHub Samples: Entra ID Token Refresh for PostgreSQL

We have introduced code samples for Entra ID token refresh, built specifically for Azure Database for PostgreSQL. These samples simplify implementing automatic token acquisition and refresh, helping you maintain secure, uninterrupted connectivity without manual intervention. By using these examples, you can keep sessions secure, prevent connection drops from expired tokens, and streamline integration with Azure Identity libraries for PostgreSQL workloads.

What's included:

- Ready-to-use code snippets for token acquisition and refresh for Python and .NET
- Guidance for integrating with Azure Identity libraries

Explore the samples repository at https://aka.ms/pg-access-token-refresh and start implementing today.

VS Code Extension for PostgreSQL enhancements

A new version of the VS Code Extension for PostgreSQL is out! This update introduces a Server Dashboard that provides high-level metadata and real-time performance metrics, along with historical insights for Azure Database for PostgreSQL Flexible Server. You can even use GitHub Copilot Chat to ask performance questions in natural language and receive diagnostic SQL queries in response.

Additional enhancements include:

- A new keybinding for "Run Current Statement" in the Query Editor
- Support for dragging Object Explorer entities into the editor with properly quoted identifiers
- Ability to connect to databases via socket file paths

Key fixes:

- Preserves the state of the Explain Analyze toolbar toggle
- Removes inadvertent logging of sensitive information from extension logs
- Stabilizes memory usage during long-running dashboard sessions

Don't forget to update to the latest version in the marketplace to take advantage of these enhancements, and visit our GitHub repository to learn more about this month's release. We'd love your feedback! Help us improve the Server Dashboard and other features by sharing your thoughts on GitHub.

Azure Postgres Learning Bytes 🎓

Setting up logical replication between two servers

This section walks through setting up logical replication between two Azure Database for PostgreSQL flexible server instances. Logical replication replicates data changes from a source (publisher) server to a target (subscriber) server.

Prerequisites:

- PostgreSQL versions supported by logical replication (publisher and subscriber must be compatible).
- Network connectivity: the subscriber must be able to connect to the publisher (VNet/NSG/firewall rules).
- A replication role on the publisher (or a role with the REPLICATION privilege).
- Each table to be replicated needs a primary key or unique index (the helper query below finds tables that lack one).
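Here is that hedged helper query; it lists ordinary tables with neither a primary key nor a unique index, which logical replication needs to identify rows:

-- Find user tables that lack a primary key or unique index.
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
      SELECT 1
      FROM pg_index i
      WHERE i.indrelid = c.oid
        AND (i.indisprimary OR i.indisunique)
  );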
Step 1: Configure server parameters on both publisher and subscriber.

On the publisher:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

On the subscriber, the same settings plus:

max_sync_workers_per_subscription = 6
autovacuum = OFF (during the initial copy)
max_wal_size = 64GB
checkpoint_timeout = 3600

Step 2: Create the publication on the publisher and grant the replication privilege.

ALTER ROLE <replication_user> WITH REPLICATION;
CREATE PUBLICATION pub FOR ALL TABLES;

Step 3: Create the subscription on the subscriber.

CREATE SUBSCRIPTION <subscription-name>
CONNECTION 'host=<publisher_host> dbname=<db> user=<user> password=<pwd>'
PUBLICATION <publication-name>;

Step 4: Monitor.

On the publisher, this shows active processes, including replication workers:

SELECT application_name, wait_event_type, wait_event, query, backend_type
FROM pg_stat_activity
WHERE state = 'active';

On the subscriber, the pg_stat_progress_copy view tracks the progress of the initial data copy for each table:

SELECT * FROM pg_stat_progress_copy;

To explore more details on how to get started with logical replication, visit our blog on Tuning logical replication for Azure Database for PostgreSQL.

Conclusion

That's all for the September 2025 feature highlights! We remain committed to making Azure Database for PostgreSQL more powerful and secure with every release. Stay up to date on the latest enhancements by visiting our Azure Database for PostgreSQL blog. Your feedback matters and helps us shape the future of PostgreSQL on Azure. If you have suggestions, ideas, or questions, we'd love to hear from you: https://aka.ms/pgfeedback. We look forward to sharing even more exciting capabilities in the coming months. Stay tuned!

PostgreSQL and the Power of Community
PGConf NYC 2025 is the premier event for the global PostgreSQL community, and Microsoft is proud to be a Platinum sponsor this year. The conference will also feature a keynote from Claire Giordano, Principal PM for PostgreSQL at Microsoft, who will share our vision for Postgres along with lessons from ten PostgreSQL hacker journeys.

Architecting Secure PostgreSQL on Azure: Insights from Mercedes-Benz
Authors: Johannes Schuetzner, Software Engineer at Mercedes-Benz & Nacho Alonso Portillo, Principal Program Manager at Microsoft

When you think of Mercedes-Benz, you think of innovation, precision, and trust. But behind every iconic vehicle and digital experience is a relentless drive for security and operational excellence. At Mercedes-Benz R&D in Sindelfingen, Germany, Johannes Schuetzner and the team faced a challenge familiar to many PostgreSQL users: how to build a secure, scalable, and flexible database architecture in the cloud, without sacrificing agility or developer productivity. This article shares insights from Mercedes-Benz about how Azure Database for PostgreSQL can be leveraged to enhance your security posture, streamline access management, and empower teams to innovate with confidence.

The Challenge: Security Without Compromise

"OK, let's stop intrusions in their tracks," Schuetzner began his POSETTE talk, setting the tone for a deep dive into network security and access management. Many organizations need to protect sensitive data, ensure compliance, and enable secure collaboration across distributed teams. The typical priorities are clear:

- Encrypt data in transit and at rest
- Implement row-level security for granular access (see the sketch after this list)
- Integrate with Microsoft Defender for Cloud for threat protection
- Focus on network security and access management, where configuration can make the biggest impact
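As a hedged illustration of the row-level security item above (the table, column, and session setting are hypothetical, not Mercedes-Benz's actual schema):

-- Restrict each application session to its own tenant's rows.
ALTER TABLE customer_data ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON customer_data
    USING (tenant_id = current_setting('app.current_tenant')::bigint);

-- The application sets its tenant before querying, e.g.:
-- SET app.current_tenant = '42';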
Building a Secure Network: Private vs. Public Access

Mercedes-Benz explored two fundamental ways to set up their network for Azure Database for PostgreSQL: private access and public access. "With private access, your PostgreSQL server is integrated in a virtual network. With public access, it is accessible by everybody on the public internet," explained Schuetzner.

Public access:

- Public endpoint, resolvable via DNS
- Firewall rules control allowed IP ranges
- Vulnerable to external attacks; traffic travels over the public internet

Private access:

- Server injected into an Azure VNET
- Traffic travels securely over the Azure backbone
- Requires a delegated subnet and private DNS
- VNET peering enables cross-region connectivity

"One big benefit of private access is that the network traffic travels over the Azure backbone, so not the public internet," said Schuetzner. This ensures that sensitive data remains protected, even as applications scale across regions. Note, however, that an Azure VNET is restricted to a single Azure region, and peering VNETs can be complex.

Embracing Flexibility: The Power of Private Endpoints

Last year, Azure introduced private endpoints for PostgreSQL, a significant milestone in Mercedes-Benz's database connectivity strategy. A private endpoint adds a network interface to the resource that can also be reached from other Azure regions, allowing resources in the VNET associated with the private endpoint to connect to the Postgres server. The network traffic travels securely over the Azure backbone.

Private endpoints allow Mercedes-Benz to:

- Dynamically enable and disable public access during migrations
- Flexibly provision multiple endpoints for different VNETs and regions
- Keep explicit control over the allowed network accesses
- Benefit from built-in protection against data exfiltration
- Automate setup with Terraform and infrastructure as code

This flexibility can be crucial for supporting large architectures and migration scenarios, all while maintaining robust security.

Passwordless Authentication: Simplicity Meets Security

Managing database passwords is a pain point for every developer. Mercedes-Benz embraced Microsoft Entra authentication (formerly Azure Active Directory) to enable passwordless connections. Passwordless connections do not rely on traditional passwords but on the more secure authentication methods of Microsoft Entra. They require less administrative effort and help prevent security breaches.

Benefits include:

- Uniform user management across Azure resources
- Group-based access control
- Passwordless authentication for applications and CI/CD pipelines

For developers, this means less manual overhead and fewer risks of password leaks. "Once you have set it up, then Azure takes good care of all the details, you don't have to manage your passwords anymore, also they cannot be leaked anymore accidentally because you don't have a password," Schuetzner emphasized.

Principle of Least Privilege: Granular Authorization

Mercedes-Benz applies the principle of least privilege, ensuring applications have only the permissions they need, nothing more. By correlating managed identities with specific roles in PostgreSQL, teams can grant only the necessary Data Manipulation Language (DML) permissions (SELECT, INSERT, UPDATE) while restricting Data Definition Language (DDL) operations. This approach minimizes risk and simplifies compliance, as the sketch below illustrates.
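A hedged sketch of such a DML-only grant for an application role (all names are hypothetical):

-- The application role gets DML on the app schema, but no DDL rights.
CREATE ROLE orders_app NOLOGIN;
GRANT USAGE ON SCHEMA app TO orders_app;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA app TO orders_app;

-- Cover tables created later as well.
ALTER DEFAULT PRIVILEGES IN SCHEMA app
    GRANT SELECT, INSERT, UPDATE ON TABLES TO orders_app;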
Operational Excellence: Automation and Troubleshooting

Automation is key to Mercedes-Benz's success. Using Terraform integrated into CI/CD pipelines, the team can provision identities, configure endpoints, and manage permissions, all as code. For troubleshooting, tools like Azure Bastion enable secure, temporary access to the database for diagnostics, without exposing sensitive endpoints.

The Impact: Security, Agility, and Developer Empowerment

By leveraging Azure Database for PostgreSQL, Mercedes-Benz can achieve:

- Stronger security through private networking and passwordless authentication
- A flexible, scalable architecture for global operations
- Streamlined access management and compliance
- Developers who are empowered to focus on innovation, not infrastructure

Schuetzner concluded, "Private endpoints provide a new network opportunity for Postgres on Azure. There are additional costs, but it's more flexible and more dynamic. Azure takes good care of all the details, so you don't have to manage your passwords anymore. It's basically the ultimate solution for password management."

Mercedes-Benz's story shows that with the right tools and mindset, you can build secure and scalable solutions on Azure Database for PostgreSQL. For more details, refer to the full POSETTE session.

August 2025 Recap: Azure Database for PostgreSQL

Hello Azure Community,

August was an exciting month for Azure Database for PostgreSQL! We have introduced updates that make your experience smarter and more secure. From simplified Entra ID group login to integrations with LangChain and LangGraph, these updates help improve access control and provide seamless integration for your AI agents and applications. Stay tuned as we dive deeper into each of these feature updates.

Feature Highlights

- Enhanced Performance recommendations for Azure Advisor - Generally Available
- Entra ID group login using user credentials - Public Preview
- New Region Buildout: Austria East
- LangChain and LangGraph connector
- Active-Active Replication Guide

Enhanced Performance recommendations for Azure Advisor - Generally Available

Azure Advisor now offers enhanced recommendations to further optimize PostgreSQL server performance, security, and resource management. The key updates are:

- Index scan insights: detection and recommendations for disabled index and index-only scans to improve query efficiency.
- Audit logging review: identification of excessive logging via the pgaudit.log parameter, with guidance to reduce overhead.
- Statistics monitoring: alerts on server statistics resets and suggestions to restore accurate performance tracking.
- Storage optimization: analysis of storage usage with recommendations to enable the Storage Autogrow feature for seamless scaling.
- Connection management: evaluation of workloads for short-lived connections and frequent connectivity errors, with recommendations to implement PgBouncer for efficient connection pooling.

These enhancements aim to provide deeper operational insights and support proactive performance tuning for PostgreSQL workloads. For more details, read the Performance recommendations documentation.

Entra ID group login using user credentials - Public Preview

The public preview of Entra ID group login using user credentials is now available. This feature simplifies user management and improves security within Azure Database for PostgreSQL. Administrators and users benefit from a more streamlined process:

- Changes in Entra ID group memberships are synchronized on a periodic 30-minute basis. This scheduled syncing ensures that access controls are kept up to date, simplifying user management and maintaining current permissions.
- Users can log in with their own credentials, streamlining authentication and improving auditing and access management for PostgreSQL environments.

As organizations continue to adopt cloud-native identity solutions, this update represents a major improvement in operational efficiency and security for PostgreSQL database environments. For more details, read the documentation on Entra ID group login.

New Region Buildout: Austria East

New region rollout! Azure Database for PostgreSQL flexible server is now available in Austria East, giving customers in and around the region lower latency and data residency options. This continues our mission to bring Azure PostgreSQL closer to where you build and run your apps. For the full list of regions visit: Azure Database for PostgreSQL Regions.

LangChain and LangGraph connector

We are excited to announce that native LangChain and LangGraph support is now available for Azure Database for PostgreSQL!
This integration brings native support for Azure Database for PostgreSQL into LangChain and LangGraph workflows, enabling developers to use Azure PostgreSQL as a secure and high-performance vector store and memory store for their AI agents and applications. Specifically, this package adds support for:

- Microsoft Entra ID (formerly Azure AD) authentication when connecting to your Azure Database for PostgreSQL instances, and
- the DiskANN indexing algorithm when indexing your (semantic) vectors.

This package makes it easy to connect LangChain to your Azure-hosted PostgreSQL instances, whether you're building intelligent agents, semantic search, or retrieval-augmented generation (RAG) systems. Read more at https://aka.ms/azpg-agent-frameworks

Active-Active Replication Guide

We have published a new blog article that guides you through setting up active-active replication in Azure Database for PostgreSQL using the pglogical extension. The walkthrough covers the fundamentals of active-active replication, key prerequisites for enabling bi-directional replication, and step-by-step demo scripts for the setup. It also compares native and pglogical approaches, helping you choose the right strategy for high availability and multi-region resilience in production environments. Read more about the active-active replication guide on this blog.

Azure Postgres Learning Bytes 🎓

Enabling Zone-Redundant High Availability for Azure Database for PostgreSQL Flexible Server using APIs

High availability (HA) is essential for ensuring business continuity and minimizing downtime in production workloads. With zone-redundant HA, Azure Database for PostgreSQL Flexible Server automatically provisions a standby replica in a different availability zone, providing stronger fault tolerance against zone-level failures. This section shows how to enable zone-redundant HA using REST APIs. Working directly with the REST API gives you clear visibility into the exact requests and responses, making it easier to debug issues and validate configurations as you go. You can use any REST API client tool of your choice, including Postman, Thunder Client (VS Code extension), or curl, to send requests and inspect the results.

Before enabling zone-redundant HA, make sure your server is on the General Purpose or Memory Optimized tier and deployed in a region that supports it. If your server is currently using same-zone HA, you must first disable it before switching to zone-redundant.

Steps to enable zone-redundant HA:

1. Get an ARM bearer token. Run this in a terminal where the Azure CLI is signed in (or use Azure Cloud Shell):

az account get-access-token --resource https://management.azure.com --query accessToken -o tsv

2. Paste the token into your API client tool as the Authorization header: Bearer <token>

3. Inspect the server (GET) using the following URL:

https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroup}}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{{serverName}}?api-version={{apiVersion}}

In the JSON response, note:

- sku.tier → must be 'GeneralPurpose' or 'MemoryOptimized'
- properties.availabilityZone → '1', '2' or '3' (depends on the availability zone specified while creating the primary server; it is selected by the system if no availability zone was specified)
- properties.highAvailability.mode → 'Disabled', 'SameZone', or 'ZoneRedundant'
- properties.highAvailability.state → e.g. 'NotEnabled', 'CreatingStandby', 'Healthy'
4. If HA is currently SameZone, disable it first (PATCH). Use the same URL as in step 3, with this request body:

{ "properties": { "highAvailability": { "mode": "Disabled" } } }

5. Enable zone-redundant HA (PATCH). Use the same URL as in step 3, with this request body:

{ "properties": { "highAvailability": { "mode": "ZoneRedundant" } } }

6. Monitor until HA is healthy. Re-run the GET from step 3 every 30-60 seconds until you see:

"highAvailability": { "mode": "ZoneRedundant", "state": "Healthy" }

Conclusion

That's all for our August 2025 feature updates! We're committed to making Azure Database for PostgreSQL better with every release, and your feedback plays a key role in shaping what's next.

💬 Have ideas, questions, or suggestions? Share them with us: https://aka.ms/pgfeedback

📢 Want to stay informed about the latest features and best practices? Follow us here for the latest announcements, feature releases, and best practices: Azure Database for PostgreSQL Blog

More exciting improvements are on the way, so stay tuned for what's coming next!

Announcing Mirroring for Azure Database for PostgreSQL in Microsoft Fabric for Public Preview
Back at the first European Microsoft Fabric Community Conference in September 2024 we announced our private preview program for Mirroring for Azure Database for PostgreSQL in Microsoft Fabric. Today, in conjunction with the 2025 edition of the Microsoft Fabric Community Conference in Las Vegas, we're thrilled to announce our public preview milestone, giving customers the ability to leverage friction-free, near-real-time replication from Azure Database for PostgreSQL flexible server to Fabric OneLake in Delta tables, providing a solid foundation for reporting, advanced analytics, AI, and data science on operational data with minimal effort and impact on transactional workloads.

Mirroring is set up from the Fabric Data Warehousing experience by providing the Azure Database for PostgreSQL flexible server and database connection details and selecting what needs to be mirrored into Fabric: either all data or user-selected eligible tables. And, just like that, mirroring is ready to go. Mirroring Azure Database for PostgreSQL flexible server creates an initial snapshot in Fabric OneLake, after which data is kept in sync in near real time with every transaction.

How mirroring to Fabric works in Azure Database for PostgreSQL flexible server

Fabric mirroring in Azure Database for PostgreSQL flexible server is based on principles such as logical replication and the Change Data Capture (CDC) design pattern. Once Fabric mirroring is established for a database in Azure Database for PostgreSQL flexible server, a background process creates an initial snapshot of the selected tables. That snapshot is shipped to a Fabric OneLake landing zone in Parquet format. A process running in Fabric, known as the replicator, takes these initial snapshot files and creates tables in Delta format in the mirrored database artifact. Subsequent changes applied to the selected tables are also captured in the source database and shipped to the OneLake landing zone in batches. Those batches of changes are finally applied to the respective Delta tables in the mirrored database artifact.

For Fabric mirroring, the CDC pattern is implemented in a proprietary PostgreSQL extension called azure_cdc, which is installed and registered in source databases during the Fabric mirroring enablement workflow. This guided process has a new dedicated page in the Azure portal, sets up all required prerequisites, and offers a simplified experience where you just need to select which databases you want to replicate to Fabric OneLake (the default is up to 3). You can read additional details regarding the server enablement process and other critical configuration and monitoring options on a dedicated page in the Azure Database for PostgreSQL flexible server product documentation.

Explore advanced analytics and data engineering for PostgreSQL in Microsoft Fabric

Once data is in OneLake, mirrored data in Delta format is ready for immediate consumption across all Fabric experiences and features, such as Power BI with the new Direct Lake mode, Data Warehouse, Data Engineering, Lakehouse, KQL Database, Notebooks, and Copilot, which work instantly. Direct Lake mode is a fast path to load data from the lake, with groundbreaking semantic model capability for analyzing very large data volumes in Power BI. As Direct Lake mode also supports reading Delta tables right from OneLake, the mirrored PostgreSQL database is Power BI ready, along with Copilot capabilities.

Data across any mirrored database (whether Azure Database for PostgreSQL, Azure SQL DB, Azure Cosmos DB, or Snowflake) can be cross-joined as well, enabling querying across any database, warehouse, or Lakehouse (for example, one created as a shortcut to AWS S3, ADLS Gen2, and so on). With the same approach, you can also have multiple PostgreSQL databases from multiple servers mirrored to OneLake, as in a typical SaaS provider scenario where each database belongs to a different tenant, and execute cross-database queries to aggregate and analyze critical business metrics.
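For illustration, a hedged sketch of what such a cross-database query might look like from a Fabric warehouse or SQL analytics endpoint, using three-part naming; the mirrored database, schema, and table names below are all hypothetical:

-- Join mirrored PostgreSQL data with another mirrored database in Fabric.
-- All names below are hypothetical.
SELECT o.order_id,
       o.total,
       c.segment
FROM PgTenantMirror.dbo.orders AS o
JOIN CosmosMirror.dbo.customers AS c
  ON c.customer_id = o.customer_id;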
Data scientists and data engineers can work with the mirrored Azure Database for PostgreSQL data joined with other sources (see this example with Cosmos DB data) that are created as shortcuts in Lakehouse. Read about the endless possibilities of loading operational databases into OneLake and Microsoft Fabric in the related section of our product documentation here.

Getting started with Mirroring for Azure Database for PostgreSQL in Fabric

To summarize, Mirroring for Azure Database for PostgreSQL in Microsoft Fabric plays a crucial role in enabling analytics and driving insights from operational data by ensuring that the most recent data is available for analysis. This allows businesses to make decisions based on the most current situation, rather than relying on outdated information. Improved accuracy also reduces the risk of discrepancies between the source and the replicated data, leading to more accurate analytics and reliable insights. In addition, near-real-time data is essential for predictive analytics and AI, where models need the most recent data to make accurate predictions and decisions. To get started and learn more about Mirroring Azure Database for PostgreSQL flexible server in Microsoft Fabric, its prerequisites, setup, FAQs, current limitations, and a tutorial, please click here to read all about it, and stay tuned for more updates and new features coming soon. To get more updates on overall Mirroring capabilities in Fabric, please read this other blog post where you will get the latest news.