January 2026 Recap: Azure Database for PostgreSQL
Hello Azure Community,

We're kicking off the year with important updates for Azure Database for PostgreSQL. From Premium SSD v2 features now available in public preview to REST API feature updates across developer tools, this blog highlights what's new and what's coming.

- Terraform Adds Support for PostgreSQL 18 – Generally Available
- Ansible Module Update – Generally Available
- Achieving Zonal Resiliency with Azure CLI – Generally Available
- SDKs Released: Go, Java, JavaScript, .NET and Python – Generally Available
- What's New in Premium SSD v2 – Public Preview
- Latest PostgreSQL Minor Versions
- January 2026 Maintenance Release Notes

Terraform Adds Support for PostgreSQL 18

Azure Database for PostgreSQL now supports PostgreSQL 18 in Terraform, so customers can create new PostgreSQL 18 servers and upgrade existing ones directly from their Terraform configurations. This update makes it easier to adopt PostgreSQL 18 on Azure while managing both provisioning and upgrades through consistent Terraform workflows.

Learn more about using the new Terraform resource.

Ansible Module Update

A new Ansible module is now available with support for the latest GA REST API features, enabling customers to automate provisioning and management of Azure Database for PostgreSQL resources. This includes support for Elastic Clusters provisioning, deployment of PostgreSQL 18 instances, and broader adoption of newly released Azure Database for PostgreSQL capabilities through Ansible.

Learn more about using the Ansible module with the latest REST API features.

Achieve Zonal Resiliency with Azure CLI

We have released updates to the Azure CLI that allow users to enable zone-redundant high availability (HA) by default using a new --zonal-resiliency parameter, which can be set to enabled or disabled. When --zonal-resiliency is enabled, the service provisions a standby server in a different availability zone than the primary, providing protection against zonal failures. If zonal capacity is not available in the selected region, you can use the --allow-same-zone flag to provision the standby in the same zone as the primary.

Azure CLI commands:

```
az postgres flexible-server update --resource-group <resource_group> --name <server> --zonal-resiliency enabled --allow-same-zone

az postgres flexible-server update --resource-group <resource_group> --name <server> --zonal-resiliency disabled

az postgres flexible-server create --resource-group <resource_group> --name <server> --zonal-resiliency enabled --allow-same-zone
```

Learn more about how to configure high availability on Azure Database for PostgreSQL.

SDKs Released: Go, Java, JavaScript, .NET and Python

We have released updated SDKs for Go, Java, JavaScript, .NET, and Python, built on the latest GA REST API (2025-08-01). These SDKs enable developers to programmatically provision, configure, and manage Azure Database for PostgreSQL resources using stable, production-ready APIs. They also add the ability to set a default database name for Elastic Clusters, simplifying cluster provisioning workflows, and support for PostgreSQL 18. To improve developer experience and reliability, operation IDs have been renamed for clearer navigation, and HTTP response codes have been corrected so automation scripts and retries behave as expected.
- Learn more about the .NET SDK
- Learn more about the Go SDK
- Learn more about the Java SDK
- Learn more about the JavaScript SDK
- Learn more about the Python SDK

What's New in Premium SSD v2: Public Preview

Azure Database for PostgreSQL Flexible Server now supports a broader set of resiliency and lifecycle management capabilities on Premium SSD v2, enabling production-grade PostgreSQL deployments with improved durability, availability, and operational flexibility. In this preview, customers can use High Availability (same-zone and zone-redundant), geo-redundant backups, in-region and geo read replicas, geo-disaster recovery (Geo-DR), and Major Version Upgrades on SSD v2-backed servers, providing both zonal and regional resiliency options for mission-critical PostgreSQL workloads. These capabilities help protect data across availability zones and regions, support compliance and disaster-recovery requirements, and simplify database lifecycle operations.

Premium SSD v2 enhances these resiliency workflows with higher and independently scalable IOPS and throughput, predictable low latency, and decoupled scaling of performance and capacity. Customers can provision and adjust storage performance without over-allocating disk size, enabling more efficient capacity planning while sustaining high-throughput, low-latency workloads. When combined with zone-resilient HA and cross-region data protection, SSD v2 provides a consistent storage foundation for PostgreSQL upgrades, failover, backup, and recovery scenarios. These capabilities are being expanded incrementally across regions as the service progresses toward general availability.

For more details, see Premium SSD v2.

Latest PostgreSQL Minor Versions: 18.1, 17.7, 16.11, 15.15, 14.20, 13.23

Azure Database for PostgreSQL now supports the latest PostgreSQL minor versions: 18.1, 17.7, 16.11, 15.15, 14.20, and 13.23. These updates are applied automatically during planned maintenance windows, keeping your databases up to date with critical security fixes and reliability improvements with no manual action required. This release includes two security fixes and over 50 bug fixes across indexing, replication, partitioning, memory handling, and more.

PostgreSQL 13.23 is the final community release for version 13, which has now reached end-of-life (EOL). Customers still using PostgreSQL 13 on Azure should review their upgrade options and refer to Azure's Extended Support policy for more details. For details about the minor release, see the PostgreSQL community announcement.

January 2026 Maintenance Release Notes

We're excited to announce the January 2026 version of Azure Database for PostgreSQL maintenance updates. This new version delivers major engine updates, new extensions, Elastic clusters enhancements, performance improvements, and critical reliability fixes. This release expands migration and Fabric mirroring support, and adds powerful analytics, security, and observability capabilities across the service. Customers also benefit from improved Query Store performance, new WAL metrics, enhanced networking flexibility, and multiple Elastic clusters enhancements. All new servers are automatically onboarded beginning January 20, 2026, with existing servers upgraded during their next scheduled maintenance. For a complete list of features, improvements, and resolved issues, see the full release notes here.
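Related to the minor-version updates above: after a maintenance window completes, a quick way to confirm which engine version a server is actually running is to ask the server itself with standard PostgreSQL commands (no Azure-specific tooling required):

```sql
-- Full version string, e.g. "PostgreSQL 17.7 on x86_64-pc-linux-gnu ...".
SELECT version();

-- Just the server version number, e.g. "17.7".
SHOW server_version;
```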
Azure Postgres Learning Bytes

Managing Replication Lag with Debezium

Change Data Capture (CDC) enables real-time integrations by streaming row-level changes from OLTP systems like PostgreSQL into event streams, data lakes, caches, and microservices. In a typical CDC pipeline, Debezium captures changes from PostgreSQL and streams them into Kafka with minimal latency. However, large bulk updates that affect millions of rows can cause replication lag to spike significantly, delaying downstream consumers. This learning byte walks through how to detect and mitigate replication lag in Azure Database for PostgreSQL when using Debezium.

Detect replication lag

Start by identifying where lag is building up in the system.

Monitor replication slots and lag. Use the following query to inspect active replication slots and measure how far behind they are relative to the current WAL position:

```sql
SELECT slot_name,
       active_pid,
       confirmed_flush_lsn,
       restart_lsn,
       pg_current_wal_lsn(),
       pg_size_pretty((pg_current_wal_lsn() - confirmed_flush_lsn)) AS lsn_distance
FROM pg_replication_slots;
```

Check WAL sender backend status. Verify whether WAL sender processes are stalled due to decoding or I/O waits:

```sql
SELECT pid, backend_type, application_name, wait_event
FROM pg_stat_activity
WHERE backend_type = 'walsender'
ORDER BY backend_start;
```

Inspect spill activity. High spill activity indicates memory pressure during logical decoding and may contribute to lag. Large values for spill_bytes or spill_count suggest the need to increase logical_decoding_work_mem, reduce transaction sizes, or tune Debezium connector throughput.

```sql
SELECT slot_name,
       spill_txns,
       spill_count,
       pg_size_pretty(spill_bytes) AS spill_bytes,
       total_txns,
       pg_size_pretty(total_bytes) AS total_bytes,
       stats_reset
FROM pg_stat_replication_slots;
```

Fix replication lag

Database and infrastructure tuning: Reduce unnecessary overhead and ensure compute, memory, and storage resources are appropriately scaled to handle peak workloads.

Connector-level tuning: Adjust the Debezium configuration to keep pace with PostgreSQL WAL generation and Kafka throughput. This includes tuning batch sizes, poll intervals, and throughput settings to balance latency and stability.

To learn more about diagnosing and resolving CDC performance issues, read the full blog: Performance Tuning for CDC: Managing Replication Lag in Azure Database for PostgreSQL with Debezium.
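One of the mitigations mentioned above is keeping individual transactions small so that decoded changes stay within logical_decoding_work_mem instead of spilling to disk. A minimal sketch of batching a large delete into bounded commits, using plain SQL (the table name, predicate, and batch size are illustrative, not from the original post):

```sql
-- Instead of one multi-million-row transaction, delete in bounded batches.
-- Run this statement repeatedly (autocommit, e.g. from a client or a scheduled job)
-- until it reports 0 rows deleted; each run is its own small transaction.
WITH doomed AS (
    SELECT ctid
    FROM events
    WHERE created_at < now() - interval '90 days'
    LIMIT 10000
)
DELETE FROM events e
USING doomed d
WHERE e.ctid = d.ctid;
```

Because each batch commits independently, logical decoding can emit and discard each transaction's changes promptly rather than buffering one enormous transaction until its final commit.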
January 2026 Recap: Azure Database for PostgreSQL

We just dropped the January 2026 recap for Azure Database for PostgreSQL, and this one's all about developer velocity, resiliency, and production-ready upgrades.

- PostgreSQL 18 support via Terraform (create + upgrade)
- Premium SSD v2 (Preview) with HA, replicas, Geo-DR & MVU
- Latest PostgreSQL minor version releases
- Ansible module GA with latest REST API features
- Zone-redundant HA now configurable via Azure CLI
- SDKs GA (Go, Java, JS, .NET, Python) on stable APIs

Read the full January 2026 recap here and see what's new (and what's coming): January 2026 Recap: Azure Database for PostgreSQL
Postgres speakers - POSETTE 2026 CFP is closing soon!

Guidelines for submitting a proposal to the POSETTE CFP

POSETTE: An Event for Postgres is back for its 5th year, and the excitement is already building. Scheduled for June 16 – June 18, 2026, this free and virtual developer event brings together the global Postgres community for three days of learning, sharing, and deep technical storytelling. Whether you're a first-time speaker or a seasoned contributor, your story matters, and the Call for Proposals (CFP) closes on February 1, 2026. If you're considering submitting a proposal (or encouraging someone else to), in this post I will walk you through everything you need to know to craft a strong, compelling submission before the deadline arrives.

1. Key Dates to Know

- CFP deadline: February 1, 2026 @ 11:59 PM PST
- Talk acceptance notifications: February 11, 2026
- Event dates: June 16 – June 18, 2026 (includes four unique livestreams, live text chat, and speaker Q&A)
- Schedule & sessions announced: February 25, 2026
- Pre-record all talks: weeks of April 20 & April 27

Tip: Add a calendar reminder; this deadline arrives quickly, and no late submissions are accepted.

2. Why Submit a Talk to POSETTE?

Submitting a talk to a conference can seem like a daunting task at the start, but this guide can help you come up with potential ideas for your submission.

- Share your story with the global Postgres community: Your experience, whether it's a deep dive into query planning, a migration journey, or lessons learned from scaling, can help thousands of developers.
- Grow your professional visibility: POSETTE is a high-reach, virtual event that enables your content to live on well after the livestream.
- First-time speakers are welcomed and encouraged: POSETTE is not an exclusive club. If you have a story to tell, this is a supportive, welcoming place to tell it.

3. What Makes a Strong Proposal?

First-time speaker? Don't worry. The guidelines below cover the key elements you'll need to craft a strong, successful proposal.

- Make your proposal focused, not broad: Many proposals try to cover too much. The strongest ones zoom in on a specific challenge, insight, or transformation. A narrow, well-defined topic reads more clearly and creates a stronger takeaway for attendees.
- Clearly identify the target audience: State who the talk is for, such as beginner Postgres developers, cloud architects, DBAs focusing on performance, or engineers migrating from Oracle/MySQL. This helps the selection team understand fit and event balance.
- Demonstrate real-world value, not generic theory: Talks rooted in hands-on experience tend to perform best. Strong abstracts answer: What problem did we face? What did we try? What worked (or didn't)? What can you replicate in your environment? POSETTE audiences love actionable content.
- Show how attendees will grow from your talk: Selection committees love when speakers articulate transformation. Clarify what people will gain: "Improve query execution time by…", "Avoid common replication pitfalls…", "Design HA setups more confidently…". The reviewers want talks with practical outcomes.
- Highlight what makes your talk unique: Is your approach unconventional? Did you migrate at massive scale? Did you build or extend an OSS tool? Did you learn something the hard way? Emphasize novelty; POSETTE gets many submissions, so originality matters.
- Use a storytelling angle: Human brains love stories. Strong abstracts often follow a mini narrative: problem, tension, turning point, solution, lessons. This makes your proposal memorable and relatable.
- Keep the abstract concise and structured: Avoid long, meandering paragraphs. A clear structure works well: topic summary (one sentence), problem and context (two–three sentences), solution or insights (two–three sentences), what attendees will learn (one–two sentences).

4. Ideas for Topics That Work Well

Not every proposal needs to be a deep internals dive; real-world stories resonate. Consider topics like:

- Migrating to Postgres (cloud or on-prem)
- Performance tuning adventures and lessons
- Postgres extensions and ecosystem tooling
- Operational best practices, HA architecture, or incident learnings
- Developer productivity with Postgres
- Novel patterns or creative uses of Postgres internals
- Azure Database for PostgreSQL customer stories
- Community-focused topics, such as how to start a PGDay event, how to begin contributing to open source, or how to engage with the Postgres community effectively

Look at POSETTE 2024 or 2025 talk titles to calibrate tone and depth.

5. What Happens If Your Talk Is Accepted?

Good news: the speaker experience is designed to be smooth and supportive.

- Talks are 25 minutes long and pre-recorded, with professional production support from the POSETTE organizing team at an agreed-upon time during the weeks of April 20 & April 27
- Speakers join live text chat during the session to interact with attendees
- No travel required; the event is fully virtual

All you need is a good microphone, a quiet space, and a story worth telling.

6. How to Submit Your Proposal

Here are the official links you'll want handy:

- 📄 CFP page: https://posetteconf.com/2026/cfp/
- ❓ FAQ: https://posetteconf.com/2026/faq/
- 📝 Submit on Sessionize: https://sessionize.com/posette2026/

Submission checklist: before hitting "submit," make sure you have:

- A strong, interesting title
- A clear and concise abstract
- Defined takeaways for attendees
- An understanding of your target audience
- Submission completed before February 1 @ 11:59 PM PST

POSETTE is built by and for the Postgres community, and your experience, whether small or monumental, has the potential to help others. With the CFP deadline approaching fast on February 1, now is the perfect time to refine your idea, shape your abstract, and submit your talk. This could be the year your story gets shared with thousands. Take the leap; the community will be glad you did.
From Oracle to Azure: How Quadrant Technologies accelerates migrations

This blog was authored by Manikyam Thukkapuram, Director, Alliances & Engineering at Quadrant Technologies; and Thiwagar Bhalaji, Migration Engineer and DevOps Architect at Quadrant Technologies.

Over the past 20+ years, Quadrant Technologies has accelerated database modernization for hundreds of organizations. As momentum toward the cloud continues to grow, a major focus for our business has been migrating on-premises Oracle databases to Azure. We've found that landing customers in Azure Database for PostgreSQL has been the best option in terms of both cost savings and efficiency, and Azure Migrate is by far the best way to get them there. With Azure Migrate, we're able to streamline migrations that traditionally took months into weeks.

As a Microsoft solutions partner, we help customers migrate to Azure and develop Azure-based solutions. We're known as "the great modernization specialists" because many of our customers come to us with complex legacy footprints, outdated infrastructure, and monolithic applications that can be challenging to move to the cloud. But we excel at untangling these complex environments. And with our Q-Migrator tool, which is a wrapper around Azure Migrate, we're able to automate and accelerate these kinds of migrations.

Manual steps slowed down timelines

In general, each migration we lead includes a discovery phase, a compatibility assessment, and the migration execution. In discovery, we identify every server, database, and application in a customer's environment and map their interactions. Next, we assess each asset's readiness for Azure and plan for optimal cloud configurations. Finally, we bring the plan to life, integrating applications, moving workloads, and validating performance.

Before adopting Azure Migrate, each of these phases involved manual tasks for our team. During our discovery process we manually collected inventory and wrote custom scripts to track server relationships and database dependencies. Our engineers also had to dig through configuration files and use third-party assessment tools for aspects like VM utilization and Oracle schema. When we mapped compatibility, we worked from static data to predict cost estimates and sizing, as opposed to operating from real-time telemetry. By the time we reached the migration phase, fragmented tooling and inconsistent assessments made it difficult to maintain accuracy and efficiency. Hidden dependencies sometimes surfaced late in the process, causing unexpected rework and delays.

Streamlining migrations with Azure Migrate

To automate and streamline these manual tasks, we developed Q-Migrator, our in-house framework built around Azure Migrate. Now we can offer clients an efficient, agentless approach to discovery, assessment, and migration. As part of our on-premises database migration initiatives, we rely on Azure Migrate to seamlessly migrate a wide range of structured databases (including MySQL, Microsoft SQL Server, PostgreSQL, and Oracle) from on-premises environments to Azure IaaS and PaaS.

For instance, for an on-premises PostgreSQL migration, we begin by setting up an Azure Migrate appliance in the client's environment to automatically discover servers, databases, and applications. That generates a complete inventory and dependency map that identifies every relationship between servers and databases. From there, we run an assessment through Azure Migrate to check compatibility, identify blockers, and right-size target environments for Azure Database for PostgreSQL.
By integrating Azure Database Migration Service (DMS), we can replicate data continuously until cutover, ensuring near-zero downtime. In addition, Azure DMS provides robust telemetry and analytics for deep visibility into every stage of the process. This unified and automated workflow not only replaces manual steps but also increases reliability and accelerates delivery. Teams benefit from a consolidated dashboard for planning, execution, and performance tracking, driving efficiency throughout the migration lifecycle.

75% faster deployment, 60% cost savings

Since implementing Azure Migrate, which now facilitates discovery and assessment for on-premises PostgreSQL workloads, we've accelerated deployment by 75% compared to traditional migration methods. We've also reduced costs for our clients by up to 60%. Automated discovery alone reduces that phase by nearly 40%, and dependency mapping now takes a fraction of the effort. With the integrated dashboard in Azure Migrate we can also track progress across discovery, assessment, and migration in one place, eliminating the need for multiple third-party tools. These efficiencies allow us to deliver complex migrations on tighter timelines without sacrificing quality or reliability.

Rounding out the modernization journey with AKS

As "the great modernization specialists," we're often asked which is the best database for landing Oracle workloads in the cloud. From our experience, Azure Database for PostgreSQL is ideal for enterprises seeking cost-efficient and secure PostgreSQL deployments. Its managed services reduce operational overhead while maintaining high availability, compliance, and scalability. Plus, seamless integration with Azure AI services allows us to innovate for clients and keep them ahead of the curve.

We also recognize that database migration is only the first step for many clients; modernizing the application layer delivers even greater scalability, security, and manageability. When clients come to Quadrant for a broader modernization strategy, we often use Azure Kubernetes Service (AKS) to containerize their applications and break monoliths into microservices. AKS delivers a cloud-native architecture alongside database modernization. This integration supports DevOps practices, simplifies deployments, and allows customers to take full advantage of elastic cloud infrastructure.

More innovation to come

Overall, Azure Migrate and Azure Database for PostgreSQL, Azure Database for MySQL, and Azure SQL Database have redefined how we deliver database modernization, and our close collaboration with Microsoft has made it possible. By engaging early with Microsoft, we can validate migration architectures and gain insights into best practices for high-performance and secure cloud deployments. Access to Microsoft experts helps us fine-tune our designs, optimize performance, and resolve complex issues quickly. We're also investing in AI-driven automation using Azure OpenAI in Foundry Models to analyze migration data, optimize queries, and predict performance outcomes. These innovations allow us to deliver more intelligent, adaptive solutions tailored to each customer's unique environment.
PostgreSQL for the enterprise: scale, secure, simplify

This week at Microsoft Ignite, along with unveiling the new Azure HorizonDB cloud native database service, we're announcing multiple improvements to our fully managed open-source Azure Database for PostgreSQL service, delivering significant advances in performance, analytics, security, and AI-assisted migration. Let's walk through nine of the top Azure Database for PostgreSQL features and improvements we're announcing at Microsoft Ignite 2025.

Feature Highlights

- New Intel and AMD v6-series SKUs (Preview)
- Scale to multiple nodes with Elastic Clusters (GA)
- PostgreSQL 18 (GA)
- Real-time analytics with Fabric Mirroring (GA)
- Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)
- Adding Parquet to the azure_storage extension (GA)
- Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)
- Integrated identity with Entra token-refresh libraries for Python
- AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Performance and scale

New Intel and AMD v6 series SKUs (Preview)

You can run your most demanding Postgres workloads on new Intel and AMD v6 General Purpose and Memory Optimized hardware SKUs, now available in preview. These SKUs deliver massive scale for high-performance OLTP, analytics, and complex queries, with improved price-performance and higher memory ceilings. AMD Confidential Compute v6 SKUs are also in public preview, enabling enhanced security for sensitive workloads while leveraging AMD's advanced hardware capabilities.

Here's what you need to know:

- Processors: Powered by 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) and AMD 4th Generation EPYC™ 9004 processors
- Scale: VM size options scale up to 192 vCores and 1.8 TiB of memory
- IO: Using the NVMe protocol for data disk access, IO is parallelized across CPU cores and processed more efficiently, offering significant IO improvements
- Compute tier: Available in our General Purpose and Memory Optimized tiers. You can scale up to these new compute SKUs as needed with minimal downtime.

Here's a quick summary of the v6 SKUs we're launching, with links to more information:

Processor | SKU     | Max vCores | Max Memory
Intel     | Ddsv6   | 192        | 768 GiB
Intel     | Edsv6   | 192        | 1.8 TiB
AMD       | Dadsv6  | 96         | 384 GiB
AMD       | Eadsv6  | 96         | 672 GiB
AMD       | DCadsv6 | 96         | 386 GiB
AMD       | ECadsv6 | 96         | 672 GiB

Scale to multiple nodes with Elastic Clusters (GA)

Elastic clusters are now generally available in Azure Database for PostgreSQL. Built on Citus open-source technology, elastic clusters bring the horizontal scaling of a distributed database to the enterprise features of Azure Database for PostgreSQL. Elastic clusters enable horizontal scaling of databases running across multiple server nodes in a "shared nothing" architecture, which is ideal for workloads with high-throughput and storage-intensive demands such as multi-tenant SaaS and IoT workloads. Elastic clusters come with all the enterprise-level capabilities that organizations rely upon in Azure Database for PostgreSQL, including high availability, read replicas, private networking, integrated security, and connection pooling. Built-in sharding support at both the row and schema level lets you distribute your data across a cluster of compute resources and run queries in parallel, dramatically increasing throughput and capacity (see the sketch below).

Learn more: Elastic clusters in Azure Database for PostgreSQL
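To give a feel for the row-level sharding model, elastic clusters expose a Citus-style distribution API. A minimal sketch, assuming an elastic cluster is already provisioned and using illustrative table and column names (not from the announcement itself):

```sql
-- A multi-tenant table; the primary key must include the distribution column.
CREATE TABLE events (
    tenant_id  bigint NOT NULL,
    event_id   bigserial,
    payload    jsonb,
    created_at timestamptz DEFAULT now(),
    PRIMARY KEY (tenant_id, event_id)
);

-- Row-level sharding: spread rows across the cluster's nodes by tenant_id.
SELECT create_distributed_table('events', 'tenant_id');

-- Queries that filter on the distribution column are routed to a single shard;
-- queries without that filter fan out and run in parallel across the cluster.
SELECT count(*) FROM events WHERE tenant_id = 42;
```

Schema-level sharding (placing whole schemas on different nodes) is the other distribution mode mentioned above and follows a similar declare-then-query pattern.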
PostgreSQL 18 (GA)

When PostgreSQL 18 was released in September, we made a preview available on Azure on the same day. Now we're announcing that PostgreSQL 18 is generally available on Azure Database for PostgreSQL, with full Major Version Upgrade (MVU) support, marking our fastest-ever turnaround from open-source release to managed service general availability. This release reinforces our commitment to delivering the latest PostgreSQL community innovations to Azure customers, so you can adopt the latest features, performance improvements, and security enhancements on a fully managed, production-ready platform without delay.

Note: MVU to PG18 is currently available in the NorthCentralUS and WestCentralUS regions, with additional regions being enabled over the next few weeks.

Now you can:

- Deploy PostgreSQL 18 in all public Azure regions.
- Perform in-place major version upgrades to PG18 with no endpoint or connection string changes.
- Use Microsoft Entra ID authentication for secure, centralized identity management in all PG versions.
- Enable Query Store and Index Tuning for built-in performance insights and automated optimization.
- Leverage the 90+ Postgres extensions supported by Azure Database for PostgreSQL.

PostgreSQL 18 also delivers major improvements under the hood, ranging from asynchronous I/O and enhanced vacuuming to improved indexing and partitioning, ensuring Azure continues to lead as the most performant, secure, and developer-friendly managed PostgreSQL service in the cloud.

Learn more:
- PostgreSQL 18 open-source release announcement
- Supported versions of PostgreSQL in Azure Database for PostgreSQL

Analytics

Real-time analytics with Fabric Mirroring (GA)

With Fabric mirroring in Azure Database for PostgreSQL, now generally available, you can run your Microsoft Fabric analytical workloads and capabilities on near-real-time replicated data, without impacting the performance of your production PostgreSQL databases, and at no extra cost. Mirroring in Fabric connects your operational and analytical platforms with continuous data replication from PostgreSQL to Fabric. Transactions are mirrored to Fabric in near real time, enabling advanced analytics, machine learning, and reporting on live data sets without waiting for traditional batch ETL processes to complete. This approach eliminates the overhead of custom integrations or data pipelines, and production PostgreSQL servers can run mission-critical transactional workloads without being affected by surges in analytical queries and reporting.

With our GA announcement, Fabric mirroring is ready for production workloads, with secure networking (VNET integration and Private Endpoints supported), Entra ID authentication for centralized identity management, and support for high-availability-enabled servers, ensuring business continuity for mirroring sessions.

Learn more: Mirroring Azure Database for PostgreSQL flexible server
Adding Parquet support to the azure_storage extension (GA)

In addition to mirroring data directly to Microsoft Fabric, there are many other scenarios that require moving operational data into data lakes for analytics or archival, and building and maintaining ETL pipelines for this can be expensive and time-consuming. Azure Database for PostgreSQL now natively supports Parquet via the azure_storage extension, enabling direct SQL-based reads and writes of Parquet files in Azure Storage. This makes it easy to import and export data in Postgres without external tools or scripts.

Parquet is a popular columnar storage format often used in big data and analytics environments (like Spark and Azure Data Lake) because of its efficient compression and query performance for large datasets. Now you can use the azure_storage extension to skip an entire step: just issue a SQL command to write to and query from a Parquet file in Azure Blob Storage.

Learn more: Azure storage extension in Azure Database for PostgreSQL

Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)

DuckDB's columnar engine excels at high-performance scans, aggregations, and joins over large tables, making it particularly well suited for analytical queries. The pg_duckdb extension, now available in preview for Azure Database for PostgreSQL, combines PostgreSQL's transactional performance and reliability with DuckDB's analytical speed for large datasets. Together, pg_duckdb and PostgreSQL are an ideal combination for hybrid OLTP + OLAP environments where you need to run analytical queries directly in PostgreSQL without sacrificing performance.

To see the pg_duckdb extension in action, check out this demo video: https://aka.ms/pg_duckdb

Learn more: pg_duckdb – PostgreSQL extension for DuckDB

Security

Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)

Operating in a regulated industry such as finance, healthcare, or government means meeting compliance requirements like HIPAA, PCI-DSS, and GDPR, which include protections for personal data as well as rules for password complexity, expiration, and reuse. This week the anon extension, previously in preview, is now generally available for Azure Database for PostgreSQL, adding support for dynamic and static masking, anonymized exports, randomization, and many other advanced masking techniques (see the sketch below for a flavor of how masking rules are declared).

We've also added GA support for the credcheck extension, which provides credential checks for usernames and password complexity, including during user creation, password change, and user renaming. This is particularly useful if your application is not using Entra ID and needs to rely on native PostgreSQL users and passwords.

If you need to store and query IP ranges for scenarios like auditing, compliance, access control lists, intrusion detection, and threat intelligence, another useful extension announced this week is the ip4r extension, which provides a set of data types for IPv4 and IPv6 network addresses.

Learn more:
- PostgreSQL Anonymizer
- credcheck – PostgreSQL username/password checks
- IP4R - IPv4/v6 and IPv4/v6 range index type for PostgreSQL

The Azure team maintains an active pipeline of new PostgreSQL extensions to onboard and upgrade in Azure Database for PostgreSQL. For example, another important extension upgraded this week is pg_squeeze, which removes unused space from a table; the updated 1.9.1 version adds important stability improvements.

Learn more: List of extensions and modules by name
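Circling back to the anon extension mentioned above: in PostgreSQL Anonymizer, masking rules are declared with security labels. A minimal sketch, assuming the extension is allow-listed and enabled on the server; the table, column, and role names are illustrative, and exact activation steps vary by anon version:

```sql
-- Requires the extension to be allow-listed (azure.extensions) on the server.
CREATE EXTENSION IF NOT EXISTS anon;

-- Declare masking rules on sensitive columns.
SECURITY LABEL FOR anon ON COLUMN customers.email
    IS 'MASKED WITH FUNCTION anon.partial_email(email)';
SECURITY LABEL FOR anon ON COLUMN customers.last_name
    IS 'MASKED WITH VALUE ''CONFIDENTIAL''';

-- Mark a role whose queries should see masked data (dynamic masking).
SECURITY LABEL FOR anon ON ROLE reporting_user IS 'MASKED';

-- Masking functions can also be called directly, e.g. for static anonymization.
SELECT anon.partial_email('someone@example.com');
```

How dynamic masking is switched on for the labeled role differs between anon versions, so check the PostgreSQL Anonymizer documentation linked above for the exact steps on the version shipped with the service.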
Integrated identity with Entra token-refresh libraries for Python

In a modern cloud-connected enterprise, identity becomes the most important security perimeter. Azure Database for PostgreSQL is the only managed PostgreSQL service with full Entra integration, but coding applications to take care of Entra token refresh can be complex. This week we're announcing a new Python library to simplify Entra token refresh. The library automatically refreshes authentication tokens before they expire, eliminating manual token handling and reducing connection failures.

The new python_azure_pg_auth library provides seamless Azure Entra ID authentication and supports the latest psycopg and SQLAlchemy drivers with automatic token acquisition, validation, and refresh. Built-in connection pooling is available for both synchronous and asynchronous workloads. Designed for cross-platform use (Windows, Linux, macOS), the package features a clean architecture and flexible installation options for different driver combinations. This is our first milestone in a roadmap to add token refresh for additional programming languages and frameworks.

Learn more, with code samples to get started: https://aka.ms/python-azure-pg-auth

Migration

AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Database migration is a challenging and time-consuming process, with multiple manual steps requiring schema- and application-specific knowledge. The growing popularity, maturity, and low cost of PostgreSQL has led to healthy demand for migration tooling that simplifies these steps. The new AI-assisted Oracle Migration Tool preview announced this week greatly simplifies moving from Oracle databases to Azure Database for PostgreSQL.

Available in the PostgreSQL extension for VS Code, the new migration tool combines GitHub Copilot, Azure OpenAI, and custom Language Model Tools to convert Oracle schema, database code, and client applications into PostgreSQL-compatible formats. Unlike traditional migration tools that rely on static rules, Azure's approach leverages Large Language Models (LLMs) and validates every change against a running Azure Database for PostgreSQL instance. This system not only translates syntax but also detects and fixes errors through iterative re-compilation, flagging any items that require human review. Application codebases built on Spring Boot and other popular frameworks are refactored and converted. The system also understands context by querying the target Postgres instance for its version and installed extensions, and it can even invoke capabilities from other VS Code extensions to validate the converted code. The new AI-assisted workflow reduces risk, eliminates significant manual effort, and enables faster modernization while lowering costs.

Learn more: https://aka.ms/pg-migration-tooling

Be sure to follow the Microsoft Blog for PostgreSQL for regular updates from the Postgres on Azure team at Microsoft. We publish monthly recaps about new features in Azure Database for PostgreSQL, as well as an annual blog about what's new in Postgres at Microsoft.
Performance Tuning for CDC: Managing Replication Lag in Azure Database for PostgreSQL with Debezium

Written by: Shashikant Shakya, Ashutosh Bapat, and Guangnan Shi

The Problem

Picture this: your CDC pipeline is running smoothly, streaming changes from PostgreSQL to Kafka. Then a bulk update hits millions of rows. Suddenly, Kafka queues pile up, downstream systems lag, and dashboards go stale. Why does replication lag spike during heavy operations? And what can you do about it?

Why This Matters

Change Data Capture (CDC) powers real-time integrations, pushing row-level changes from OLTP systems into event streams, data lakes, caches, and microservices. Debezium is a leading open-source CDC engine for PostgreSQL, and many teams successfully run Debezium against Azure Database for PostgreSQL to keep downstream systems synchronized. However, during large DML operations (bulk updates, deletes) or schema changes (DDL), replication lag can occur because:

- Debezium consumes WAL slower than the database produces it
- Kafka throughput dips
- Consumers fall behind

This article explains why lag happens, grounded in logical decoding internals, shows how to diagnose it quickly, and covers what to tune across the database, Azure, and connector layers to keep pipelines healthy under heavy load.

CDC Basics

CDC streams incremental changes (INSERT/UPDATE/DELETE) from your source database to downstream systems in near real time. In PostgreSQL, CDC is typically implemented using logical decoding and logical replication:

- PostgreSQL records every change in the Write-Ahead Log (WAL)
- The WAL sender reads the WAL and decodes it into change events
- The pgoutput plugin formats those changes, while Debezium subscribes and publishes them to Kafka topics

Benefits of CDC:

- Low latency
- Lower source overhead than periodic full extracts
- Preserves transactional ordering for consumers
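For orientation, the building blocks this pipeline relies on can be created by hand with standard PostgreSQL commands. Debezium normally creates the publication and replication slot itself, so this is only a minimal sketch of the underlying primitives; the table, publication, and slot names are illustrative:

```sql
-- Logical decoding requires wal_level = logical (a server parameter on Azure).
SHOW wal_level;

-- A publication defines which tables are streamed to logical consumers.
CREATE PUBLICATION dbz_publication FOR TABLE public.orders, public.customers;

-- A logical replication slot using the pgoutput plugin, Debezium's default.
SELECT pg_create_logical_replication_slot('debezium_slot', 'pgoutput');

-- Confirm the slot exists and which plugin it uses.
SELECT slot_name, plugin, slot_type, active
FROM pg_replication_slots;
```

Every slot created this way forces the server to retain WAL until the consumer confirms it, which is exactly why a stalled or slow consumer shows up as the lag discussed next.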
The Internals: Why Lag Happens

Replication lag during heavy operations isn't random; it's rooted in how PostgreSQL handles logical decoding. To understand why, let's look at the components that process changes and what happens when they hit resource limits.

Logical Decoding & ReorderBuffer

Logical decoding reconstructs transaction-level changes so they can be delivered in commit order. The core component enabling this is the ReorderBuffer.

What the ReorderBuffer does:

- Reads WAL and groups changes per transaction, keeping them in memory until commit
- If memory exceeds logical_decoding_work_mem, PostgreSQL spills decoded changes to disk in per-slot spill files
- On commit, it reads back spilled data and emits changes to the client (via pgoutput → Debezium)

Disk Spill Mechanics (Deep Dive)

When a transaction is too large for memory:

- PostgreSQL writes decoded changes to spill files under pg_replslot/<slot_name>/
- Wait events like ReorderBufferWrite and ReorderBufferRead dominate during heavy load
- Spills to disk increase latency because disk I/O is far slower than memory access

Analogy: Think of the ReorderBuffer as a warehouse staging floor. Small shipments move quickly in memory; a huge shipment forces workers to move boxes offsite (spill-to-disk) and bring them back later, slowing everything down.

Why Multiple Slots Amplify the Impact

- The WAL is shared by all slots
- Each slot decodes the entire WAL stream, because filtering happens after decoding
- Result: a single large transaction affects every slot, multiplying replication lag

Recommendations: minimize the number of slots/connectors, and remember that logical_decoding_work_mem applies per slot, not globally.

Impact snapshot:

Scenario | Spill Size | I/O Impact
1 slot   | 1 GB       | 1× I/O
5 slots  | 1 GB × 5   | 5× I/O

Lifecycle: WAL → ReorderBuffer → Memory → Spill to Disk → Read Back → Send to Client

How to Detect Spills and Lag

Detection should be quick and repeatable. Start by confirming slot activity and LSN distance (how far producers are ahead of consumers), then check walsender wait events to see if decoding is stalling, and finally inspect per-slot spill metrics to quantify memory overflow to disk.

1. Active slots and lag

Use this to measure how far each logical slot is behind the current WAL. A large lsn_distance indicates backlog. If restart_lsn is far behind, the server must retain more WAL on disk, increasing storage pressure.

```sql
SELECT slot_name,
       active_pid,
       confirmed_flush_lsn,
       restart_lsn,
       pg_current_wal_lsn(),
       pg_size_pretty((pg_current_wal_lsn() - confirmed_flush_lsn)) AS lsn_distance
FROM pg_replication_slots;
```

Interpretation: Focus on slots with the largest lsn_distance. If active_pid is NULL, the slot isn't currently consuming; investigate connector health or connectivity.

2. Wait events for walsender

Check whether the WAL sender backends are stalled on decoding or I/O. ReorderBuffer-related waits typically point to spill-to-disk conditions or slow downstream consumption.

```sql
SELECT pid, backend_type, application_name, wait_event
FROM pg_stat_activity
WHERE backend_type = 'walsender'
ORDER BY backend_start;
```

Interpretation: Frequent ReorderBufferWrite / ReorderBufferRead waits suggest large transactions are spilling.

3. Spill stats

Quantify how often and how much each slot spills from memory to disk. Rising spill_bytes and spill_count during heavy DML are strong signals to increase logical_decoding_work_mem, reduce transaction size, or tune connector throughput.

```sql
SELECT slot_name,
       spill_txns,
       spill_count,
       pg_size_pretty(spill_bytes) AS spill_bytes,
       total_txns,
       pg_size_pretty(total_bytes) AS total_bytes,
       stats_reset
FROM pg_stat_replication_slots;
```

Interpretation: Compare spill_bytes across slots; if many slots spill simultaneously, aggregate I/O multiplies. Consider reducing the number of active slots or batching large DML.
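In addition to the slot-level views above, the standard pg_stat_replication view shows lag from the WAL sender's perspective, one row per connected logical consumer. A quick check using stock PostgreSQL columns:

```sql
-- Per-connection view of how far each logical consumer is behind,
-- both in bytes of WAL and in elapsed time.
SELECT application_name,
       state,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn))  AS sent_lag_bytes,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), flush_lsn)) AS flush_lag_bytes,
       write_lag,
       flush_lag,
       replay_lag
FROM pg_stat_replication;
```

If a connector shows large byte lag here but its slot shows little spill activity, the bottleneck is more likely downstream (Kafka or the connector) than logical decoding itself.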
Fixing the Lag: Practical Strategies

Once you've identified replication lag and its root causes, the next step is mitigation. Solutions span the database configuration, Azure infrastructure, and the Debezium connector layer. These strategies aim to reduce I/O overhead, optimize memory usage, and ensure smooth data flow under heavy workloads.

Database & Azure layer

At the database and infrastructure level, focus on reducing unnecessary overhead and ensuring resources are scaled for peak demand. Here's what you can do:

- Avoid REPLICA IDENTITY FULL: prefer the PRIMARY KEY, or add a unique index and set REPLICA IDENTITY USING INDEX
- Use appropriately scaled, IO-capable storage and the right SKU for higher IOPS
- Right-size logical_decoding_work_mem, keeping in mind that it applies per slot
- Break up large DML: batch updates/deletes (10k–50k rows per commit)
- Schedule and throttle maintenance: stagger VACUUM, REINDEX, and DDL
- Network placement: use Private Endpoint and co-locate Debezium/Kafka within the same region/VNet

Debezium connector layer

Connector-level tuning ensures that Debezium can keep pace with PostgreSQL WAL generation and Kafka throughput. Key adjustments include:

- Tune throughput and buffering: increase max.batch.size and max.queue.size, reduce poll.interval.ms
- Offset flush tuning: reduce offset.flush.interval.ms
- Heartbeats: introduce heartbeat events to detect staleness and prevent WAL buildup

Conclusion

Managing replication lag in Azure Database for PostgreSQL with Debezium isn't just about tweaking parameters; it's about understanding logical decoding internals, anticipating workload patterns, and applying proactive strategies across the entire solution.

Key takeaways:

- Monitor early, act fast: use diagnostic queries to track lag, wait events, and spill activity
- Minimize complexity: fewer replication slots and well-tuned connectors reduce redundant work
- Plan for scale: batch large DML operations and right-size memory settings
- Leverage Azure capabilities: optimize IOPS tiers and network placement for predictable performance

By combining these best practices with continuous monitoring and operational discipline, you can keep your CDC pipelines healthy, even under heavy load, while ensuring downstream systems stay in sync with minimal latency.

Further reading:

- Azure Database for PostgreSQL Flexible Server Overview
- PostgreSQL Logical Replication
- Debezium PostgreSQL Connector
November 2025 Recap: PostgreSQL on Azure

Hello Azure Community,

November was an exciting month for PostgreSQL on Azure, packed with announcements at Microsoft Ignite 2025. In this recap, we'll walk you through the highlights, from feature recaps to deep-dive sessions, so you can catch up on everything you might have missed. If you couldn't join us live, here are some of the key sessions now available on demand:

- Modern data, modern apps: Innovation with Microsoft Databases
- AI-assisted migration: The path to powerful performance on PostgreSQL
- Azure HorizonDB: Deep Dive into a New Enterprise-Scale PostgreSQL
- The blueprint for intelligent AI agents backed by PostgreSQL
- Nasdaq Boardvantage: AI-driven governance on PostgreSQL and Microsoft Foundry

We also introduced major updates, including the Azure HorizonDB preview with AI capabilities and new features for Azure Database for PostgreSQL that make migrations faster, deployments smarter, and performance more predictable. The blog is organized into the following sections:

- Azure HorizonDB (Preview)
- Azure Database for PostgreSQL feature announcements
- Azure HorizonDB: AI features & developer tools
- Photo gallery from Ignite

Azure HorizonDB (Preview)

If it's not obvious, the introduction of Azure HorizonDB is a big deal. This brand-new, fully managed PostgreSQL service is built for mission-critical workloads and modern AI applications, bringing cloud-native scale, ultra-low latency, and deep Azure integration in one powerful offering. Here are some of the features that we offer with Azure HorizonDB:

- Scale-out compute architecture supporting up to 3,072 vCores across primary and replica nodes.
- Auto-scaling shared storage that handles databases up to 128 TB, while achieving sub-millisecond multi-zone commit latencies.
- Breakthrough throughput up to 3× higher than open-source PostgreSQL for transactional workloads, powered by our storage innovations.

Learn more about Azure HorizonDB in our detailed blog.

Azure Database for PostgreSQL feature announcements

We introduced a wave of new capabilities focusing on performance, analytics, security, and AI-assisted migration for Azure Database for PostgreSQL. Among the key general availability announcements were PostgreSQL 18, Fabric mirroring, Elastic clusters, and support for Parquet in the azure_storage extension. We also unveiled several exciting preview features, including Intel and AMD v6-series SKUs, the pg_duckdb extension, and enhanced tooling for Oracle-to-PostgreSQL migrations. All these updates are captured in our blog post; explore the full list to learn more.

Azure HorizonDB: AI features & developer tools

Azure HorizonDB isn't just built for enterprise-scale workloads; it's also designed to power next-generation AI applications. At Ignite, we introduced advanced AI capabilities including DiskANN with Advanced Filtering, built-in AI model management, and Microsoft Foundry integration.

- DiskANN Advanced Filtering reduces query latency by up to 3×, depending on filter complexity (see the sketch below).
- AI Model Management enables developers to set up semantic operators directly within the PostgreSQL environment, simplifying AI workflows.
- Microsoft Foundry Integration adds a PostgreSQL connector, allowing Foundry agents to interact with HorizonDB securely using natural language instead of SQL.
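For a flavor of the DiskANN vector indexing mentioned above, here is a minimal sketch in the style of the pg_diskann extension available on Azure Database for PostgreSQL; the table, column names, and tiny vector dimension are illustrative, and the exact surface exposed by Azure HorizonDB may differ:

```sql
-- Assumes the vector (pgvector) and pg_diskann extensions are allow-listed and enabled.
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_diskann;

-- Real embeddings are typically hundreds or thousands of dimensions;
-- vector(3) keeps this sketch self-contained.
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    title     text,
    embedding vector(3)
);

-- DiskANN approximate nearest-neighbor index over the embedding column.
CREATE INDEX documents_embedding_diskann
    ON documents
    USING diskann (embedding vector_cosine_ops);

-- Top-5 nearest documents to a query vector (placeholder literal).
SELECT id, title
FROM documents
ORDER BY embedding <=> '[0.12, 0.34, 0.56]'::vector(3)
LIMIT 5;
```

The advanced-filtering capability announced for HorizonDB layers metadata predicates (for example, a WHERE clause on tenant or category columns) on top of this kind of index search, which is where the quoted latency reductions come from.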
General Availability of the PostgreSQL extension for VS Code

We announced the general availability of the PostgreSQL extension for VS Code, making development faster and more intuitive. The extension now has over 300K downloads from the Visual Studio Marketplace, and it makes it easier for developers to seamlessly interact with any PostgreSQL database. To learn more about these AI features in Azure HorizonDB, check out our blog post.

Photo Gallery from Microsoft Ignite

Ignite 2025 brought a lot of great sessions, announcements, and hands-on demos. Here's a quick photo recap of some key moments, from technical deep dives to product launches to hearing about real-world impact from our amazing customer speakers.

POSETTE CFP Now Open

We are excited to announce that the Call for Proposals (CFP) for POSETTE: An Event for Postgres 2026 is now open! We're inviting speakers, practitioners, educators, and community contributors to share their knowledge through talks and demos. If you're passionate about PostgreSQL, open-source innovation, or building resilient data systems, we'd love to see your submission.

CFP link: https://posetteconf.com/2026/cfp/