PostgreSQL 18 Vacuuming Improvements Explained
Introduction

The PostgreSQL 18 release brings one of the most significant collections of VACUUM and ANALYZE improvements in years. These updates include faster heap scans through asynchronous I/O (AIO), flexible on-the-fly autovacuum scaling, earlier triggers for large tables, safer tail shrinking, and more robust observability.

At a Glance: What's New

- VACUUM runs faster by overlapping reads with processing via asynchronous I/O (AIO).
- Autovacuum throughput is adjustable without restarts: set a startup ceiling with autovacuum_worker_slots and tune autovacuum_max_workers on the fly.
- Autovacuum can trigger earlier on big tables using autovacuum_vacuum_max_threshold (a hard cap on dead tuples).
- Normal VACUUM can "eager-freeze" all-visible pages to amortize anti-wraparound work; control this with vacuum_max_eager_freeze_failure_rate.
- VACUUM and ANALYZE extend recursively into child tables by default; use the ONLY table modifier to target the parent level only.
- Tail shrinking is explicit and predictable using the server setting vacuum_truncate or the per-command TRUNCATE option.
- New visibility: per-table cumulative times and explicit cost-delay timing (track_cost_delay_timing).

Deep Dive: Changes That Matter

1) Vacuum is faster with asynchronous I/O (AIO)

A new asynchronous I/O subsystem lets VACUUM queue up reads and continue working while data is fetched, reducing heap-scan wait time on cache misses and smoothing throughput during large table passes.

How to enable & verify:

- Controlled by the io_method server parameter: SHOW io_method; (typically 'worker' by default)
- Check the worker count: SHOW io_workers; (default 3)
- Increase workers gradually if VACUUM remains I/O-bound: io_workers = 6 (test on non-prod first)

Note: Raise workers gradually (e.g., 3 → 6 → 12) and monitor server storage metrics and VACUUM durations.

2) Adjust autovacuum workers on the fly

Autovacuum scaling is now much more flexible. Previously, changing autovacuum_max_workers required a server restart, which could be painful for time-based tuning or urgent bloat cleanup. The new approach separates two concepts:

- autovacuum_worker_slots: the total number of slots reserved at startup (server restart required).
- autovacuum_max_workers: the number of workers allowed to run concurrently (reloadable on the fly).

If you set autovacuum_max_workers higher than the reserved slots, PostgreSQL caps the number and logs a warning.

Note: When you scale up workers, consider enlarging the shared cost budget so that the per-worker share does not collapse: either raise autovacuum_vacuum_cost_limit or reduce autovacuum_vacuum_cost_delay.
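For example, a minimal sketch of a no-restart scale-up (the values are illustrative; on a managed service such as Azure Database for PostgreSQL flexible server you would change these through the server parameters experience rather than ALTER SYSTEM):

    -- One-time, restart-required headroom (postgresql.conf):
    --   autovacuum_worker_slots = 16
    -- Later, raise concurrency without a restart:
    ALTER SYSTEM SET autovacuum_max_workers = 8;
    -- Keep the shared cost budget proportional so each worker's share holds up:
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 800;
    SELECT pg_reload_conf();
    -- Verify the new value took effect:
    SHOW autovacuum_max_workers;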
3) Force earlier autovacuum on very large tables

Extremely large tables can accumulate a significant number of dead tuples while staying below the classic trigger threshold:

    threshold = reltuples * autovacuum_vacuum_scale_factor + autovacuum_vacuum_threshold

To fix this, a new parameter, autovacuum_vacuum_max_threshold, adds a hard cap on dead tuples (default: 100,000,000). Once a table crosses this number of dead rows, autovacuum runs immediately, even if the scale factor says "wait." This prevents bloat buildup between runs on very large relations. Set it to -1 to disable the feature.

Examples:

- Server-wide cap (example: 50M dead tuples): autovacuum_vacuum_max_threshold = 50000000
- Per-table override for a hot relation: ALTER TABLE orders SET (autovacuum_vacuum_max_threshold = 20000000);
- Disable on a table if needed: ALTER TABLE staging SET (autovacuum_vacuum_max_threshold = -1);

Note: Prefer per-table caps (e.g., 10–50M) for a few giant tables instead of lowering the server-wide cap too far.
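To see which tables are approaching the cap, a query along these lines can help (a sketch; it compares current dead-tuple counts against the classic trigger formula using the server-wide settings, and ignores per-table overrides; reltuples is -1 for tables that have never been analyzed):

    -- Dead tuples vs. the classic trigger threshold, worst offenders first
    SELECT s.relname,
           s.n_dead_tup,
           round(c.reltuples * current_setting('autovacuum_vacuum_scale_factor')::float8)
             + current_setting('autovacuum_vacuum_threshold')::int AS classic_threshold
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    ORDER BY s.n_dead_tup DESC
    LIMIT 10;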
4) Eager freezing during normal vacuum

On very large tables, "normal" VACUUM often skips all-visible pages and leaves a backlog of pages that are all visible but not all frozen. When an anti-wraparound (aggressive) VACUUM finally triggers, it must scan all those pages, causing long, I/O-heavy maintenance runs that compete for resources with your foreground workload. With the new eager freezing, normal VACUUM can proactively freeze tuples on all-visible pages, spreading the freeze workload across routine runs and shortening future anti-wraparound passes on very large tables.

Examples:

- Server-wide, be a bit more eager: vacuum_max_eager_freeze_failure_rate = 0.05
- Per-table, even more eager on a hot relation: ALTER TABLE events SET (vacuum_max_eager_freeze_failure_rate = 0.10);
- Disable for churny tables: ALTER TABLE temp_staging SET (vacuum_max_eager_freeze_failure_rate = 0.0);

Note: Increase the rate if anti-wraparound VACUUMs on big tables run too long or spike I/O. Decrease it if routine VACUUMs become noticeably longer without observable benefit.

5) Vacuum/analyze: recurse by default; use the ONLY modifier when needed

PostgreSQL now simplifies maintenance for partitioned tables. For inheritance-based partitioning, VACUUM and ANALYZE automatically include child tables by default, removing the need for manual recursion. Use the ONLY keyword when you want to process just the parent. Declarative partitioning already behaved this way, but the new ability to run ANALYZE ONLY on a partitioned parent is a key improvement: it lets you refresh the parent's statistics without scanning all partitions, ideal when partition stats are already current but accurate parent-level estimates are needed for better query planning.

Examples:

Inheritance-based partitioning:

    -- Recurse by default (new behavior)
    VACUUM VERBOSE public.legacy_parent;   -- vacuum parent + child tables
    ANALYZE public.legacy_parent;          -- analyze parent + child tables
    -- Restrict to the parent only (old behavior)
    VACUUM VERBOSE ONLY public.legacy_parent;
    ANALYZE ONLY public.legacy_parent;

Declarative partitioning:

    -- Refresh just the parent's stats (skip partitions)
    ANALYZE ONLY public.sales_partitioned;
    -- VACUUM ONLY on a partitioned parent warns and does no work
    VACUUM VERBOSE ONLY public.sales_partitioned;

Note: For large partitioned tables, schedule parent-only ANALYZE during peak hours and full recursive runs off-peak.

6) Tail shrinking made explicit (vacuum_truncate)

Autovacuum's tail-shrink step (removing empty pages from the end of a table) can take an ACCESS EXCLUSIVE lock that is replayed on hot standbys, briefly stalling read operations. To make this more predictable, PostgreSQL added a top-level switch (vacuum_truncate) so you can disable automatic truncation and perform this operation only when you choose.

Examples:

- Prefer explicit control, disabling automatic tail-shrinks: vacuum_truncate = off
- Later, shrink the tail of a large log table in a window: VACUUM (VERBOSE, TRUNCATE ON) public.pgbench_history;
- Or keep truncation off for specific hot tables: ALTER TABLE public.orders SET (vacuum_truncate = off);

Note: If replicas serve heavy read operations, keep automatic truncation off and run explicit truncation within your maintenance windows.

7) Easier to see vacuum/analyze overhead

- Per-table time counters: cumulative total_vacuum_time / total_autovacuum_time in the pg_stat_all_tables views expose where maintenance time is spent, which is great for finding your top offenders.
- Throttling timing: flip track_cost_delay_timing = on to record the actual sleep time from cost-based delays in pg_stat_progress_vacuum and in VERBOSE output, perfect for distinguishing "we waited" from "we hit disk." (This setting is off by default to avoid adding timing overhead.)

Example: top tables by total maintenance time:

    SELECT relname, total_vacuum_time, total_autovacuum_time
    FROM pg_stat_user_tables
    ORDER BY (COALESCE(total_vacuum_time, 0) + COALESCE(total_autovacuum_time, 0)) DESC NULLS LAST
    LIMIT 10;
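Putting the throttling counter to work might look like the sketch below (the delay_time column name follows the PostgreSQL 18 release notes; verify the exact column set on your build):

    -- Start recording time actually slept due to cost-based delays
    ALTER SYSTEM SET track_cost_delay_timing = on;
    SELECT pg_reload_conf();
    -- While a VACUUM is running, see how much of its runtime was throttling
    SELECT pid, relid::regclass AS relation, phase, delay_time
    FROM pg_stat_progress_vacuum;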
Conclusion

These improvements make your maintenance operations faster, more predictable, and easier to tune. Get started by enabling AIO, right-sizing autovacuum concurrency and cost budgets, setting hard caps for your biggest tables, and adopting explicit tail shrinking if you have hot standbys. With better observability, you can quickly confirm whether VACUUM is throttled by cost-based delays or genuinely storage-bound, and adjust accordingly. Before applying changes in production, validate these settings in a staging environment, monitor pg_stat_progress_vacuum, pg_stat_io, and the table-level counters, and iterate gradually. The net effect is smoother autovacuum behavior, shorter aggressive VACUUMs, and fewer unexpected locks.

September 2025 Recap: Azure Database for PostgreSQL

Hello Azure Community,

We are back with another round of updates for Azure Database for PostgreSQL! September is packed with powerful enhancements, from the public preview of PostgreSQL 18 to the general availability of Azure Confidential Computing, plus several new capabilities designed to boost performance, security, and developer experience. Stay tuned as we dive deeper into each of these feature updates. Before we get to the feature highlights, let's take a look at PGConf NYC 2025.

PGConf NYC 2025 Highlights

Our Postgres team was glad to be part of PGConf NYC 2025! As a Platinum sponsor, Microsoft joined the global PostgreSQL community for three days of sessions covering performance, extensibility, cloud, and AI, highlighted by Claire Giordano's keynote, "What Microsoft is Building for Postgres—2025 in Review," along with deep dives from core contributors and engineers. If you missed it, you can catch up here:

- Keynote slides: What Microsoft is Building for Postgres—2025 in Review by Claire Giordano at PGConf NYC 2025
- Day 3 wrap-up: Key takeaways, highlights, and insights from the Azure Database for PostgreSQL team.

Feature Highlights

- Near Zero Downtime scaling for High Availability (HA) enabled servers - Generally Available
- Azure Confidential Computing for Azure Database for PostgreSQL - Generally Available
- PostgreSQL 18 on Azure Database for PostgreSQL - Public Preview
- PostgreSQL Discovery & Assessment in Azure Migrate - Public Preview
- LlamaIndex Integration with Azure Postgres
- Latest Minor Versions
- GitHub Samples: Entra ID Token Refresh for PostgreSQL
- VS Code Extension for PostgreSQL enhancements

Near Zero Downtime scaling for High Availability (HA) enabled servers – Generally Available

Scaling compute for high availability (HA) enabled Azure Database for PostgreSQL servers just got faster. With Near Zero Downtime (NZD) scaling, compute changes such as vCore or tier modifications now complete with minimal interruption, typically under 30 seconds, using an HA failover that preserves the connection string. The service provisions a new primary and standby instance with the updated configuration, synchronizes them with the existing setup, and performs a quick failover. This significantly reduces downtime compared to traditional scaling (which could take 2–10 minutes), improving overall availability. Visit our documentation for full details on how Near Zero Downtime scaling works.

Azure Confidential Computing for Azure Database for PostgreSQL - Generally Available

Azure Confidential Computing (ACC) Confidential Virtual Machines (CVMs) are now generally available for Azure Database for PostgreSQL. This capability brings hardware-based protection for data in use, ensuring your most sensitive information remains secure even while being processed. With CVMs, your PostgreSQL flexible server instance runs inside a Trusted Execution Environment (TEE), a secure, hardware-backed enclave that encrypts memory and isolates it from the host OS, hypervisor, and even Azure operators. This means your data enjoys end-to-end protection: at rest, in transit, and in use.

Key Benefits:

- End-to-End Security: Data protected at rest, in transit, and in use
- Enhanced Privacy: Blocks unauthorized access during processing
- Compliance Ready: Meets strict security standards for regulated workloads
- Confidence in Cloud: Hardware-backed isolation for critical data

To discover how Azure Confidential Computing enhances PostgreSQL, check out the blog announcement.
PostgreSQL 18 on Azure Database for PostgreSQL – Public Preview

PostgreSQL 18 is now available in public preview on Azure Database for PostgreSQL, launched the same day as the PostgreSQL community release. PostgreSQL 18 introduces new performance, scalability, and developer productivity improvements. With this preview, you get early access to the latest community release on a fully managed Azure service. By running PostgreSQL 18 on flexible server, you can test application compatibility, explore new SQL and performance features, and prepare for upgrades well before general availability. This preview release gives you the opportunity to validate your workloads, extensions, and development pipelines in a dedicated preview environment while taking advantage of the security, high availability, and management capabilities in Azure. With PostgreSQL 18 in preview, you are among the first to experience the next generation of PostgreSQL on Azure, ensuring your applications are ready to adopt it when it reaches general availability. To learn more about the preview, read https://aka.ms/pg18

PostgreSQL Discovery & Assessment in Azure Migrate – Public Preview

The PostgreSQL Discovery & Assessment feature is now available in public preview in Azure Migrate, making it easier to plan your migration journey to Azure. Migrating PostgreSQL workloads can be challenging without clear visibility into your existing environment. This feature solves that problem by delivering deep insights into on-premises PostgreSQL deployments, making migration planning easier and more informed. With this feature, you can discover PostgreSQL instances across your infrastructure, assess migration readiness and identify potential blockers, receive configuration-based SKU recommendations for Azure Database for PostgreSQL, and estimate costs for running your workloads in Azure, all in one unified experience.

Key Benefits:

- Comprehensive Visibility: Understand your on-premises PostgreSQL landscape
- Risk Reduction: Identify blockers before migration
- Optimized Planning: Get tailored SKU and cost insights
- Faster Migration: Streamlined assessment for a smooth transition

Learn more in our blog: PostgreSQL Discovery and Assessment in Azure Migrate

LlamaIndex Integration with Azure Postgres

Native LlamaIndex integration is now available for Azure Database for PostgreSQL! This enhancement brings seamless connectivity between Azure Database for PostgreSQL and LlamaIndex, allowing developers to use Azure PostgreSQL as a secure, high-performance vector store for their AI agents and applications. Specifically, this package adds support for:

- Microsoft Entra ID (formerly Azure AD) authentication when connecting to your Azure Database for PostgreSQL instances, and
- the DiskANN indexing algorithm when indexing your (semantic) vectors.

This package makes it easy to connect LlamaIndex to your Azure PostgreSQL instances, whether you're building intelligent agents, semantic search, or retrieval-augmented generation (RAG) systems. Explore the full guide here: https://aka.ms/azpg-llamaindex

Latest Postgres minor versions: 17.6, 16.9, 15.13, 14.18 and 13.21

PostgreSQL minor versions 17.6, 16.9, 15.13, 14.18 and 13.21 are now supported by Azure Database for PostgreSQL. These minor version upgrades are performed automatically as part of the monthly planned maintenance in Azure Database for PostgreSQL. The upgrade automation ensures that your databases are always running the latest optimized versions without requiring manual intervention.
This release fixes 3 security vulnerabilities and more than 55 bugs reported over the last several months. PostgreSQL minor versions are backward-compatible, so updates won't affect your applications. For details about the release, see the PostgreSQL community announcement.

GitHub Samples: Entra ID Token Refresh for PostgreSQL

We have introduced code samples for Entra ID token refresh, built specifically for Azure Database for PostgreSQL. These samples simplify implementing automatic token acquisition and refresh, helping you maintain secure, uninterrupted connectivity without manual intervention. By using these examples, you can keep sessions secure, prevent connection drops from expired tokens, and streamline integration with Azure Identity libraries for PostgreSQL workloads.

What's Included:

- Ready-to-use code snippets for token acquisition and refresh for Python and .NET
- Guidance for integrating with Azure Identity libraries

Explore the samples repository at https://aka.ms/pg-access-token-refresh and start implementing today.

VS Code Extension for PostgreSQL enhancements

A new version of the VS Code Extension for PostgreSQL is out! This update introduces a Server Dashboard that provides high-level metadata and real-time performance metrics, along with historical insights for Azure Database for PostgreSQL Flexible Server. You can even use GitHub Copilot Chat to ask performance questions in natural language and receive diagnostic SQL queries in response.

Additional enhancements include:

- A new keybinding for "Run Current Statement" in the Query Editor
- Support for dragging Object Explorer entities into the editor with properly quoted identifiers
- Ability to connect to databases via socket file paths

Key fixes:

- Preserves the state of the Explain Analyze toolbar toggle
- Removes inadvertent logging of sensitive information from extension logs
- Stabilizes memory usage during long-running dashboard sessions

Don't forget to update to the latest version in the marketplace to take advantage of these enhancements, and visit our GitHub repository to learn more about this month's release. We'd love your feedback! Help us improve the Server Dashboard and other features by sharing your thoughts on GitHub.

Azure Postgres Learning Bytes 🎓

Setting up logical replication between two servers

This section walks through setting up logical replication between two Azure Database for PostgreSQL flexible server instances. Logical replication replicates data changes from a source (publisher) server to a target (subscriber) server.

Prerequisites

- PostgreSQL versions supported by logical replication (publisher/subscriber compatible).
- Network connectivity: the subscriber must be able to connect to the publisher (VNet/NSG/firewall rules).
- A replication role on the publisher (or a role with the REPLICATION privilege); a minimal sketch follows below.
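For instance, a dedicated replication role might be created on the publisher like this (a sketch; the role name, password, and schema are illustrative):

    -- On the publisher: a login role with the REPLICATION privilege
    CREATE ROLE repl_user WITH LOGIN REPLICATION PASSWORD 'use-a-strong-secret';
    -- Allow it to read the tables being published
    GRANT USAGE ON SCHEMA public TO repl_user;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO repl_user;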
Step 1: Configure server parameters on both the publisher and the subscriber.

On the publisher:

    wal_level = logical
    max_worker_processes = 16
    max_replication_slots = 10
    max_wal_senders = 10
    track_commit_timestamp = on

On the subscriber:

    wal_level = logical
    max_worker_processes = 16
    max_replication_slots = 10
    max_wal_senders = 10
    track_commit_timestamp = on
    max_sync_workers_per_subscription = 6
    autovacuum = off            ; only during the initial copy
    max_wal_size = 64GB
    checkpoint_timeout = 3600   ; seconds

Step 2: Create the publication (publisher) and grant the role the replication privilege.

    ALTER ROLE <replication_user> WITH REPLICATION;
    CREATE PUBLICATION pub FOR ALL TABLES;

Step 3: Create the subscription (subscriber), referencing the publication created in Step 2.

    CREATE SUBSCRIPTION <subscription-name>
      CONNECTION 'host=<publisher_host> dbname=<db> user=<user> password=<pwd>'
      PUBLICATION pub;

Step 4: Monitor.

Publisher: this shows active processes on the publisher, including replication workers.

    SELECT application_name, wait_event_type, wait_event, query, backend_type
    FROM pg_stat_activity
    WHERE state = 'active';

Subscriber: the pg_stat_progress_copy view tracks the progress of the initial data copy for each table.

    SELECT * FROM pg_stat_progress_copy;

To explore more details on how to get started with logical replication, visit our blog on Tuning logical replication for Azure Database for PostgreSQL.

Conclusion

That's all for the September 2025 feature highlights! We remain committed to making Azure Database for PostgreSQL more powerful and secure with every release. Stay up to date on the latest enhancements by visiting our Azure Database for PostgreSQL blog. Your feedback matters and helps us shape the future of PostgreSQL on Azure. If you have suggestions, ideas, or questions, we'd love to hear from you: https://aka.ms/pgfeedback. We look forward to sharing even more exciting capabilities in the coming months. Stay tuned!

PgBouncer Best Practices in Azure Database for PostgreSQL – Part 1
Introduction

Connection pooling is critical for scaling PostgreSQL workloads efficiently, especially in managed environments like Azure Database for PostgreSQL. PgBouncer, a lightweight connection pooler, helps manage thousands of client connections without overwhelming the database. Connection pooling matters most when handling many concurrent database requests: PostgreSQL uses a process-per-connection model, so too many active connections can:

- Increase context-switching overhead
- Consume excessive CPU/memory
- Degrade performance under load

PgBouncer addresses this by limiting active server connections and queuing additional client requests. However, misconfiguring key settings such as default_pool_size can still lead to CPU/memory pressure, connection bottlenecks, and degraded performance. Careful planning and tuning are essential to avoid these pitfalls.

Understanding connection pools

Before diving into tuning, it's important to understand how PgBouncer organizes connections. PgBouncer creates a separate pool for each unique (database, user) combination. For example, if you have 2 application roles/users connecting to 2 databases, PgBouncer will allocate 4 pools. Each pool maintains its own number of connections, determined by default_pool_size, so the total number of potential server connections is:

    number_of_pools × default_pool_size

This is why sizing default_pool_size correctly is critical.

Azure PgBouncer defaults

Azure Database for PostgreSQL comes with preconfigured PgBouncer settings optimized for most workloads. Understanding these defaults is essential before making any tuning changes:

- pool_mode: TRANSACTION (default in Azure; best for most workloads)
- default_pool_size: 50 (range: 1–4950)
- max_client_conn: 5000 (range: 1–50000)

Transaction mode support for prepared statements

PgBouncer now supports PostgreSQL prepared statements in combination with TRANSACTION mode pooling. Previously, cached plans were difficult to manage in transaction mode, because PgBouncer had no way to confirm whether a connection newly allocated from the pool would benefit from cached plans generated by prior prepared-statement operations. To address this, PgBouncer now provides a parameter that controls how many globally cached prepared statements remain in memory for any pooled connection to use:

- max_prepared_statements: 200 (range: 0–5000)

PostgreSQL connection limits

For large tiers (e.g., 96 vCores), the default max_connections is 5000, with 15 reserved for system use. That leaves 4985 user connections available. For more details, see maximum connections.

Sizing best practices

Proper sizing ensures optimal performance and resource utilization. Here's how to approach it:

1. Use transaction pooling

Start by confirming that pool_mode = TRANSACTION is enabled. This is already the Azure default and provides the best pooling efficiency for most web applications. If your application uses prepared statements, make sure you configure max_prepared_statements accordingly.

2. Determine your maximum active concurrent database operations (max_concurrent_ops)

Next, estimate how many concurrent active PostgreSQL backends your instance can sustain:

- For CPU-bound OLTP workloads: keep max_concurrent_ops near 1.5x–2x the number of CPU vCores.
- For I/O-heavy workloads: stay slightly above the vCore count.

Rule of thumb for 96 vCores: max_concurrent_ops ≈ 144–192.
3. Divide across pools

Once you've estimated your max_concurrent_ops value, distribute that capacity across all connection pools:

    default_pool_size ≈ max_concurrent_ops / number_of_pools

Example:

    max_concurrent_ops = 144
    number_of_pools = 4
    default_pool_size = 144 / 4 = 36

Sample configuration

To illustrate how these calculations translate into real-world settings, here's a sample PgBouncer configuration tuned for a scenario with four pools and a max_concurrent_ops of 144:

    pool_mode = transaction
    default_pool_size = 36    ; tuned for 4 pools
    max_client_conn = 5000

Quick reference table

For quick planning, the following table provides starting recommendations based on common Azure Database for PostgreSQL SKU sizes. Use these as a baseline and adjust according to your workload metrics.

    SKU Size    Memory        Default max_connections   Pools   Suggested max_concurrent_ops   Starting default_pool_size
    8 vCores    32 GiB        3437                      2       12–16                          6–12
    16 vCores   64 GiB        5000                      2       24–32                          12–20
    32 vCores   128 GiB       5000                      2       48–64                          30–40
    48 vCores   192 GiB       5000                      2       72–92                          40–60
    64 vCores   256 GiB       5000                      2       96–128                         50–70
    96 vCores   384–672 GiB   5000                      2       144–192                        60–80

For all tiers ≥16 vCores, max_connections is capped at 5000 (with 15 reserved for system use).

Notes:

- default_pool_size = max_concurrent_ops / number_of_pools
- These values are starting recommendations. Validate them against actual workload metrics and adjust gradually.
- Always ensure: (number_of_pools × default_pool_size) < max_connections − 15 (reserved system slots)

Monitoring and tuning

After applying your configuration, continuous monitoring is key:

- Use PgBouncer metrics in Azure Monitor to track active, idle, and waiting connections.
- Run SHOW POOLS; for real-time stats; watch cl_waiting vs sv_idle.
- For detailed monitoring and management, visit the Admin Console.

Recommended alerts:

- Alert if waiting client connections > 0 while idle server connections = 0 (indicates pool exhaustion; consider increasing default_pool_size).
- Alert if active server connections approach the configured default_pool_size (may indicate a need for tuning).
- Alert if max_client_conn utilization exceeds 80% (risk of client-side connection errors).

Tip: If waiting client connections grow while idle server connections are zero, increase default_pool_size cautiously. Review performance regularly and adjust gradually.
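As a concrete example, the admin console can be queried like any database (a sketch; the built-in PgBouncer on Azure flexible server is assumed to listen on port 6432, and the server and user names are illustrative):

    -- Connect to the PgBouncer admin console first, e.g.:
    --   psql "host=myserver.postgres.database.azure.com port=6432 dbname=pgbouncer user=myadmin"
    SHOW POOLS;
    -- Key columns to watch:
    --   cl_active  = clients currently being served
    --   cl_waiting = clients queued for a server connection
    --   sv_active  = server connections in use
    --   sv_idle    = server connections sitting idle
    -- Sustained cl_waiting > 0 with sv_idle = 0 suggests the pool is exhausted.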
Common pitfalls

Avoid these mistakes when configuring PgBouncer:

- Switching pool mode to SESSION by default: transaction pooling is better for most apps, and session mode does not release a server connection until the session ends.
- Ignoring pool count: multiplying a large default_pool_size by many pools can exhaust connections.
- Confusing max_client_conn with Postgres capacity: PgBouncer can accept far more clients than the server can process concurrently; client connections not being processed simply wait for resources.
- Tuning without data: always review metrics before making changes.

Conclusion

Choosing the right default_pool_size in Azure Database for PostgreSQL with PgBouncer is about balancing performance and resource efficiency. With built-in PgBouncer in Flexible Server, you can enable connection pooling with a single parameter, making it easy to get started quickly. The default settings are optimized for most workloads, and as your requirements grow, you can further tune parameters like default_pool_size and max_connections to suit your needs. By understanding your workload, estimating your maximum active concurrent operations, dividing that across pools while respecting PostgreSQL limits, and continuously monitoring and adjusting based on real data, you can achieve a stable, scalable, and cost-effective environment.

Further reading

For more in-depth guidance and real-world scenarios on PgBouncer configuration and tuning in Azure Database for PostgreSQL, explore the following resources:

- Leverage built-in PgBouncer in Flexible Server
- Monitoring PgBouncer in Azure PostgreSQL Flexible Server
- Identify and solve connection performance in Azure Postgres
- Not all Postgres connection pooling is equal
- Connection handling best practice with PostgreSQL

PostgreSQL and the Power of Community
PGConf NYC 2025 is the premier event for the global PostgreSQL community, and Microsoft is proud to be a Platinum sponsor this year. The conference will also feature a keynote from Claire Giordano, Principal PM for PostgreSQL at Microsoft, who will share our vision for Postgres along with lessons from ten PostgreSQL hacker journeys.

PostgreSQL Discovery and Assessment in Azure Migrate – Public Preview
We're excited to announce the public preview of PostgreSQL discovery and assessment in Azure Migrate! This feature helps organizations plan their migration journey to Azure by providing deep insights into on-premises PostgreSQL environments.

Why This Matters

Migrating PostgreSQL workloads to Azure can be challenging without visibility into your current environment. Azure Migrate now offers a unified experience to:

- Discover PostgreSQL instances across your infrastructure.
- Assess migration readiness and identify potential blockers.
- Get configuration-based SKU recommendations for Azure Database for PostgreSQL.
- Estimate Azure costs for your PostgreSQL workloads.

Key Capabilities

Comprehensive Discovery

- Inventory: Catalog PostgreSQL versions and related components.
- Discovery: Collect database parameters, configurations, table structures, and storage details.

Assessment Features

Readiness Rules determine the state of each PostgreSQL instance:

- Ready: The instance can be migrated to Azure Database for PostgreSQL without any migration issues.
- Ready with Conditions: The instance has one or more migration issues. Review the identified issues and apply the recommended remediation steps before migration.
- Not Ready: The assessment did not identify an Azure Database for PostgreSQL configuration that meets the desired performance and configuration requirements. Review the recommendations provided to make the PostgreSQL instance ready for migration.
- Unknown: Azure Migrate can't assess readiness because discovery is still in progress or there are issues to resolve. To fix discovery issues, check the Notifications blade for details. If the issue persists, contact Microsoft support.

Configuration-Based SKU Recommendations: Based on the vCores and memory of the machine and the storage of the PostgreSQL instance (example: Memory Optimized – E20ds_v5).

Pricing Estimates: Approximate Azure cost for the recommended SKUs.

Database Parameter Collection: Deep insights into database parameters.

How to Get Started

To begin using the PostgreSQL Discovery and Assessment feature in Azure Migrate, follow this four-step onboarding process:

1. Create an Azure Migrate project: Initiate your migration journey by setting up a project in the Azure portal.
2. Configure the Azure Migrate appliance: Install the Windows-based Azure Migrate appliance to obtain a software inventory of servers, PostgreSQL instances, and their attributes, and to perform discovery.
3. Review discovered inventory: Examine the detailed attributes of the discovered PostgreSQL instances.
4. Create an assessment: Evaluate readiness and get detailed recommendations for migration to Azure Database for PostgreSQL.

Benefits of Using Azure Migrate for PostgreSQL

- Single Pane of Glass: Manage PostgreSQL migrations alongside servers, apps, and other databases.
- Simple Setup: Lightweight collector, no heavy appliances.
- Actionable Insights: Readiness rules and SKU recommendations tailored to your configuration.

For comprehensive, step-by-step instructions, refer to the discovery and assessment tutorials in the documentation:

- Provide server credentials to discover software inventory, dependencies, web apps, and SQL Server instances and databases - Azure Migrate | Microsoft Learn
- Discovery methods in Azure Migrate - Azure Migrate | Microsoft Learn
- Assessing On-Premises PostgreSQL for Migration to Azure Flexible Server - Azure Migrate | Microsoft Learn

Join the Preview and Share Your Feedback!
The PostgreSQL Discovery and Assessment feature in Azure Migrate enables you to effortlessly discover, assess, and plan your PostgreSQL database migrations to Azure. Try the features out in public preview and fast-track your migration journey! If you have any queries, feedback, or suggestions, please let us know by leaving a comment below or by contacting us directly at askazurepostgresql@microsoft.com. We are eager to hear your feedback and support you on your journey to Azure.

Architecting Secure PostgreSQL on Azure: Insights from Mercedes-Benz
Authors: Johannes Schuetzner, Software Engineer at Mercedes-Benz & Nacho Alonso Portillo, Principal Program Manager at Microsoft

When you think of Mercedes-Benz, you think of innovation, precision, and trust. But behind every iconic vehicle and digital experience is a relentless drive for security and operational excellence. At Mercedes-Benz R&D in Sindelfingen, Germany, Johannes Schuetzner and the team faced a challenge familiar to many PostgreSQL users: how to build a secure, scalable, and flexible database architecture in the cloud without sacrificing agility or developer productivity. This article shares insights from Mercedes-Benz about how Azure Database for PostgreSQL can be leveraged to enhance your security posture, streamline access management, and empower teams to innovate with confidence.

The Challenge: Security Without Compromise

"OK, let's stop intrusions in their tracks," Schuetzner began his POSETTE talk, setting the tone for a deep dive into network security and access management. Many organizations need to protect sensitive data, ensure compliance, and enable secure collaboration across distributed teams. The typical priorities are clear:

- Encrypt data in transit and at rest
- Implement row-level security for granular access
- Integrate with Microsoft Defender for Cloud for threat protection
- Focus on network security and access management, where configuration can make the biggest impact

Building a Secure Network: Private vs. Public Access

Mercedes-Benz explored two fundamental ways to set up their network for Azure Database for PostgreSQL: private access and public access. "With private access, your PostgreSQL server is integrated in a virtual network. With public access, it is accessible by everybody on the public internet," explained Schuetzner.

Public access:

- Public endpoint, resolvable via DNS
- Firewall rules control allowed IP ranges
- Vulnerable to external attacks; traffic travels over the public internet

Private access:

- Server injected into an Azure VNet
- Traffic travels securely over the Azure backbone
- Requires a delegated subnet and private DNS
- VNet peering enables cross-region connectivity

"One big benefit of private access is that the network traffic travels over the Azure backbone, so not the public internet," said Schuetzner. This ensures that sensitive data remains protected, even as applications scale across regions. An Azure VNet is restricted to a single Azure region, though, and peering VNets can be complex.

Embracing Flexibility: The Power of Private Endpoints

Last year, Azure introduced private endpoints for PostgreSQL, a significant milestone in Mercedes-Benz's database connectivity strategy. A private endpoint adds a network interface to the resource that can also be reached from other Azure regions, allowing resources in the VNet associated with the private endpoint to connect to the Postgres server while the traffic travels securely over the Azure backbone. Private endpoints allow Mercedes-Benz to:

- Dynamically enable and disable public access during migrations
- Flexibly provision multiple endpoints for different VNets and regions
- Keep explicit control over allowed network access
- Benefit from built-in protection against data exfiltration
- Automate setup with Terraform and infrastructure-as-code

This flexibility can be crucial for supporting large architectures and migration scenarios, all while maintaining robust security.

Passwordless Authentication: Simplicity Meets Security

Managing database passwords is a pain point for every developer. Mercedes-Benz embraced Microsoft Entra ID authentication (formerly Azure Active Directory) to enable passwordless connections. Passwordless connections do not rely on traditional passwords but on the more secure authentication methods of Microsoft Entra; they require less administrative effort and help prevent security breaches. Benefits include:

- Uniform user management across Azure resources
- Group-based access control
- Passwordless authentication for applications and CI/CD pipelines

For developers, this means less manual overhead and fewer risks of password leaks. "Once you have set it up, then Azure takes good care of all the details, you don't have to manage your passwords anymore, also they cannot be leaked anymore accidentally because you don't have a password," Schuetzner emphasized.

Principle of Least Privilege: Granular Authorization

Mercedes-Benz applies the principle of least privilege, ensuring applications have only the permissions they need, nothing more. By correlating managed identities with specific roles in PostgreSQL, teams can grant only the necessary Data Manipulation Language (DML) permissions (SELECT, INSERT, UPDATE) while restricting Data Definition Language (DDL) operations. This approach minimizes risk and simplifies compliance.
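In PostgreSQL terms, that pattern might look like the following sketch (the role, schema, and privilege set are illustrative, not taken from the talk):

    -- Application role gets DML only; it owns no objects, so DDL stays out of reach
    CREATE ROLE app_writer NOLOGIN;
    GRANT USAGE ON SCHEMA app TO app_writer;
    GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA app TO app_writer;
    -- Cover tables the schema owner creates later, too
    ALTER DEFAULT PRIVILEGES IN SCHEMA app
      GRANT SELECT, INSERT, UPDATE ON TABLES TO app_writer;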
Operational Excellence: Automation and Troubleshooting

Automation is key to Mercedes-Benz's success. Using Terraform integrated into CI/CD pipelines, the team can provision identities, configure endpoints, and manage permissions, all as code. For troubleshooting, tools like Azure Bastion enable secure, temporary access to the database for diagnostics without exposing sensitive endpoints.

The Impact: Security, Agility, and Developer Empowerment

By leveraging Azure Database for PostgreSQL, Mercedes-Benz can achieve:

- Stronger security through private networking and passwordless authentication
- A flexible, scalable architecture for global operations
- Streamlined access management and compliance
- Developers who are free to focus on innovation, not infrastructure

Schuetzner concluded, "Private endpoints provide a new network opportunity for Postgres on Azure. There are additional costs, but it's more flexible and more dynamic. Azure takes good care of all the details, so you don't have to manage your passwords anymore. It's basically the ultimate solution for password management." Mercedes-Benz's story shows that with the right tools and mindset, you can build secure and scalable solutions on Azure Database for PostgreSQL. For more details, refer to the full POSETTE session.

Introducing support for Graph data in Azure Database for PostgreSQL (Preview)
We are excited to announce the addition of the Apache AGE extension in Azure Database for PostgreSQL, a significant advancement that provides graph processing capabilities within the PostgreSQL ecosystem. This new extension brings a powerful toolset for developers looking to leverage a graph database with the robust enterprise features of Azure Database for PostgreSQL.

August 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community,

August was an exciting month for Azure Database for PostgreSQL! We have introduced updates that make your experience smarter and more secure. From simplified Entra ID group login to integrations with LangChain and LangGraph, these updates improve access control and provide seamless integration for your AI agents and applications. Stay tuned as we dive deeper into each of these feature updates.

Feature Highlights

- Enhanced Performance recommendations for Azure Advisor - Generally Available
- Entra ID group login using user credentials - Public Preview
- New Region Buildout: Austria East
- LangChain and LangGraph connector
- Active-Active Replication Guide

Enhanced Performance recommendations for Azure Advisor - Generally Available

Azure Advisor now offers enhanced recommendations to further optimize PostgreSQL server performance, security, and resource management. The key updates are as follows:

- Index Scan Insights: Detection of disabled index and index-only scans, with recommendations to improve query efficiency.
- Audit Logging Review: Identification of excessive logging via the pgaudit.log parameter, with guidance to reduce overhead.
- Statistics Monitoring: Alerts on server statistics resets and suggestions to restore accurate performance tracking.
- Storage Optimization: Analysis of storage usage with recommendations to enable the Storage Autogrow feature for seamless scaling.
- Connection Management: Evaluation of workloads for short-lived connections and frequent connectivity errors, with recommendations to implement PgBouncer for efficient connection pooling.

These enhancements aim to provide deeper operational insights and support proactive performance tuning for PostgreSQL workloads. For more details, read the Performance recommendations documentation.

Entra ID group login using user credentials - Public Preview

The public preview of Entra ID group login using user credentials is now available. This feature simplifies user management and improves security within Azure Database for PostgreSQL, giving administrators and users a more streamlined process:

- Changes in Entra ID group memberships are synchronized on a periodic 30-minute basis. This scheduled syncing keeps access controls up to date, simplifying user management and maintaining current permissions.
- Users can log in with their own credentials, streamlining authentication and improving auditing and access management for PostgreSQL environments.

As organizations continue to adopt cloud-native identity solutions, this update represents a major improvement in operational efficiency and security for PostgreSQL database environments. For more details, read the documentation on Entra ID group login.

New Region Buildout: Austria East

New region rollout! Azure Database for PostgreSQL flexible server is now available in Austria East, giving customers in and around the region lower latency and data residency options. This continues our mission to bring Azure PostgreSQL closer to where you build and run your apps. For the full list of regions, visit: Azure Database for PostgreSQL Regions.

LangChain and LangGraph connector

We are excited to announce that native LangChain & LangGraph support is now available for Azure Database for PostgreSQL!
This integration brings native support for Azure Database for PostgreSQL into LangChain and LangGraph workflows, enabling developers to use Azure PostgreSQL as a secure, high-performance vector store and memory store for their AI agents and applications. Specifically, this package adds support for:

- Microsoft Entra ID (formerly Azure AD) authentication when connecting to your Azure Database for PostgreSQL instances, and
- the DiskANN indexing algorithm when indexing your (semantic) vectors.

This package makes it easy to connect LangChain to your Azure-hosted PostgreSQL instances, whether you're building intelligent agents, semantic search, or retrieval-augmented generation (RAG) systems. Read more at https://aka.ms/azpg-agent-frameworks

Active-Active Replication Guide

We have published a new blog article that guides you through setting up active-active replication in Azure Database for PostgreSQL using the pglogical extension. This walkthrough covers the fundamentals of active-active replication, key prerequisites for enabling bi-directional replication, and step-by-step demo scripts for the setup. It also compares native and pglogical approaches, helping you choose the right strategy for high availability and multi-region resilience in production environments. Read more in the active-active replication guide on this blog.

Azure Postgres Learning Bytes 🎓

Enabling Zone-Redundant High Availability for Azure Database for PostgreSQL Flexible Server using APIs

High availability (HA) is essential for ensuring business continuity and minimizing downtime in production workloads. With Zone-Redundant HA, Azure Database for PostgreSQL Flexible Server automatically provisions a standby replica in a different availability zone, providing stronger fault tolerance against zone-level failures. This section shows how to enable Zone-Redundant HA using the REST APIs. Working with the REST APIs gives you clear visibility into the exact requests and responses, making it easier to debug issues and validate configurations as you go. You can use any REST API client tool of your choice, including Postman, Thunder Client (VS Code extension), or curl, to send requests and inspect the results.

Before enabling Zone-Redundant HA, make sure your server is on the General Purpose or Memory Optimized tier and deployed in a region that supports it. If your server is currently using Same-Zone HA, you must first disable it before switching to Zone-Redundant.

Steps to enable Zone-Redundant HA:

1. Get an ARM bearer token. Run this in a terminal where the Azure CLI is signed in (or use Azure Cloud Shell):

    az account get-access-token --resource https://management.azure.com --query accessToken -o tsv

2. Paste the token into your API client tool as an Authorization header: Bearer <token>

3. Inspect the server (GET) using the following URL:

    https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroup}}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{{serverName}}?api-version={{apiVersion}}

In the JSON response, note:

- sku.tier → must be 'GeneralPurpose' or 'MemoryOptimized'
- properties.availabilityZone → '1', '2', or '3' (this depends on the availability zone specified when the primary server was created; the system selects one if no zone is specified)
- properties.highAvailability.mode → 'Disabled', 'SameZone', or 'ZoneRedundant'
- properties.highAvailability.state → e.g.
'NotEnabled', 'CreatingStandby', or 'Healthy'

4. If HA is currently SameZone, disable it first (PATCH). Use the same URL as in step 3, with this request body:

    { "properties": { "highAvailability": { "mode": "Disabled" } } }

5. Enable Zone-Redundant HA (PATCH). Use the same URL as in step 3, with this request body:

    { "properties": { "highAvailability": { "mode": "ZoneRedundant" } } }

6. Monitor until HA is healthy. Re-run the GET from step 3 every 30–60 seconds until you see:

    "highAvailability": { "mode": "ZoneRedundant", "state": "Healthy" }

Conclusion

That's all for our August 2025 feature updates! We're committed to making Azure Database for PostgreSQL better with every release, and your feedback plays a key role in shaping what's next.

💬 Have ideas, questions, or suggestions? Share them with us: https://aka.ms/pgfeedback
📢 Want to stay informed about the latest features and best practices? Follow the Azure Database for PostgreSQL Blog for announcements, feature releases, and best practices.

More exciting improvements are on the way. Stay tuned for what's coming next!

June 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community,

We introduced a range of exciting new features and updates to Azure Database for PostgreSQL in June. From the general availability of PostgreSQL 17 to the public preview of the SSD v2 storage tier for High Availability, there were significant feature announcements across multiple areas last month. Stay tuned as we dive deeper into each of these feature updates. Before that, let's look at the POSETTE 2025 highlights.

POSETTE 2025 Highlights

We hosted POSETTE: An Event for Postgres 2025 in June! This year marked our 4th annual event, featuring 45 speakers and a total of 42 talks. PostgreSQL developers, contributors, and community members came together to share insights on topics covering everything from AI-powered applications to deep dives into PostgreSQL internals. If you missed it, you can catch up by watching the POSETTE livestream sessions. If this conference sounds interesting and you want to be part of it next year, don't forget to subscribe to POSETTE news.

Feature Highlights

- General Availability of PostgreSQL 17 with 'In-Place' upgrade support
- General Availability of Online Migration
- Migration service support for PostgreSQL 17
- Public Preview of SSD v2 High Availability
- New Region: Indonesia Central
- VS Code Extension for PostgreSQL enhancements
- Enhanced role management
- Ansible collection released for latest REST API version

General Availability of PostgreSQL 17 with 'In-Place' upgrade support

PostgreSQL 17 is now generally available on Azure Database for PostgreSQL flexible server, bringing key community innovations to your workloads. You'll see faster vacuum operations, richer JSON processing, smarter query planning (including better join ordering and parallel execution), dynamic logical replication controls, and enhanced security and audit-logging features, backed by Azure's five-year support policy. You can easily upgrade to PostgreSQL 17 using the in-place major version upgrade feature available through the Azure portal and CLI, without changing server endpoints or reconfiguring applications. The process includes built-in validations and rollback safety to help ensure a smooth and reliable upgrade experience. For more details, read the PostgreSQL 17 release announcement blog.

General Availability of Online Migration

We're excited to announce that Online Migration is now generally available in the Migration service for Azure Database for PostgreSQL! Online migration minimizes downtime by keeping your source database operational during the migration process, with continuous data synchronization until cutover. This is particularly beneficial for mission-critical applications that require minimal downtime during migration. This milestone brings production-ready online migration capabilities supporting various source environments, including on-premises PostgreSQL, Azure VMs, Amazon RDS, Amazon Aurora, and Google Cloud SQL. For detailed information about the capabilities and how to get started, visit our Migration service documentation.

Migration service support for PostgreSQL 17

Building on our PostgreSQL 17 general availability announcement, the Migration service for Azure Database for PostgreSQL now fully supports PostgreSQL 17. This means you can seamlessly migrate your existing PostgreSQL instances from various source platforms to Azure Database for PostgreSQL flexible server running PostgreSQL 17.
With this support, organizations can take advantage of the latest PostgreSQL 17 features and performance improvements while leveraging our online migration capabilities for minimal-downtime transitions. The migration service maintains full compatibility with PostgreSQL 17's enhanced security features, improved query planning, and other community innovations.

Public Preview of SSD v2 High Availability

We're excited to announce the public preview of High Availability (HA) support for the Premium SSD v2 storage tier in Azure Database for PostgreSQL flexible server. This support allows you to enable Zone-Redundant HA with Premium SSD v2 during server deployments. In addition to high availability on SSD v2, you now get improved resiliency and 10-second failover times when using Premium SSD v2 with Zone-Redundant HA, helping you build resilient, high-performance PostgreSQL applications with minimal overhead. This feature is particularly well suited for mission-critical workloads, including those in financial services, real-time analytics, retail, and multi-tenant SaaS platforms.

Key Benefits of Premium SSD v2:

- Flexible disk sizing: Scale from 32 GiB to 64 TiB in 1-GiB increments
- Fast failovers: Planned or unplanned failovers typically complete in around 10 seconds
- Independent performance configuration: Achieve up to 80,000 IOPS and 1,200 MB/s throughput without resizing your disk
- Baseline performance: Free throughput of 125 MB/s and 3,000 IOPS for disks up to 399 GiB, and 500 MB/s and 12,000 IOPS for disks of 400 GiB and above, at no additional cost

For more details, please refer to the Premium SSD v2 HA blog.

New Region: Indonesia Central

New region rollout! Azure Database for PostgreSQL flexible server is now available in Indonesia Central, giving customers in and around the region lower latency and data residency options. This continues our mission to bring Azure PostgreSQL closer to where you build and run your apps. For the full list of regions, visit: Azure Database for PostgreSQL Regions.

VS Code Extension for PostgreSQL enhancements

The brand-new VS Code extension for PostgreSQL launched in mid-May and has already garnered over 122K installs from the Visual Studio Marketplace! And the kickoff blog about this new IDE for PostgreSQL in VS Code has had over 150K views. This extension makes it easier for developers to seamlessly interact with PostgreSQL databases. We are committed to making this experience better and have introduced several enhancements to improve reliability and compatibility. You now have better control over service restarts and process terminations on supported operating systems. Additionally, we have added support for parsing additional connection-string formats in the "Create Connection" flow, making it more flexible and user-friendly. We also resolved Entra token-fetching failures for newly created accounts, ensuring a smoother onboarding experience. On the feature front, you can now leverage Entra Security Groups and guest accounts across multiple tenants when establishing new connections, streamlining permission management in complex Entra environments. Don't forget to update to the latest version in the marketplace to take advantage of these enhancements, and visit our GitHub repository to learn more about this month's release.
If you learn best by video, these two videos are a great way to learn more about the new VS Code extension:

- POSETTE 2025: Introducing Microsoft's VS Code Extension for PostgreSQL
- Demo of using the VS Code extension for PostgreSQL

Enhanced role management

With the introduction of PostgreSQL 16, a strict role hierarchy structure was implemented. As a result, GRANT statements that were functional in PostgreSQL 11–15 may no longer work in PostgreSQL 16. We have improved administrative flexibility and addressed this limitation in Azure Database for PostgreSQL flexible server across all PostgreSQL versions: members of azure_pg_admin can now manage and access objects owned by any non-restricted role, giving them control and permission over user-defined roles. To learn more about this improvement, please refer to our documentation on roles.

Ansible collection released for latest REST API version

A new version of the Ansible collection for Azure Database for PostgreSQL flexible server has been released. Version 3.6.0 now includes the latest GA REST API features. This update introduces several enhancements, such as support for virtual endpoints, on-demand backups, system-assigned identity, storage auto-grow, and seamless switchover of read replicas to a new site (Read Replicas - Switchover), among many other improvements. To get started, visit the flexible server Ansible collection link.

Azure Postgres Learning Bytes 🎓

Using the PostgreSQL VS Code extension with agent mode

The VS Code extension for PostgreSQL has been trending in the developer community. In this month's Learning Bytes section, we show how to enable the extension and use GitHub Copilot in Agent Mode to create a database, add dummy data, and visualize it with the VS Code extension.

Step 1: Download the VS Code extension for PostgreSQL.

Step 2: Check that GitHub Copilot and Agent Mode are enabled.

- Go to File -> Preferences -> Settings (Ctrl + ,).
- Search for and enable "chat.agent.enabled" and "pgsql copilot.enable".
- Reload VS Code to apply the changes.

Step 3: Connect to Azure Database for PostgreSQL.

- Use the extension to enter instance details and establish a connection.
- Create and view schemas under Databases -> Schemas.

Step 4: Visualize and populate data.

- Right-click the database to visualize schemas.
- Ask the agent to insert dummy data or run queries.

Conclusion

That's all for the June 2025 feature updates! We are dedicated to continuously improving Azure Database for PostgreSQL with every release. Stay updated on our latest features by following this link. Your feedback is important and helps us continue to improve. If you have any suggestions, ideas, or questions, we'd love to hear from you. Share your thoughts here: aka.ms/pgfeedback

We look forward to bringing you even more exciting updates throughout the year. Stay tuned!