Azure Database for PostgreSQL Flexible Server
Upgrade Azure Database for PostgreSQL with Minimal Downtime Using Logical Replication
Azure Database for PostgreSQL provides a seamless Major Version Upgrade (MVU) experience for your servers, which is important for security, performance, and feature enhancements. For production workloads, minimizing downtime during this upgrade is essential to maintain business continuity. This blog explores a practical approach to performing an MVU with minimal downtime and maximum reliability using logical replication and virtual endpoints. Upgrading without disrupting your applications is critical, and there are two ways to achieve it:

Approach 1: Configure two servers where the publisher runs on the lower version and the subscriber on the higher version, perform the MVU on the subscriber, and then switch over using virtual endpoints. The time taken to restore the server depends on the workload on your primary server.

Approach 2: Maintain two servers on different versions, use pg_dump and pg_restore to populate the higher-version server with production data, and perform a seamless switchover using virtual endpoints.

To participate in logical replication, every table must have one of the following: a primary key, or a unique index.

Comparing the two approaches:

- Approach 1 restores the instance on the same version; Approach 2 creates and restores the instance on a higher version.
- Approach 1 restores faster via Point-in-Time Restore (PITR) but requires a few additional steps; Approach 2 takes longer to restore because it uses pg_dump and pg_restore, but performs the version upgrade as part of the restore.
- Approach 1 is best suited when restore speed is the priority; Approach 2 is best suited when you want to restore directly to a higher version and avoid a separate downtime window for the MVU operation.

Approach 1

Setup: two servers, one for testing and one for production. Here are the steps to follow:

1. Create a virtual endpoint on the production server.
2. Perform a Point-in-Time Restore (PITR) from the first server (production) to create your test server.
3. Add a virtual endpoint for the test server.
4. Establish logical replication between the two servers.
5. Perform the Major Version Upgrade (MVU) on the test server.
6. Validate data on the test server.
7. Update virtual endpoints: remove the endpoints from both servers, then assign the original production endpoint to the test server.

Step-by-Step Guide

Environment setup. Two servers are involved:

- Server 1: current production server (publisher)
- Server 2: restored server for MVU (subscriber)

Create a virtual endpoint for the production server.

Configure logical replication and grant permissions. Enable the replication parameters on the publisher:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Grant the replication role to the user:

ALTER ROLE <user> WITH REPLICATION;
GRANT azure_pg_admin TO <user>;

Create tables and insert data:

CREATE TABLE basic (id INTEGER NOT NULL PRIMARY KEY, a TEXT);
INSERT INTO basic VALUES (1, 'apple'), (2, 'banana');

Set up logical replication on the production server. Create a publication and a replication slot:

CREATE PUBLICATION <publisher-name>;
ALTER PUBLICATION <publisher-name> ADD TABLE <table>;
SELECT pg_create_logical_replication_slot('<publisher-name>', 'pgoutput');

Choose a restore point: determine the latest possible point in time (PIT) to restore the data from the source server. This point in time must be after you created the replication slot in the previous step.

Provision the target server via PITR: use the Azure portal or Azure CLI to trigger a Point-in-Time Restore. This creates the test server based on the production backup.
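While the restore runs, it is worth confirming on the production server that the replication slot exists and is retaining WAL. A quick sanity check, using the standard pg_replication_slots view:

SELECT slot_name, plugin, slot_type, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name = '<publisher-name>';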
You are provisioning the test server from the production server's backups. This test server is initially a copy of the production server's data state at a specific point in time.

Configure server parameters on the test server:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Create the target server as subscriber and advance the replication origin. This is the crucial step that connects the test server (subscriber) to the production server (publisher) and manually tells the target where in the WAL stream to begin reading changes, skipping the data already restored.

Create the subscription. This creates a logical replication subscription on the test server, linking it to the source and specifying connection details, publication, and replication slot, without copying existing data:

CREATE SUBSCRIPTION <subscriber-name>
CONNECTION 'host=<host-name>.postgres.database.azure.com port=5432 dbname=postgres user=<username> password=<password>'
PUBLICATION <publisher-name>
WITH (
  copy_data = false,
  create_slot = false,
  enabled = false,
  slot_name = '<publisher-name>'
);

Retrieve the replication origin identifier and name on the test server, which is needed to advance the replication position:

SELECT roident, roname FROM pg_replication_origin;

Execute this query on the production server. It fetches the replication slot name and the restart LSN from the source server, indicating where replication should resume:

SELECT slot_name, restart_lsn FROM pg_replication_slots WHERE slot_name = '<publisher-name>';

On the test server, execute this command. It manually advances the replication origin on the target server to skip the already restored data and start replication from the correct WAL position:

SELECT pg_replication_origin_advance('<roname>', '<restart_lsn>');

Enable the target server as a subscriber of the source server. With the target server populated and the replication origin advanced, you can start the synchronization:

ALTER SUBSCRIPTION <subscriber-name> ENABLE;

The target server now starts consuming the WAL entries from the source, rapidly closing the gap on all transactions that occurred between the slot creation and the completion of the PITR.

Test that replication works. Create a virtual endpoint for the test server, and validate the data on the test server. Confirm that the synchronization is working by inserting a record on the production server and immediately verifying its presence on the test server.

Perform the Major Version Upgrade (MVU). Upgrade your test server, and validate all the new extensions and features through the test server's virtual endpoint.

Manage virtual endpoints. Once the data and all the new extensions are validated, drop the virtual endpoint on the production server and recreate the same virtual endpoint on the test server.

Key considerations:

- The test server initially handles read traffic; writes remain on the production server to avoid conflicts.
- Virtual endpoint creation takes roughly 1–2 minutes per endpoint.
- The time taken for the Point-in-Time Restore depends on the workload on the production server.

Approach 2

This approach enables a Major Version Upgrade (MVU) by combining logical replication with an initial dump and restore process. It minimizes downtime while ensuring data consistency.

Create a new Azure Database for PostgreSQL Flexible Server instance using your desired target major version (e.g., PostgreSQL 17). Ensure the new server's configuration (SKU, storage size, and location) is suitable for your eventual production load. Once the server is up, a quick check like the one sketched below confirms you landed on the expected version.
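A minimal validation sketch for the new target, using standard PostgreSQL introspection (version() and the pg_extension catalog), to confirm the expected major version and the extensions you rely on:

SELECT version();
SELECT extname, extversion FROM pg_extension ORDER BY extname;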
This approach delivers the core benefit of a side-by-side migration: running two distinct database versions concurrently. The existing application remains connected to the source environment, minimizing risk and allowing the new target to be fully configured offline.

Configure role privileges on the source and target servers:

ALTER ROLE <replication_user> WITH REPLICATION;
GRANT azure_pg_admin TO <replication_user>;

Check prerequisites for logical replication. Set these server parameters on both the source and target servers, to at least the minimum recommended values shown below, to enable and support logical replication:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Ensure tables are ready: each table to be replicated must have a primary key or a unique index.

Create the publication and replication slot on the source:

CREATE PUBLICATION <publisher-name>;
ALTER PUBLICATION <publisher-name> ADD TABLE <table>;
SELECT pg_create_logical_replication_slot('<publisher-name>', 'pgoutput');

This slot tracks all changes from this point onward.

Generate the schema and initial data dump. Run pg_dump after creating the replication slot, so the dump captures a static starting point. Using an Azure VM is recommended for optimal network performance:

pg_dump -U demo -W -h <hostname>.postgres.database.azure.com -p 5432 -Fc -v -f dump.bak postgres -N pg_catalog -N cron -N information_schema

Restore the data into the target (recommended: from an Azure VM). This populates the target server with the initial dataset:

pg_restore -U demo -W -h <hostname>.postgres.database.azure.com -p 5432 --no-owner -Fc -v -d postgres dump.bak --no-acl

Catch-up mechanism: while the restore is ongoing, new transactions on the source are safely recorded by the replication slot. It is critical to have sufficient storage on the source to hold the WAL files during this initial period, until replication is fully active.

Create the subscription and advance the replication origin on the target. This step connects the target server (subscriber) to the production server (source) and manually tells the target where in the WAL stream to begin reading changes, skipping the data already restored.

Create the subscription. This creates a logical replication subscription on the target server, linking it to the source and specifying connection details, publication, and replication slot, without copying existing data:

CREATE SUBSCRIPTION <subscription-name>
CONNECTION 'host=<hostname>.postgres.database.azure.com port=5432 dbname=postgres user=<username> password=<password>'
PUBLICATION <publisher-name>
WITH (
  copy_data = false,
  create_slot = false,
  enabled = false,
  slot_name = '<publisher-name>'
);

Retrieve the replication origin identifier and name on the target server, which is needed to advance the replication position:

SELECT roident, roname FROM pg_replication_origin;

Fetch the replication slot name and the restart LSN from the source server, indicating where replication should resume:

SELECT slot_name, restart_lsn FROM pg_replication_slots WHERE slot_name = '<publisher-name>';

Manually advance the replication origin on the target server to skip the already restored data and start replication from the correct WAL position:

SELECT pg_replication_origin_advance('<roname>', '<restart_lsn>');

Enable the subscription. With the target server populated and the replication origin advanced, you can start the synchronization:
ALTER SUBSCRIPTION <subscription-name> ENABLE;

Result: the target server now starts consuming the WAL entries from the source, rapidly closing the gap on all transactions that occurred during the dump and restore process.

Validate replication: insert a record on the source and confirm it appears on the target.

Perform the cutover:

1. Stop application traffic to the production database.
2. Wait for the target database to confirm zero replication lag (a lag-check sketch follows at the end of this post).
3. Disable the subscription: ALTER SUBSCRIPTION logical_sub01 DISABLE;
4. Connect the application to the new Azure Database for PostgreSQL instance.

Utilize virtual endpoints or a CNAME DNS record for your database connection string. By simply pointing the endpoint/CNAME to the new server, you can switch your application stack without changing hundreds of individual configuration files, making the final cutover near-instantaneous.

Conclusion

This MVU strategy using logical replication and virtual endpoints provides a safe, efficient way to upgrade PostgreSQL servers without disrupting workloads. By combining replication, endpoint management, and automation, you can achieve a smooth transition to newer versions while maintaining high availability. For an alternative approach, check out our blog on using the Migration Service for MVU: Hacking the migration service in Azure Database for PostgreSQL.
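As referenced in the cutover steps, a minimal lag-check sketch using standard PostgreSQL monitoring views; run the first query on the source and the second on the target, and cut over once the lag is zero or near zero:

-- On the source (publisher): byte lag per WAL sender
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM pg_stat_replication;

-- On the target (subscriber): last LSN received and applied
SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time
FROM pg_stat_subscription;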
Azure PostgreSQL Lesson Learned #3: Fix FATAL: sorry, too many clients already

We encountered a support case involving Azure Database for PostgreSQL Flexible Server where the application started failing with connection errors. This blog explains the root cause, resolution steps, and best practices to prevent similar issues. A quick first check is sketched below.
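As a first diagnostic for the "too many clients" error (the full root-cause analysis is in the linked post), a query along these lines, using the standard pg_stat_activity view and current_setting(), shows how close the server is to its connection limit:

-- Current sessions vs. the configured ceiling
SELECT count(*) AS current_connections,
       current_setting('max_connections')::int AS max_connections
FROM pg_stat_activity;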
Azure PostgreSQL Lesson Learned #1: Fix Cannot Execute in a Read-Only Transaction After HA Failover

We encountered a support case involving Azure Database for PostgreSQL Flexible Server where the database returned a read-only error after a High Availability (HA) failover. This blog explains the root cause, resolution steps, and best practices to prevent similar issues. The issue occurred when the application attempted write operations immediately after an HA failover. The failover caused the primary role to switch, but the client continued connecting to the old primary (now standby), which is in read-only mode. A quick detection sketch follows below.
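A minimal sketch for detecting this situation from the application side, using standard PostgreSQL functions; if pg_is_in_recovery() returns true, the session is connected to a standby and writes will fail:

-- Is this node a standby (in recovery)?
SELECT pg_is_in_recovery();

-- Is the current session read-only?
SHOW transaction_read_only;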
Azure PostgreSQL Lesson Learned #2: Fixing Read-Only Mode (Storage Threshold Explained)

Co-authored with angesalsaa

The issue occurred when the server's storage usage reached approximately 95% of the allocated capacity and automatic storage scaling was disabled. Symptoms included:

- Server switching to read-only mode
- Application errors indicating write failures
- No prior alerts or warnings received by the customer

Example error: ERROR: cannot execute %s in a read-only transaction

Root Cause

The root cause was the server hitting the configured storage usage threshold (95%), which triggered an automatic transition to read-only mode to prevent data corruption or loss. See Storage options - Azure Database for PostgreSQL | Microsoft Learn.

If your storage usage is below 95% but you're still seeing the same error, refer to Azure PostgreSQL Lesson Learned #1: Fix Cannot Execute in a Read-Only Transaction After HA Failover.

Contributing factors:

- Automatic storage scaling was disabled
- Lack of proactive monitoring of storage usage
- High data ingestion rate during peak hours

Specific conditions:

- The customer had a custom workload with large batch inserts
- No alerts were configured for storage usage thresholds

Mitigation

To resolve the issue, we increased the allocated storage manually via the Azure portal and verified that the server returned to read-write mode. No restart is needed after you scale up storage, because it is an online operation, with one exception: if you grow the disk from any size between 32 GiB and 4 TiB to any other size in the same range, or from any size between 8 TiB and 32 TiB to another size in that range, the operation is performed while the server is online. However, if you increase the disk size from any value at or below 4096 GiB to any size above 4096 GiB, a server restart is required, and you must confirm that you understand the consequences of performing the operation. See Scale storage size - Azure Database for PostgreSQL | Microsoft Learn.

Steps:

1. Navigate to Azure Portal > PostgreSQL Flexible Server > Compute & Storage
2. Increase the storage size (e.g., from 100 GB to 150 GB)

Post-resolution:

- Server resumed normal operations
- Write operations were successful

Prevention & Best Practices

- Enable automatic storage scaling to prevent hitting usage limits: Configure Storage Autogrow - Azure Database for PostgreSQL | Microsoft Learn
- Set up alerts for storage usage thresholds (e.g., 80%, 90%)
- Monitor storage metrics regularly using Azure Monitor or custom dashboards

Why This Matters

Failing to monitor storage and configure scaling can lead to application downtime and read-only errors impacting business-critical transactions. By following these practices, customers can ensure seamless operations and avoid unexpected read-only transitions.

Key Takeaways

- Symptom: the server switched to read-only mode, causing write failures (ERROR: cannot execute INSERT in a read-only transaction).
- Root cause: storage usage hit the 95% threshold, triggering read-only mode to prevent corruption.
- Contributing factors: automatic storage scaling disabled; no alerts for storage thresholds; high ingestion during peak hours with large batch inserts.
- Mitigation: increased storage manually via the Azure portal (an online operation unless crossing the 4 TiB boundary, which requires a restart); the server returned to read-write mode.
- Prevention & best practices: enable automatic storage scaling; configure alerts for storage usage (e.g., 80%, 90%).
- Monitor storage metrics regularly using Azure Monitor or dashboards. (A quick in-database size check is sketched below.)
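Alongside Azure Monitor, a minimal in-database sketch for spot-checking logical data size (pg_database_size and pg_size_pretty are standard PostgreSQL functions); note this reflects data size, not total disk usage, so Azure-level storage metrics remain the source of truth:

SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS database_size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;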
Prevent Accidental Deletion of an Instance in Azure Postgres

Did you know that accidental deletion of database servers is a leading source of support tickets? Read this blog post to learn how you can safeguard your Azure Database for PostgreSQL Flexible Server instances using ARM's CanNotDelete lock, an easy best practice that helps prevent accidental deletions while keeping regular operations seamless. 🌐 Prevent Accidental Deletion of an Instance in Azure Postgres
September 2025 Recap: What's New with Azure Database for PostgreSQL

September was a big month for Azure Postgres! From the public preview of PostgreSQL 18 (launched the same day as the community release!) to the GA of Azure Confidential Computing and Near Zero Downtime scaling for HA, this update is packed with new capabilities that make PostgreSQL on Azure more secure, performant, and developer-friendly.

💡 Here's a quick peek at what's inside:

- PostgreSQL 18 (Preview) – early access to the latest community release on Azure
- Near Zero Downtime Scaling (GA) – compute scaling in under 30 seconds for HA servers
- Azure Confidential Computing (GA) – hardware-backed data-in-use protection
- PostgreSQL Discovery & Assessment in Azure Migrate (Preview) – plan your migration smarter
- LlamaIndex Integration – build AI apps and vector search using Azure Postgres
- VS Code Extension Enhancements – new Server Dashboard + Copilot Chat integration

Catch all the highlights and hands-on guides in the full recap 👉

#PostgreSQL #AzureDatabase #AzurePostgres #CloudDatabases #AI #OpenSource #Microsoft
September 2025 Recap: Azure Database for PostgreSQL

Hello Azure Community,

We are back with another round of updates for Azure Database for PostgreSQL! September is packed with powerful enhancements, from the public preview of PostgreSQL 18 to the general availability of Azure Confidential Computing, plus several new capabilities designed to boost performance, security, and developer experience. Stay tuned as we dive deeper into each of these feature updates. Before we do, let's take a look at the PGConf NYC 2025 highlights.

PGConf NYC 2025 Highlights

Our Postgres team was glad to be part of PGConf NYC 2025! As a Platinum sponsor, Microsoft joined the global PostgreSQL community for three days of sessions covering performance, extensibility, cloud, and AI, highlighted by Claire Giordano's keynote, "What Microsoft is Building for Postgres—2025 in Review," along with deep dives from core contributors and engineers. If you missed it, you can catch up here:

- Keynote slides: What Microsoft is Building for Postgres—2025 in Review by Claire Giordano at PGConf NYC 2025
- Day 3 wrap-up: key takeaways, highlights, and insights from the Azure Database for PostgreSQL team

Feature Highlights

- Near Zero Downtime scaling for High Availability (HA) enabled servers - Generally Available
- Azure Confidential Computing for Azure Database for PostgreSQL - Generally Available
- PostgreSQL 18 on Azure Database for PostgreSQL - Public Preview
- PostgreSQL Discovery & Assessment in Azure Migrate - Public Preview
- LlamaIndex Integration with Azure Postgres
- Latest Minor Versions
- GitHub Samples: Entra ID Token Refresh for PostgreSQL
- VS Code Extension for PostgreSQL enhancements

Near Zero Downtime scaling for High Availability (HA) enabled servers - Generally Available

Scaling compute for high availability (HA) enabled Azure Database for PostgreSQL servers just got faster. With Near Zero Downtime (NZD) scaling, compute changes such as vCore or tier modifications now complete with minimal interruption, typically under 30 seconds, using an HA failover that preserves the connection string. The service provisions a new primary and standby instance with the updated configuration, synchronizes them with the existing setup, and performs a quick failover. This significantly reduces downtime compared to traditional scaling (which could take 2–10 minutes), improving overall availability. Visit our documentation for full details on how Near Zero Downtime scaling works.

Azure Confidential Computing for Azure Database for PostgreSQL - Generally Available

Azure Confidential Computing (ACC) Confidential Virtual Machines (CVMs) are now generally available for Azure Database for PostgreSQL. This capability brings hardware-based protection for data in use, ensuring your most sensitive information remains secure, even while being processed. With CVMs, your PostgreSQL flexible server instance runs inside a Trusted Execution Environment (TEE), a secure, hardware-backed enclave that encrypts memory and isolates it from the host OS, hypervisor, and even Azure operators. This means your data enjoys end-to-end protection: at rest, in transit, and in use.

Key Benefits:

- End-to-End Security: data protected at rest, in transit, and in use
- Enhanced Privacy: blocks unauthorized access during processing
- Compliance Ready: meets strict security standards for regulated workloads
- Confidence in Cloud: hardware-backed isolation for critical data

To discover how Azure Confidential Computing enhances PostgreSQL, check out the blog announcement.
PostgreSQL 18 on Azure Database for PostgreSQL - Public Preview

PostgreSQL 18 is now available in public preview on Azure Database for PostgreSQL, launched the same day as the PostgreSQL community release. PostgreSQL 18 introduces new performance, scalability, and developer productivity improvements. With this preview, you get early access to the latest community release on a fully managed Azure service. By running PostgreSQL 18 on flexible server, you can test application compatibility, explore new SQL and performance features, and prepare for upgrades well before general availability. This preview release gives you the opportunity to validate your workloads, extensions, and development pipelines in a dedicated preview environment while taking advantage of the security, high availability, and management capabilities in Azure. With PostgreSQL 18 in preview, you are among the first to experience the next generation of PostgreSQL on Azure, ensuring your applications are ready when it reaches general availability. To learn more about the preview, read https://aka.ms/pg18

PostgreSQL Discovery & Assessment in Azure Migrate - Public Preview

The PostgreSQL Discovery & Assessment feature is now available in public preview in Azure Migrate, making it easier to plan your migration journey to Azure. Migrating PostgreSQL workloads can be challenging without clear visibility into your existing environment. This feature solves that problem by delivering deep insights into on-premises PostgreSQL deployments, making migration planning easier and more informed. With this feature, you can discover PostgreSQL instances across your infrastructure, assess migration readiness and identify potential blockers, receive configuration-based SKU recommendations for Azure Database for PostgreSQL, and estimate costs for running your workloads in Azure, all in one unified experience.

Key Benefits:

- Comprehensive Visibility: understand your on-premises PostgreSQL landscape
- Risk Reduction: identify blockers before migration
- Optimized Planning: get tailored SKU and cost insights
- Faster Migration: streamlined assessment for a smooth transition

Learn more in our blog: PostgreSQL Discovery and Assessment in Azure Migrate

LlamaIndex Integration with Azure Postgres

Native LlamaIndex integration is now available for Azure Database for PostgreSQL! This enhancement brings seamless connectivity between Azure Database for PostgreSQL and LlamaIndex, allowing developers to leverage Azure PostgreSQL as a secure and high-performance vector store for their AI agents and applications. Specifically, this package adds support for:

- Microsoft Entra ID (formerly Azure AD) authentication when connecting to your Azure Database for PostgreSQL instances
- the DiskANN indexing algorithm when indexing your (semantic) vectors

This package makes it easy to connect LlamaIndex to your Azure PostgreSQL instances, whether you're building intelligent agents, semantic search, or retrieval-augmented generation (RAG) systems. Explore the full guide here: https://aka.ms/azpg-llamaindex

Latest Postgres minor versions: 17.6, 16.9, 15.13, 14.18 and 13.21

PostgreSQL minor versions 17.6, 16.9, 15.13, 14.18 and 13.21 are now supported by Azure Database for PostgreSQL. These minor version upgrades are performed automatically as part of the monthly planned maintenance in Azure Database for PostgreSQL. The upgrade automation ensures that your databases are always running the latest optimized versions without requiring manual intervention.
This release fixes 3 security vulnerabilities and more than 55 bugs reported over the last several months. PostgreSQL minor versions are backward-compatible, so updates won't affect your applications. For details about the release, see the PostgreSQL community announcement.

GitHub Samples: Entra ID Token Refresh for PostgreSQL

We have introduced code samples for Entra ID token refresh, built specifically for Azure Database for PostgreSQL. These samples simplify implementing automatic token acquisition and refresh, helping you maintain secure, uninterrupted connectivity without manual intervention. By using these examples, you can keep sessions secure, prevent connection drops from expired tokens, and streamline integration with Azure Identity libraries for PostgreSQL workloads.

What's included:

- Ready-to-use code snippets for token acquisition and refresh, for Python and .NET
- Guidance for integrating with Azure Identity libraries

Explore the samples repository at https://aka.ms/pg-access-token-refresh and start implementing today.

VS Code Extension for PostgreSQL enhancements

A new version of the VS Code Extension for PostgreSQL is out! This update introduces a Server Dashboard that provides high-level metadata and real-time performance metrics, along with historical insights, for Azure Database for PostgreSQL Flexible Server. You can even use GitHub Copilot Chat to ask performance questions in natural language and receive diagnostic SQL queries in response.

Additional enhancements include:

- A new keybinding for "Run Current Statement" in the Query Editor
- Support for dragging Object Explorer entities into the editor with properly quoted identifiers
- Ability to connect to databases via socket file paths

Key fixes:

- Preserves the state of the Explain Analyze toolbar toggle
- Removes inadvertent logging of sensitive information from extension logs
- Stabilizes memory usage during long-running dashboard sessions

Don't forget to update to the latest version in the marketplace to take advantage of these enhancements, and visit our GitHub repository to learn more about this month's release. We'd love your feedback! Help us improve the Server Dashboard and other features by sharing your thoughts on GitHub.

Azure Postgres Learning Bytes 🎓

Setting up logical replication between two servers

This section walks through setting up logical replication between two Azure Database for PostgreSQL flexible server instances. Logical replication replicates data changes from a source (publisher) server to a target (subscriber) server.

Prerequisites:

- PostgreSQL versions supported by logical replication (publisher/subscriber compatible)
- Network connectivity: the subscriber must be able to connect to the publisher (VNet/NSG/firewall rules)
- A replication role on the publisher (or a role with the REPLICATION privilege)
Step 1: Configure server parameters on both the publisher and the subscriber.

On the publisher:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

On the subscriber (including tuning for the initial copy):

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on
max_sync_workers_per_subscription = 6
autovacuum = OFF (during initial copy)
max_wal_size = 64GB
checkpoint_timeout = 3600

Step 2: Alter the role with the replication privilege and create the publication (on the publisher).

ALTER ROLE <replication_user> WITH REPLICATION;
CREATE PUBLICATION pub FOR ALL TABLES;

Step 3: Create the subscription (on the subscriber).

CREATE SUBSCRIPTION <subscription-name>
CONNECTION 'host=<publisher_host> dbname=<db> user=<user> password=<pwd>'
PUBLICATION <publication-name>;

Step 4: Monitor.

On the publisher, this shows active processes, including replication workers:

SELECT application_name, wait_event_type, wait_event, query, backend_type
FROM pg_stat_activity
WHERE state = 'active';

On the subscriber, the pg_stat_progress_copy view tracks the progress of the initial data copy for each table:

SELECT * FROM pg_stat_progress_copy;

To check per-table synchronization state after the initial copy, see the sketch at the end of this recap. To explore more details on how to get started with logical replication, visit our blog on Tuning logical replication for Azure Database for PostgreSQL.

Conclusion

That's all for the September 2025 feature highlights! We remain committed to making Azure Database for PostgreSQL more powerful and secure with every release. Stay up to date on the latest enhancements by visiting our Azure Database for PostgreSQL blog. Your feedback matters and helps us shape the future of PostgreSQL on Azure. If you have suggestions, ideas, or questions, we'd love to hear from you: https://aka.ms/pgfeedback. We look forward to sharing even more exciting capabilities in the coming months. Stay tuned!
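As referenced in Step 4, a minimal sketch for checking per-table synchronization state on the subscriber, using the standard pg_subscription_rel catalog ('i' = initializing, 'd' = copying data, 's' = synchronized, 'r' = ready):

SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel
ORDER BY 1;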
PostgreSQL and the Power of Community

PGConf NYC 2025 is the premier event for the global PostgreSQL community, and Microsoft is proud to be a Platinum sponsor this year. The conference will also feature a keynote from Claire Giordano, Principal PM for PostgreSQL at Microsoft, who will share our vision for Postgres along with lessons from ten PostgreSQL hacker journeys.
PostgreSQL 18 Preview on Azure Database for PostgreSQL

We're excited to bring the latest Postgres innovations directly into Azure. With the PG18 preview on Azure Postgres Flexible Server, you can already test:

🔹 Asynchronous I/O (AIO) → faster queries & lower latency
🔹 Vacuuming enhancements → less bloat, fewer replication conflicts
🔹 UUIDv7 support → better indexing & sort locality (see the sketch below)
🔹 B-Tree skip scan → more efficient use of multi-column indexes
🔹 Improved logical replication & DDL → easier schema evolution across replicas

And that's just the start: PG18 includes hundreds of community contributions, with 496 from Microsoft engineers alone 💪

👉 Try it out today on Azure Postgres Flexible Server (initially in East Asia), share your feedback, and help shape GA.
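A tiny sketch of the UUIDv7 support, assuming a PostgreSQL 18 preview server (the uuidv7() function is new in PG18; time-ordered UUIDs improve B-tree index and sort locality). The events table is a hypothetical example:

-- Generate a time-ordered UUID (new in PostgreSQL 18)
SELECT uuidv7();

-- Hypothetical table keyed by UUIDv7 for better index locality
CREATE TABLE events (
  id uuid DEFAULT uuidv7() PRIMARY KEY,
  payload text
);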