postgresql
Azure PostgreSQL Lesson Learned #8: Post-Upgrade Performance Surprises (The One-Step Fix)
Co-authored with angesalsaa

Symptoms
An upgrade from PostgreSQL 12 to a higher version succeeds, but after migration workloads show:
- Queries running slower than before.
- Unexpected CPU spikes during normal operations.
- No obvious errors in logs and no connectivity issues.

Root Cause
Missing or stale statistics can lead to bad query plans, which in turn degrade performance and consume excessive memory. Optimizer statistics are not carried over during a major version upgrade, so the query planner falls back to outdated or default estimates. This often results in:
- Sequential scans instead of index scans.
- Inefficient join strategies.
- Increased CPU and memory usage.

Contributing Factors
- Large tables with skewed data distributions.
- Complex queries with multiple joins.
- Workloads dependent on accurate cost estimates.

Specific Conditions We Observed
- Any source server version can be affected once you upgrade to a higher version.
- No ANALYZE or VACUUM was run post-upgrade.

Operational Checks
Before troubleshooting, confirm:
- Query plans differ significantly from pre-upgrade.
- pg_stats indicates outdated or missing statistics.

Mitigation
Goal: refresh statistics so the planner can optimize queries. Run ANALYZE on all tables:

ANALYZE;

Important notes:
- These commands are safe and online.
- For very large datasets, consider running during low-traffic windows.
- We recommend running the ANALYZE command in each database to refresh the pg_statistic table.

Post-Resolution
- Queries return to expected performance.
- CPU utilization stabilizes.
- Execution plans align with indexes and cost-based optimization.

Prevention & Best Practices
- Always schedule ANALYZE immediately after major upgrades.
- Automate the stats refresh in your upgrade runbook.
- Validate plans for critical queries before going live.

Why This Matters
Skipping this step can lead to:
- Hours of degraded performance.
- Emergency escalations and customer dissatisfaction.
- Misdiagnosis as an engine regression when it's just missing stats.

Key Takeaways
- Issue: post-upgrade query slowness and CPU spikes due to stale/missing statistics.
- Fix: run ANALYZE immediately after the upgrade.
- Pro tip: automate this in CI/CD or maintenance scripts for zero surprises.

References
Major Version Upgrades - Azure Database for PostgreSQL | Microsoft Learn
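To make the operational check concrete, here is a minimal sketch (standard catalog views, nothing Azure-specific) for spotting tables whose statistics were never refreshed after the upgrade, followed by the one-step fix:

-- Tables whose planner statistics are stale or missing (never-analyzed first)
SELECT schemaname, relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_analyze NULLS FIRST
LIMIT 20;

-- Refresh statistics for the current database (repeat in each database)
ANALYZE VERBOSE;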
Announcing Azure HorizonDB

Affan Dar, Vice President of Engineering, PostgreSQL at Microsoft
Charles Feddersen, Partner Director of Program Management, PostgreSQL at Microsoft

Today at Microsoft Ignite, we're excited to unveil the preview of Azure HorizonDB, a fully managed Postgres-compatible database service designed to meet the needs of modern enterprise workloads. The cloud native architecture of Azure HorizonDB delivers highly scalable shared storage, elastic scale-out compute, and a tiered cache optimized for running cloud applications of any scale.

Postgres is transforming industries worldwide and is emerging as the foundation of modern data solutions across all sectors at an unprecedented pace. For developers, it is the database of choice for building new applications with its rich set of extensions, open-source API, and expansive ecosystems of tools and libraries. At the same time, but at the opposite end of the workload spectrum, enterprises around the world are also increasingly turning to Postgres to modernize their existing applications. Azure HorizonDB is designed to support applications across the entire workload spectrum, from the first line of code in a new app to the migration of large-scale, mission-critical solutions. Developers benefit from the robust Postgres ecosystem and seamless integration with Azure's advanced AI capabilities, while enterprises gain a secure, highly available, and performant cloud database to host their business applications. Whether you're building from scratch or transforming legacy infrastructure, Azure HorizonDB empowers you to innovate and scale with confidence, today and into the future.

Azure HorizonDB introduces new levels of performance and scalability to PostgreSQL. The scale-out compute architecture supports up to 3,072 vCores across primary and replica nodes, and the auto-scaling shared storage supports up to 128 TB databases while providing sub-millisecond multi-zone commit latencies. This storage innovation enables Azure HorizonDB to deliver up to 3x more throughput when compared with open-source Postgres for transactional workloads.

Azure HorizonDB is enterprise ready on day one. With native support for Entra ID, Private Endpoints, and data encryption, it provides compliance and security for sensitive data stored in the cloud. All data is replicated across availability zones by default, and maintenance operations are transparent with near-zero downtime. Backups are fully automated, and integration with Azure Defender for Cloud provides additional protection for highly sensitive data. All up, Azure HorizonDB offers enterprise-grade security, compliance, and reliability, making it ready for business use today.

Since the launch of ChatGPT, there has been an explosion of new AI apps being built, and Postgres has become the database of choice due in large part to its vector index support. Azure HorizonDB extends the AI capabilities of Postgres further with two key features. We are introducing advanced filtering capabilities to the DiskANN vector index which enable query predicate pushdowns directly into the vector similarity search. This provides significant performance and scalability improvements over pgvector HNSW while maintaining accuracy and is ideal for similarity search over transactional data in Postgres. The second feature is built-in AI model management that seamlessly integrates generative, embedding, and reranking models from Microsoft Foundry for developers to use in the database with zero configuration.
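To illustrate the filtered-search pattern, here is a minimal sketch written against the pg_diskann extension syntax available in Azure Database for PostgreSQL today (the table, columns, and toy 3-dimensional vectors are hypothetical, and HorizonDB preview specifics may differ):

-- Assumes the vector and pg_diskann extensions are allowed on the server
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_diskann;

CREATE TABLE products (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    category  text,
    price     numeric,
    embedding vector(3)
);

CREATE INDEX ON products USING diskann (embedding vector_cosine_ops);
CREATE INDEX ON products (category, price);  -- attribute index usable for the filter bitmap

-- The predicate can be combined with the graph traversal instead of
-- post-filtering the nearest neighbors:
SELECT id, price
FROM products
WHERE category = 'electronics' AND price < 100
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'
LIMIT 10;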
In addition to enhanced vector indexing and simplified model management to build powerful new AI apps, we're also pleased to announce the general availability of Microsoft's PostgreSQL Extension for VS Code, which provides the tooling Postgres developers need to maximize their productivity. Using this extension, GitHub Copilot is context-aware of the Postgres database, which means less prompting and higher-quality answers. In the Ignite release, we've added live monitoring with one-click GitHub Copilot debugging, where Agent mode can launch directly from the performance monitoring dashboard to diagnose Postgres performance issues and guide users to a fix.

Alpha Life Sciences is an existing Azure customer:

"I'm truly excited about how Azure HorizonDB empowers our AI development. Its seamless support for Vector DB, RAG, and Agentic AI allows us to build intelligent features directly on a reliable Postgres foundation. With Azure HorizonDB, I can focus on advancing AI capabilities instead of managing infrastructure complexities. It's a smart, forward-looking solution that perfectly aligns with how we design and deliver AI-powered applications."
Pengcheng Xu, CTO, Alpha Life Sciences

For enterprises that are modernizing their applications to Postgres in the cloud, the security and availability of Azure HorizonDB make it an ideal platform. However, these migrations are often complex and time-consuming for large legacy codebase conversions. To simplify this and reduce the risk, we're pleased to announce the preview of GitHub Copilot-powered Oracle migration built into the PostgreSQL Extension for VS Code. Within VS Code, teams of engineers can work with GitHub Copilot to automate the end-to-end conversion of complex database code using rich code editing, version control, text authoring, and deployment in an integrated development environment.

Azure HorizonDB is the next generation of fully managed, cloud native PostgreSQL database service. Built on the latest Azure infrastructure with state-of-the-art cloud architecture, Azure HorizonDB is ready for the most demanding application workloads.

In addition to our portfolio of managed Postgres services in Azure, Microsoft is deeply invested in the open source Postgres project and is one of the top corporate upstream contributors and sponsors for the PostgreSQL project, with 19 Postgres project contributors employed by Microsoft. As a hyperscale Postgres vendor, it's critical to actively participate in the open-source project. It enables us to better support our customers down to the metal in Azure, and to contribute our learnings from running Postgres at scale back to the community. We're committed to continuing our investment to push the Postgres project forward, and the team is already active in making contributions to Postgres 19, to be released in 2026.

Ready to explore Azure HorizonDB? Azure HorizonDB is initially available in the Central US, West US3, UK South, and Australia East regions. Customers are invited to apply for early preview access to Azure HorizonDB and get hands-on experience with this new service. Participation is limited; apply now at aka.ms/PreviewHorizonDB
PostgreSQL for the enterprise: scale, secure, simplify

This week at Microsoft Ignite, along with unveiling the new Azure HorizonDB cloud native database service, we're announcing multiple improvements to our fully managed open-source Azure Database for PostgreSQL service, delivering significant advances in performance, analytics, security, and AI-assisted migration. Let's walk through nine of the top Azure Database for PostgreSQL features and improvements we're announcing at Microsoft Ignite 2025.

Feature Highlights
- New Intel and AMD v6-series SKUs (Preview)
- Scale to multiple nodes with Elastic Clusters (GA)
- PostgreSQL 18 (GA)
- Real-time analytics with Fabric Mirroring (GA)
- Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)
- Adding Parquet to the azure_storage extension (GA)
- Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)
- Integrated identity with Entra token-refresh libraries for Python
- AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Performance and scale

New Intel and AMD v6 series SKUs (Preview)
You can run your most demanding Postgres workloads on new Intel and AMD v6 General Purpose and Memory Optimized hardware SKUs, now available in preview. These SKUs deliver massive scale for high-performance OLTP, analytics, and complex queries, with improved price performance and higher memory ceilings. AMD Confidential Compute v6 SKUs are also in public preview, enabling enhanced security for sensitive workloads while leveraging AMD's advanced hardware capabilities. Here's what you need to know:
- Processors: Powered by 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) and AMD's 4th Generation EPYC™ 9004 processors.
- Scale: VM size options scale up to 192 vCores and 1.8 TiB of memory.
- IO: Using the NVMe protocol for data disk access, IO is parallelized across CPU cores and processed more efficiently, offering significant IO improvements.
- Compute tier: Available in our General Purpose and Memory Optimized tiers. You can scale up to these new compute SKUs as needed with minimal downtime.

Here's a quick summary of the v6 SKUs we're launching, with links to more information:

Processor | SKU     | Max vCores | Max Mem
Intel     | Ddsv6   | 192        | 768 GiB
Intel     | Edsv6   | 192        | 1.8 TiB
AMD       | Dadsv6  | 96         | 384 GiB
AMD       | Eadsv6  | 96         | 672 GiB
AMD       | DCadsv6 | 96         | 386 GiB
AMD       | ECadsv6 | 96         | 672 GiB

Scale to multiple nodes with Elastic clusters (GA)
Elastic clusters are now generally available in Azure Database for PostgreSQL. Built on Citus open-source technology, elastic clusters bring the horizontal scaling of a distributed database to the enterprise features of Azure Database for PostgreSQL. Elastic clusters enable horizontal scaling of databases running across multiple server nodes in a "shared nothing" architecture. This is ideal for workloads with high-throughput and storage-intensive demands, such as multi-tenant SaaS and IoT-based workloads. Elastic clusters come with all the enterprise-level capabilities that organizations rely upon in Azure Database for PostgreSQL, including high availability, read replicas, private networking, integrated security, and connection pooling. Built-in sharding support at both row and schema level lets you distribute your data across a cluster of compute resources and run queries in parallel, dramatically increasing throughput and capacity; a sketch of the row-level pattern follows below.
Learn more: Elastic clusters in Azure Database for PostgreSQL
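For illustration, here is a minimal sketch of row-level sharding using the Citus API that elastic clusters are built on (the table and column names are hypothetical):

CREATE TABLE orders (
    tenant_id bigint NOT NULL,
    order_id  bigint NOT NULL,
    total     numeric,
    PRIMARY KEY (tenant_id, order_id)
);

-- Shard by tenant: each tenant's rows are co-located on one node, so
-- single-tenant queries route to a single node while others fan out in parallel
SELECT create_distributed_table('orders', 'tenant_id');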
PostgreSQL 18 (GA)
When PostgreSQL 18 was released in September, we made a preview available on Azure the same day. Now we're announcing that PostgreSQL 18 is generally available on Azure Database for PostgreSQL, with full Major Version Upgrade (MVU) support, marking our fastest-ever turnaround from open-source release to managed service general availability. This release reinforces our commitment to delivering the latest PostgreSQL community innovations to Azure customers, so you can adopt the latest features, performance improvements, and security enhancements on a fully managed, production-ready platform without delay. Now you can:
- Deploy PostgreSQL 18 in all public Azure regions.
- Perform in-place major version upgrades to PG18 with no endpoint or connection string changes.
- Use Microsoft Entra ID authentication for secure, centralized identity management in all PG versions.
- Enable Query Store and Index Tuning for built-in performance insights and automated optimization.
- Leverage the 90+ Postgres extensions supported by Azure Database for PostgreSQL.
PostgreSQL 18 also delivers major improvements under the hood, ranging from asynchronous I/O and enhanced vacuuming to improved indexing and partitioning, ensuring Azure continues to lead as the most performant, secure, and developer-friendly managed PostgreSQL service in the cloud.
Learn more: PostgreSQL 18 open-source release announcement
Supported versions of PostgreSQL in Azure Database for PostgreSQL

Analytics

Real-time analytics with Fabric Mirroring (GA)
With Fabric Mirroring in Azure Database for PostgreSQL, now generally available, you can run your Microsoft Fabric analytical workloads and capabilities on near-real-time replicated data, without impacting the performance of your production PostgreSQL databases, and at no extra cost. Mirroring in Fabric connects your operational and analytical platforms with continuous data replication from PostgreSQL to Fabric. Transactions are mirrored to Fabric in near real time, enabling advanced analytics, machine learning, and reporting on live data sets without waiting for traditional batch ETL processes to complete. This approach eliminates the overhead of custom integrations or data pipelines, and production PostgreSQL servers can run mission-critical transactional workloads without being affected by surges in analytical queries and reporting. With our GA announcement, Fabric Mirroring is ready for production workloads, with secure networking (VNET integration and Private Endpoints supported), Entra ID authentication for centralized identity management, and support for high-availability-enabled servers, ensuring business continuity for mirroring sessions.
Learn more: Mirroring Azure Database for PostgreSQL flexible server

Adding Parquet support to the azure_storage extension (GA)
In addition to mirroring data directly to Microsoft Fabric, many other scenarios require moving operational data into data lakes for analytics or archival, and building and maintaining ETL pipelines for this can be expensive and time-consuming. Azure Database for PostgreSQL now natively supports Parquet via the azure_storage extension, enabling direct SQL-based read/write of Parquet files in Azure Storage. This makes it easy to import and export data in Postgres without external tools or scripts. Parquet is a popular columnar storage format often used in big data and analytics environments (like Spark and Azure Data Lake) because of its efficient compression and query performance on large datasets. Now you can use the azure_storage extension to skip an entire step: just issue a SQL command to write to and query from a Parquet file in Azure Blob Storage.
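As a rough sketch of the read path, assuming the documented azure_storage blob functions extend to Parquet as announced (the account, container, path, and column names here are hypothetical; check the docs linked below for the exact Parquet options and the write path):

SELECT *
FROM azure_storage.blob_get(
        'mystorageacct', 'analytics', 'sales/2025/orders.parquet')
     AS res (order_id bigint, customer_id bigint, total numeric);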
Learn more: Azure storage extension in Azure Database for PostgreSQL

Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)
DuckDB's columnar engine excels at high-performance scans, aggregations, and joins over large tables, making it particularly well suited for analytical queries. The pg_duckdb extension, now available in preview for Azure Database for PostgreSQL, combines PostgreSQL's transactional performance and reliability with DuckDB's analytical speed for large datasets. Together, pg_duckdb and PostgreSQL are an ideal combination for hybrid OLTP + OLAP environments where you need to run analytical queries directly in PostgreSQL without sacrificing performance. To see the pg_duckdb extension in action, check out this demo video: https://aka.ms/pg_duckdb
Learn more: pg_duckdb – PostgreSQL extension for DuckDB

Security

Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)
Operating in a regulated industry such as finance, healthcare, or government means negotiating compliance requirements like HIPAA, PCI-DSS, and GDPR that mandate protection for personal data as well as password complexity, expiration, and reuse rules. This week the anon extension, previously in preview, is now generally available for Azure Database for PostgreSQL, adding support for dynamic and static masking, anonymized exports, randomization, and many other advanced masking techniques (see the sketch at the end of this section). We've also added GA support for the credcheck extension, which provides credential checks for usernames and password complexity, including during user creation, password change, and user renaming. This is particularly useful if your application is not using Entra ID and needs to rely on native PostgreSQL users and passwords. If you need to store and query IP ranges for scenarios like auditing, compliance, access control lists, intrusion detection, and threat intelligence, another useful extension announced this week is the ip4r extension, which provides a set of data types for IPv4 and IPv6 network addresses.
Learn more: PostgreSQL Anonymizer
credcheck – PostgreSQL username/password checks
IP4R - IPv4/v6 and IPv4/v6 range index type for PostgreSQL

The Azure team maintains an active pipeline of new PostgreSQL extensions to onboard and upgrade in Azure Database for PostgreSQL. For example, another important extension upgraded this week is pg_squeeze, which removes unused space from a table. The updated 1.9.1 version adds important stability improvements.
Learn more: List of extensions and modules by name
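Here is the sketch mentioned above: a minimal dynamic-masking setup following the upstream PostgreSQL Anonymizer pattern. The table and role names are hypothetical, and the exact activation steps on Azure may vary by extension version, so treat this as an outline and check the docs linked above:

CREATE EXTENSION IF NOT EXISTS anon CASCADE;
SELECT anon.init();   -- load the default fake-data sets

-- Mask a sensitive column for any masked role
SECURITY LABEL FOR anon ON COLUMN patients.last_name
    IS 'MASKED WITH FUNCTION anon.fake_last_name()';

-- Mark a role as masked and switch on dynamic masking
SECURITY LABEL FOR anon ON ROLE reporting_user IS 'MASKED';
SELECT anon.start_dynamic_masking();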
Integrated identity with Entra token-refresh libraries for Python
In a modern cloud-connected enterprise, identity becomes the most important security perimeter. Azure Database for PostgreSQL is the only managed PostgreSQL service with full Entra integration, but coding applications to take care of Entra token refresh can be complex. This week we're announcing a new Python library to simplify Entra token refresh. The library automatically refreshes authentication tokens before they expire, eliminating manual token handling and reducing connection failures. The new python_azure_pg_auth library provides seamless Azure Entra ID authentication and supports the latest psycopg and SQLAlchemy drivers with automatic token acquisition, validation, and refresh. Built-in connection pooling is available for both synchronous and asynchronous workloads. Designed for cross-platform use (Windows, Linux, macOS), the package features clean architecture and flexible installation options for different driver combinations. This is our first milestone in a roadmap to add token refresh for additional programming languages and frameworks.
Learn more, with code samples to get started, here: https://aka.ms/python-azure-pg-auth

Migration

AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)
Database migration is a challenging and time-consuming process, with multiple manual steps requiring schema- and application-specific knowledge. The growing popularity, maturity, and low cost of PostgreSQL has led to healthy demand for migration tooling that simplifies these steps. The new AI-assisted Oracle Migration Tool preview announced this week greatly simplifies moving from Oracle databases to Azure Database for PostgreSQL. Available in the VS Code PostgreSQL extension, the new migration tool combines GitHub Copilot, Azure OpenAI, and custom Language Model Tools to convert Oracle schemas, database code, and client applications into PostgreSQL-compatible formats. Unlike traditional migration tools that rely on static rules, Azure's approach leverages Large Language Models (LLMs) and validates every change against a running Azure Database for PostgreSQL instance. This system not only translates syntax but also detects and fixes errors through iterative re-compilation, flagging any items that require human review. Application codebases built on Spring Boot and other popular frameworks are refactored and converted. The system also understands context by querying the target Postgres instance for version and installed extensions. It can even invoke capabilities from other VS Code extensions to validate the converted code. The new AI-assisted workflow reduces risk, eliminates significant manual effort, and enables faster modernization while lowering costs.
Learn more: https://aka.ms/pg-migration-tooling

Be sure to follow the Microsoft Blog for PostgreSQL for regular updates from the Postgres on Azure team at Microsoft. We publish monthly recaps about new features in Azure Database for PostgreSQL, as well as an annual blog about what's new in Postgres at Microsoft.
Build Smarter with Azure HorizonDB

By: Maxim Lukiyanov, PhD, Principal PM Manager; Abe Omorogbe, Senior Product Manager; Shreya R. Aithal, Product Manager II; Swarathmika Kakivaya, Product Manager II

Today, at Microsoft Ignite, we are announcing a new PostgreSQL database service: Azure HorizonDB. You can read the announcement here, and in this blog you can learn more about HorizonDB's AI features and development tools. Azure HorizonDB is designed for the full spectrum of modern database needs, from quickly building new AI applications, to scaling enterprise workloads to unprecedented levels of performance and availability, to managing your databases efficiently and securely. To help with building new AI applications we are introducing three features: DiskANN Advanced Filtering, built-in AI model management, and integration with Microsoft Foundry. To help with database management we are introducing a set of new capabilities in the PostgreSQL extension for Visual Studio Code, as well as announcing General Availability of the extension. Let's dive into the AI features first.

DiskANN Advanced Filtering
We are excited to announce a new enhancement in Microsoft's state-of-the-art vector indexing algorithm, DiskANN: DiskANN Advanced Filtering. Advanced Filtering addresses a common problem in vector search: combining vector search with filtering. In real-world applications, where queries often include constraints like price ranges, ratings, or categories, traditional vector search approaches such as pgvector's HNSW rely on multi-step retrieval and post-filtering, which can make search extremely slow. DiskANN Advanced Filtering solves this by combining filter and search into one operation: while the graph of vectors is traversed during the vector search, each vector is also checked for a filter predicate match, ensuring that only the correct vectors are retrieved. Under the hood, it works in a three-step process: first creating a bitmap of relevant rows using indexes on attributes such as price or rating, then performing a filter-aware graph traversal against the bitmap, and finally validating and ordering the results for accuracy. This integrated approach delivers dramatically faster and more efficient filtered vector searches. Initial benchmarks show that enabling Advanced Filtering on DiskANN reduces query latency by up to 3x, depending on filter selectivity.

AI Model Management
Another exciting feature of HorizonDB is AI Model Management. This feature automates Microsoft Foundry model provisioning during database deployment and instantly activates database semantic operators, eliminating the many setup and configuration steps otherwise required and simplifying the development of new AI apps and agents. AI Model Management elevates the experience of using semantic operators within PostgreSQL. When activated, it provisions key models for embedding, semantic ranking, and generation via Foundry; installs and configures the azure_ai extension to enable the operators; establishes secure connections; and integrates model management, monitoring, and cost management within HorizonDB. What would otherwise require significant manual effort and context-switching between Foundry and PostgreSQL for configuration, management, and monitoring is now possible with just a few clicks, all without leaving the PostgreSQL environment. You can also continue to bring your own Foundry models, with a simplified and enhanced process for registering your custom model endpoints in the azure_ai extension.
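For a feel of what the activated azure_ai extension enables, here is a minimal sketch of in-database embedding generation. The deployment name is hypothetical; on HorizonDB, AI Model Management can provision the model and wire up the connection for you:

CREATE EXTENSION IF NOT EXISTS azure_ai;

-- Without model management, the endpoint and key are configured manually:
-- SELECT azure_ai.set_setting('azure_openai.endpoint', 'https://<resource>.openai.azure.com/');
-- SELECT azure_ai.set_setting('azure_openai.subscription_key', '<key>');

SELECT azure_openai.create_embeddings(
    'text-embedding-3-small',   -- deployment name (assumed)
    'PostgreSQL is the database of choice for AI apps.');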
Microsoft Foundry Integration
Microsoft Foundry offers a comprehensive technology stack for building AI apps and agents. But building modern agents capable of reasoning, acting, and collaborating is impossible without a connection to data. To facilitate that connection, we are excited to announce a new PostgreSQL connector in Microsoft Foundry. The connector is designed around a new standard in data connectivity, the Model Context Protocol (MCP). It enables Foundry agents to interact with HorizonDB securely and intelligently, using natural language instead of SQL, and leveraging Microsoft Entra ID to ensure a secure connection. In addition to HorizonDB, this connector also supports Azure Database for PostgreSQL (ADP). This integration allows Foundry agents to perform tasks like:
- Exploring database schemas
- Retrieving records and insights
- Performing analytical queries
- Executing vector similarity searches for semantic search use cases
All through natural language, without compromising enterprise security or compliance. To get started with Foundry integration, follow these setup steps to deploy your own HorizonDB (requires participation in the Private Preview) or ADP and connect it to Foundry in just a few steps.

PostgreSQL extension for VS Code is Generally Available
We're excited to announce that the PostgreSQL extension for Visual Studio Code is now Generally Available. This extension has garnered significant popularity within the PostgreSQL community since its preview in May 2025, reaching more than 200K installs. It is the easiest way to connect to a PostgreSQL database from your favorite editor, manage your databases, and take advantage of built-in AI capabilities without ever leaving VS Code. The extension works with any PostgreSQL server, whether it's on-premises or in the cloud, and also supports unique features of Azure HorizonDB and Azure Database for PostgreSQL (ADP). One of the key new capabilities is Metrics Intelligence, which uses Copilot and real-time telemetry from HorizonDB or ADP to help you diagnose and fix performance issues in seconds. Instead of digging through logs and query plans, you can open the Performance Dashboard, see a CPU spike, and ask Copilot to investigate. The extension sends a rich prompt that tells Copilot to analyze live metrics, identify the root cause, and propose an actionable fix. For example, Copilot might find a full table scan on a large table, recommend a composite index on the filter columns, create that index, and confirm the query plan now uses it. The result is dramatic: you can investigate and resolve the CPU spike in seconds, with no manual scripting or guesswork, and no prior PostgreSQL expertise required. The extension also makes it easier to work with graph data. HorizonDB and ADP support the open-source graph extension Apache AGE, which turns these services into fully managed graph databases. You can run graph queries against HorizonDB and immediately visualize the results as an interactive graph inside VS Code. This helps you understand relationships in your data faster, whether you're exploring customer journeys, network topologies, or knowledge graphs, all without switching tools.
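To make the graph workflow concrete, here is a minimal Apache AGE sketch you could run and then visualize in the extension (the graph, label, and property names are just examples):

CREATE EXTENSION IF NOT EXISTS age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;

SELECT create_graph('demo');

-- Create two people and a relationship
SELECT * FROM cypher('demo', $$
    CREATE (:Person {name: 'Ada'})-[:KNOWS]->(:Person {name: 'Grace'})
$$) AS (result agtype);

-- Query the relationship back
SELECT * FROM cypher('demo', $$
    MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name
$$) AS (from_name agtype, to_name agtype);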
In Conclusion
Azure HorizonDB brings together everything teams need to build, run, and manage modern, AI-powered applications on PostgreSQL. With DiskANN Advanced Filtering, you can deliver low-latency, filtered vector search at scale. With built-in AI Model Management and Microsoft Foundry integration, you can provision models, wire up semantic operators, and connect agents to your data with far fewer steps and far less complexity. And with the PostgreSQL extension for Visual Studio Code, you get an intuitive, AI-assisted experience for performance tuning and graph visualization, right inside the tools you already use. HorizonDB is now available in private preview. If you're interested in building AI apps and agents on a fully managed, PostgreSQL-compatible service with built-in AI and rich developer tooling, sign up for the Private Preview: https://aka.ms/PreviewHorizonDB
Upgrade Azure Database for PostgreSQL with Minimal Downtime Using Logical Replication

Azure Database for PostgreSQL provides a seamless Major Version Upgrade (MVU) experience for your servers, which is important for security, performance, and feature enhancements. For production workloads, minimizing downtime during this upgrade is essential to maintain business continuity. This blog explores two practical approaches to performing an MVU with minimal downtime and maximum reliability using logical replication and virtual endpoints:

Approach 1: Configure two servers where the publisher runs on the lower version and the subscriber on the higher version, perform the MVU on the subscriber, and then switch over using virtual endpoints. The time taken to restore the server depends on the workload on your primary server.

Approach 2: Maintain two servers on different versions, use pg_dump and pg_restore to populate the higher-version server with production data, and perform a seamless switchover using virtual endpoints.

To enable logical replication on a table, it must have one of the following: a primary key, or a unique index.

Approach 1: Restores the instance using the same version. Faster restore with PITR (Point-in-Time Recovery) but requires a few additional steps. Best suited when speed of restore is the priority.
Approach 2: Creates and restores the instance on a higher version. Takes longer to restore because it uses pg_dump and pg_restore, but performs the version upgrade as part of the restore, so no separate downtime is needed for the MVU operation. Best suited when you want to restore directly to a higher version.

Approach 1
Setup: two servers, one for testing and one for production. Here are the steps to follow:
1. Create a virtual endpoint on the production server.
2. Perform a Point-in-time restore (PITR) from the first server (production) to create your test server.
3. Add a virtual endpoint for the test server.
4. Establish logical replication between the two servers.
5. Perform the Major Version Upgrade (MVU) on the test server.
6. Validate data on the test server.
7. Update virtual endpoints: remove the endpoints from both servers, then assign the original production endpoint to the test server.

Step-by-Step Guide

Environment Setup
Two servers are involved:
- Server 1: current production server (publisher)
- Server 2: restored server for MVU (subscriber)
Create a virtual endpoint for the production server.

Configure Logical Replication & Grant Permissions
Enable replication parameters on the publisher:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Grant the replication role to the user:

ALTER ROLE <user> WITH REPLICATION;
GRANT azure_pg_admin TO <user>;

Create tables and insert data:

CREATE TABLE basic (id INTEGER NOT NULL PRIMARY KEY, a TEXT);
INSERT INTO basic VALUES (1, 'apple'), (2, 'banana');

Set Up Logical Replication on the Production Server
Create a publication and replication slot on the production server:

CREATE PUBLICATION <publisher-name>;
ALTER PUBLICATION <publisher-name> ADD TABLE <table>;
SELECT pg_create_logical_replication_slot('<publisher-name>', 'pgoutput');

Choose Restore Point: Determine the latest possible point in time (PIT) to restore the data from the source server. This point in time must be after you created the replication slot in the previous step.

Provision Target Server via PITR: Use the Azure portal or Azure CLI to trigger a Point-in-Time Restore. This creates the test server based on the production backup. You are provisioning the test server from the production server's backups, so it will initially be a copy of the production server's data state at a specific point in time.
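Before (or right after) triggering the restore, it is worth confirming on the production server that the slot exists and noting where it starts; a quick check:

SELECT slot_name, plugin, restart_lsn, active
FROM pg_replication_slots
WHERE slot_name = '<publisher-name>';
-- restart_lsn marks where replication will resume once the subscriber attaches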
Configure Server Parameters on the Test Server

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Create the target server as subscriber & Advance Replication Origin: This is the crucial step that connects the test server (subscriber) to the production server (publisher) and manually tells the target where in the WAL stream to begin reading changes, skipping the data already restored.

Create Subscription: Creates a logical replication subscription on the test server, linking it to the source and specifying connection details, publication, and replication slot, without copying existing data.

CREATE SUBSCRIPTION <subscriber-name>
CONNECTION 'host=<host-name>.postgres.database.azure.com port=5432 dbname=postgres user=<username> password=<password>'
PUBLICATION <publisher-name>
WITH (
  copy_data = false,
  create_slot = false,
  enabled = false,
  slot_name = '<publisher-name>'
);

Retrieve the replication origin identifier and name on the test server, which are needed to advance the replication position:

SELECT roident, roname FROM pg_replication_origin;

Execute this query on the production server to fetch the replication slot name and the restart LSN, indicating where replication should resume:

SELECT slot_name, restart_lsn FROM pg_replication_slots WHERE slot_name = '<publisher-name>';

On the test server, manually advance the replication origin to skip the already-restored data and start replication from the correct WAL position:

SELECT pg_replication_origin_advance('<roname>', '<restart_lsn>');

Enable the target server as a subscriber of the source server
With the target server populated and the replication origin advanced, you can start the synchronization:

ALTER SUBSCRIPTION <subscriber-name> ENABLE;

The target server now starts consuming the WAL entries from the source, rapidly closing the gap on all transactions that occurred between the slot creation and the completion of the PITR.

Test Replication Works
Create a virtual endpoint for the test server, and validate the data on the test server. Confirm that the synchronization is working by inserting a record on the production server and immediately verifying its presence on the test server.

Perform Major Version Upgrade (MVU)
Upgrade your test server, and validate all the new extensions and features by using the virtual endpoint for the test server.

Manage virtual endpoints
Once the data and all the new extensions are validated, drop the virtual endpoint on the production server and recreate the same virtual endpoint on the test server.

Key Considerations:
- The test server initially handles read traffic; writes remain on the production server to avoid conflicts.
- Virtual endpoint creation: ~1–2 minutes per endpoint.
- The time taken for the Point-in-time restore depends on the workload on your production server.
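Before switching the endpoints over, it helps to confirm the subscriber has fully caught up. One way to check from the production server, a sketch using standard catalog views:

SELECT slot_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS replication_lag
FROM pg_replication_slots
WHERE slot_name = '<publisher-name>';
-- A lag near 0 bytes means the subscriber has consumed all pending WAL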
Approach 2
This approach enables a Major Version Upgrade (MVU) by combining logical replication with an initial dump and restore process. It minimizes downtime while ensuring data consistency.

Create a new Azure Database for PostgreSQL flexible server instance on your desired target major version (e.g., PostgreSQL 17). Ensure the new server's configuration (SKU, storage size, and location) is suitable for your eventual production load. This delivers the core benefit of a side-by-side migration: two distinct database versions run concurrently, the existing application remains connected to the source environment, risk is minimized, and the new target can be fully configured offline.

Configure Role Privileges on Source and Target Servers

ALTER ROLE <replication_user> WITH REPLICATION;
GRANT azure_pg_admin TO <replication_user>;

Check Prerequisites for Logical Replication
On both source and target servers, set these server parameters to at least the minimum recommended values shown below to enable and support the features required for logical replication:

wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Ensure tables are ready: each table to be replicated must have a primary key or unique identifier.

Create Publication and Replication Slot on Source

CREATE PUBLICATION <publisher-name>;
ALTER PUBLICATION <publisher-name> ADD TABLE <table>;
SELECT pg_create_logical_replication_slot('<publisher-name>', 'pgoutput');

This slot tracks all changes from this point onward.

Generate Schema and Initial Data Dump
Perform the dump after creating the replication slot to capture a static starting point. Using an Azure VM is recommended for optimal network performance.

pg_dump -U demo -W -h <hostname>.postgres.database.azure.com -p 5432 -Fc -v -f dump.bak postgres -N pg_catalog -N cron -N information_schema

Restore Data into Target (recommended: Azure VM)
This populates the target server with the initial dataset.

pg_restore -U demo -W -h <hostname>.postgres.database.azure.com -p 5432 --no-owner -Fc -v -d postgres dump.bak --no-acl

Catch-Up Mechanism: While the restore is ongoing, new transactions on the source are safely recorded by the replication slot. It is critical to have sufficient storage on the source to hold the WAL files during this initial period until replication is fully active.

Create Subscription and Advance Replication Origin on Target
This step connects the target server (subscriber) to the production server (source) and manually tells the target where in the WAL stream to begin reading changes, skipping the data already restored.

Create subscription: Creates a logical replication subscription on the target server, linking it to the source and specifying connection details, publication, and replication slot, without copying existing data.

CREATE SUBSCRIPTION <subscription-name>
CONNECTION 'host=<hostname>.postgres.database.azure.com port=5432 dbname=postgres user=<username> password=<password>'
PUBLICATION <publisher-name>
WITH (
  copy_data = false,
  create_slot = false,
  enabled = false,
  slot_name = '<publisher-name>'
);

Retrieve the replication origin identifier and name on the target server, which are needed to advance the replication position:

SELECT roident, roname FROM pg_replication_origin;

Fetch the replication slot name and the restart LSN from the source server, indicating where replication should resume:

SELECT slot_name, restart_lsn FROM pg_replication_slots WHERE slot_name = '<publisher-name>';

Manually advance the replication origin on the target server to skip the already-restored data and start replication from the correct WAL position:

SELECT pg_replication_origin_advance('<roname>', '<restart_lsn>');

Enable Subscription: With the target server populated and the replication origin advanced, you can start the synchronization.
ALTER SUBSCRIPTION <subscription-name> ENABLE;

Result: The target server now starts consuming the WAL entries from the source, rapidly closing the gap on all transactions that occurred during the dump and restore process.

Validate Replication: Insert a record on the source and confirm it appears on the target.
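A minimal sketch of that check, using the sample basic table created earlier in this post:

-- On the source server
INSERT INTO basic VALUES (3, 'cherry');

-- On the target server, a moment later
SELECT * FROM basic WHERE id = 3;   -- the new row should be present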
Perform Cutover
1. Stop application traffic to the production database.
2. Wait for the target database to confirm zero replication lag.
3. Disable the subscription (ALTER SUBSCRIPTION <subscription-name> DISABLE;).
4. Connect the application to the new Azure Database for PostgreSQL instance. Use virtual endpoints or a CNAME DNS record for your database connection string. By simply pointing the endpoint/CNAME to the new server, you can switch your application stack without changing hundreds of individual configuration files, making the final cutover near-instantaneous.

Conclusion
This MVU strategy using logical replication and virtual endpoints provides a safe, efficient way to upgrade PostgreSQL servers without disrupting workloads. By combining replication, endpoint management, and automation, you can achieve a smooth transition to newer versions while maintaining high availability. For an alternative approach, check out our blog on using the Migration Service for MVU: Hacking the migration service in Azure Database for PostgreSQL

Azure PostgreSQL Lesson Learned #5: Why SKU Changes from Non-Confidential to Confidential Fail

Co-authored with HaiderZ-MSFT

Issue Summary
The customer attempted to change the server configuration from Standard_D4ds_v5 (non-Confidential Compute) to Standard_DC4ads_v5 (Confidential Compute) in the West Europe region. The goal was to enhance the performance and security profile of the server. However, the SKU change could not be completed due to a mismatch in security profiles between the current and target SKUs.

Root Cause
The issue occurred because SKU changes from non-Confidential to Confidential Compute types are not supported in Azure Database for PostgreSQL flexible server. Each compute type uses different underlying hardware and isolation technologies. As documented in Azure Confidential Computing for PostgreSQL Flexible Server, operations such as Point-in-Time Restore (PITR) from non-Confidential Compute SKUs to Confidential ones aren't allowed. Similarly, direct SKU transitions between these compute types are not supported due to this security model difference.

Mitigation
To resolve the issue, the customer was advised to migrate the data to a new server created with the desired compute SKU (Standard_DC4ads_v5). This ensures compatibility while achieving the intended performance and security goals. Steps:
1. Create a new PostgreSQL flexible server with the desired SKU (Confidential Compute).
2. Use native PostgreSQL tools to migrate data:

pg_dump -h <source_server> -U <user> -Fc -f backup.dump <database>
pg_restore -h <target_server> -U <user> -d <database> -c backup.dump

3. Validate connectivity and performance on the new server (a quick sanity-check sketch appears at the end of this post).
4. Decommission the old server once the migration is confirmed successful.

Prevention & Best Practices
To avoid similar issues in the future:
- Review the documentation before performing SKU changes or scaling operations: Azure Confidential Computing for PostgreSQL Flexible Server
- Confirm compute type compatibility when planning scale or migration operations.
- Plan migrations proactively if you anticipate needing a different compute security profile. Use tools such as pg_dump / pg_restore or Azure Database Migration Service.
- Check regional availability for Confidential Compute SKUs before deployment.

Why This Matters
Understanding the distinction between Confidential and non-Confidential Compute is essential to keeping your business running smoothly. By reviewing compute compatibility and following the documented best practices, customers can ensure smooth scaling, enhanced security, and predictable database performance.
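For the validation step above, one quick sanity check is to compare approximate per-table row counts on the old and new servers. A sketch using standard catalog views (n_live_tup is an estimate, so run ANALYZE first, or use count(*) where exactness matters):

SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC
LIMIT 20;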
Exciting things on the horizon for PostgreSQL fans @ Ignite 2025

If you're passionate about PostgreSQL or just curious about what's new, you'll want to join us at Microsoft Ignite 2025. We have a packed lineup, including sessions exploring cutting-edge features and exclusive giveaways at the PostgreSQL on Azure booth. Haven't registered yet? Now's the time: sign up for Microsoft Ignite and start building your schedule. Below are the must-see PostgreSQL on Azure activities, with highlights of what you'll learn at each. Add these to your agenda today. Sessions can fill up fast!

Theater sessions: get a first look, fast
I know from experience that attention spans can start to wane after hours-long keynotes, content-rich sessions, and conference socializing. Luckily, we have a couple of theater sessions that offer snackable but substantial information in less time than it will take to grab lunch. And they're located conveniently on the main conference floor.
- PostgreSQL on Azure: Your launchpad for intelligent apps and agents (THR705) - See how we're making PostgreSQL AI-aware for developers to drive app and agent innovation. Includes a demo of vector similarity search, semantic operators baked into Postgres, and more!
- Simplifying scale-out of PostgreSQL for performant multi-tenant apps (THR706) - Discover a smarter, simpler way to scale PostgreSQL using the new Elastic Clusters feature. If your app or service is growing fast (or you want it to!), add this session to learn how Azure makes it easier to scale Postgres and keep it reliable.
These talks are a great way to sample what's new and decide where to dive deeper. Plus, they're fun and demo-heavy, and who doesn't love a good demo?

Breakout sessions: a deep dive into Postgres innovations
Led by Azure product leaders and executives from organizations driving innovation backed by PostgreSQL, these breakout sessions will dive into the coolest new capabilities and real-world use cases. If you want rich, technical content and more live demos, these are for you.
- Build mission-critical apps that scale with PostgreSQL on Azure (BRK127) - Get a closer look at the next generation of PostgreSQL on Azure. Add this session if you're curious about how we're taking Postgres to the next level to support your mission-critical AI workloads.
- Modern data, modern apps: Innovation with Microsoft Databases (BRK134) - Gain insider knowledge on the latest innovations across open-source, SQL, and NoSQL databases, and understand how Microsoft's integrated database portfolio supports next-gen innovation.
- Nasdaq Boardvantage: AI-driven governance on PostgreSQL and AI Foundry (BRK137) - Discover how a Fortune 100 company merges trust with cutting-edge AI leveraging Azure's AI-enriched and enterprise-ready solutions, including Azure Database for PostgreSQL, Azure Database for MySQL, Azure AI Foundry, Azure Kubernetes Service (AKS), and API Management.
- AI-assisted migration: The path to powerful performance on PostgreSQL (BRK123) - A before-and-after migration journey from Oracle to Azure Database for PostgreSQL. See how the new AI-assisted migration experience delivers conversion in a few clicks and minimal downtime.
- The blueprint for intelligent AI agents backed by PostgreSQL (BRK130) - If you're into AI development, this session will spark ideas on bridging the gap between raw data and AI reasoning. You'll leave with practical tips to turbocharge your AI agents with PostgreSQL.
Each breakout session is 45 minutes with live demos and Q&A, so you'll get plenty of detail and interaction with Postgres experts.
Hands-on lab: experience coding with Azure superpowers
Do you learn best by doing? Then our guided workshop, Build advanced AI agents with PostgreSQL (Lab515), is for you. In each 75-minute session, you'll create a fully functional AI-powered application backed by PostgreSQL on Azure, with step-by-step guidance and expert insight on the latest innovations enabling intelligent app development. All the tools and instructions you'll need are provided. Labs have limited capacity, so be sure to reserve your seat for any of the four labs in advance. This lab is a great way to understand how all the pieces come together on Azure. And you'll gain practical skills you can apply to your own projects, whether it's customer support bots, intelligent search in your app, or any scenario where PostgreSQL + AI collide.

Expert meet-up booth: meet the team, grab some swag
If you still want more Postgres (or a little Postgres souvenir), stop by the PostgreSQL on Azure Expert Meetup booth in the Ignite Hub. This will be our home base on the show floor, where you can:
- Meet the team: I'll be there in person, along with engineers, program managers, cloud solution architects, and advocates from our team. Whether you have a burning technical question, want to share feedback, or need guidance for your specific use case, come chat with us.
- Get a quick demo re-run: Sometimes a 5-minute demo is worth a thousand words, especially after you've sat through all those words already in a keynote. The booth will have a monitor and a live environment so we can walk you through select use cases if you have questions - no appointment needed.
- Swag and giveaways: Ah yes, the goodies! We know conference swag is part of the fun, so we've got some special PostgreSQL-themed giveaways at the booth. I won't spoil all the surprises, but rumor has it there are some limited-edition items up for grabs.
- Network with peers: The expert meet-up area is also a magnet for PostgreSQL enthusiasts. You might bump into other attendees at the booth who are tackling similar projects or challenges. Ignite is about community as much as content, so come by and spark up a conversation.

Meet you there?
Ignite is our largest event of the year. We love sharing what we've been working on and, most of all, hearing from you, the community. So, on behalf of the Azure for PostgreSQL team, thank you for your interest and support. We can't wait to show you what's new and to help you continue to succeed with Postgres. See you in San Francisco!
October 2025 Recap: Azure Database for PostgreSQL

Hello Azure Community,

We are excited to bring you the October 2025 recap blog for Azure Database for PostgreSQL! This blog covers key announcements: the General Availability of the 2025 stable REST API, maintenance payload visibility, several new features aimed at improving performance, and a guide on minimizing downtime for MVU operations with logical replication. Stay tuned as we dive deeper into each of these feature updates.

Get Ready for Ignite 2025!
Before we get into the feature breakdown: Ignite is just around the corner, and it's packed with major announcements for Azure Database for PostgreSQL. We've prepared a comprehensive guide to all the sessions we have lined up; don't miss out! Follow this link to explore the Ignite session guide.

Feature Highlights
- Stable REST API release for 2025 – Generally Available
- Maintenance payload visibility – Generally Available
- Achieving zonal resiliency for High-Availability workloads – Preview
- Japan West now supports zone-redundant HA
- PgBouncer 1.23.1 version upgrade
- Perform Major Version Upgrade (MVU) with logical replication
- PgConf EU 2025 – Key Takeaways and Sessions

Stable REST API release for 2025 – Generally Available
We've released the stable REST API version 2025-08-01! This update adds support for PostgreSQL 17 so you can adopt new versions without changing your automation patterns. We also introduced the ability to set the default database name for Elastic Clusters. To improve the developer experience, we have renamed operation IDs for clearer navigation and corrected HTTP response codes so scripts and retries behave as expected. Security guidance gets a boost with a new CMK encryption example that demonstrates automatic key version updates. Finally, we have cleaned up the specification itself by renaming files for accuracy, reorganizing the structure for easier browsing and diffs, and enhancing local definition metadata, delivering a clearer, safer, and more capable API for your 2025 roadmaps.
Learn how to call or use Azure Database for PostgreSQL REST APIs.
Learn about the operations available in our latest GA REST API.
Repository for all released GA APIs.

Maintenance payload visibility – Generally Available
The Azure Database for PostgreSQL maintenance experience has been enhanced to increase transparency and control. With this update, customers will receive Azure Service Health notifications that include a direct link to the detailed maintenance payload for each patch. This means you'll know exactly what's changing, helping you plan ahead, reduce surprises, and maintain confidence in your operations. Additionally, all maintenance payloads are now published in the dedicated Maintenance Release Notes section of our documentation. This enhancement provides greater visibility into upcoming updates and empowers you with the information needed to align maintenance schedules with your business priorities.

Achieving zonal resiliency for High-Availability workloads – Preview
High availability requires your primary and standby servers to be deployed with either the same-zone or zone-redundant HA option, and zonal resiliency protects your workloads against a zonal outage. With the latest update, the Azure portal introduces a Zonal Resiliency setting under the High Availability section. This setting can be toggled Enabled or Disabled:
- Enabled: The system attempts to create the standby server in a different availability zone, activating zone-redundant HA mode.
If the selected region does not support zone-redundant HA, you can select the fallback checkbox to use same-zone HA instead. If you don't select the checkbox and zonal capacity is unavailable, HA enablement fails. This design enforces zone-redundant HA as the default while providing a controlled fallback to same-zone HA, ensuring workloads achieve resiliency even in regions without multi-zone capacity. The feature offers flexibility while maintaining strong high availability across supported regions. To learn more about how to configure high availability, follow our documentation link.

Japan West now supports zone-redundant HA
Azure Database for PostgreSQL now offers availability zone support in Japan West, enabling deployment of zone-redundant high availability (HA) configurations in this region. This enhancement empowers customers to achieve greater resiliency and business continuity through robust zone-redundant architecture. We're committed to bringing Azure PostgreSQL closer to where you build and run your apps, while ensuring robust disaster recovery options. For the full list of regions visit: Azure Database for PostgreSQL Regions.

PgBouncer 1.23.1 version upgrade
PgBouncer 1.23.1 is now available in Azure Database for PostgreSQL. As a built-in connection pooling feature, PgBouncer helps you scale to thousands of connections with low overhead by efficiently managing idle and short-lived connections. With this update, you benefit from the latest community improvements, including enhanced protocol handling and important stability fixes, giving you a more reliable and resilient connection pooling experience. Because PgBouncer is integrated into Azure Postgres, you don't need to install or maintain it separately - simply enable it on port 6432 and start reducing connection overhead in your applications. This release keeps your PostgreSQL servers aligned with the community while providing the reliability of a managed Azure service.
Learn more: PgBouncer in Azure Database for PostgreSQL.

Perform Major Version Upgrade (MVU) with logical replication
Our Major Version Upgrade feature ensures you always have access to the latest and most powerful capabilities included in each PostgreSQL release. We've published a new blog that explains how to minimize downtime during major version upgrades by leveraging logical replication and virtual endpoints. The blog highlights two approaches:
- Using logical replication and virtual endpoints on a Point-in-Time Restore (PITR) instance
- Using logical replication and virtual endpoints on a server running a different PostgreSQL version, restored via pg_dump and pg_restore
Follow this guide to get started and make your upgrade process smoother: Upgrade Azure Database for PostgreSQL with Minimal Downtime Using Logical Replication

PgConf EU 2025 – key takeaways and sessions
The Azure Database for PostgreSQL team participated in PGConf EU 2025, delivering insightful sessions on key PostgreSQL advancements. If you missed the highlights, here are a few topics we covered:
- AIO in PG 18 and beyond, by Andres Freund of Microsoft [Link to slides]
- Improved Freezing in Postgres Vacuum: From Idea to Commit, by Melanie Plageman of Microsoft [Link to slides]
- Behind Postgres 18: The People, the Code, & the Invisible Work [Link to slides]
Read the PGConf EU summary blog here.

Azure Postgres Learning Bytes 🎓

Handling "Cannot Execute in a Read-Only Transaction" after High Availability (HA) Failover
After a High Availability (HA) failover, some applications may see this error:

ERROR: cannot execute <command> in a read-only transaction

This happens when the application continues connecting to the old primary instance, which becomes read-only after failover. The usual cause is connecting via a static IP or a private DNS record that doesn't refresh automatically.

Resolution Steps
1. Use FQDN - Always connect using the FQDN, i.e. <servername>.postgres.database.azure.com, instead of a hardcoded IP.
2. Validate DNS - Run nslookup <servername>.postgres.database.azure.com to confirm it resolves to the current primary.
3. Private DNS - Update or automate the A-record refresh after failover.
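You can also confirm from inside a session whether you've landed on a read-only standby; a quick sketch using standard PostgreSQL functions, nothing Azure-specific:

SELECT pg_is_in_recovery();   -- returns true on a standby / read-only node
SHOW transaction_read_only;   -- 'on' means writes will fail in this session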
Best Practices
- Always use FQDN for app database connectivity.
- Add retry logic for transient failovers.
- Periodically validate DNS resolution for HA-enabled servers.
For more details, refer to this detailed blog post from the CSS team.

Conclusion
We'll be back soon with more exciting announcements and key feature enhancements for Azure Database for PostgreSQL, so stay tuned! Your feedback is important to us. Have suggestions, ideas, or questions? We'd love to hear from you: https://aka.ms/pgfeedback. Follow us here for the latest announcements, feature releases, and best practices: Microsoft Blog for PostgreSQL.