Azure Database for PostgreSQL Flexible Server
SubgenAI makes AI practical, scalable, and sustainable with Azure Database for PostgreSQL
Authors: Abe Omorogbe, Senior Program Manager at Microsoft, and Julia Schröder Langhaeuser, VP of Product, Serenity Star, at SubgenAI

AI agents are thriving in pilots and prototypes. However, scaling them across organizations is more difficult. A recent MIT report shows that 95 percent of projects fail to reach production. Long development cycles, lack of observability, and compliance hurdles leave enterprises struggling to deliver production-ready agents. SubgenAI, a European generative AI company that focuses on democratizing AI for businesses and governments, saw an opportunity to change this. Its flagship platform, Serenity Star, transforms AI agent development from a code-heavy, fragmented process into a streamlined, no-code experience. Built on Microsoft Azure Database for PostgreSQL, Semantic Kernel, and Microsoft Foundry, Serenity Star empowers organizations to deploy production-grade AI agents in minutes, not months.

SubgenAI's mission is to make generative AI accessible, scalable, and secure for every organization. Whether you're a startup or a multinational, Serenity Star offers the tools to build intelligent agents tailored to your business logic, with full control over data and deployment.

"Many things must happen around it in the coming years. Serenity Star is designed to solve problems like data control, compliance, and decision ethics—so companies can unleash the full potential of generative AI without compromising trust or profitability" - Lorenzo Serratosa

Simplifying complex AI agent development

Technical and operational challenges are inherent in enterprise-wide AI agent deployments. Examples include time-consuming iteration cycles, lack of observability and cost control, security concerns, and data sovereignty requirements. Serenity Star addresses these pain points by handling the entire AI agent lifecycle while providing enterprise-grade security and compliance features. Users can focus on defining their agent's purpose and behavior rather than wrestling with technical implementation details. Its framework focuses on four essentials for AI agents: the brain (underlying model), knowledge (accessible information), behavior (programmed responses), and tools (external system integrations). This framework directly influenced the technology stack choices for Serenity Star, with Azure Database for PostgreSQL powering the knowledge retrieval and Semantic Kernel enabling flexible model orchestration.

Real-world architecture in action

When a user query comes in, Serenity Star uses the vector capabilities of Azure Database for PostgreSQL to retrieve the most relevant knowledge. That context, combined with the user's input, forms a complete prompt. Semantic Kernel then routes the request to the right large language model, ensuring the agent delivers accurate and context-aware responses. Serenity Star's native connectors to platforms such as Microsoft Teams, WhatsApp, and Google Tag Manager are also part of this architecture, delivering answers directly in the collaboration and communication tools enterprises already use every day.

Figure 1: Serenity Star Architecture

This routing and orchestration architecture applies to the multi-tenant SaaS deployments and dedicated customer instances offered by Serenity Star. Azure Database for PostgreSQL provides native Row-Level Security (RLS) capabilities, a key advantage for securely managing multi-tenant environments.
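To make the multi-tenant retrieval pattern concrete, here is a minimal sketch of row-level security combined with pgvector similarity search. The table, columns, embedding dimension, and index choice are illustrative assumptions, not Serenity Star's actual schema:

CREATE EXTENSION IF NOT EXISTS vector;   -- pgvector

CREATE TABLE agent_knowledge (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    tenant_id uuid   NOT NULL,
    content   text   NOT NULL,
    embedding vector(1536)                -- dimension depends on the embedding model used
);

-- Approximate nearest-neighbor index; pg_diskann's DiskANN index is an alternative on Azure.
CREATE INDEX ON agent_knowledge USING hnsw (embedding vector_cosine_ops);

-- Row-Level Security: each session sees only its own tenant's rows.
ALTER TABLE agent_knowledge ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON agent_knowledge
    USING (tenant_id = current_setting('app.current_tenant', true)::uuid);

-- RAG retrieval for the current tenant: the five chunks closest to a query embedding,
-- which the application passes in as a pgvector literal or bound parameter.
-- SET app.current_tenant = '<tenant-uuid>';
-- SELECT content FROM agent_knowledge ORDER BY embedding <=> $1 LIMIT 5;

Because the policy is enforced inside the database, every retrieval issued on behalf of a tenant is scoped automatically, which is what makes a shared, multi-tenant deployment practical without per-tenant query rewrites.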
Multi-tenant deployments allow organizations to get started quickly with lower overhead, while dedicated instances meet the needs of enterprises with strict compliance and data sovereignty requirements.

Optimizing for scale

The same architecture that powers retrieval, routing, and multi-channel delivery also provides a foundation for performance at scale. As adoption grows, the team continuously monitors query volume, response times, and resource efficiency across both multi-tenant and dedicated environments. To stay ahead of demand, SubgenAI actively experiments with new Azure Database for PostgreSQL features such as DiskANN for faster vector search. These optimizations keep latency low even as more users and connectors are added. The result is a platform that maintains sub-60-second response times for 99 percent of chart generations, regardless of deployment model or integration point. With this systematic approach to scaling, organizations can deploy fully functional AI agents that are connected to their preferred communication platforms in just 15 minutes instead of hours. For enterprises that have struggled with failed AI projects, Serenity Star offers not only a secure and compliant solution but also one proven to grow with their needs.

Why Azure Database for PostgreSQL is a cornerstone

The knowledge component of AI agents relies heavily on retrieval-augmented generation (RAG) systems that perform similarity searches against embedded content. This requires a database capable of handling efficient vector search while maintaining enterprise-grade reliability and security. SubgenAI evaluated multiple vector database options, and Azure Database for PostgreSQL with pgvector emerged as the clear winner, for several compelling reasons. First, its mature technology provides immediate credibility with enterprise customers. Second, it can scale GenAI use cases with features like DiskANN for accurate and scalable vector search. Third, it offers the flexibility and appeal of an open-source database with a vibrant and fast-moving community. As CPO Leandro Harillo explains: "When we tell them their data runs on Azure Database for PostgreSQL, it's a relief. It's a well-known technology versus other options that were born with this new AI revolution."

As an open-source relational database management system, Azure Database for PostgreSQL offers extensibility and seamless integration with Microsoft's enterprise ecosystem. It has a trusted reputation that appeals to organizations with strict data sovereignty and compliance requirements, such as those in healthcare and insurance, where reliability and governance are non-negotiable. The integration with Azure's broader ecosystem also simplified implementation. With Serenity Star built entirely on Azure infrastructure, Azure Database for PostgreSQL provided seamless connectivity and consistent performance characteristics. The result is the fast response times necessary for real-time agent interactions, along with the reliability demanded by enterprise customers.

Semantic Kernel: Enabling model flexibility at scale

Enterprise AI success requires the ability to experiment with different models and adapt quickly as technology evolves. Semantic Kernel makes this possible, supporting over 300 LLMs and embedding models through a unified interface. With Serenity Star, organizations can make genuine choices about their AI implementations without vendor lock-in.
Companies can use embedding models from OpenAI through Azure deployments, ensuring their information remains in their own infrastructure while accessing cutting-edge capabilities. If business requirements change or new models emerge, switching becomes a configuration change rather than a development project. Semantic Kernel's comprehensive connector ecosystem also accelerated SubgenAI's own development process. Interfaces for different vector databases enabled rapid prototyping and comparison during the evaluation phase. "Semantic Kernel helped us to be able to try the different ones and choose the one that fit better for us," notes Julia Schröder, VP of Product. The SubgenAI team has also extended Semantic Kernel to support more features in Azure Database for PostgreSQL, which is easier because of how well-known and popular PostgreSQL is. SubgenAI has also contributed improvements back to the community. This collaborative approach ensures the platform benefits from the latest developments while helping advance the broader ecosystem.

Proven impact of Azure Database for PostgreSQL across industries

Organizations struggle to deliver production-ready agents because of long development cycles, lack of observability, and compliance hurdles. The effectiveness of Azure Database for PostgreSQL and other Azure services in removing those barriers is reflected in deployment metrics and customer feedback. Production-ready agents typically require around 30 iterations for basic implementations. Complex use cases demand significantly more refinement. One GenAI customer in medical education required over 200 iterations to perfect an agent that evaluates medical students through complex case analysis. Azure PostgreSQL and other Azure services support hour-long iteration cycles rather than week-long sprints, which made this level of refinement economically feasible. Cost efficiency is another significant advantage. SubgenAI provisions and configures models in Microsoft Foundry, which eliminates idling GPU resources while providing detailed cost breakdowns. Users can see exactly how tokens are consumed across prompt text, RAG context, and tool usage, enabling data-driven optimization decisions. Consulting partnerships validate the platform's market position. One consulting firm with 50,000 employees is delighted with the easier implementation, faster deployment, and reliable production performance.

Conclusion

The combination of Azure Database for PostgreSQL and Semantic Kernel has enabled SubgenAI to address the fundamental challenges that cause 95 percent of enterprise AI projects to fail. Organizations using Serenity Star bypass the traditional barriers of lengthy development cycles, limited observability, and compliance hurdles that typically derail AI initiatives. The platform's architecture delivers measurable results, including a 50 percent reduction in coding time, support for complex agents requiring 200+ iterations, and deployment capabilities that compress months-long projects into 15-minute implementations. Azure Database for PostgreSQL provides the enterprise-grade foundation that customers in regulated industries require, while Semantic Kernel ensures organizations retain flexibility as AI technology evolves. This technological partnership creates a reliable pathway for companies to deploy production-ready AI agents without sacrificing data sovereignty or operational control.
Through the reliability of Azure Database for PostgreSQL and the flexibility of Semantic Kernel, Serenity Star delivers an enterprise-ready foundation that makes AI practical, scalable, and sustainable.

Azure PostgreSQL Lesson Learned #10: Why PITR Networking Rules Matter
Co-authored with angesalsaa

Symptoms
Customer attempted to restore a server configured with public access into a private virtual network. Restore operation failed with an error indicating unsupported configuration.

Root Cause
Azure enforces strict networking rules during PITR to maintain security and consistency: Public access servers can only be restored to public access. Private access servers can be restored to the same virtual network or a different virtual network, but not to public access.

Why This Happens
Networking mode is tied to the original server configuration. Mixing public and private access during restore could expose sensitive data or break connectivity assumptions.

Contributing Factors
Customer assumed PITR could switch networking modes. No prior review of Azure documentation on restore limitations.

Specific Conditions We Observed
Source server: Private access with VNet integration. Target restore: Attempted to switch to public access.

Operational Checks
Before initiating PITR: Confirm the source server's networking mode (Public vs Private). Review restore options in the Azure portal → Restore.

Mitigation
Goal: Align restore strategy with networking rules. If the source is Public: Restore only to Public access. If the source is Private: Restore to the same or a different VNet (within the same region).

Post-Resolution
Customer successfully restored to a different VNet after adjusting expectations.

Prevention & Best Practices
Document the networking mode for all PostgreSQL servers. Train teams on PITR limitations before disaster recovery drills. Avoid assumptions; always check official guidance.

Why This Matters
Ignoring these rules can delay recovery during critical incidents. Knowing the constraints upfront ensures faster restores and compliance with security policies.

Key Takeaways
Issue: PITR does not allow switching between Public and Private access. Fix: Restore within the same networking category as the source server.

References
Backup and Restore in Azure Database for PostgreSQL Flexible Server

PostgreSQL for the enterprise: scale, secure, simplify
This week at Microsoft Ignite, along with unveiling the new Azure HorizonDB cloud native database service, we're announcing multiple improvements to our fully managed open-source Azure Database for PostgreSQL service, delivering significant advances in performance, analytics, security, and AI-assisted migration. Let's walk through nine of the top Azure Database for PostgreSQL features and improvements we're announcing at Microsoft Ignite 2025.

Feature Highlights
New Intel and AMD v6-series SKUs (Preview)
Scale to multiple nodes with Elastic Clusters (GA)
PostgreSQL 18 (GA)
Real-time analytics with Fabric Mirroring (GA)
Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)
Adding Parquet to the azure_storage extension (GA)
Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)
Integrated identity with Entra token-refresh libraries for Python
AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)

Performance and scale

New Intel and AMD v6 series SKUs (Preview)
You can run your most demanding Postgres workloads on new Intel and AMD v6 General Purpose and Memory Optimized hardware SKUs, now available in preview. These SKUs deliver massive scale for high-performance OLTP, analytics and complex queries, with improved price performance and higher memory ceilings. AMD Confidential Compute v6 SKUs are also in Public Preview, enabling enhanced security for sensitive workloads while leveraging AMD's advanced hardware capabilities. Here's what you need to know:
Processors: Powered by 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) and AMD's 4th Generation EPYC™ 9004 processors
Scale: VM size options scale up to 192 vCores and 1.8 TiB
IO: Using the NVMe protocol for data disk access, IO is parallelized to the number of CPU cores and processed more efficiently, offering significant IO improvements
Compute tier: Available in our General Purpose and Memory Optimized tiers. You can scale up to these new compute SKUs as needed with minimal downtime.
Learn more: Here's a quick summary of the v6 SKUs we're launching, with links to more information:

Processor | SKU | Max vCores | Max Mem
Intel | Ddsv6 | 192 | 768 GiB
Intel | Edsv6 | 192 | 1.8 TiB
AMD | Dadsv6 | 96 | 384 GiB
AMD | Eadsv6 | 96 | 672 GiB
AMD | DCadsv6 | 96 | 386 GiB
AMD | ECadsv6 | 96 | 672 GiB

Scale to multiple nodes with Elastic clusters (GA)
Elastic clusters are now generally available in Azure Database for PostgreSQL. Built on Citus open-source technology, elastic clusters bring the horizontal scaling of a distributed database to the enterprise features of Azure Database for PostgreSQL. Elastic clusters enable horizontal scaling of databases running across multiple server nodes in a "shared nothing" architecture. This is ideal for workloads with high-throughput and storage-intensive demands such as multi-tenant SaaS and IoT-based workloads. Elastic clusters come with all the enterprise-level capabilities that organizations rely upon in Azure Database for PostgreSQL, including high availability, read replicas, private networking, integrated security and connection pooling. Built-in sharding support at both row and schema level enables you to distribute your data across a cluster of compute resources and run queries in parallel, dramatically increasing throughput and capacity. Learn more: Elastic clusters in Azure Database for PostgreSQL

PostgreSQL 18 (GA)
When PostgreSQL 18 was released in September, we made a preview available on Azure on the same day.
Now we're announcing that PostgreSQL 18 is generally available on Azure Database for PostgreSQL, with full Major Version Upgrade (MVU) support, marking our fastest-ever turnaround from open-source release to managed service general availability. This release reinforces our commitment to delivering the latest PostgreSQL community innovations to Azure customers, so you can adopt the latest features, performance improvements, and security enhancements on a fully managed, production-ready platform without delay.

Note: MVU to PG18 is currently available in the NorthCentralUS and WestCentralUS regions, with additional regions being enabled over the next few weeks.

Now you can:
Deploy PostgreSQL 18 in all public Azure regions.
Perform in-place major version upgrades to PG18 with no endpoint or connection string changes.
Use Microsoft Entra ID authentication for secure, centralized identity management in all PG versions.
Enable Query Store and Index Tuning for built-in performance insights and automated optimization.
Leverage the 90+ Postgres extensions supported by Azure Database for PostgreSQL.

PostgreSQL 18 also delivers major improvements under the hood, ranging from asynchronous I/O and enhanced vacuuming to improved indexing and partitioning, ensuring Azure continues to lead as the most performant, secure, and developer-friendly PostgreSQL managed service in the cloud. Learn more: PostgreSQL 18 open-source release announcement; Supported versions of PostgreSQL in Azure Database for PostgreSQL

Analytics

Real-time analytics with Fabric Mirroring (GA)
With Fabric mirroring in Azure Database for PostgreSQL, now generally available, you can run your Microsoft Fabric analytical workloads and capabilities on near-real-time replicated data, without impacting the performance of your production PostgreSQL databases, and at no extra cost. Mirroring in Fabric connects your operational and analytical platforms with continuous data replication from PostgreSQL to Fabric. Transactions are mirrored to Fabric in near real-time, enabling advanced analytics, machine learning, and reporting on live data sets without waiting for traditional batch ETL processes to complete. This approach eliminates the overhead of custom integrations or data pipelines. Production PostgreSQL servers can run mission-critical transactional workloads without being affected by surges in analytical queries and reporting. With our GA announcement, Fabric mirroring is ready for production workloads, with secure networking (VNET integration and Private Endpoints supported), Entra ID authentication for centralized identity management, and support for high availability enabled servers, ensuring business continuity for mirroring sessions. Learn more: Mirroring Azure Database for PostgreSQL flexible server

Adding Parquet support to the azure_storage extension (GA)
In addition to mirroring data directly to Microsoft Fabric, there are many other scenarios that require moving operational data into data lakes for analytics or archival. Building and maintaining ETL pipelines can be expensive and time-consuming. Azure Database for PostgreSQL now natively supports Parquet via the azure_storage extension, enabling direct SQL-based read/write to Parquet files in Azure Storage. This makes it easy to import and export data in Postgres without external tools or scripts.
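A rough sketch of what this looks like in practice, using the extension's blob functions. The account, container, file, and column names are placeholders, and the Parquet decoder/encoder options shown are assumptions to verify against the azure_storage documentation for your extension version:

CREATE EXTENSION IF NOT EXISTS azure_storage;

-- Register the storage account once (placeholder name and access key).
SELECT azure_storage.account_add('mystorageaccount', '<storage-access-key>');

-- Import: read a Parquet blob directly into SQL, declaring the column layout inline.
SELECT *
FROM azure_storage.blob_get('mystorageaccount', 'datalake', 'exports/orders.parquet',
                            decoder := 'parquet')
     AS rows(order_id bigint, customer_id bigint, total numeric, order_date date);

-- Export: blob_put can write query results back out to Blob Storage, for example:
-- SELECT azure_storage.blob_put('mystorageaccount', 'datalake', 'exports/orders.parquet', res)
-- FROM (SELECT * FROM orders) res;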
Parquet is a popular columnar storage format often used in big data and analytics environments (like Spark and Azure Data Lake) because of its efficient compression and query performance for large datasets. Now you can use the azure_storage extension to skip an entire step: just issue a SQL command to write to and query from a Parquet file in Azure Blob Storage. Learn more: Azure storage extension in Azure Database for PostgreSQL

Analytical queries inside PostgreSQL with the pg_duckdb extension (Preview)
DuckDB's columnar engine excels at high-performance scans, aggregations and joins over large tables, making it particularly well-suited for analytical queries. The pg_duckdb extension, now available in preview for Azure Database for PostgreSQL, combines PostgreSQL's transactional performance and reliability with DuckDB's analytical speed for large datasets. Together, pg_duckdb and PostgreSQL are an ideal combination for hybrid OLTP + OLAP environments where you need to run analytical queries directly in PostgreSQL without sacrificing performance. To see the pg_duckdb extension in action check out this demo video: https://aka.ms/pg_duckdb Learn more: pg_duckdb – PostgreSQL extension for DuckDB

Security

Meet compliance requirements with the credcheck, anon & ip4r extensions (GA)
Operating in a regulated industry such as Finance, Healthcare, and Government means negotiating compliance requirements like HIPAA, PCI-DSS, and GDPR that include protection for personalized data and password complexity, expiration, and reuse. This week the anon extension, previously in preview, is now generally available for Azure Database for PostgreSQL, adding support for dynamic and static masking, anonymized exports, randomization, and many other advanced masking techniques. We've also added GA support for the credcheck extension, which provides credential checks for usernames and password complexity, including during user creation, password change, and user renaming. This is particularly useful if your application is not using Entra ID and needs to rely on native PostgreSQL users and passwords. If you need to store and query IP ranges for scenarios like auditing, compliance, access control lists, intrusion detection, and threat intelligence, another useful extension announced this week is the ip4r extension, which provides a set of data types for IPv4 and IPv6 network addresses. Learn more: PostgreSQL Anonymizer; credcheck – PostgreSQL username/password checks; IP4R - IPv4/v6 and IPv4/v6 range index type for PostgreSQL

The Azure team maintains an active pipeline of new PostgreSQL extensions to onboard and upgrade to Azure Database for PostgreSQL. For example, another important extension upgraded this week is pg_squeeze, which removes unused space from a table. The updated 1.9.1 version adds important stability improvements. Learn more: List of extensions and modules by name

Integrated identity with Entra token-refresh libraries for Python
In a modern cloud-connected enterprise, identity becomes the most important security perimeter. Azure Database for PostgreSQL is the only managed PostgreSQL service with full Entra integration, but coding applications to take care of Entra token refresh can be complex. This week we're announcing a new Python library to simplify Entra token refresh. The library automatically refreshes authentication tokens before they expire, eliminating manual token handling and reducing connection failures.
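For context on the server side of that integration, an Entra identity ultimately maps to an ordinary PostgreSQL role on the server. A hedged sketch using the documented pgaadauth helper functions (the principal name is a placeholder):

-- Run while connected as an Entra administrator of the server:
-- create a PostgreSQL role mapped to an Entra user (arguments: name, isAdmin, isMfa).
SELECT * FROM pgaadauth_create_principal('alice@contoso.com', false, false);

-- Grant privileges to the mapped role like any other PostgreSQL role.
GRANT CONNECT ON DATABASE postgres TO "alice@contoso.com";

At connection time the application presents a short-lived Entra access token as the password for that role, which is exactly the token acquisition and refresh that the new library automates.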
The new python_azure_pg_auth library provides seamless Azure Entra ID authentication and supports the latest psycopg and SQLAlchemy drivers with automatic token acquisition, validation, and refresh. Built-in connection pooling is available for both synchronous and asynchronous workloads. Designed for cross-platform use (Windows, Linux, macOS), the package features clean architecture and flexible installation options for different driver combinations. This is our first milestone in a roadmap to add token refresh for additional programming languages and frameworks. Learn more, with code samples to get started here: https://aka.ms/python-azure-pg-auth

Migration

AI-Assisted Oracle to PostgreSQL Migration Tool (Preview)
Database migration is a challenging and time-consuming process, with multiple manual steps requiring schema- and application-specific information. The growing popularity, maturity, and low cost of PostgreSQL have led to a healthy demand for migration tooling to simplify these steps. The new AI-assisted Oracle Migration Tool preview announced this week greatly simplifies moving from Oracle databases to Azure Database for PostgreSQL. Available in the VS Code PostgreSQL extension, the new migration tool combines GitHub Copilot, Azure OpenAI, and custom Language Model Tools to convert Oracle schema, database code, and client applications into PostgreSQL-compatible formats. Unlike traditional migration tools that rely on static rules, Azure's approach leverages Large Language Models (LLMs) and validates every change against a running Azure Database for PostgreSQL instance. This system not only translates syntax but also detects and fixes errors through iterative re-compilation, flagging any items that require human review. Application codebases built on Spring Boot and other popular frameworks are refactored and converted. The system also understands context by querying the target Postgres instance for version and installed extensions. It can even invoke capabilities from other VS Code extensions to validate the converted code. The new AI-assisted workflow reduces risk, eliminates significant manual effort, and enables faster modernization while lowering costs. Learn more: https://aka.ms/pg-migration-tooling

Be sure to follow the Microsoft Blog for PostgreSQL for regular updates from the Postgres on Azure team at Microsoft. We publish monthly recaps about new features in Azure Database for PostgreSQL, as well as an annual blog about what's new in Postgres at Microsoft.

Upgrade Azure Database for PostgreSQL with Minimal Downtime Using Logical Replication
Azure Database for PostgreSQL provides a seamless Major Version Upgrade (MVU) experience for your servers, which is important for security, performance, and feature enhancements. For production workloads, minimizing downtime during this upgrade is essential to maintain business continuity. This blog explores a practical approach to performing a Major Version Upgrade (MVU) with minimal downtime and maximum reliability using logical replication and virtual endpoints. Upgrading without disrupting your applications is critical. With this method, you can choose between two approaches:

Approach 1: Configure two servers where the publisher runs on the lower version and the subscriber on the higher version, perform the MVU, and then switch over using virtual endpoints. The time taken to restore the server depends on the workload on the primary server.
Approach 2: Maintain two servers on different versions, use pg_dump and pg_restore to copy the production server's data to the new server, and perform a seamless switchover using virtual endpoints.

To enable logical replication on a table, it must have one of the following: a Primary Key, or a Unique Index.

Approach 1: Restores the instance using the same version. Faster restore with PITR (Point-in-Time Recovery), but requires a few additional steps. Best suited when speed of restore is the priority.
Approach 2: Creates and restores the instance on a higher version. Takes longer to restore because it uses the pg_dump and pg_restore commands, but enables the version upgrade during the restore itself. Best suited when you want to restore directly to a higher version, and it does not require downtime for the MVU operation.

Approach 1

Setup: Two servers, one for testing, and one for production. Here are the steps to follow:
Create a virtual endpoint on the production server.
Perform a Point-in-time-restore (PITR) from the first server (Production) and create your test server.
Add a virtual endpoint for the test server.
Establish logical replication between the two servers.
Perform the Major Version Upgrade (MVU) on the test server.
Validate data on the test server.
Update virtual endpoints: Remove the endpoints from both servers, then assign the original production endpoint to the test server.

Step By Step Guide

Environment Setup
Two servers are involved:
Server 1: Current production server (Publisher)
Server 2: Restored server for MVU (Subscriber)
Create a virtual endpoint for the production server.

Configure Logical Replication & Grant Permissions
Enable replication parameters on the publisher:
wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Grant the replication role to the user:
ALTER ROLE <user> WITH REPLICATION;
GRANT azure_pg_admin TO <user>;

Create tables and insert data:
CREATE TABLE basic (id INTEGER NOT NULL PRIMARY KEY, a TEXT);
INSERT INTO basic VALUES (1, 'apple'), (2, 'banana');

Set Up Logical Replication on the Production Server
Create a publication and replication slot on the production server:
create publication <publisher-name>;
alter publication <publisher-name> add table <table>;
SELECT pg_create_logical_replication_slot('<publisher-name>', 'pgoutput');

Choose Restore Point: Determine the latest possible point in time (PIT) to restore the data from the source server. This point in time must necessarily be after you created the replication slot in Step 3.

Provision Target Server via PITR: Use the Azure Portal or Azure CLI to trigger a Point-in-Time Restore. This creates the test server based on the production backup.
You are provisioning the test server based on the production server's backup capabilities. This test server will initially be a copy of the production server's data state at a specific point in time.

Configure Server Parameters on Test Server
wal_level = logical
max_worker_processes = 16
max_replication_slots = 10
max_wal_senders = 10
track_commit_timestamp = on

Create the Target Server as Subscriber & Advance Replication Origin
This is the crucial step that connects the test server (subscriber) to the production (publisher) server and manually tells the target where in the WAL log stream to begin reading changes, skipping the data already restored.

Create Subscription: Creates a logical replication subscription on the test server, linking it to the source and specifying connection details, publication, and replication slot without copying existing data.
CREATE SUBSCRIPTION <subscriber-name>
CONNECTION 'host=<host-name>.postgres.database.azure.com port=5432 dbname=postgres user=<username> password=<password>'
PUBLICATION <publisher-name>
WITH ( copy_data = false, create_slot = false, enabled = false, slot_name = '<publisher-name>' );

Retrieve the replication origin identifier and name on the test server, which are needed to advance the replication position:
SELECT roident, roname FROM pg_replication_origin;

Execute this query on the production server. It fetches the replication slot name and the restart LSN from the source server, indicating where replication should resume:
SELECT slot_name, restart_lsn FROM pg_replication_slots WHERE slot_name = '<publisher-name>';

On the test server, execute this command. It manually advances the replication origin on the target server to skip already restored data and start replication from the correct WAL position, using the roname and restart_lsn values returned by the previous two queries:
SELECT pg_replication_origin_advance('<roname>', '<restart_lsn>');

Enable the Target Server as a Subscriber of the Source Server
With the target server populated and the replication origin advanced, you can start the synchronization.
ALTER SUBSCRIPTION <subscriber-name> ENABLE;
The target server now starts consuming the WAL entries from the source, rapidly closing the gap on all transactions that occurred between the slot creation and the completion of the PITR.

Test Replication Works
Create a virtual endpoint for the test server, and validate the data on the test server. Confirm that the synchronization is working by inserting a record on the production server and immediately verifying its presence on the test server.

Perform Major Version Upgrade (MVU)
Upgrade your test server, and validate all the new extensions and features by using the virtual endpoint for the test server.

Manage Virtual Endpoints
Once the data and all the new extensions are validated, drop the virtual endpoint on the production server and recreate the same virtual endpoint on the test server.

Key Considerations:
Test server initially handles read traffic; writes remain on the production server to avoid conflicts.
Virtual endpoint creation: ~1–2 minutes per endpoint.
Time taken for the Point-in-time-restore depends on the workload that you have on the production server.

Approach 2

This approach enables a Major Version Upgrade (MVU) by combining logical replication with an initial dump and restore process. It minimizes downtime while ensuring data consistency. Create a new Azure Database for PostgreSQL Flexible Server instance using your desired target major version (e.g., PostgreSQL 17). Ensure the new server's configuration (SKU, storage size, and location) is suitable for your eventual production load.
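Before setting up publications and subscriptions in either approach, it can also help to confirm the requirement stated at the start of this post: every table you plan to replicate needs a primary key or a unique index. A rough catalog query for that check (the schema exclusions are illustrative; adjust them to your environment):

SELECT c.relnamespace::regnamespace AS schema_name,
       c.relname                    AS table_name
FROM pg_class c
WHERE c.relkind IN ('r', 'p')   -- ordinary and partitioned tables
  AND c.relnamespace::regnamespace::text NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
        SELECT 1 FROM pg_index i
        WHERE i.indrelid = c.oid
          AND (i.indisprimary OR i.indisunique)
      );

-- Any table returned here needs a key added (or another replica identity strategy)
-- before logical replication can handle UPDATE and DELETE operations on it.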
This approach enables the core benefit of a side-by-side migration, running two distinct database versions concurrently. The existing application remains connected to the source environment, minimizing risk and allowing the new target to be fully configured offline.

Configure Role Privileges on Source and Target Servers
ALTER ROLE <replication_user> WITH REPLICATION;
GRANT azure_pg_admin TO <replication_user>;

Check Prerequisites for Logical Replication
Set these server parameters on both source and target servers to at least the minimum recommended values shown below; they enable and support the features required for logical replication.
wal_level=logical
max_worker_processes=16
max_replication_slots=10
max_wal_senders=10
track_commit_timestamp=on

Ensure tables are ready: Each table to be replicated must have a primary key or unique identifier.

Create Publication and Replication Slot on Source
create publication <publisher-name>;
alter publication <publisher-name> add table <table>;
SELECT pg_create_logical_replication_slot('<publisher-name>', 'pgoutput');
This slot tracks all changes from this point onward.

Generate Schema and Initial Data Dump
Run pg_dump after creating the replication slot to capture a static starting point. Using an Azure VM is recommended for optimal network performance.
pg_dump -U demo -W -h <hostname>.postgres.database.azure.com -p 5432 -Fc -v -f dump.bak postgres -N pg_catalog -N cron -N information_schema

Restore Data into Target (recommended: Azure VM)
This populates the target server with the initial dataset.
pg_restore -U demo -W -h <hostname>.postgres.database.azure.com -p 5432 --no-owner -Fc -v -d postgres dump.bak --no-acl

Catch-Up Mechanism: While the restoration is ongoing, new transactions on the source are safely recorded by the replication slot. It is critical to have sufficient storage on the source to hold the WAL files during this initial period until replication is fully active.

Create Subscription and Advance Replication Origin on Target
This step connects the test server (subscriber) to the production server (source) and manually tells the target where in the WAL log stream to begin reading changes, skipping the data already restored.

Create subscription: Creates a logical replication subscription on the target server, linking it to the source and specifying connection details, publication, and replication slot without copying existing data.
CREATE SUBSCRIPTION <subscription-name>
CONNECTION 'host=<hostname>.postgres.database.azure.com port=5432 dbname=postgres user=<username> password=<password>'
PUBLICATION <publisher-name>
WITH ( copy_data = false, create_slot = false, enabled = false, slot_name = '<publisher-name>' );

Retrieve the replication origin identifier and name on the target server, which are needed to advance the replication position:
SELECT roident, roname FROM pg_replication_origin;

Fetch the replication slot name and the restart LSN from the source server, indicating where replication should resume:
SELECT slot_name, restart_lsn FROM pg_replication_slots WHERE slot_name = '<publisher-name>';

Manually advance the replication origin on the target server to skip already restored data and start replication from the correct WAL position:
SELECT pg_replication_origin_advance('<roname>', '<restart_lsn>');

Enable Subscription: With the target server populated and the replication origin advanced, you can start the synchronization.
ALTER SUBSCRIPTION <subscription-name> ENABLE;
Result: The target server now starts consuming the WAL entries from the source, rapidly closing the gap on all transactions that occurred during the dump and restore process.

Validate Replication: Insert a record on the source and confirm it appears on the target.

Perform Cutover
Stop application traffic to the production database.
Wait for the target database to confirm zero replication lag.
Disable the subscription (ALTER SUBSCRIPTION <subscription-name> DISABLE;).
Connect the application to the new Azure Database for PostgreSQL instance.
Utilize Virtual Endpoints or a CNAME DNS record for your database connection string. By simply pointing the endpoint/CNAME to the new server, you can switch your application stack without changing hundreds of individual configuration files, making the final cutover near-instantaneous.

Conclusion
This MVU strategy using logical replication and virtual endpoints provides a safe, efficient way to upgrade PostgreSQL servers without disrupting workloads. By combining replication, endpoint management, and automation, you can achieve a smooth transition to newer versions while maintaining high availability. For an alternative approach, check out our blog on using the Migration Service for MVU: Hacking the migration service in Azure Database for PostgreSQL

Exciting things on the horizon for PostgreSQL fans @ Ignite 2025
If you’re passionate about PostgreSQL or just curious about what’s new, you’ll want to join us at Microsoft Ignite 2025. We have a packed lineup, including sessions exploring cutting-edge features and exclusive giveaways at the PostgreSQL on Azure booth. Haven’t registered yet? Now’s the time – sign up for Microsoft Ignite and start building your schedule. Below are the must-see PostgreSQL on Azure activities, with highlights of what you’ll learn at each. Add these to your agenda today. Sessions can fill up fast! Theater sessions: get a first look, fast I know from experience that attention spans can start to wane after hours-long keynotes, content-rich sessions, and conference socializing. Luckily, we have a couple of theater sessions that offer snackable but substantial information in less time than it will take to grab lunch. And they’re located conveniently on the main conference floor. PostgreSQL on Azure: Your launchpad for intelligent apps and agents (THR705) - See how we’re making PostgreSQL AI-aware for developers to drive app and agent innovation. Includes a demo of vector similarity search, semantic operators baked into Postgres, and more! Simplifying scale-out of PostgreSQL for performant multi-tenant apps (THR706) - Discover a smarter, simpler way to scale PostgreSQL using the new Elastic Clusters feature. If your app or service is growing fast (or you want it to!), add this breakout to learn how Azure makes it easier to scale Postgres and keep it reliable. These talks are a great way to sample what’s new and decide where to dive deeper. Plus, they’re fun and demo-heavy, and who doesn’t love a good demo? Breakout sessions: a deep dive into Postgres innovations Led by Azure product leaders and executives from organizations driving innovation backed by PostgreSQL, these breakout sessions will dive into the coolest new capabilities and real-world use cases. If you want rich, technical content and more live demos, these are for you. Build mission-critical apps that scale with PostgreSQL on Azure (BRK127) - Get a closer look at the next generation of PostgreSQL on Azure. Add this session, if you’re curious about how we’re taking Postgres to the next level to support your mission-critical AI workloads. Modern data, modern apps: Innovation with Microsoft Databases (BRK134) - Gain insider knowledge on the latest innovations across open-source, SQL, and NoSQL databases, and understand how Microsoft’s integrated database portfolio supports next-gen innovation. Nasdaq Boardvantage: AI-driven governance on PostgreSQL and AI Foundry (BRK137) - Discover how a Fortune 100 merges trust with cutting-edge AI leveraging Azure’s AI-enriched and enterprise-ready solutions, including Azure Database for PostgreSQL, Azure Database for MySQL, Azure AI Foundry, Azure Kubernetes Service (AKS), and API Management. AI-assisted migration: The path to powerful performance on PostgreSQL (BRK123) - A before and after migration journey from Oracle to Azure Database for PostgreSQL. See how the new AI-assisted migration experience delivers conversion in a few clicks and minimal downtime. The blueprint for intelligent AI agents backed by PostgreSQL (BRK130) - If you’re into AI development, this session will spark ideas on bridging the gap between raw data and AI reasoning. You’ll leave with practical tips to turbocharge your AI agents with PostgreSQL. Each breakout session is 45 minutes with live demos and Q&A, so you’ll get plenty of detail and interaction with Postgres experts. 
Hands-on lab: experience coding with Azure superpowers

Do you learn best by doing? Then our guided workshop, Build advanced AI agents with PostgreSQL (Lab515), is for you. In each 75-minute session, you'll get to create a fully functional AI-powered application backed by PostgreSQL on Azure with step-by-step guidance and expert insight on the latest innovations enabling intelligent app development. All the tools and instructions you'll need are provided. Labs have limited capacity, so be sure to reserve your seat for any of the four labs in advance. This lab is a great way to understand how all the pieces come together on Azure. And you'll gain practical skills you can apply to your own projects, whether it's customer support bots, intelligent search in your app, or any scenario where PostgreSQL + AI collide.

Expert meet-up booth: meet the team, grab some swag

If you still want more Postgres (or a little Postgres souvenir), you can stop by the PostgreSQL on Azure Expert Meetup booth in the Ignite Hub. This will be our home base on the show floor, where you can:
Meet the team: I'll be there in person, along with engineers, program managers, cloud solution architects, and advocates from our team. Whether you have a burning technical question, want to share feedback, or need guidance for your specific use case, come chat with us.
Get a quick demo re-run: Sometimes a 5-minute demo is worth a thousand words, especially after you've sat through all those words already in a keynote. The booth will have a monitor and a live environment so we can walk you through select use cases if you have questions - no appointment needed.
Swag and giveaways: Ah yes, the goodies! We know conference swag is part of the fun, so we've got some special PostgreSQL-themed giveaways at the booth. I won't spoil all the surprises, but rumor has it there are some limited-edition items up for grabs.
Network with peers: The expert meet-up area is also a magnet for PostgreSQL enthusiasts. You might bump into other attendees at the booth who are tackling similar projects or challenges. Ignite is about community as much as content, so come by and spark up a conversation.

Meet you there?

Ignite is our largest event of the year. We love sharing what we've been working on and, most of all, hearing from you, the community. So, on behalf of the Azure for PostgreSQL team, thank you for your interest and support. We can't wait to show you what's new and to help you continue to succeed with Postgres. See you in San Francisco!

Azure PostgreSQL Lesson Learned #3: Fix FATAL: sorry, too many clients already
We encountered a support case involving Azure Database for PostgreSQL Flexible Server where the application started failing with connection errors. This blog explains the root cause, resolution steps, and best practices to prevent similar issues.

Azure PostgreSQL Lesson Learned #1: Fix Cannot Execute in a Read-Only Transaction After HA Failover
We encountered a support case involving Azure Database for PostgreSQL Flexible Server where the database returned a read-only error after a High Availability (HA) failover. This blog explains the root cause, resolution steps, and best practices to prevent similar issues. The issue occurred when the application attempted write operations immediately after an HA failover. The failover caused the primary role to switch, but the client continued connecting to the old primary (now standby), which is in read-only mode.

Azure PostgreSQL Lesson Learned #2: Fixing Read Only Mode Storage Threshold Explained
Co-authored with angesalsaa

The issue occurred when the server's storage usage reached approximately 95% of the allocated capacity. Automatic storage scaling was disabled. Symptoms included:
Server switching to read-only mode
Application errors indicating write failures
No prior alerts or warnings received by the customer
Example error: ERROR: cannot execute %s in a read-only transaction

Root Cause
The root cause was the server hitting the configured storage usage threshold (95%), which triggered an automatic transition to read-only mode to prevent data corruption or loss. Storage options - Azure Database for PostgreSQL | Microsoft Learn
If your storage usage is below 95% but you're still seeing the same error, please refer to this article for more information: Azure PostgreSQL Lesson Learned #1: Fix Cannot Execute in a Read-Only Transaction After HA Failover

Contributing factors:
Automatic storage scaling was disabled
Lack of proactive monitoring on storage usage
High data ingestion rate during peak hours

Specific conditions:
Customer had a custom workload with large batch inserts
No alerts configured for storage usage thresholds

Mitigation
To resolve the issue, we increased the allocated storage manually via the Azure Portal. No restart is needed after you scale up the storage because it is an online operation, but note the following: if you grow the disk from any size between 32 GiB and 4 TiB to any other size in the same range, the operation is performed without causing any server downtime. The same applies if you grow the disk from any size between 8 TiB and 32 TiB. In all those cases, the operation is performed while the server is online. However, if you increase the disk size from any value lower than or equal to 4096 GiB to any size higher than 4096 GiB, a server restart is required. In that case, you're required to confirm that you understand the consequences of performing the operation. Scale storage size - Azure Database for PostgreSQL | Microsoft Learn
We then verified the server returned to read-write mode.

Steps:
Navigate to Azure Portal > PostgreSQL Flexible Server > Compute & Storage
Increase storage size (e.g., from 100 GB to 150 GB)

Post-resolution:
Server resumed normal operations
Write operations were successful

Prevention & Best Practices
Enable automatic storage scaling to prevent hitting usage limits: Configure Storage Autogrow - Azure Database for PostgreSQL | Microsoft Learn
Set up alerts for storage usage thresholds (e.g., 80%, 90%)
Monitor storage metrics regularly using Azure Monitor or custom dashboards

Why This Matters
Failing to monitor storage and configure scaling can lead to:
Application downtime
Read-only errors impacting business-critical transactions
By following these practices, customers can ensure seamless operations and avoid unexpected read-only transitions.

Key Takeaways
Symptom: Server switched to read-only mode, causing write failures (ERROR: cannot execute INSERT in a read-only transaction).
Root Cause: Storage usage hit the 95% threshold, triggering read-only mode to prevent corruption.
Contributing Factors: Automatic storage scaling disabled. No alerts for storage thresholds. High ingestion during peak hours with large batch inserts.
Mitigation: Increased storage manually via Azure Portal (online operation unless crossing 4 TiB → restart required). Server returned to read-write mode.
Prevention & Best Practices: Enable automatic storage scaling. Configure alerts for storage usage (e.g., 80%, 90%).
Monitor storage metrics regularly using Azure Monitor or dashboards.

Prevent Accidental Deletion of an Instance in Azure Postgres
Did you know that accidental deletion of database servers is a leading source of support tickets? Read this blog post to learn how you can safeguard your Azure Database for PostgreSQL Flexible Server instances using ARM's CanNotDelete lock — an easy best practice that helps prevent accidental deletions while keeping regular operations seamless. 🌐 Prevent Accidental Deletion of an Instance in Azure Postgres