Premium SSD v2 Is Now Generally Available for Azure Database for PostgreSQL
We are excited to announce the General Availability (GA) of Premium SSD v2 for Azure Database for PostgreSQL flexible server. With Premium SSD v2, you can achieve up to 4× higher IOPS, significantly lower latency, and better price-performance for I/O-intensive PostgreSQL workloads. Because storage capacity and performance scale independently, you can now eliminate overprovisioning and unlock predictable, high-performance PostgreSQL at scale. This release is especially impactful for OLTP, SaaS, and high-concurrency applications that require consistent performance and reliable scaling under load.

In this post, we will cover:

Why Premium SSD v2: Core capabilities such as flexible disk sizing, higher performance, and independent scaling of capacity and I/O.
Premium SSD v2 vs. Premium SSD: A side-by-side overview of what's new and what's improved.
Pricing: Pricing estimates.
Performance: Benchmarking results across two workload scenarios.
Migration options: How to move from Premium SSD to Premium SSD v2 using restore and read-replica approaches.
Availability and support: Regional availability, supported features, current limitations, and how to get started.

Why Premium SSD v2?

Flexible disk size - Storage can be provisioned from 32 GiB to 64 TiB in 1 GiB increments, so you pay only for the capacity you need rather than scaling disk size to gain performance.
High performance - Achieve up to 80,000 IOPS and 1,200 MiB/s throughput on a single disk, enabling high-throughput OLTP and mixed workloads.
Adapt instantly to workload changes - With Premium SSD v2, performance is no longer tied to disk size. Independently tune IOPS and throughput without downtime, ensuring your database keeps up with real-time demand.
Free baseline performance - Premium SSD v2 includes built-in baseline performance at no additional cost. Disks up to 399 GiB automatically include 3,000 IOPS and 125 MiB/s, while disks sized 400 GiB and larger include up to 12,000 IOPS and 500 MiB/s.

Premium SSD v2 vs. 
Premium SSD: What's new?

Pricing

Pricing for Premium SSD v2 is similar to Premium SSD, but varies with the storage, IOPS, and throughput configuration you set for a Premium SSD v2 disk. Pricing information is available on the pricing page or in the pricing calculator.

Performance

Premium SSD v2 is designed for I/O-intensive workloads that require sub-millisecond disk latencies, high IOPS, and high throughput at a lower cost. To demonstrate the performance impact, we ran pgbench on Azure Database for PostgreSQL using the test profile below.

Test Setup

To minimize external variability and ensure a fair comparison:

Client virtual machines and the database server were deployed in the same availability zone in the East US region.
Compute, region, and availability zones were kept identical; the only variable changed was the storage tier.
We ran the TPC-B-like benchmark using pgbench with a database size of 350 GiB.

Test Scenario 1: Breaking the IOPS Ceiling with Premium SSD v2

Premium SSD v2 eliminates the traditional storage bottleneck by scaling linearly up to 80,000 IOPS, while Premium SSD plateaus early due to fixed performance limits. To demonstrate this, we configured each storage tier with its maximum supported IOPS and throughput while keeping all other variables constant. Premium SSD v2 achieves up to 4× higher IOPS at nearly half the cost, without requiring large disk sizes.

Note: Premium SSD requires a 32 TiB disk to reach 20K IOPS, while Premium SSD v2 achieves 80K IOPS even on a 160 GiB disk; we used a 1 TiB disk in this test to allow a larger pgbench scaling factor.

We ran pgbench across five workload profiles, ranging from 32 to 256 concurrent clients, with each test running for 20 minutes. The results go beyond incremental improvements and highlight a material shift in how applications scale with Premium SSD v2.

Throughput Scaling

As concurrency increases, Premium SSD quickly reaches its IOPS limits while Premium SSD v2 continues to scale. 
At 32 clients: Premium SSD v2 achieved 10,562 TPS vs. 4,123 TPS on Premium SSD, a 156% performance improvement.
At 256 clients: Under higher load, Premium SSD v2 achieved over 43,000 TPS, a 279% improvement over the 11,465 TPS observed on Premium SSD.

Latency Stability

Throughput indicates how much work is done; latency reflects how quickly users experience it. Premium SSD v2 maintains consistently low latency even as the workload increases.

Reduced wait times: 61–74% lower latency across all test phases.
Consistency under load: Premium SSD latency increased to 22.3 ms, while Premium SSD v2 maintained 5.8 ms, remaining stable even under peak load.

IOPS Behavior

The table below summarizes the IOPS behavior observed during benchmarking for both storage tiers.

| Dimension | Premium SSD | Premium SSD v2 |
| --- | --- | --- |
| IOPS | Lower baseline performance; hits limits early | ~2× higher IOPS at low concurrency; up to 4× higher IOPS at peak load |
| IOPS plateau | Throughput stalls at ~20K IOPS from 64 to 256 clients | Scales from ~29K IOPS (32 clients) to ~80K IOPS (256 clients) |
| Additional clients | Adding clients does not increase throughput | Additional clients continue to drive higher throughput |
| Primary bottleneck | Storage becomes the bottleneck early | No single bottleneck observed |
| Scaling behavior | Stops scaling early | Scales linearly with workload demand |
| Resource utilization | Disk saturation leaves CPU and memory underutilized | Balanced utilization across IOPS, CPU, and memory |
| Key takeaway | Storage limits performance before compute is fully used | Unlocks higher throughput and lower latency by fully utilizing compute resources |

Test Scenario 2: Better Performance at the Same Price

At the same price point, Premium SSD v2 delivers higher throughput and lower latency than Premium SSD without requiring any application changes. 
To demonstrate this, we ran multiple pgbench tests using two workload configurations, 8 clients / 8 threads and 32 clients / 32 threads, with each run lasting 20 minutes. Results were consistent across all runs, with Premium SSD v2 consistently outperforming Premium SSD. Both configurations cost $578/month; the only difference is storage performance.

Results:

Moderate concurrency (8 clients): Premium SSD v2 delivered approximately 154% higher throughput (transactions per second) than Premium SSD (1,813 TPS vs. 715 TPS), while average latency decreased by about 60% (from ~11.1 ms to ~4.4 ms).
High concurrency (32 clients): The performance gap widens as concurrency grows. Premium SSD v2 delivered about 169% higher throughput than Premium SSD (3,643 TPS vs. ~1,352 TPS) and reduced average latency by around 67% (from ~26.3 ms to ~8.7 ms).

IOPS Behavior

In the 8-client, 8-thread test, Premium SSD reached its IOPS ceiling early, operating at 100% utilization, while Premium SSD v2 retained approximately 30% headroom under the same workload, delivering 8,037 IOPS vs. 3,761 IOPS on Premium SSD. When the workload increased to 32 clients and 32 threads, both tiers approached their IOPS limits; however, Premium SSD v2 sustained a significantly higher performance ceiling, delivering approximately 2.75× higher IOPS (13,620 vs. 4,968) under load.

Key takeaway: With Premium SSD v2, you do not need to choose between cost and performance; you get both. At the same price, applications run faster, scale further, and maintain lower latency without any code changes.

Migrate from Premium SSD to Premium SSD v2

Migration is simple and fast. You can move from Premium SSD to Premium SSD v2 using the two strategies below with minimal downtime. These methods are generally quicker than logical migration strategies, such as exporting and restoring data with pg_dump and pg_restore. 
Restore from Premium SSD to Premium SSD v2
Migrate using read replicas

When migrating from Premium SSD to Premium SSD v2, using a virtual endpoint helps keep downtime to a minimum and allows applications to continue operating without configuration changes after the migration. After the migration completes, you can stop the original server until your backup requirements are met. Once the required backup retention period has elapsed and all new backups are available on the new server, the original server can be safely deleted.

Region Availability & Supported Features

Premium SSD v2 is available in 48 regions worldwide for Azure Database for PostgreSQL flexible server. For the most up-to-date information on regional availability, supported features, and current limitations, refer to the official Premium SSD v2 documentation.

Getting Started

To learn more, review the official documentation for storage configuration available with Azure Database for PostgreSQL. Your feedback is important to us. Have suggestions, ideas, or questions? We would love to hear from you: https://aka.ms/pgfeedback.

Handling Unique Constraint Conflicts in Logical Replication
Authors: Ashutosh Sharma, Senior Software Engineer, and Gauri Kasar, Product Manager

Logical replication can keep your PostgreSQL environments in sync, replicating selected tables with minimal impact on the primary workload. But what happens when your subscriber hits a duplicate key error and replication grinds to a halt? If you've seen a unique-constraint violation while replicating between Azure Database for PostgreSQL servers, you're not alone. This blog covers common causes, prevention tips, and practical recovery options.

In PostgreSQL logical replication, the subscriber can fail with a unique-constraint error when it tries to apply a change that would create a duplicate key:

duplicate key value violates unique constraint

Why does this happen?

The error occurs when an INSERT or UPDATE would create a value that already exists in a column (or set of columns) protected by a UNIQUE constraint (including a PRIMARY KEY). In logical replication, this most commonly happens because of local writes on the subscriber, or because the table is subscribed from multiple publishers. These conflicts are resolved on the subscriber side. Common causes:

Local writes on the subscriber: a row with the same primary key or unique key is inserted on the subscriber before the apply worker processes the corresponding change from the publisher.
Multi-origin / multi-master without conflict-free keys: two origins generate (or replicate) the same unique key.
Initial data synchronization issues: the subscriber already contains data when the subscription is created with initial copy enabled, resulting in duplicate inserts during the initial table sync.

How to avoid this?

Avoid local writes on subscribed tables (treat the subscriber as read-only for replicated relations).
Avoid subscribing to the same table from multiple publishers unless you have explicit conflict handling and a conflict-free key design.
Enable server logs to help identify and troubleshoot unique-constraint conflicts more effectively. 
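As a minimal sketch of the most common cause, consider a local write on the subscriber racing ahead of the apply worker (the table name and key values here are hypothetical):

```sql
-- On the publisher (table t1 has a primary key on id and is in a publication):
INSERT INTO t1 (id, val) VALUES (42, 'from publisher');

-- On the subscriber, a local write lands first with the same key:
INSERT INTO t1 (id, val) VALUES (42, 'local write');

-- When the apply worker later replays the publisher's insert, it fails
-- and replication on this subscription halts:
--   ERROR:  duplicate key value violates unique constraint "t1_pkey"
```

This is why treating the subscriber as read-only for replicated tables is the simplest prevention.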
Refer to the official documentation to configure and access PostgreSQL logs.

How to handle conflicts (recovery options)

Option 1: Delete the conflicting row on the subscriber

Use the subscriber logs to identify the key (or row) causing the conflict, then delete the row on the subscriber with a DELETE statement. Resume apply and repeat if more conflicts appear.

Option 2: Use conflict logs and skip the conflicting transaction (PostgreSQL 17+)

Starting with PostgreSQL 17, logical replication provides detailed conflict logging on the subscriber, making it easier to understand why replication stopped and which transaction caused the failure. When a replicated INSERT would violate a non-deferrable unique constraint on the subscriber, for example when a row with the same key already exists, the apply worker detects this as an insert_exists conflict and stops replication. In this case, PostgreSQL logs the conflict along with the transaction's finish LSN, which uniquely identifies the failing transaction:

ERROR: conflict detected on relation "public.t2": conflict=insert_exists ... in transaction 754, finished at 0/034F4090

ALTER SUBSCRIPTION <subscription_name> SKIP (lsn = '0/034F4090');

Option 3: Rebuild (re-sync) the table

Rebuilding (re-syncing) a table is the safest and most deterministic way to resolve logical replication conflicts caused by pre-existing data differences or local writes on the subscriber. This approach is especially useful when a table repeatedly fails with unique-constraint violations and it is unclear which rows are out of sync.

Step 1 (subscriber): Disable the subscription.

ALTER SUBSCRIPTION <subscription_name> DISABLE;

Step 2 (subscriber): Remove the local copy of the table so it can be re-copied.

TRUNCATE TABLE <conflicting_table>;

Step 3 (publisher): Ensure the publication will (re)send the table (one approach is to recreate the publication entry for that table). 
ALTER PUBLICATION <pub_with_conflicting_table> DROP TABLE <conflicting_table>;
CREATE PUBLICATION <pub_with_conflicting_table_rebuild> FOR TABLE <conflicting_table>;

Step 4 (subscriber): Create a new subscription (or refresh the existing one) to re-copy the table.

CREATE SUBSCRIPTION <sub_rebuild> CONNECTION '<connection_string>' PUBLICATION <pub_with_conflicting_table_rebuild>;

Step 5 (subscriber): Re-enable the original subscription (if applicable).

ALTER SUBSCRIPTION <subscription_name> ENABLE;

Conclusion

In most cases, these conflicts occur due to local changes on the subscriber or differences in data that existed before logical replication was fully synchronized. Avoid direct modifications on subscribed tables and ensure that the replication setup is properly planned, especially for tables with unique constraints.

February 2026 Recap: Azure Database for PostgreSQL
Hello Azure Community,

We're excited to share the February 2026 recap for Azure Database for PostgreSQL, featuring a set of updates focused on speed, simplicity, and better visibility. From Terraform support for Elastic Clusters and a refreshed VM SKU selection experience in the Azure portal to built-in Grafana dashboards, these improvements make it easier to build, operate, and scale PostgreSQL on Azure. This recap also includes practical GIN index tuning guidance, enhancements to the PostgreSQL VS Code extension, and improved connectivity for azure_pg_admin users.

Features

Terraform support for Elastic Clusters - Generally Available
Dashboards with Grafana - Generally Available
Easier way to choose VM SKUs on portal - Generally Available
What's New in the PostgreSQL VS Code Extension
Priority Connectivity for azure_pg_admin Users
Guide on 'gin_pending_list_limit' for GIN indexes

Terraform support for Elastic Clusters

Terraform now supports provisioning and managing Azure Database for PostgreSQL Elastic Clusters, enabling customers to define and operate elastic clusters using infrastructure-as-code workflows. With this support, you can create, scale, and manage multi-node PostgreSQL clusters through Terraform, making it easier to automate deployments, replicate environments, and integrate elastic clusters into CI/CD pipelines. This improves operational consistency and simplifies management for horizontally scalable PostgreSQL workloads. Learn more about building and scaling with Azure Database for PostgreSQL elastic clusters.

Dashboards with Grafana — Now Built-In

Grafana dashboards are now natively integrated into the Azure portal for Azure Database for PostgreSQL. This removes the need to deploy or manage a separate Grafana instance. With just a few clicks, you can visualize key metrics and logs side by side, correlate events by timestamp, and gain deep insights into performance, availability, and query behavior, all in one place. 
Whether you're troubleshooting a spike, monitoring trends, or sharing insights with your team, this built-in experience simplifies day-to-day observability with no added cost or complexity. Try it under Azure Portal > Dashboards with Grafana in your PostgreSQL server view. For more details, see the blog post: Dashboards with Grafana — Now in Azure Portal for PostgreSQL.

Easier way to choose VM SKUs on portal

We've improved the VM SKU selection experience in the Azure portal to make it easier to find and compare the right compute options for your PostgreSQL workload. The updated experience organizes SKUs in a clearer, more scannable view, helping you quickly compare key attributes like vCores and memory without extra clicks. This streamlined approach reduces guesswork and makes selecting the right SKU faster and more intuitive.

What's New in the PostgreSQL VS Code Extension

The VS Code extension for PostgreSQL helps developers and database administrators work with PostgreSQL directly from VS Code. It provides capabilities for querying, schema exploration, diagnostics, and Azure PostgreSQL management, allowing users to stay within their editor while building and troubleshooting. This release focuses on improving developer productivity and diagnostics. It introduces new visualization capabilities, Copilot-powered experiences, enhanced schema navigation, and deeper Azure PostgreSQL management directly from VS Code.

New Features & Enhancements

Query Plan Visualization: Graphical execution plans can now be viewed directly in the editor, making it easier to diagnose slow queries without leaving VS Code.
AGE Graph Rendering: Support is now available for automatically rendering graph visualizations from Cypher queries, improving the experience of working with graph data in PostgreSQL. 
Object Explorer Search: A new graphical search experience in Object Explorer lets users quickly find tables, views, functions, and other objects across large schemas, addressing one of the highest-rated user feedback requests.
Azure PostgreSQL Backup Management: Users can now manage Azure Database for PostgreSQL backups directly from the Server Dashboard, including listing backups and configuring retention policies.
Server Logs Dashboard: A new Server Dashboard view surfaces Azure Database for PostgreSQL server logs and retention settings for faster diagnostics. Logs can be opened directly in VS Code and analyzed using the built-in GitHub Copilot integration.

This release also includes several reliability improvements and bug fixes, including resolving connection pool exhaustion issues, fixing Docker container creation failures when no password is provided, and improving stability around connection profiles and schema-related operations.

Priority Connectivity for azure_pg_admin Users

Members of the azure_pg_admin role can now use connections from the pg_use_reserved_connections pool. This ensures that an admin always has at least one available connection, even if all standard client connections from the server connection pool are in use. By making sure admin users can log in when the client connection pool is full, this change prevents lockout situations and lets admins handle emergencies without competing for available connection slots.

Guide on 'gin_pending_list_limit' for GIN indexes

Struggling with slow GIN index inserts in PostgreSQL? This post dives into the often-overlooked gin_pending_list_limit parameter and how it directly impacts insert performance. Learn how GIN's pending list works, why the right limit matters, and get practical guidance on tuning it to strike the right balance between write performance and index maintenance overhead. For a deeper dive into gin_pending_list_limit and tuning guidance, see the full blog here. 
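To illustrate the kind of tuning the post discusses, here is a brief sketch using standard PostgreSQL syntax (the table and index names are hypothetical; gin_pending_list_limit is specified in kilobytes):

```sql
-- Create a GIN index with a larger pending list to favor insert throughput
CREATE INDEX idx_docs_tsv ON docs USING gin (tsv)
  WITH (gin_pending_list_limit = 4096);  -- 4 MB, value is in kB

-- Lower the limit later if lookups slow down from scanning a large pending list
ALTER INDEX idx_docs_tsv SET (gin_pending_list_limit = 512);

-- Flush the pending list on demand (for example, after a bulk load)
SELECT gin_clean_pending_list('idx_docs_tsv'::regclass);
```

A larger pending list batches more entries before they are merged into the main index structure, trading faster writes for more work at query and cleanup time; the right value depends on your write and read mix.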
Learning Bytes

Create Azure Database for PostgreSQL elastic clusters with Terraform

Elastic clusters in Azure Database for PostgreSQL let you scale PostgreSQL horizontally using a managed, multi-node architecture. With elastic clusters now generally available, you can provision and manage them using infrastructure-as-code, making it easier to automate deployments, standardize environments, and integrate PostgreSQL into CI/CD workflows.

Elastic clusters are a good fit when you need:

Horizontal scale for large or fast-growing PostgreSQL workloads
Multi-tenant applications or sharded data models
Repeatable and automated deployments across environments

The following example shows a basic Terraform configuration to create an Azure Database for PostgreSQL flexible server configured as an elastic cluster:

resource "azurerm_postgresql_flexible_server" "elastic_cluster" {
  name                   = "pg-elastic-cluster"
  resource_group_name    = <rg-name>
  location               = <region>
  administrator_login    = var.admin_username
  administrator_password = var.admin_password
  version                = "17"
  sku_name               = "GP_Standard_D4ds_v5"
  storage_mb             = 131072

  cluster {
    size = 3
  }
}

Conclusion

That's a wrap for the February 2026 Azure Database for PostgreSQL recap. We're continuing to focus on making PostgreSQL on Azure easier to build, operate, and scale, whether through better automation with Terraform, improved observability, or a smoother day-to-day developer and admin experience. Your feedback is important to us. Have suggestions, ideas, or questions? We'd love to hear from you: https://aka.ms/pgfeedback.

Microsoft at PGConf India 2026
I'm genuinely excited about PGConf India 2026. Over the past few editions, the conference has continued to grow year over year, both in size and in impact, and it has firmly established itself as one of the key events on the global PostgreSQL calendar. That momentum was very evident again in the depth, breadth, and overall quality of the program for PGConf India 2026. Microsoft is proud to be a diamond sponsor for the conference again this year.

At Microsoft, we continue to contribute to the upstream PostgreSQL open-source project, as well as serving our customers with our Postgres managed service offerings, both Azure Database for PostgreSQL and our newest Postgres offering, Azure HorizonDB. On the open-source front, Microsoft had 540 commits in PG18, including major features like Asynchronous IO. We're also excited to grow our Postgres open-source contributors team, and very happy to welcome Noah Misch. Noah is a Postgres committer with deep expertise in PostgreSQL security, focused on correctness and reliability in PostgreSQL's core.

Microsoft at PGConf India 2026: Highlights from Our Speakers

PGConf India has several tracks, all of which have some great talks I am looking forward to. First, the plug. 😊 Microsoft has some amazing talks this year, with 8 different talks spread across all the tracks.

Postgres on Azure: Scaling with Azure HorizonDB, AI, and Developer Workflows, by Aditya Duvuri & Divya Bhargov
Resizing shared buffer pool in a running PostgreSQL server: important, yet impossible, by Ashutosh Bapat
Ten Postgres Hacker Journeys—and what they teach us, by Claire Giordano
How Postgres can leverage disk bandwidth for better TPS, by Nikhil Chawla
AWSM FSM! 
Free Space Maps Decoded, by Nikhil Sontakke
Journey of developing a Performance Optimization Feature in PostgreSQL, by Rahila Syed
Build Agentic AI with Semantic Kernel and Graph RAG on PostgreSQL, by Shriram Muthukrishnan & Palak Chaturvedi
All things Postgres @ Microsoft (2026 edition), by Sumedh Pathak

Claire is an amazing speaker and has done a lot of work over the last several years documenting and understanding PostgreSQL committers and hackers. Her talk will definitely have some key insights and nuggets of information. Rahila's talk will go in depth on performance optimization features, how best to test and benchmark them, and all the tools and tricks she used as part of the feature development. This should be a must-see talk for anyone doing performance work.

Diving Deep: Case Studies & Technical Tracks

One of the tracks I'm really excited about is the Case Study track. I see these as similar to 'experience' papers in academia: an experience paper documents what actually happened when applying a technique or system in the real world, what worked, what didn't, and why. One of the talks I'm looking forward to is 'Operating Postgres Logical Replication at Massive Scale' by Sai Srirampur from ClickHouse. Logical replication is an extremely useful tool, and I'm curious to learn more about pitfalls and lessons learned when running it at large scale. Another interesting one is 'Understanding the importance of the commit log through a database corruption' by Amit Kumar Singh from EDB.

The Database Engine Developers track lets us go deep into the PostgreSQL code base and better understand how PostgreSQL is built. Even if you are not a database developer, this track is useful for understanding how and why PostgreSQL does things, helping you be a better user of the database. 
With the rise of larger machines and more memory available in the cloud, newer memory architectures and tiers, and serverless product offerings, there is a lot of deep-dive content on PostgreSQL's memory architecture. Some great talks focus on this area and should be must-sees for anyone interested in the topic:

Resizing shared buffer pool in a running PostgreSQL server: important, yet impossible, by Ashutosh Bapat from Microsoft
From Disk to Data: Exploring PostgreSQL's Buffer Management, by Lalit Choudhary from PurnaBIT
Beyond shared_buffers: On-Demand Memory in Modern PostgreSQL, by Vaibhav Popat from Google

Finally, the Database Administration and Application Developer tracks have some really great content as well. They cover a wide range of topics, from PII data, HA/DR, and query tuning to connection pooling and understanding conflict detection and resolution.

PostgreSQL in India: A Community Effort Worth Celebrating

Conferences like these are a rich source of information, dramatically increasing my personal understanding of the product and the ecosystem. They are also a great way to meet other practitioners in the space and connect with people in the industry. For people in Bangalore, another great option is the PostgreSQL Bangalore Meetup, and I'm super happy that Microsoft was able to join the ranks of other companies by hosting the eighth iteration of this meetup.

Finally, I would be remiss not to mention the hard work done by the PGConf India organizing team, including Pavan Deolasee, Ashish Mehra, Nikhil Sontakke, Hari Kiran, and Rushabh Lathia, who are making all of this happen. Also, a big shout-out to the PGConf India Program Committee (Amul Sul, Dilip Kumar, Marc Linster, Thomas Munro, Vigneshwaran C) for putting together an amazing set of talks. I look forward to meeting all of you in Bangalore! Be sure to drop by the Microsoft booth to say hello (and to snag a free pair of our famous socks). 
I'd love to learn more about how you're using Postgres.

January 2026 Recap: Azure Database for PostgreSQL
We just dropped the January 2026 recap for Azure Database for PostgreSQL, and this one's all about developer velocity, resiliency, and production-ready upgrades.

• PostgreSQL 18 support via Terraform (create + upgrade)
• Premium SSD v2 (Preview) with HA, replicas, Geo-DR & MVU
• Latest PostgreSQL minor version releases
• Ansible module GA with latest REST API features
• Zone-redundant HA now configurable via Azure CLI
• SDKs GA (Go, Java, JS, .NET, Python) on stable APIs

Read the full recap and see what's new (and what's coming): January 2026 Recap: Azure Database for PostgreSQL