Azure Database Support Blog

Azure PostgreSQL Lesson Learned #14: Hitting the Max Storage Limit That Blocks Further Scale‑Up

angesalsaa
Jan 28, 2026

We investigated a customer case where the server was unable to scale up storage any further after reaching 32 TiB, even though the workload required additional capacity. The platform blocked all scale‑up attempts because the server was deployed in a region where 32 TiB is the maximum supported storage limit for that tier. This article explains why this occurs, the regional limitations involved, and how to plan the appropriate migration path to overcome this constraint.

Co‑authored with HaiderZ-MSFT

Case Overview

A customer attempted to increase storage for their Azure Database for PostgreSQL Flexible Server but was unable to exceed 32 TiB. Investigation confirmed that the server was deployed in a region where the maximum supported storage is capped at 32 TiB, meaning no additional scale‑up was possible.

This limitation required exploring alternative storage options and potential redeployment strategies to support growing workload demands.

Symptoms: How the Problem Appears

  • Storage size maxes out at 32 TiB
  • Portal does not allow selecting any higher value

This behavior is expected because the platform enforces the maximum storage supported by the region and tier.
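The same ceiling is visible outside the portal. As a minimal sketch (the resource group and server name below are placeholders), an Azure CLI scale‑up request beyond the regional cap is rejected:

  # Placeholder names; replace myResourceGroup / my-pg-server with real values.
  # --storage-size is expressed in GiB; 32 TiB = 32768 GiB, the regional cap here.
  # Requesting anything above that cap (for example 36864 GiB) is rejected by
  # the platform with a validation error, matching the portal behavior.
  az postgres flexible-server update \
    --resource-group myResourceGroup \
    --name my-pg-server \
    --storage-size 36864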

Root Cause: Regional Storage Limit of 32 TiB Reached

In the customer's region, Azure Database for PostgreSQL Flexible Server supports at most 32 TiB of storage on Premium SSD. Once this limit is reached:

  • Further storage growth is not possible
  • Storage cannot be downgraded or changed in-place
  • The customer must migrate to a tier that supports larger disks (e.g., Premium SSD v2, where available)

Premium SSD v2 supports:

  • Up to 64 TiB
  • 1 GiB granular sizing
  • Higher throughput and IOPS

However, its availability and supported capabilities can vary by region.

Step‑By‑Step Troubleshooting & Migration Guidance

STEP 1 — Validate Current Storage

Azure Portal → Server → Compute + Storage
This confirms that storage is already at the maximum the region can offer.
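The same check can be scripted with the Azure CLI; a minimal sketch, assuming placeholder resource names (property names can vary slightly across CLI/API versions):

  # Shows the provisioned storage size (GiB) and the storage type in use.
  az postgres flexible-server show \
    --resource-group myResourceGroup \
    --name my-pg-server \
    --query "{storageSizeGb: storage.storageSizeGb, storageType: storage.type}" \
    --output table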

STEP 2 — Confirm Tier Limitations

Check the documented storage caps by tier and region in Storage options | Microsoft Learn.
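The capabilities exposed in a specific region can also be queried from the Azure CLI; a minimal sketch, with eastus as a placeholder region:

  # Lists the compute and storage capabilities available to PostgreSQL
  # Flexible Server in the given region; the default JSON output includes
  # the supported storage details.
  az postgres flexible-server list-skus --location eastus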

STEP 3 — Attempt a PITR‑Based Redeployment

Migration from Premium SSD → Premium SSD v2 is possible by following these steps: https://learn.microsoft.com/en-us/azure/postgresql/compute-storage/concepts-storage-migrate-ssd-to-ssd-v2?tabs=portal-restore-custom-point

Important:
The PITR wizard always deploys the restored server in the same region as the source server.
There is no option to change regions during PITR.

Therefore, customers must check whether Premium SSD v2 is selectable during PITR within that same region.
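For reference, the same point‑in‑time restore can also be initiated from the Azure CLI; a minimal sketch, assuming placeholder names and an example restore timestamp (whether Premium SSD v2 can be chosen for the restored server still depends on regional support, as described in the linked article):

  # The restored server is always created in the same region as the source.
  az postgres flexible-server restore \
    --resource-group myResourceGroup \
    --name my-pg-server-restored \
    --source-server my-pg-server \
    --restore-time "2026-01-28T02:00:00Z"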

If SSD v2 is not listed in the dropdown, it means:

  • The region does not support SSD v2 for PostgreSQL Flexible Server
  • The customer cannot exceed 32 TiB on Flexible Server in that region

Final Outcome

Tip: Alternative Path — Use Dump & Restore to a New Server With Larger Storage

If Premium SSD v2 does not appear as an available storage option during the Premium SSD → Premium SSD v2 migration workflow, there is still another viable path to exceed the 32 TiB limit:

➡ Option: Perform a pg_dump / pg_restore to a New Server with Higher Storage Capacity

Create a brand‑new Flexible Server in a region that supports higher storage tiers (including Premium SSD v2), then migrate the data using standard PostgreSQL backup tools such as pg_dump and pg_restore.
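A minimal sketch of that path, assuming placeholder server names, database name, and user, and using a directory‑format dump so both dump and restore can run in parallel (the target database must already exist on the new server):

  # 1. Dump the source database in directory format with parallel jobs.
  pg_dump \
    --host=old-server.postgres.database.azure.com \
    --username=dbadmin \
    --dbname=appdb \
    --format=directory --jobs=4 \
    --file=appdb_dump

  # 2. Restore into the new Flexible Server provisioned with larger storage.
  pg_restore \
    --host=new-server.postgres.database.azure.com \
    --username=dbadmin \
    --dbname=appdb \
    --jobs=4 \
    appdb_dump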

Best Practices

  • Add proactive storage alerts (70%, 80%, 90%); a CLI sketch follows this list.
  • Validate regional storage limits before provisioning servers.
  • Architect for growth by selecting a region/tier that aligns with future capacity needs.
  • Request Premium SSD v2 quota increases early when planning large workloads.
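
For the first bullet, a storage alert can be scripted; a minimal sketch at the 80% threshold, assuming placeholder resource names and the storage_percent metric emitted by Flexible Server (repeat for 70% and 90%):

  # Resolve the server's resource ID, then create a metric alert on it.
  SERVER_ID=$(az postgres flexible-server show \
    --resource-group myResourceGroup \
    --name my-pg-server \
    --query id --output tsv)

  az monitor metrics alert create \
    --name pg-storage-above-80 \
    --resource-group myResourceGroup \
    --scopes "$SERVER_ID" \
    --condition "avg storage_percent > 80" \
    --description "PostgreSQL Flexible Server storage above 80 percent"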

Helpful References

  • Storage options | Microsoft Learn
  • Migrate from Premium SSD to Premium SSD v2: https://learn.microsoft.com/en-us/azure/postgresql/compute-storage/concepts-storage-migrate-ssd-to-ssd-v2?tabs=portal-restore-custom-point
