Azure PostgreSQL Lesson Learned #14: Hitting the Max Storage Limits Blocking Further Scale‑Up
Co‑authored with HaiderZ-MSFT

Case Overview
A customer attempted to increase storage for their Azure Database for PostgreSQL Flexible Server but was unable to exceed 32 TiB. Investigation confirmed that the server was deployed in a region where the maximum supported storage is capped at 32 TiB, so no further scale‑up was possible. This limitation required exploring alternative storage options and potential redeployment strategies to support growing workload demands.

Symptoms: How the Problem Appears
- Storage size maxes out at 32 TiB
- The portal does not allow selecting any higher value
This behavior is expected because the platform enforces the maximum storage supported by the region and tier.

Root Cause: Regional Storage Limit of 32 TiB Reached
Azure Database for PostgreSQL Flexible Server supports up to 32 TiB of storage on Premium SSD within the customer’s region. Once this limit is reached:
- Further storage growth is not possible
- Storage cannot be downgraded or changed in place
- The customer must migrate to a tier that supports larger disks (e.g., Premium SSD v2, where available)

Premium SSD v2 supports:
- Up to 64 TiB
- 1 GiB granular sizing
- Higher throughput and IOPS
However, its availability and supported capabilities can vary by region.

Step‑By‑Step Troubleshooting & Migration Guidance

STEP 1 — Validate Current Storage
Azure Portal → Server → Compute + Storage. This confirms whether storage is already at the maximum the region can offer.

STEP 2 — Confirm Tier Limitations
Check the documentation for storage caps by tier and region: Storage options | Microsoft Learn

STEP 3 — Attempt a PITR‑Based Redeployment
Migration from Premium SSD to Premium SSD v2 is possible by following these steps: https://learn.microsoft.com/en-us/azure/postgresql/compute-storage/concepts-storage-migrate-ssd-to-ssd-v2?tabs=portal-restore-custom-point
⚠ Important: The PITR wizard always deploys the restored server in the same region as the source server. There is no option to change regions during PITR. Therefore, customers must check whether Premium SSD v2 becomes selectable during PITR within that same region. If SSD v2 is not listed in the dropdown, it means:
- The region does not support SSD v2 for PostgreSQL Flexible Server
- The customer cannot exceed 32 TiB on Flexible Server in that region

Final Outcome
The server reached the maximum supported storage (32 TiB) for its region, and storage could not be increased further. To exceed this limit, the customer needs to move to Premium SSD v2 (if supported). Migration must be done by following these steps: https://learn.microsoft.com/en-us/azure/postgresql/compute-storage/concepts-storage-migrate-ssd-to-ssd-v2?tabs=portal-restore-custom-point

Tip: Alternative Path — Use Dump & Restore to a New Server With Larger Storage
If Premium SSD v2 does not appear as an available storage option during the Premium SSD to Premium SSD v2 migration workflow, the customer still has another viable path to exceed the 32 TiB limit:
➡ Option: Perform a pg_dump / pg_restore to a New Server with Higher Storage Capacity
Data can be migrated by creating a brand‑new Flexible Server in a region that supports higher storage tiers (including Premium SSD v2), then migrating the data using standard PostgreSQL backup tools (a scripted sketch appears after the Helpful References below).

Best Practices
- Add proactive storage alerts (70%, 80%, 90%) — see the sketch after this list.
- Validate regional storage limits before provisioning servers.
- Architect for growth by selecting a region/tier that aligns with future capacity needs.
- Request Premium SSD v2 quota increases early when planning large workloads.
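To make the first best practice concrete, here is a minimal sketch that creates the three storage alerts with the Azure SDK for Python (azure-mgmt-monitor and azure-identity). The resource names are placeholders, and it assumes the server emits the standard storage_percent usage metric; treat this as a starting point rather than a drop-in script.

```python
# Minimal sketch: create 70/80/90% storage alerts for a PostgreSQL Flexible
# Server. Resource names are placeholders; "storage_percent" is assumed to be
# the storage-usage metric emitted by the server.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "my-rg"               # placeholder
SERVER_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.DBforPostgreSQL/flexibleServers/my-server"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for threshold in (70, 80, 90):
    client.metric_alerts.create_or_update(
        resource_group_name=RESOURCE_GROUP,
        rule_name=f"pg-storage-{threshold}pct",
        parameters=MetricAlertResource(
            location="global",  # metric alert rules are global resources
            description=f"Storage usage above {threshold}%",
            severity=3 if threshold < 90 else 2,  # escalate as usage climbs
            enabled=True,
            scopes=[SERVER_ID],
            evaluation_frequency="PT5M",
            window_size="PT15M",
            criteria=MetricAlertSingleResourceMultipleMetricCriteria(
                all_of=[
                    MetricCriteria(
                        name=f"storage-over-{threshold}",
                        metric_name="storage_percent",
                        time_aggregation="Average",
                        operator="GreaterThan",
                        threshold=threshold,
                    )
                ]
            ),
            actions=[],  # attach action groups here to actually notify someone
        ),
    )
```

Attaching an action group (email, SMS, webhook) to each rule is what turns these from silent thresholds into proactive alerts.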
Helpful References
- Premium SSD v2 for Azure PostgreSQL Flexible Server: https://learn.microsoft.com/azure/postgresql/compute-storage/concepts-storage-premium-ssd-v2
- How to Migrate from Premium SSD → Premium SSD v2 (PITR): https://learn.microsoft.com/en-us/azure/postgresql/compute-storage/concepts-storage-migrate-ssd-to-ssd-v2?tabs=portal-restore-custom-point
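Addendum: for readers taking the dump-and-restore path from the tip above, here is a minimal sketch that drives the standard PostgreSQL tools from Python. The server hosts, database name, and user are placeholders (not values from the case), and authentication is left to ~/.pgpass or the PGPASSWORD environment variable.

```python
# Minimal sketch: dump from the source server and restore into a new server
# with larger storage. All connection details are placeholders; the target
# database must already exist before pg_restore runs.
import subprocess

SOURCE_HOST = "source-server.postgres.database.azure.com"  # placeholder
TARGET_HOST = "new-server.postgres.database.azure.com"     # placeholder
DB_NAME = "appdb"                                          # placeholder
DB_USER = "adminuser"                                      # placeholder
DUMP_FILE = "appdb.dump"

# Custom format (-Fc) produces a compressed archive that pg_restore can
# load with parallel jobs.
subprocess.run(
    ["pg_dump", "-h", SOURCE_HOST, "-U", DB_USER, "-d", DB_NAME,
     "-Fc", "-f", DUMP_FILE],
    check=True,
)

# Restore with 4 parallel jobs (-j 4); tune this to the target's vCores.
subprocess.run(
    ["pg_restore", "-h", TARGET_HOST, "-U", DB_USER, "-d", DB_NAME,
     "-j", "4", DUMP_FILE],
    check=True,
)
```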
Unlocking Flexibility with Azure Files Provisioned V2

In this episode of E2E: 10-Minute Drill, host Pierre Roman sits down with Will Gries, Principal PM in Azure Storage, to explore the newly released Azure Files Provisioned V2 billing model. This model introduces a game-changing approach to cloud file storage by allowing users to provision storage, IOPS, and throughput independently—a major leap forward in flexibility and cost optimization.

📺 Watch the full episode: https://youtu.be/Tb6y0fvJBMs

Previously, Standard Azure Files used a pay-as-you-go model where you pay per GB of storage plus transaction fees for every file operation (reads, writes, lists, etc.). That often made bills hard to predict. There was also a Premium tier (Provisioned V1 on SSDs), where you pre-allocated capacity; that gave you fixed performance and no transaction charges, but you might have had to over-provision storage to get more IOPS, whether you needed the extra space or not.

Provisioned V2 changes the game: you can now pre-provision the storage, IOPS, and throughput you need for a file share. That’s what you pay for – and nothing more. There are no per-operation fees at all in V2. It’s like moving from a metered phone plan to an unlimited plan: a stable bill each month, and you can adjust your “plan” up or down as needed.

Key Benefits of Provisioned V2

Predictable (and Lower) Costs: No more paying for every single read/write. You pay a known monthly rate based on the resources you reserve, which means no surprise cost spikes when your usage increases. In many cases, Provisioned V2 actually lowers the total cost for active workloads: Microsoft has noted that common workloads might save on the order of 30–50% compared to the old pay-as-you-go model, thanks to lower storage prices and zero transaction fees.

High Performance on Demand: Each file share can now scale up to 50,000 IOPS and 5 GiB/sec throughput, and support up to 256 TiB of data in a single share. That’s a big jump from the old limits. More importantly, you’re in control of the performance: if you need more IOPS or bandwidth, you can dial it up anytime (and dial it down later if you overshot). Provisioned V2 also includes burst capacity for short spikes, so your share can automatically handle occasional surges above your baseline IOPS. Bottom line – your Azure Files can now handle much larger and more IO-intensive workloads without breaking a sweat.

Simpler Management & Planning: Forget about juggling Hot vs Cool vs Transaction Optimized tiers or guessing how many transactions you’ll run. With V2, every Standard file share works the same way – you just decide how much capacity and performance to provision (see the sketch after this section). This makes it much easier to plan and budget. You can monitor each share’s usage with new per-share metrics (Azure shows you how much of your provisioned IOPS/throughput you’re using), which helps right-size your settings. If you’re syncing on-prem file servers to Azure with Azure File Sync, the predictable costs and higher limits of V2 make your hybrid setup easier to manage and potentially cheaper, too.

Azure Files provisioned v2 billing model for flexibility, cost savings, and predictability | Microsoft Community Hub
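To make "provision capacity and performance independently" concrete, here is a minimal sketch using the Azure SDK for Python. It assumes the storage account was already created with a provisioned v2 file-share SKU, and that your azure-mgmt-storage version exposes provisioned_iops and provisioned_bandwidth_mibps on the FileShare model (those property names are an assumption; verify them against your SDK release). Resource names are placeholders.

```python
# Minimal sketch: create a file share with independently provisioned capacity,
# IOPS, and throughput. Assumes a provisioned v2 storage account and a recent
# azure-mgmt-storage release; all names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import FileShare

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "my-rg"               # placeholder
ACCOUNT = "mystorageaccount"           # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

share = client.file_shares.create(
    resource_group_name=RESOURCE_GROUP,
    account_name=ACCOUNT,
    share_name="appdata",
    file_share=FileShare(
        share_quota=10240,                 # provisioned capacity, in GiB
        provisioned_iops=10000,            # IOPS, independent of capacity (assumed property name)
        provisioned_bandwidth_mibps=1024,  # throughput in MiB/s (assumed property name)
    ),
)
print(share.name, share.share_quota)
```

Because the three settings are independent, right-sizing later is just another call with different numbers; you no longer buy extra terabytes you don't need just to unlock more IOPS.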
Provisioned V2 makes Azure Files more cloud-friendly and enterprise-ready. Whether you’re a new user or have been using Azure Files for years, this model offers a win-win: you get more control and performance, and you eliminate the unpredictable bills. If you have heavy usage, you’ll appreciate the cost savings and headroom. If you have lighter usage, you’ll enjoy the simplicity and peace of mind. Overall, if you use Azure Files (or are planning to), Provisioned V2 is likely to make your life easier and your storage costs lower. It’s a welcome upgrade that addresses a lot of customer pain points in cloud file storage.

If you're looking to optimize your Azure storage strategy, this episode is a must-watch.

🔗 Explore all episodes: https://aka.ms/E2E-10min-Drill

Resources:
- Azure Storage Blog – Provisioned V2 Announcement (Jan 2025): “Azure Files provisioned v2 billing model for flexibility, cost savings, and predictability” – Official blog post introducing Provisioned V2, with details on the new limits and pricing model. (Microsoft Tech Community)
- Microsoft Azure Blog (Apr 2025): “Azure Files: More performance, more control, more value for your file data” – Azure blog highlighting the increased performance and value offered by Azure Files (including the new billing model).
- Microsoft Learn – Understand Azure Files Billing Models: Documentation explaining Azure Files billing, with sections on the Provisioned V2 model and how it differs from previous models.

Cheers! Pierre
How Azure Storage Powers AI Workloads: Behind the Scenes with OpenAI, Blobfuse & More

In the latest episode of E2E: 10-Minute Drill, I sat down with Vamshi from the Azure Storage team to explore how Azure Blob Storage is fueling the AI revolution, from training massive foundation models like ChatGPT to enabling enterprise-grade AI solutions. Whether you're building your own LLM, fine-tuning models with proprietary data, or just curious about how Microsoft supports OpenAI’s infrastructure, this episode is packed with insights.

🎥 Watch the Full Episode
👉 Watch on YouTube

🔍 Key Highlights
- Azure Blob Storage is the backbone of AI workloads, storing everything from training data to user-generated content in apps like ChatGPT and DALL·E.
- Microsoft’s collaboration with OpenAI has led to innovations like Azure Scaled Accounts and Blobfuse2, now available to all Azure customers.
- Enterprises can now securely bring their own data to Azure AI services, with enhanced access control and performance at exabyte scale.

📂 Documentation & Resources
- 🚀 Azure Blob Storage Overview: https://learn.microsoft.com/azure/storage/blobs/
- 📝 Blobfuse2 (Linux FUSE Adapter for Azure Blob Storage): https://learn.microsoft.com/azure/storage/blobs/blobfuse2-introduction
- 🧠 Azure OpenAI Service: https://learn.microsoft.com/azure/cognitive-services/openai/overview
- 🔐 Azure Role-Based Access Control (RBAC): https://learn.microsoft.com/azure/role-based-access-control/overview

💬 Why It Matters
As AI becomes a core workload for infrastructure teams, understanding how to scale, secure, and optimize your data pipelines is critical. This episode offers a behind-the-scenes look at how Microsoft is enabling developers and enterprises to build the next generation of intelligent applications—using the same tools that power OpenAI. (For a taste of the access pattern involved, see the sketch at the end of this post.)

📣 Stay Connected
Subscribe to the ITOpsTalk YouTube Channel and follow the E2E: 10-Minute Drill series for more conversations on cloud, AI, and innovation. As always, if you have any questions or comments, please leave them below; I'll make sure we get back to you.

Cheers!! Pierre
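As a small illustration of the RBAC-based access pattern discussed in the episode, here is a minimal sketch that reads a blob with Microsoft Entra ID credentials instead of account keys, using the azure-identity and azure-storage-blob packages. The account, container, and blob names are placeholders, and the signed-in identity needs a data-plane role such as Storage Blob Data Reader on the account.

```python
# Minimal sketch: stream a training-data blob using identity-based (RBAC)
# access rather than account keys. All names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",  # placeholder account
    credential=DefaultAzureCredential(),  # signed-in user, SP, or managed identity
)

blob = service.get_blob_client(
    container="training-data",            # placeholder container
    blob="dataset/shard-000.jsonl",       # placeholder blob path
)

# Stream the blob to disk rather than loading it into memory at once,
# the usual pattern for large training shards.
with open("shard-000.jsonl", "wb") as f:
    blob.download_blob().readinto(f)
```

The same client works for uploads of fine-tuning data, so one credential and one RBAC assignment cover the whole pipeline.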