Real‑World Cloud & Azure SQL Database Examples Using Kepner‑Tregoe
The Kepner‑Tregoe (KT) methodology is especially effective in modern cloud environments like Azure SQL Database, where incidents are often multi‑dimensional, time‑bound, and affected by asynchronous and self‑healing behaviors. Below are practical examples illustrating how KT can be applied in real Azure SQL scenarios.

Example 1: Azure SQL Geo‑Replication Lag Observed on Read‑Only Replica

Scenario: An application team reports that changes committed on the primary Azure SQL Database are not visible on the geo‑replica used for reporting for up to 30–40 minutes. Primary database performance remains healthy.

Applying KT – Problem Analysis
What is happening? The read‑only geo‑replica is temporarily behind the primary.
What is not happening? No primary outage, no data corruption, no failover.
Where does it occur? Only on the geo‑secondary, during specific time windows.
When does it occur? Repeatedly around the same time each hour.
What is the extent? Lag spikes, then returns to zero.

KT Insight: By separating data visibility delay from primary health, teams avoid misdiagnosing the issue as a platform outage. Public DMVs (such as sys.dm_geo_replication_link_status and sys.dm_database_replica_states) confirm this as a transient redo lag scenario, not a service availability issue.

Example 2: Error 3947 – Transaction Aborted Due to HA Replica Redo Lag

Scenario: Applications intermittently hit error 3947 (“The transaction was aborted because the secondary failed to catch up redo”), while primary latency remains stable.

Applying KT – Situation Appraisal
What needs immediate action? Ensure application retry logic is functioning.
What can wait? Deep analysis, since the workload resumes normally after retries.
What should not be escalated prematurely? Platform failover or data integrity concerns.

KT Insight: KT helps distinguish protective platform behavior from defects. Error 3947 is a deliberate safeguard in synchronous HA models to maintain consistency—not an outage or a bug.

Example 3: Performance Degradation During Business‑Critical Reporting

Scenario: A customer reports slow reporting queries on a readable secondary during peak hours, coinciding with replication lag spikes.

Applying KT – Decision Analysis
Possible actions:
- Route reporting queries back to the primary during the spike window
- Scale up replica resources
- Move batch processing off peak hours

KT Decision Framework
Musts: No data inconsistency, minimal user impact
Wants: Low cost, fast mitigation, minimal architecture change

Decision: Temporarily route latency‑sensitive reads to the primary while continuing the investigation. This decision is defensible, documented, and reversible.

Example 4: Preventing Recurrence with Potential Problem Analysis

Scenario: Recurring redo lag spikes happen daily at the same minute past the hour.

Applying KT – Potential Problem Analysis
What could go wrong? The hourly batch job may generate large log bursts.
How likely is it? High (the pattern repeats daily).
What is the impact? Temporary stale reads on replicas.

Preventive actions:
- Break batch jobs into smaller units
- Shift non‑critical workloads outside reporting hours
- Monitor redo queue size proactively (see the sample query after this example)

KT Insight: Rather than responding reactively each day, teams use KT to anticipate and reduce the likelihood and impact of recurrence.
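To support the monitoring step called out in Example 4, the public DMVs named in Example 1 can be queried directly. A minimal sketch follows, assuming each query is run on the replica indicated in the comments; the 4096 KB alert threshold is an illustrative value, not a documented limit.

```sql
-- On the primary: check geo-replication link state and lag in seconds.
SELECT partner_server,
       partner_database,
       replication_state_desc,
       replication_lag_sec,
       last_replication
FROM sys.dm_geo_replication_link_status;

-- On the readable secondary: check how much log is waiting to be redone.
-- The threshold below is an assumption used only to illustrate an alert condition.
SELECT DB_NAME(database_id)       AS database_name,
       synchronization_state_desc,
       redo_queue_size,           -- KB of log not yet redone on this replica
       redo_rate,                 -- KB/sec currently being redone
       last_commit_time
FROM sys.dm_database_replica_states
WHERE redo_queue_size > 4096;
```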
Example 5: Coordinated Incident Management Across Regions

Scenario: An Azure SQL issue spans EMEA, APAC, and US support teams, with intermittent symptoms and high stakeholder visibility.

Applying KT – Situation Appraisal
KT helps teams:
- Prioritize which signals are critical vs. noise
- Decide when to involve engineering vs. continue monitoring
- Communicate clearly with customers using facts, not assumptions
This prevents “analysis paralysis” or conflicting interpretations across time zones.

Why KT Works Well in Cloud and Azure SQL Environments
- Cloud platforms contain self‑healing, asynchronous behaviors that can be misinterpreted
- Multiple metrics may conflict without structured reasoning
- KT brings discipline, shared language, and defensible conclusions
- It complements tooling (DMVs, metrics, alerts)—it doesn’t replace them

Closing Thought
In cloud operations, how you think is as important as what you observe. Kepner‑Tregoe provides a timeless, structured way to reason about complex Azure SQL Database behaviors—helping teams respond faster, communicate better, and avoid unnecessary escalations.

Azure SQL Database High Availability: Architecture, Design, and Built‑in Resilience
High availability (HA) is a core pillar of Azure SQL Database. Unlike traditional SQL Server deployments—where availability architectures must be designed, implemented, monitored, and maintained manually—Azure SQL Database delivers built‑in high availability by design. By abstracting infrastructure complexity while still offering enterprise‑grade resilience, Azure SQL Database enables customers to achieve strict availability SLAs with minimal operational overhead. In this article, we’ll cover: Azure SQL Database high‑availability design principles How HA is implemented across service tiers: General Purpose Business Critical Hyperscale Failover behavior and recovery mechanisms Architecture illustrations explaining how availability is achieved Supporting Microsoft Learn and documentation references What High Availability Means in Azure SQL Database High availability in Azure SQL Database ensures that: Databases remain accessible during infrastructure failures Hardware, software, and network faults are handled automatically Failover occurs without customer intervention Data durability is maintained using replication, quorum, and consensus models This is possible through the separation of: Compute Storage Control plane orchestration Azure SQL Database continuously monitors health signals across these layers and automatically initiates recovery or failover when required. Azure SQL Database High Availability – Shared Concepts Regardless of service tier, Azure SQL Database relies on common high‑availability principles: Redundant replicas Synchronous and asynchronous replication Automatic failover orchestration Built‑in quorum and consensus logic Transparent reconnect via the Azure SQL Gateway Applications connect through the Azure SQL Gateway, which automatically routes traffic to the current primary replica—shielding clients from underlying failover events. High Availability Architecture – General Purpose Tier The General-Purpose tier uses a compute–storage separation model, relying on Azure Premium Storage for data durability. Key Characteristics Single compute replica Storage replicated three times using Azure Storage Read‑Access Geo‑Redundant Storage (RA‑GRS) optional Stateless compute that can be restarted or moved Fast recovery using storage reattachment Architecture Diagram – General Purpose Tier Description: Clients connect via the Azure SQL Gateway, which routes traffic to the primary compute node. The compute layer is stateless, while Azure Premium Storage provides triple‑replicated durable storage. Failover Behavior Compute failure triggers creation of a new compute node Database files are reattached from storage Typical recovery time: seconds to minutes 📚 Reference: https://learn.microsoft.com/azure/azure-sql/database/service-tier-general-purpose High Availability Architecture – Business Critical Tier The Business-Critical tier is designed for mission‑critical workloads requiring low latency and fast failover. Key Characteristics Multiple replicas (1 primary + up to 3 secondaries) Always On availability group–like architecture Local SSD storage on each replica Synchronous replication Automatic failover within seconds Architecture Diagram – Business Critical Tier Description: The primary replica synchronously replicates data to secondary replicas. Read‑only replicas can offload read workloads. Azure SQL Gateway transparently routes traffic to the active primary replica. 
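One common way to direct read workloads at the readable secondaries described above is the ApplicationIntent=ReadOnly connection-string option. As a quick sanity check, a minimal sketch for confirming which kind of replica a session actually reached:

```sql
-- Run after connecting with ApplicationIntent=ReadOnly in the connection string.
-- READ_ONLY indicates the session landed on a readable secondary;
-- READ_WRITE indicates it is connected to the primary replica.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS updateability;
```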
Failover Behavior
- If the primary replica fails, a secondary is promoted automatically
- No storage reattachment is required
- Client connections are redirected automatically
- Typical failover time: seconds

📚 Reference: https://learn.microsoft.com/azure/azure-sql/database/service-tier-business-critical

High Availability Architecture – Hyperscale Tier

The Hyperscale tier introduces a distributed storage and compute architecture, optimized for very large databases and rapid scaling scenarios.

Key Characteristics
- Decoupled compute and page servers
- Multiple read replicas
- Fast scale‑out and fast recovery
- Durable log service ensures transaction integrity

Architecture Diagram – Hyperscale Tier
Description: The compute layer processes queries, while durable log services and distributed page servers manage data storage independently, enabling rapid failover and scaling.

Failover Behavior
- Compute failure results in rapid creation of a new compute replica
- Page servers remain intact
- Log service ensures zero data loss

📚 Reference: https://learn.microsoft.com/azure/azure-sql/database/service-tier-hyperscale

How Azure SQL Database Handles Failures

Azure SQL Database continuously monitors critical health signals, including:
- Heartbeats
- IO latency
- Replica health
- Storage availability

Automatic Recovery Actions
- Restarting failed processes
- Promoting secondary replicas
- Recreating compute nodes
- Redirecting client connections

Applications should implement retry logic and transient‑fault handling to fully benefit from these mechanisms.

📚 Reference: https://learn.microsoft.com/azure/architecture/best-practices/transient-faults

Zone Redundancy and High Availability

Azure SQL Database can be configured with zone redundancy, distributing replicas across Availability Zones in the same region.

Benefits
- Protection against datacenter‑level failures
- Increased SLA
- Transparent resilience without application changes

📚 Reference: https://learn.microsoft.com/azure/azure-sql/database/high-availability-sla

Summary

Azure SQL Database delivers high availability by default, removing the traditional operational burden associated with SQL Server HA designs.

Service Tier | HA Model | Typical Failover
General Purpose | Storage‑based durability | Minutes
Business Critical | Multi‑replica, synchronous | Seconds
Hyperscale | Distributed compute & storage | Seconds

By selecting the appropriate service tier and enabling zone redundancy where required, customers can meet even the most demanding availability and resilience requirements with minimal complexity.

Additional References
- Azure SQL Database HA overview: https://learn.microsoft.com/azure/azure-sql/database/high-availability-overview
- Azure SQL Database SLAs: https://azure.microsoft.com/support/legal/sla/azure-sql-database
- Application resiliency guidance: https://learn.microsoft.com/azure/architecture/framework/resiliency

Unlocking AI-Driven Data Access: Azure Database for MySQL Support via the Azure MCP Server
Step into a new era of data-driven intelligence with the fusion of Azure MCP Server and Azure Database for MySQL, where your MySQL data is no longer just stored, but instantly conversational, intelligent and action-ready. By harnessing the open-standard Model Context Protocol (MCP), your AI agents can now query, analyze and automate in natural language, accessing tables, surfacing insights and acting on your MySQL-driven business logic as easily as chatting with a colleague. It’s like giving your data a voice and your applications a brain, all within Azure’s trusted cloud platform. We are excited to announce that we have added support for Azure Database for MySQL in Azure MCP Server. The Azure MCP Server leverages the Model Context Protocol (MCP) to allow AI agents to seamlessly interact with various Azure services to perform context-aware operations such as querying databases and managing cloud resources. Building on this foundation, the Azure MCP Server now offers a set of tools that AI agents and apps can invoke to interact with Azure Database for MySQL - enabling them to list and query databases, retrieve schema details of tables, and access server configurations and parameters. These capabilities are delivered through the same standardized interface used for other Azure services, making it easier to the adopt the MCP standard for leveraging AI to work with your business data and operations across the Azure ecosystem. Before we delve into these new tools and explore how to get started with them, let’s take a moment to refresh our understanding of MCP and the Azure MCP Server - what they are, how they work, and why they matter. MCP architecture and key components The Model Context Protocol (MCP) is an emerging open protocol designed to integrate AI models with external data sources and services in a scalable, standardized, and secure manner. MCP dictates a client-server architecture with four key components: MCP Host, MCP Client, MCP Server and external data sources, services and APIs that provide the data context required to enhance AI models. To explain briefly, an MCP Host (AI apps and agents) includes an MCP client component that connects to one or more MCP Servers. These servers are lightweight programs that securely interface with external data sources, services and APIs and exposes them to MCP clients in the form of standardized capabilities called tools, resources and prompts. Learn more: MCP Documentation What is Azure MCP Server? Azure offers a multitude of cloud services that help developers build robust applications and AI solutions to address business needs. The Azure MCP Server aims to expose these powerful services for agentic usage, allowing AI systems to perform operations that are context-aware of your Azure resources and your business data within them, while ensuring adherence to the Model Context Protocol. It supports a wide range of Azure services and tools including Azure AI Search, Azure Cosmos DB, Azure Storage, Azure Monitor, Azure CLI and Developer CLI extensions. This means that you can empower AI agents, apps and tools to: Explore your Azure resources, such as listing and retrieving details on your Azure subscriptions, resource groups, services, databases, and tables. Search, query and analyze your data and logs. Execute CLI and Azure Developer CLI commands directly, and more! 
Learn more: Azure MCP Server GitHub Repository Introducing new Azure MCP Server tools to interact with Azure Database for MySQL The Azure MCP Server now includes the following tools that allow AI agents to interact with Azure Database for MySQL and your valuable business data residing in these servers, in accordance with the MCP standard: Tool Description Example Prompts azmcp_mysql_server_list List all MySQL servers in a subscription & resource group "List MySQL servers in resource group 'prod-rg'." "Show MySQL servers in region 'eastus'." azmcp_mysql_server_config_get Retrieve the configuration of a MySQL server "What is the backup retention period for server 'my-mysql-server'?" "Show storage allocation for server 'my-mysql-server'." azmcp_mysql_server_param_get Retrieve a specific parameter of a MySQL server "Is slow_query_log enabled on server my-mysql-server?" "Get innodb_buffer_pool_size for server my-mysql-server." azmcp_mysql_server_param_set Set a specific parameter of a MySQL server to a specific value "Set max_connections to 500 on server my-mysql-server." "Set wait_timeout to 300 on server my-mysql-server." azmcp_mysql_table_list List all tables in a MySQL database "List tables starting with 'tmp_' in database 'appdb'." "How many tables are in database 'analytics'?" azmcp_mysql_table_schema_get Get the schema of a specific table in a MySQL database "Show indexes for table 'transactions' in database 'billing'." "What is the primary key for table 'users' in database 'auth'?" azmcp_mysql_database_query Executes a SELECT query on a MySQL Database. The query must start with SELECT and cannot contain any destructive SQL operations for security reasons. “How many orders were placed in the last 30 days in the salesdb.orders table?” “Show the number of new users signed up in the last week in appdb.users grouped by day.” These interactions are secured using Microsoft Entra authentication, which enables seamless, identity-based access to Azure Database for MySQL - eliminating the need for password storage and enhancing overall security. How are these new tools in the Azure MCP Server different from the standalone MCP Server for Azure Database for MySQL? We have integrated the key capabilities of the Azure Database for MySQL MCP server into the Azure MCP Server, making it easier to connect your agentic apps not only to Azure Database for MySQL but also to other Azure services through one unified and secure interface! How to get started Installing and running the Azure MCP Server is quick and easy! Use GitHub Copilot in Visual Studio Code to gain meaningful insights from your business data in Azure Database for MySQL. Pre-requisites Install Visual Studio Code. Install GitHub Copilot and GitHub Copilot Chat extensions. An Azure Database for MySQL with Microsoft Entra authentication enabled. Ensure that the MCP Server is installed on a system with network connectivity and credentials to connect to Azure Database for MySQL. Installation and Testing Please use this guide for installation: Azure MCP Server Installation Guide Try the following prompts with your Azure Database for MySQL: Azure Database for MySQL tools for Azure MCP Server Try it out and share your feedback! Start using Azure MCP Server with the MySQL tools today and let our cloud services become your AI agent’s most powerful ally. We’re counting on your feedback - every comment, suggestion, or bug-report helps us build better tools together. Stay tuned: more features and capabilities are on the horizon! 
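As a closing illustration of the azmcp_mysql_database_query tool described above, here is a rough sketch of the kind of statement an agent might run for the sample prompt about recent orders. The tool only permits statements that begin with SELECT; the order_date column is an assumption, and the actual SQL generated by an agent will vary.

```sql
-- Prompt: "How many orders were placed in the last 30 days in the salesdb.orders table?"
-- One possible read-only query behind that answer (order_date is an assumed column):
SELECT COUNT(*) AS orders_last_30_days
FROM salesdb.orders
WHERE order_date >= NOW() - INTERVAL 30 DAY;
```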
Feel free to comment below or write to us with your feedback and queries at AskAzureDBforMySQL@service.microsoft.com.

Ignite 2025: Advancing Azure Database for MySQL with Powerful New Capabilities
At Ignite 2025, we’re introducing a wave of powerful new capabilities for Azure Database for MySQL, designed to help organizations modernize, scale, and innovate faster than ever before. From enhanced high availability and seamless serverless integrations to AI-powered insights and greater flexibility for developers, these advancements reflect our commitment to delivering a resilient, intelligent data platform. Join us as we unveil what’s next for MySQL on Azure - and discover how industry leaders are already building the future with confidence. Enhanced Failover Performance with Dedicated SLB for High-Availability Servers We’re excited to announce the General Availability of Dedicated Standard Load Balancer (SLB) for HA-enabled servers in Azure Database for MySQL. This enhancement introduces a dedicated SLB to High Availability configurations for servers created with public access or private link. By managing the MySQL data traffic path, SLB eliminates the need for DNS updates during failover, significantly reducing failover time. Previously, failover relied on DNS changes, which caused delays due to DNS TTL (30 seconds) and client-side DNS caching. What’s new with GA: The FQDN consistently resolves to the SLB IP address before and after failover. Load-balancing rules automatically route traffic to the active node. Removes DNS cache dependency, delivering faster failovers. Note: This feature is not supported for servers using private access with VNet integration. Learn more Build serverless, event-driven apps at scale – now GA with Trigger Bindings for Azure Functions We’re excited to announce the General Availability of Azure Database for MySQL Trigger bindings for Azure Functions, completing the full suite of Input, Output, and Trigger capabilities. This feature lets you build real-time, event-driven applications by automatically invoking Azure Functions when MySQL table rows are created or updated - eliminating custom polling and boilerplate code. With native support across multiple languages, developers can now deliver responsive, serverless solutions that scale effortlessly and accelerate innovation. Learn more Enable AI agents to query Azure Database for MySQL using Azure MCP Server We’re excited to announce that Azure MCP Server now supports Azure Database for MySQL, enabling AI agents to query and manage MySQL data using natural language through the open Model Context Protocol (MCP). Instead of writing SQL, you can simply ask questions like “Show the number of new users signed up in the last week in appdb.users grouped by day.”, all secured with Microsoft Entra authentication for enterprise-grade security. This integration delivers a unified, secure interface for building intelligent, context-aware workflows across Azure services - accelerating insights and automation. Learn more Greater networking flexibility with Custom Port Support Custom port support for Azure Database for MySQL is now generally available, giving organizations the flexibility to configure a custom port (between 25001 and 26000) during new server creation. This enhancement streamlines integration with legacy applications, supports strict network security policies, and helps avoid port conflicts in complex environments. Supported across all network configurations - including public access, private access, and Private Link - custom port provisioning ensures every new MySQL server can be tailored to your needs. 
The managed experience remains seamless, with all administrative capabilities and integrations working as before. Learn more Streamline migrations and compatibility with Lower Case Table Names support Azure Database for MySQL now supports configuring lower_case_table_names server parameter during initial server creation for MySQL 8.0 and above, ensuring seamless alignment with your organization’s naming conventions. This setting is automatically inherited for restores and replicas, and cannot be modified. Key Benefits: Simplifies migrations by aligning naming conventions and reducing complexity. Enhances compatibility with legacy systems that depend on case-insensitive table names. Minimizes support dependency, enabling faster and smoother onboarding. Learn more Unlock New Capabilities with Private Preview Features at Ignite 2025 We’re excited to announce that you can now explore two powerful capabilities in early access - Reader Endpoint for seamless read scaling and Server Rename for greater flexibility in server management. Scale reads effortlessly with Reader Endpoint (Private Preview) We’re excited to announce that the Reader Endpoint feature for Azure Database for MySQL is now ready for private preview. Reader Endpoint provides a dedicated read-only endpoint for read replicas, enabling automatic connection-based load balancing of read-only traffic across multiple replicas. This simplifies application architecture by offering a single endpoint for read operations, improving scalability and fault tolerance. Azure Database for MySQL supports up to 10 read replicas per primary server. By routing read-only traffic through the reader endpoint, application teams can efficiently manage connections and optimize performance without handling individual replica endpoints. Reader endpoints continuously monitor the health of replicas and automatically exclude any replica that exceeds the configured replication lag threshold or becomes unavailable. To enroll in the preview, please submit your details using this form. Limitations During Private Preview: Only performance-based routing is supported in this preview. Certain settings such as routing method and the option to attach new replicas to the reader endpoint can only be configured at creation time. Only one reader endpoint can be created per replica group. Including the primary server as a fallback for read traffic when no replicas are available is not supported in this preview. Get flexibility in server management with Server Rename (Private Preview) We’re excited to announce the Private Preview of Server Rename for Azure Database for MySQL. This feature lets you update the name of an existing MySQL server without recreating it, migrating data, or disrupting applications - making it easier to adopt clear, consistent naming. It provides a near zero-downtime path to a new hostname of the server. To enroll in the preview, please submit your details using this form. Limitations During Private Preview: Primary server with read replicas: Renaming a primary server that has read replicas keeps replication healthy. However, the SHOW SLAVE STATUS output on the replicas will still display the old primary server's name. This is a display inconsistency only and does not affect replication. Renaming is currently unsupported for servers using Customer Managed Key (CMK) encryption or Microsoft Entra Authentication (Entra Id). 
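Related to the SHOW SLAVE STATUS caveat noted above, the following sketch shows how to confirm a replica's role and replication health from any MySQL client. It is illustrative only and applies to read replicas generally, not just renamed servers.

```sql
-- On a read replica: confirm the server is read-only and check replication status.
SHOW VARIABLES LIKE 'read_only';
SHOW REPLICA STATUS\G  -- use SHOW SLAVE STATUS on servers older than MySQL 8.0.22
```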
Real-World Success: Azure Database for MySQL Powers Resilient Applications at Scale

Factorial
Factorial, a leading HR software provider, uses Azure Database for MySQL alongside Azure Kubernetes Service to deliver secure, scalable HR solutions for thousands of businesses worldwide. By leveraging Azure Database for MySQL’s reliability and seamless integration with cloud-native technologies, Factorial ensures high availability and rapid innovation for its customers. Learn more

YES (Youth Employment Service)
South Africa’s largest youth employment initiative, YES, operates at national scale by leveraging Azure Database for MySQL to deliver a resilient, centralized platform for real-time job matching, learning management, and career services - connecting thousands of young people and employers, and helping nearly 45 percent of participants secure permanent roles within six months. Learn more

Nasdaq
At Ignite 2025, Nasdaq will showcase how it uses Azure Database for MySQL - alongside Azure Database for PostgreSQL and other Azure products - to power a secure, resilient architecture that safeguards confidential data while unlocking new agentic AI capabilities. Learn more

These examples demonstrate that Azure Database for MySQL is trusted by industry leaders to build resilient, scalable applications - empowering organizations to innovate and grow with confidence.

We Value Your Feedback

Azure Database for MySQL is built for scale, resilience, and performance - ready to support your most demanding workloads. With every update, we’re focused on simplifying development, migration, and management so you can build with confidence. Explore the latest features and enhancements to see how Azure Database for MySQL meets your data needs today and in the future.

We welcome your feedback and invite you to share your experiences or suggestions at AskAzureDBforMySQL@service.microsoft.com. Stay up to date by visiting What's new in Azure Database for MySQL, and follow us on YouTube | LinkedIn | X for ongoing updates. Thank you for choosing Azure Database for MySQL!

October 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community, We are excited to bring October 2025 recap blog for Azure Database for PostgreSQL! This blog focuses on key announcements around the General Availability of the REST API for 2025, maintenance payload visibility and several new features aimed at improving performance and a guide on minimizing downtime for MVU operation with logical replication. Stay tuned as we dive deeper into each of these feature updates. Get Ready for Ignite 2025! Before we get into the feature breakdown, Ignite is just around the corner! It’s packed with major announcements for Azure Database for PostgreSQL. We’ve prepared a comprehensive guide to all the sessions we have lined up, don’t miss out! Follow this link to explore the Ignite session guide. Feature Highlights Stable REST API release for 2025 – Generally Available Maintenance payload visibility – Generally Available Achieving Zonal resiliency for High-Availability workloads - Preview Japan West now supports zone-redundant HA PgBouncer 1.23.1 version upgrade Perform Major Version upgrade (MVU) with logical replication PgConf EU 2025 – Key Takeaways and Sessions Stable REST API release for 2025 – Generally Available We’ve released the stable REST API version 2025-08-01! This update adds support for PostgreSQL 17 so you can adopt new versions without changing your automation patterns. We also introduced the ability to set the default database name for Elastic Clusters. To improve developer experience, we have renamed operation IDs for clearer navigation and corrected HTTP response codes so scripts and retries behave as expected. Security guidance gets a boost with a new CMK encryption example that demonstrates automatic key version updates. Finally, we have cleaned up the specification itself by renaming files for accuracy, reorganizing the structure for easier browsing and diffs, and enhancing local definition metadata, delivering a clearer, safer, and more capable API for your 2025 roadmaps. Learn how to call or use Azure Database for PostgreSQL REST APIs. Learn about the operations available in our latest GA REST API. Repository for all Released GA APIs. Maintenance payload visibility – Generally Available The Azure Database for PostgreSQL maintenance experience has been enhanced to increase transparency and control. With this update, customers will receive Azure Service Health notifications that include a direct link to the detailed maintenance payload for each patch. This means you’ll know exactly what’s changing – helping you plan ahead, reduce surprises, and maintain confidence in your operations. Additionally, all maintenance payloads are now published in the dedicated Maintenance Release Notes section of our documentation. This enhancement provides greater visibility into upcoming updates and empowers you with the information needed to align maintenance schedules with your business priorities. Achieving Zonal resiliency for High-Availability workloads - Preview High Availability is important to ensure that you have your primary and standby servers deployed with same-zone or zone-redundant HA option. Zonal resiliency helps you protect your workloads against zonal outage. With the latest update, Azure Portal introduces a Zonal Resiliency setting under the High Availability section. This setting can be toggled Enabled or Disabled: Enabled: The system attempts to create the standby server in a different availability zone, activating zone-redundant HA mode. 
If the selected region does not support zone-redundant HA, you can select the fallback checkbox (shown in the image) to use same-zone HA instead. If you don’t select the checkbox and zonal capacity is unavailable, HA enablement fails. This design enforces zone-redundant HA as the default while providing a controlled fallback to same-zone HA, ensuring workloads achieve resiliency even in regions without multi-zone capacity. The feature offers flexibility while maintaining strong high availability across supported regions. To know more about how to configure high availability follow our documentation link. Japan West now supports zone-redundant HA Azure Database for PostgreSQL now offers Availability Zone support in Japan West, enabling deployment of zone-redundant high availability (HA) configurations in this region. This enhancement empowers customers to achieve greater resiliency and business continuity through robust zone-redundant architecture. We’re committed to bringing Azure PostgreSQL closer to where you build and run your apps, while ensuring robust disaster recovery options. For the full list of regions visit: Azure Database for PostgreSQL Regions. PgBouncer 1.23.1 version upgrade PgBouncer 1.23.1 is now available in Azure Database for PostgreSQL. As a Built-In connection pooling feature, PgBouncer helps you scale thousands of connections with low overhead by efficiently managing idle and short-lived connections. With this update, you benefit from the latest community improvements, including enhanced protocol handling and important stability fixes, giving you a more reliable and resilient connection pooling experience. Because PgBouncer is integrated into Azure Postgres, you don’t need to install or maintain it separately - simply enable it on port 6432 and start reducing connection overhead in your applications. This release keeps your PostgreSQL servers aligned with the community while providing the reliability of a managed Azure service. Learn More - PgBouncer in Azure Database for PostgreSQL. Perform Major Version upgrade (MVU) with logical replication Our Major Version Upgrade feature ensures you always have access to the latest and most powerful capabilities included in each PostgreSQL release. We’ve published a new blog that explains how to minimize downtime during major version upgrades by leveraging logical replication and virtual endpoints. The blog highlights two approaches: Using logical replication and virtual endpoints on a Point-in-Time Restore (PITR) instance Using logical replication and virtual endpoints on a server running different PostgreSQL versions, restored via pg_dump and pg_restore Follow this guide to get started and make your upgrade process smoother: Upgrade Azure Database for PostgreSQL with Minimal Downtime Using Logical Replication PgConf EU 2025 – key takeaways and sessions The Azure Database for PostgreSQL team participated in PGConf EU 2025, delivering insightful sessions on key PostgreSQL advancements. If you missed the highlights, here are a few topics we covered: AIO in PG 18 and beyond, by Andres Freund of Microsoft [Link to slides] Improved Freezing in Postgres Vacuum: From Idea to Commit, by Melanie Plageman of Microsoft [Link to slides] Behind Postgres 18: The People, the Code, & the Invisible Work [Link to Slides] Read the PGConf EU summary blog here. 
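Returning to the major version upgrade guidance above, the following is a minimal logical-replication sketch of the pattern described in that guide. All object names and the connection string are placeholders, and the full cutover steps (schema restore, virtual endpoint switch) are covered in the linked blog.

```sql
-- On the source server (older major version): publish the tables to be upgraded.
CREATE PUBLICATION mvu_pub FOR ALL TABLES;

-- On the target server (new major version), after restoring the schema only.
-- The connection string values below are placeholders.
CREATE SUBSCRIPTION mvu_sub
    CONNECTION 'host=source.postgres.database.azure.com dbname=appdb user=migrator password=***'
    PUBLICATION mvu_pub;

-- Back on the source: monitor catch-up before switching traffic over.
SELECT application_name, state, replay_lag
FROM pg_stat_replication;
```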
Azure Postgres Learning Bytes 🎓

Handling “Cannot Execute in a Read-Only Transaction” after High Availability (HA) Failover

After a High Availability (HA) failover, some applications may see this error:

ERROR: cannot execute <command> in a read-only transaction

This happens when the application continues connecting to the old primary instance, which becomes read-only after failover. The usual cause is connecting via a static IP or a private DNS record that doesn’t refresh automatically.

Resolution Steps
- Use FQDN - Always connect using the FQDN, i.e. “<servername>.postgres.database.azure.com”, instead of a hardcoded IP.
- Validate DNS - Run “nslookup yourservername.postgres.database.azure.com” to confirm resolution to the current primary.
- Private DNS - Update or automate the A-record refresh after failover.

Best Practices
- Always use the FQDN for application database connectivity.
- Add retry logic for transient failovers.
- Periodically validate DNS resolution for HA-enabled servers.

For more details, refer to this detailed blog post from the CSS team.

Conclusion

We’ll be back soon with more exciting announcements and key feature enhancements for Azure Database for PostgreSQL, so stay tuned! Your feedback is important to us. Have suggestions, ideas, or questions? We’d love to hear from you: https://aka.ms/pgfeedback. Follow us here for the latest announcements, feature releases, and best practices: Microsoft Blog for PostgreSQL.

Large Transaction Interrupted by Failover, Secondary Database Reports REVERTING
First published on MSDN on Nov 25, 2014. If a large transaction in an availability database on the primary replica is interrupted by a failover of the availability group, then after the failover completes, the database on the new secondary (the old primary replica) may report a NOT SYNCHRONIZING or REVERTING state for a long period of time.

June 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community, We have introduced a range of exciting new features and updates to Azure Database for PostgreSQL in June. From general availability of PG 17 to public preview of the SSD v2 storage tier for High Availability, there have been some significant feature announcements across multiple areas in the last month. Stay tuned as we dive deeper into each of these feature updates. Before that, let’s look at POSETTE 2025 highlights. POSETTE 2025 Highlights We hosted POSETTE: An Event for Postgres 2025 in June! This year marked our 4th annual event featuring 45 speakers and a total of 42 talks. PostgreSQL developers, contributors, and community members came together to share insights on topics covering everything from AI-powered applications to deep dives into PostgreSQL internals. If you missed it, you can catch up by watching the POSETTE livestream sessions. If this conference sounds interesting to you and want to be part of it next year, don’t forget to subscribe to POSETTE news. Feature Highlights General Availability of PostgreSQL 17 with 'In-Place' upgrade support General Availability of Online Migration Migration service support for PostgreSQL 17 Public Preview of SSD v2 High Availability New Region: Indonesia Central VS Code Extension for PostgreSQL enhancements Enhanced role management Ansible collection released for latest REST API version General Availability of PostgreSQL 17 with 'In-Place' upgrade support PostgreSQL 17 is now generally available on Azure Database for PostgreSQL flexible server, bringing key community innovations to your workloads. You’ll see faster vacuum operations, richer JSON processing, smarter query planning (including better join ordering and parallel execution), dynamic logical replication controls, and enhanced security & audit-logging features—backed by Azure’s five-year support policy. You can easily upgrade to PostgreSQL 17 using the in-place major version upgrade feature available through the Azure portal and CLI, without changing server endpoints or reconfiguring applications. The process includes built-in validations and rollback safety to help ensure a smooth and reliable upgrade experience. For more details, read the PostgreSQL 17 release announcement blog. General Availability of Online Migration We're excited to announce that Online Migration is now generally available for the Migration service for Azure Database for PostgreSQL! Online migration minimizes downtime by keeping your source database operational during the migration process, with continuous data synchronization until cut over. This is particularly beneficial for mission-critical applications that require minimal downtime during migration. This milestone brings production-ready online migration capabilities supporting various source environments including on-premises PostgreSQL, Azure VMs, Amazon RDS, Amazon Aurora, and Google Cloud SQL. For detailed information about the capabilities and how to get started, visit our Migration service documentation. Migration service support for PostgreSQL 17 Building on our PostgreSQL 17 general availability announcement, the Migration service for Azure Database for PostgreSQL now fully supports PostgreSQL 17. This means you can seamlessly migrate your existing PostgreSQL instances from various source platforms to Azure Database for PostgreSQL flexible server running PostgreSQL 17. 
With this support, organizations can take advantage of the latest PostgreSQL 17 features and performance improvements while leveraging our online migration capabilities for minimal downtime transitions. The migration service maintains full compatibility with PostgreSQL 17's enhanced security features, improved query planning, and other community innovations. Public Preview of SSD v2 High Availability We’re excited to announce the public preview High availability (HA) support for the Premium SSD v2 storage tier in Azure Database for PostgreSQL flexible server. This support allows you to enable Zone-Redundant HA using Premium SSD v2 during server deployments. In addition to high availability on SSDv2 you now get improved resiliency and 10 second failover times when using Premium SSD v2 with zone-redundant HA, helping customers build resilient, high-performance PostgreSQL applications with minimal overhead. This feature is particularly well-suited for mission-critical workloads, including those in financial services, real-time analytics, retail, and multi-tenant SaaS platforms. Key Benefits of Premium SSD v2: Flexible disk sizing: Scale from 32 GiB to 64 TiB in 1-GiB increments Fast failovers: Planned or unplanned failovers typically around 10 seconds Independent performance configuration: Achieve up to 80,000 IOPS and 1,200 Mbps throughput without resizing your disk. Baseline performance: Free throughput of 125 MB/s and 3,000 IOPS for disks up to 399 GiB, and 500 MB/s and 12,000 IOPS for disks 400 GiB and above at no additional cost. For more details, please refer to the Premium SSD v2 HA blog. New Region: Indonesia Central New region rollout! Azure Database for PostgreSQL flexible server is now available in Indonesia Central, giving customers in and around the region lower latency and data residency options. This continues our mission to bring Azure PostgreSQL closer to where you build and run your apps. For the full list of regions visit: Azure Database for PostgreSQL Regions. VS Code Extension for PostgreSQL enhancements The brand-new VS code extension for PostgreSQL launched in mid-May and has already garnered over 122K installs from the Visual Studio Marketplace! And the kickoff blog about this new IDE for PostgreSQL in VS Code has had over 150K views. This extension makes it easier for developers to seamlessly interact with PostgreSQL databases. We have been committed to make this experience better and have introduced several enhancements to improve reliability and compatibility updates. You can now have better control over service restarts and process terminations on supported operating systems. Additionally, we have added support for parsing additional connection-string formats in the “Create Connection” flow, making it more flexible and user-friendly. We also resolved Entra token-fetching failures for newly created accounts, ensuring a smoother onboarding experience. On the feature front, you can now leverage Entra Security Groups and guest accounts across multiple tenants when establishing new connections, streamlining permission management in complex Entra environments. Don’t forget to update to the latest version in the marketplace to take advantage of these enhancements and visit our GitHub repository to learn more about this month’s release. 
If you learn best by video, these two videos are a great way to learn more about this new VS Code extension:
- POSETTE 2025: Introducing Microsoft’s VS Code Extension for PostgreSQL
- Demo of using the VS Code extension for PostgreSQL

Enhanced role management

With the introduction of PostgreSQL 16, a strict role hierarchy structure has been implemented. As a result, GRANT statements that were functional in PostgreSQL 11-15 may no longer work in PostgreSQL 16. We have improved administrative flexibility and addressed this limitation in Azure Database for PostgreSQL flexible server across all PostgreSQL versions. Members of ‘azure_pg_admin’ can now manage and access objects owned by any non-restricted role, giving control and permission over user-defined roles. To learn more about this improvement, please refer to our documentation on roles.

Ansible collection released for latest REST API version

A new version of the Ansible collection for Azure Database for PostgreSQL flexible server is now released. Version 3.6.0 includes the latest GA REST API features. This update introduces several enhancements, such as support for virtual endpoints, on-demand backups, system-assigned identity, storage auto-grow, and seamless switchover of read replicas to a new site (Read Replicas - Switchover), among many other improvements. To get started, please visit the flexible server Ansible collection link.

Azure Postgres Learning Bytes 🎓

Using the PostgreSQL VS Code extension with agent mode

The VS Code extension for PostgreSQL has been trending amongst the developer community. In this month's Learning Bytes section, we want to share how to enable the extension and use GitHub Copilot in Agent Mode to create a database, add dummy data, and visualize it with the extension.

Step 1: Download the VS Code extension for PostgreSQL.
Step 2: Check that GitHub Copilot and Agent mode are enabled. Go to File -> Preferences -> Settings (Ctrl + ,). Search and enable "chat.agent.enabled" and "pgsql copilot.enable". Reload VS Code to apply changes.
Step 3: Connect to Azure Database for PostgreSQL. Use the extension to enter instance details and establish a connection. Create and view schemas under Databases -> Schemas.
Step 4: Visualize and populate data. Right-click the database to visualize schemas. Ask the agent to insert dummy data or run queries.

Conclusion

That's all for the June 2025 feature updates! We are dedicated to continuously improving Azure Database for PostgreSQL with every release. Stay updated on the latest features by following this link. Your feedback is important and helps us continue to improve. If you have any suggestions, ideas, or questions, we'd love to hear from you. Share your thoughts here: aka.ms/pgfeedback

We look forward to bringing you even more exciting updates throughout the year, stay tuned!