Building a Restaurant Management System with Azure Database for MySQL
In this hands-on tutorial, we'll build a Restaurant Management System using Azure Database for MySQL. This project is perfect for beginners looking to understand cloud databases while creating something practical.

Cascading Read Replicas Now Generally Available!
We’re excited to announce the General Availability of cascading read replicas in Azure Database for PostgreSQL. This capability allows you to create read replicas for your Azure Database for PostgreSQL instance not only from a primary server, but also from existing read replicas, enabling multi‑level replication chains.

Coordinating read‑heavy database workloads across multiple regions can be challenging, especially when you’re trying to deliver low‑latency read experiences to users spread across different geographic locations. One effective way to address this is by placing read replicas closer to where your users are, allowing applications to serve read requests with significantly reduced latency and improved performance.

What are cascading read replicas?

With cascading read replicas, you can scale read‑intensive workloads more effectively, distribute read traffic efficiently, and support advanced deployment topologies such as globally distributed applications. Each read replica can act as a source for additional replicas, forming a tree‑like replication structure. For example, if your primary server is deployed in one region, you can create direct replicas in nearby regions and then cascade additional replicas to more distant locations. This approach helps spread read traffic evenly while minimizing latency for users around the world. The feature supports up to two levels of replication: level 1 consists of read replicas created directly from the primary, and level 2 consists of cascading read replicas created from those level-1 replicas.

Why use cascading read replicas?

Improved scalability: Cascading read replicas support multi‑level replication, making it easier to handle high volumes of read traffic without overloading a single instance by scaling up to 30 read replicas.

Geographic distribution: By placing replicas closer to your global user base, you can significantly reduce read latency and deliver faster, more responsive application experiences.
Efficient read traffic distribution: Distributing read workloads across multiple replicas helps balance load, improving overall performance and reliability.

Additionally, cascading read replicas offer operational flexibility. If you observe replication lag, you can perform a switchover operation between a cascading read replica and its source or intermediate replica, helping you maintain optimal performance and availability for your replicas.

How does replication work with cascading read replicas?

The primary server acts as the source for its read replicas, and data is replicated to them asynchronously. When you add cascading replicas, an existing replica acts as the data source for replication. In the diagram above, “primary-production-server” is the primary server with three read replicas. One of these replicas, “readreplica01”, serves as the source for another read replica, “readreplica11”, which is a cascading read replica.

With cascading read replicas, you can add up to five read replicas per source and replicate data across two levels, as shown in the diagram. This allows you to create up to 30 read replicas in total: five read replicas directly from the primary server, and up to 25 additional replicas at the second level (each first-level replica can have up to five read replicas of its own).

If you notice replication lag between an intermediate read replica and a cascading read replica, you can use a switchover operation to swap “readreplica01” and “readreplica11”, helping reduce the impact of lag. To learn more about cascading read replicas, please refer to our documentation: Cascading read replicas

Deploying cascading read replicas on the Azure portal

Navigate to the “Replication” tab and then click “Create replica”, highlighted in red as shown below. After creating a read replica, the screenshot below shows one read replica attached to the primary instance.
Click on the created replica and navigate to the Replication tab; the source server is “read-replica-01”, and we will create a cascading read replica under it. Once the cascading read replica is created, you can see that the role of “read-replica-01” has changed to Source, Replica. You can perform a switchover operation by clicking the Promote button for the cascading read replica.

Deploy a cascading read replica with Terraform

Before you start, make sure you have:

- An existing primary PostgreSQL Flexible Server
- At least one read replica already created from the primary
- The latest version of the AzureRM provider
- Proper permissions on the Azure subscription and resource group

Configure the AzureRM provider: Start by configuring the AzureRM provider in your Terraform project.

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.80"
    }
  }
}

provider "azurerm" {
  features {}
}
```

Reference the existing read replica server using a data block:

```hcl
data "azurerm_postgresql_flexible_server" "source_replica" {
  name                = "my-read-replica-1"
  resource_group_name = "my-resource-group"
}
```

Now create a new PostgreSQL Flexible Server and point it to the replica using create_source_server_id.
```hcl
resource "azurerm_postgresql_flexible_server" "cascading_replica" {
  name                    = "my-cascading-replica"
  resource_group_name     = "my-resource-group"
  location                = data.azurerm_postgresql_flexible_server.source_replica.location
  version                 = data.azurerm_postgresql_flexible_server.source_replica.version
  delegated_subnet_id     = data.azurerm_postgresql_flexible_server.source_replica.delegated_subnet_id
  private_dns_zone_id     = data.azurerm_postgresql_flexible_server.source_replica.private_dns_zone_id
  create_mode             = "Replica"
  create_source_server_id = data.azurerm_postgresql_flexible_server.source_replica.id
  storage_mb              = 32768
  sku_name                = "Standard_D4s_v3"

  depends_on = [
    data.azurerm_postgresql_flexible_server.source_replica
  ]
}
```

Apply the Terraform configuration:

```shell
terraform init
terraform plan
terraform apply
```

Key considerations

- Cascading read replicas allow up to five read replicas per source and two levels of replication.
- Creating cascading read replicas is supported in PostgreSQL version 14 and above.
- The promote operation is not supported for intermediate read replicas.

Conclusion

Cascading read replicas in Azure Database for PostgreSQL offer a scalable way to distribute your read traffic within and across regions, reducing the read workload on the primary database. For globally distributed applications, this can improve read latency as well as resilience and performance. This design supports horizontal scaling as your application demand grows, ensuring you can handle a high volume of read requests without compromising speed. Get started with this feature today and scale your read workloads.

Handling Unique Constraint Conflicts in Logical Replication
Authors: Ashutosh Sharma, Senior Software Engineer, and Gauri Kasar, Product Manager

Logical replication can keep your PostgreSQL environments in sync, helping replicate selected tables with minimal impact on the primary workload. But what happens when your subscriber hits a duplicate key error and replication grinds to a halt? If you’ve seen a unique‑constraint violation while replicating between Azure Database for PostgreSQL servers, you’re not alone. This blog covers common causes, prevention tips, and practical recovery options.

In PostgreSQL logical replication, the subscriber can fail with a unique-constraint error when it tries to apply a change that would create a duplicate key:

```
duplicate key value violates unique constraint
```

Understanding why this happens

The error occurs when an INSERT or UPDATE would create a value that already exists in a column (or set of columns) protected by a UNIQUE constraint (including a PRIMARY KEY). In logical replication, this most commonly happens because of local writes on the subscriber or because the table is subscribed from multiple publishers. These conflicts are resolved on the subscriber side.

- Local writes on the subscriber: a row with the same primary key/unique key is inserted on the subscriber before the apply worker processes the corresponding change from the publisher.
- Multi-origin / multi-master without conflict-free keys: two origins generate (or replicate) the same unique key.
- Initial data synchronization issues: the subscriber already contains data when the subscription is created with initial copy enabled, resulting in duplicate inserts during the initial table sync.

How to avoid this

Avoid local writes on subscribed tables (treat the subscriber as read-only for replicated relations). Avoid subscribing to the same table from multiple publishers unless you have explicit conflict handling and a conflict-free key design. Enabling server logs can help you identify and troubleshoot unique‑constraint conflicts more effectively.
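To make the first failure mode concrete, here is a minimal Python simulation of a subscriber-side apply worker hitting a duplicate key: a local write claims a key before the replicated change for the same key arrives. The class and method names are invented for illustration; this is not how PostgreSQL implements logical replication internally.

```python
# Simulate a subscriber table with a unique (primary-key) constraint.
# A local write claims a key before the apply worker replays the
# publisher's change for the same key, producing a conflict.

class UniqueViolation(Exception):
    """Raised when an applied row would duplicate an existing key."""

class SubscriberTable:
    def __init__(self):
        self.rows = {}  # primary key -> row payload

    def local_insert(self, key, payload):
        # A write made directly on the subscriber (the anti-pattern).
        self.rows[key] = payload

    def apply_replicated_insert(self, key, payload):
        # The apply worker replays a change from the publisher.
        if key in self.rows:
            raise UniqueViolation(
                f"duplicate key value violates unique constraint (key={key})"
            )
        self.rows[key] = payload

table = SubscriberTable()
table.local_insert(42, "written locally on the subscriber")

try:
    table.apply_replicated_insert(42, "replicated from the publisher")
except UniqueViolation as e:
    print(e)  # replication would halt here until the conflict is resolved
```

In a real deployment the "halt" corresponds to the apply worker erroring out and retrying, which is why the recovery options below matter.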
Refer to the official documentation to configure and access PostgreSQL logs.

How to handle conflicts (recovery options)

Option 1: Delete the conflicting row on the subscriber

Use the subscriber logs to identify the key (or row) causing the conflict, then delete the row on the subscriber with a DELETE statement. Resume apply and repeat if more conflicts appear.

Option 2: Use conflict logs and skip the conflicting transaction (PostgreSQL 17+)

Starting with PostgreSQL 17, logical replication provides detailed conflict logging on the subscriber, making it easier to understand why replication stopped and which transaction caused the failure. When a replicated INSERT would violate a non‑deferrable unique constraint on the subscriber (for example, when a row with the same key already exists), the apply worker detects this as an insert_exists conflict and stops replication. In this case, PostgreSQL logs the conflict along with the transaction’s finish LSN, which uniquely identifies the failing transaction:

```
ERROR: conflict detected on relation "public.t2": conflict=insert_exists
... in transaction 754, finished at 0/034F4090
```

You can then skip the failing transaction:

```sql
ALTER SUBSCRIPTION <subscription_name> SKIP (lsn = '0/034F4090');
```

Option 3: Rebuild (re-sync) the table

Rebuilding (re‑syncing) a table is the safest and most deterministic way to resolve logical replication conflicts caused by pre‑existing data differences or local writes on the subscriber. This approach is especially useful when a table repeatedly fails with unique‑constraint violations and it is unclear which rows are out of sync.

Step 1 (subscriber): Disable the subscription.

```sql
ALTER SUBSCRIPTION <subscription_name> DISABLE;
```

Step 2 (subscriber): Remove the local copy of the table so it can be re-copied.

```sql
TRUNCATE TABLE <conflicting_table>;
```

Step 3 (publisher): Ensure the publication will (re)send the table (one approach is to recreate the publication entry for that table).
```sql
ALTER PUBLICATION <pub_with_conflicting_table> DROP TABLE <conflicting_table>;
CREATE PUBLICATION <pub_with_conflicting_table_rebuild> FOR TABLE <conflicting_table>;
```

Step 4 (subscriber): Create a new subscription (or refresh the existing one) to re-copy the table.

```sql
CREATE SUBSCRIPTION <sub_rebuild> CONNECTION '<connection_string>' PUBLICATION <pub_with_conflicting_table_rebuild>;
```

Step 5 (subscriber): Re-enable the original subscription (if applicable).

```sql
ALTER SUBSCRIPTION <subscription_name> ENABLE;
```

Conclusion

In most cases, these conflicts occur due to local changes on the subscriber or differences in data that existed before logical replication was fully synchronized. It is recommended to avoid direct modifications on subscribed tables and to ensure that the replication setup is properly planned, especially when working with tables that have unique constraints.

Ignite 2022: Announcing new features in Azure Database for MySQL – Flexible Server
Today, we're pleased to announce a set of new and exciting features in Azure Database for MySQL - Flexible Server that further improve the service's availability, performance, security, management, and developer experiences!

Microsoft Azure innovation powers leading price-performance for MySQL database in the cloud
As part of our commitment to ensuring that Microsoft Azure is the best place to run MySQL workloads, Microsoft is excited to announce that Azure Database for MySQL - Flexible Server just achieved a new, faster performance benchmark.

Azure Database for MySQL - Flexible Server accelerated logs (Preview)
The new Accelerated Logs (preview) feature significantly enhances the performance of Azure Database for MySQL flexible servers, offering a dynamic solution designed for high-throughput needs that also reduces latency and optimizes cost efficiency. This innovative feature is available with the Business-Critical service tier at no additional cost.

Unlocking AI-Driven Data Access: Azure Database for MySQL Support via the Azure MCP Server
Step into a new era of data-driven intelligence with the fusion of the Azure MCP Server and Azure Database for MySQL, where your MySQL data is no longer just stored, but instantly conversational, intelligent, and action-ready. By harnessing the open-standard Model Context Protocol (MCP), your AI agents can now query, analyze, and automate in natural language, accessing tables, surfacing insights, and acting on your MySQL-driven business logic as easily as chatting with a colleague. It’s like giving your data a voice and your applications a brain, all within Azure’s trusted cloud platform.

We are excited to announce that we have added support for Azure Database for MySQL in the Azure MCP Server. The Azure MCP Server leverages the Model Context Protocol (MCP) to allow AI agents to seamlessly interact with various Azure services to perform context-aware operations such as querying databases and managing cloud resources. Building on this foundation, the Azure MCP Server now offers a set of tools that AI agents and apps can invoke to interact with Azure Database for MySQL, enabling them to list and query databases, retrieve schema details of tables, and access server configurations and parameters. These capabilities are delivered through the same standardized interface used for other Azure services, making it easier to adopt the MCP standard for leveraging AI to work with your business data and operations across the Azure ecosystem.

Before we delve into these new tools and explore how to get started with them, let’s take a moment to refresh our understanding of MCP and the Azure MCP Server: what they are, how they work, and why they matter.

MCP architecture and key components

The Model Context Protocol (MCP) is an emerging open protocol designed to integrate AI models with external data sources and services in a scalable, standardized, and secure manner.
MCP defines a client-server architecture with four key components: the MCP host, the MCP client, the MCP server, and the external data sources, services, and APIs that provide the data context required to enhance AI models. To explain briefly, an MCP host (an AI app or agent) includes an MCP client component that connects to one or more MCP servers. These servers are lightweight programs that securely interface with external data sources, services, and APIs and expose them to MCP clients in the form of standardized capabilities called tools, resources, and prompts. Learn more: MCP Documentation

What is Azure MCP Server?

Azure offers a multitude of cloud services that help developers build robust applications and AI solutions to address business needs. The Azure MCP Server aims to expose these powerful services for agentic usage, allowing AI systems to perform operations that are context-aware of your Azure resources and your business data within them, while ensuring adherence to the Model Context Protocol. It supports a wide range of Azure services and tools including Azure AI Search, Azure Cosmos DB, Azure Storage, Azure Monitor, the Azure CLI, and Developer CLI extensions. This means that you can empower AI agents, apps, and tools to:

- Explore your Azure resources, such as listing and retrieving details on your Azure subscriptions, resource groups, services, databases, and tables.
- Search, query, and analyze your data and logs.
- Execute Azure CLI and Azure Developer CLI commands directly, and more!

Learn more: Azure MCP Server GitHub Repository

Introducing new Azure MCP Server tools to interact with Azure Database for MySQL

The Azure MCP Server now includes the following tools that allow AI agents to interact with Azure Database for MySQL and your valuable business data residing in these servers, in accordance with the MCP standard:

| Tool | Description | Example prompts |
| --- | --- | --- |
| azmcp_mysql_server_list | List all MySQL servers in a subscription and resource group | "List MySQL servers in resource group 'prod-rg'." "Show MySQL servers in region 'eastus'." |
| azmcp_mysql_server_config_get | Retrieve the configuration of a MySQL server | "What is the backup retention period for server 'my-mysql-server'?" "Show storage allocation for server 'my-mysql-server'." |
| azmcp_mysql_server_param_get | Retrieve a specific parameter of a MySQL server | "Is slow_query_log enabled on server my-mysql-server?" "Get innodb_buffer_pool_size for server my-mysql-server." |
| azmcp_mysql_server_param_set | Set a specific parameter of a MySQL server to a specific value | "Set max_connections to 500 on server my-mysql-server." "Set wait_timeout to 300 on server my-mysql-server." |
| azmcp_mysql_table_list | List all tables in a MySQL database | "List tables starting with 'tmp_' in database 'appdb'." "How many tables are in database 'analytics'?" |
| azmcp_mysql_table_schema_get | Get the schema of a specific table in a MySQL database | "Show indexes for table 'transactions' in database 'billing'." "What is the primary key for table 'users' in database 'auth'?" |
| azmcp_mysql_database_query | Execute a SELECT query on a MySQL database. The query must start with SELECT and cannot contain any destructive SQL operations, for security reasons. | "How many orders were placed in the last 30 days in the salesdb.orders table?" "Show the number of new users signed up in the last week in appdb.users grouped by day." |

These interactions are secured using Microsoft Entra authentication, which enables seamless, identity-based access to Azure Database for MySQL, eliminating the need for password storage and enhancing overall security.

How are these new tools in the Azure MCP Server different from the standalone MCP server for Azure Database for MySQL?

We have integrated the key capabilities of the Azure Database for MySQL MCP server into the Azure MCP Server, making it easier to connect your agentic apps not only to Azure Database for MySQL but also to other Azure services through one unified and secure interface!
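To illustrate the read-only contract that azmcp_mysql_database_query enforces (queries must start with SELECT and contain no destructive operations), here is a rough Python sketch of such a guard. The function name and keyword list are assumptions for illustration only; the Azure MCP Server's actual server-side validation is more involved than this naive tokenization.

```python
# Hypothetical sketch of a read-only query guard, illustrating the
# azmcp_mysql_database_query contract: accept only queries that start
# with SELECT and contain no destructive keywords. Naive tokenization;
# the real implementation in the Azure MCP Server differs.

DESTRUCTIVE_KEYWORDS = {
    "insert", "update", "delete", "drop", "alter", "truncate",
    "create", "grant", "revoke", "replace",
}

def is_read_only(query: str) -> bool:
    tokens = query.strip().rstrip(";").lower().split()
    if not tokens or tokens[0] != "select":
        return False  # must start with SELECT
    return not any(tok in DESTRUCTIVE_KEYWORDS for tok in tokens)

print(is_read_only("SELECT COUNT(*) FROM salesdb.orders"))  # True
print(is_read_only("DROP TABLE appdb.users"))               # False
```

A production guard would parse the SQL properly rather than splitting on whitespace, but the sketch captures why prompts like the examples in the table above are answerable while destructive requests are rejected.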
How to get started

Installing and running the Azure MCP Server is quick and easy! Use GitHub Copilot in Visual Studio Code to gain meaningful insights from your business data in Azure Database for MySQL.

Prerequisites

- Install Visual Studio Code.
- Install the GitHub Copilot and GitHub Copilot Chat extensions.
- An Azure Database for MySQL server with Microsoft Entra authentication enabled.
- Ensure that the MCP server is installed on a system with network connectivity and credentials to connect to Azure Database for MySQL.

Installation and testing

Please use this guide for installation: Azure MCP Server Installation Guide. Try the following prompts with your Azure Database for MySQL: Azure Database for MySQL tools for Azure MCP Server

Try it out and share your feedback!

Start using the Azure MCP Server with the MySQL tools today and let our cloud services become your AI agent’s most powerful ally. We’re counting on your feedback: every comment, suggestion, or bug report helps us build better tools together. Stay tuned: more features and capabilities are on the horizon! Feel free to comment below or write to us with your feedback and queries at AskAzureDBforMySQL@service.microsoft.com.

Ignite 2025: Advancing Azure Database for MySQL with Powerful New Capabilities
At Ignite 2025, we’re introducing a wave of powerful new capabilities for Azure Database for MySQL, designed to help organizations modernize, scale, and innovate faster than ever before. From enhanced high availability and seamless serverless integrations to AI-powered insights and greater flexibility for developers, these advancements reflect our commitment to delivering a resilient, intelligent data platform. Join us as we unveil what’s next for MySQL on Azure, and discover how industry leaders are already building the future with confidence.

Enhanced Failover Performance with Dedicated SLB for High-Availability Servers

We’re excited to announce the General Availability of the Dedicated Standard Load Balancer (SLB) for HA-enabled servers in Azure Database for MySQL. This enhancement introduces a dedicated SLB to High Availability configurations for servers created with public access or Private Link. By managing the MySQL data traffic path, the SLB eliminates the need for DNS updates during failover, significantly reducing failover time. Previously, failover relied on DNS changes, which caused delays due to the DNS TTL (30 seconds) and client-side DNS caching.

What’s new with GA:

- The FQDN consistently resolves to the SLB IP address before and after failover.
- Load-balancing rules automatically route traffic to the active node.
- Removes the DNS cache dependency, delivering faster failovers.

Note: This feature is not supported for servers using private access with VNet integration. Learn more

Build serverless, event-driven apps at scale – now GA with Trigger Bindings for Azure Functions

We’re excited to announce the General Availability of Azure Database for MySQL trigger bindings for Azure Functions, completing the full suite of input, output, and trigger capabilities. This feature lets you build real-time, event-driven applications by automatically invoking Azure Functions when MySQL table rows are created or updated, eliminating custom polling and boilerplate code.
With native support across multiple languages, developers can now deliver responsive, serverless solutions that scale effortlessly and accelerate innovation. Learn more

Enable AI agents to query Azure Database for MySQL using Azure MCP Server

We’re excited to announce that the Azure MCP Server now supports Azure Database for MySQL, enabling AI agents to query and manage MySQL data using natural language through the open Model Context Protocol (MCP). Instead of writing SQL, you can simply ask questions like “Show the number of new users signed up in the last week in appdb.users grouped by day.”, all secured with Microsoft Entra authentication for enterprise-grade security. This integration delivers a unified, secure interface for building intelligent, context-aware workflows across Azure services, accelerating insights and automation. Learn more

Greater networking flexibility with Custom Port Support

Custom port support for Azure Database for MySQL is now generally available, giving organizations the flexibility to configure a custom port (between 25001 and 26000) during new server creation. This enhancement streamlines integration with legacy applications, supports strict network security policies, and helps avoid port conflicts in complex environments. Supported across all network configurations, including public access, private access, and Private Link, custom port provisioning ensures every new MySQL server can be tailored to your needs. The managed experience remains seamless, with all administrative capabilities and integrations working as before. Learn more

Streamline migrations and compatibility with Lower Case Table Names support

Azure Database for MySQL now supports configuring the lower_case_table_names server parameter during initial server creation for MySQL 8.0 and above, ensuring seamless alignment with your organization’s naming conventions. This setting is automatically inherited for restores and replicas, and cannot be modified afterwards.
Key Benefits:

- Simplifies migrations by aligning naming conventions and reducing complexity.
- Enhances compatibility with legacy systems that depend on case-insensitive table names.
- Minimizes support dependency, enabling faster and smoother onboarding.

Learn more

Unlock New Capabilities with Private Preview Features at Ignite 2025

We’re excited to announce that you can now explore two powerful capabilities in early access: Reader Endpoint for seamless read scaling and Server Rename for greater flexibility in server management.

Scale reads effortlessly with Reader Endpoint (Private Preview)

We’re excited to announce that the Reader Endpoint feature for Azure Database for MySQL is now ready for private preview. Reader Endpoint provides a dedicated read-only endpoint for read replicas, enabling automatic connection-based load balancing of read-only traffic across multiple replicas. This simplifies application architecture by offering a single endpoint for read operations, improving scalability and fault tolerance. Azure Database for MySQL supports up to 10 read replicas per primary server. By routing read-only traffic through the reader endpoint, application teams can efficiently manage connections and optimize performance without handling individual replica endpoints. Reader endpoints continuously monitor the health of replicas and automatically exclude any replica that exceeds the configured replication lag threshold or becomes unavailable. To enroll in the preview, please submit your details using this form.

Limitations during private preview:

- Only performance-based routing is supported in this preview.
- Certain settings, such as the routing method and the option to attach new replicas to the reader endpoint, can only be configured at creation time.
- Only one reader endpoint can be created per replica group.
- Including the primary server as a fallback for read traffic when no replicas are available is not supported in this preview.
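Conceptually, the reader endpoint behaves like a connection-level load balancer that skips unhealthy or lagging replicas. The Python sketch below illustrates that routing idea; the names, the 30-second threshold default, and the round-robin policy are assumptions for illustration, not the service's actual algorithm.

```python
# Illustrative sketch of reader-endpoint-style routing: exclude replicas
# that are down or lagging past a threshold, then spread connections
# across the remaining pool. Not Azure's actual implementation.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Replica:
    name: str
    lag_seconds: float
    healthy: bool = True

def eligible(replicas, lag_threshold):
    # Filter out replicas that are unavailable or exceed the lag threshold,
    # mirroring how the reader endpoint trims its target pool.
    return [r for r in replicas if r.healthy and r.lag_seconds <= lag_threshold]

def route_connections(replicas, n_connections, lag_threshold=30.0):
    # Spread incoming read connections round-robin across eligible replicas.
    pool = eligible(replicas, lag_threshold)
    if not pool:
        raise RuntimeError("no eligible replicas to serve read traffic")
    rr = cycle(pool)
    return [next(rr).name for _ in range(n_connections)]

replicas = [
    Replica("replica-1", lag_seconds=2.0),
    Replica("replica-2", lag_seconds=120.0),               # lagging: excluded
    Replica("replica-3", lag_seconds=1.0),
    Replica("replica-4", lag_seconds=0.5, healthy=False),  # down: excluded
]
print(route_connections(replicas, 4))
# prints ['replica-1', 'replica-3', 'replica-1', 'replica-3']
```

The payoff for application teams is the same as described above: the application opens connections against one endpoint, and lagging or failed replicas drop out of rotation without any client-side changes.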
Get flexibility in server management with Server Rename (Private Preview)

We’re excited to announce the Private Preview of Server Rename for Azure Database for MySQL. This feature lets you update the name of an existing MySQL server without recreating it, migrating data, or disrupting applications, making it easier to adopt clear, consistent naming. It provides a near zero-downtime path to a new server hostname. To enroll in the preview, please submit your details using this form.

Limitations during private preview:

- Primary server with read replicas: Renaming a primary server that has read replicas keeps replication healthy. However, the SHOW SLAVE STATUS output on the replicas will still display the old primary server's name. This is a display inconsistency only and does not affect replication.
- Renaming is currently unsupported for servers using customer-managed key (CMK) encryption or Microsoft Entra ID authentication.

Real-World Success: Azure Database for MySQL Powers Resilient Applications at Scale

Factorial

Factorial, a leading HR software provider, uses Azure Database for MySQL alongside Azure Kubernetes Service to deliver secure, scalable HR solutions for thousands of businesses worldwide. By leveraging Azure Database for MySQL’s reliability and seamless integration with cloud-native technologies, Factorial ensures high availability and rapid innovation for its customers. Learn more

YES (Youth Employment Service)

South Africa’s largest youth employment initiative, YES, operates at national scale by leveraging Azure Database for MySQL to deliver a resilient, centralized platform for real-time job matching, learning management, and career services, connecting thousands of young people and employers, and helping nearly 45 percent of participants secure permanent roles within six months.
Learn more

Nasdaq

At Ignite 2025, Nasdaq will showcase how it uses Azure Database for MySQL, alongside Azure Database for PostgreSQL and other Azure products, to power a secure, resilient architecture that safeguards confidential data while unlocking new agentic AI capabilities. Learn more

These examples demonstrate that Azure Database for MySQL is trusted by industry leaders to build resilient, scalable applications, empowering organizations to innovate and grow with confidence.

We Value Your Feedback

Azure Database for MySQL is built for scale, resilience, and performance, ready to support your most demanding workloads. With every update, we’re focused on simplifying development, migration, and management so you can build with confidence. Explore the latest features and enhancements to see how Azure Database for MySQL meets your data needs today and in the future. We welcome your feedback and invite you to share your experiences or suggestions at AskAzureDBforMySQL@service.microsoft.com.

Stay up to date by visiting What's new in Azure Database for MySQL, and follow us on YouTube | LinkedIn | X for ongoing updates. Thank you for choosing Azure Database for MySQL!

Build Smarter with Azure HorizonDB
By: Maxim Lukiyanov, PhD, Principal PM Manager; Abe Omorogbe, Senior Product Manager; Shreya R. Aithal, Product Manager II; Swarathmika Kakivaya, Product Manager II

Today, at Microsoft Ignite, we are announcing a new PostgreSQL database service: Azure HorizonDB. You can read the announcement here, and in this blog you can learn more about HorizonDB’s AI features and development tools. Azure HorizonDB is designed for the full spectrum of modern database needs: from quickly building new AI applications, to scaling enterprise workloads to unprecedented levels of performance and availability, to managing your databases efficiently and securely.

To help with building new AI applications, we are introducing three features: DiskANN Advanced Filtering, built-in AI model management, and integration with Microsoft Foundry. To help with database management, we are introducing a set of new capabilities in the PostgreSQL extension for Visual Studio Code, as well as announcing General Availability of the extension. Let’s dive into the AI features first.

DiskANN Advanced Filtering

We are excited to announce a new enhancement in Microsoft’s state-of-the-art vector indexing algorithm DiskANN: DiskANN Advanced Filtering. Advanced Filtering addresses a common problem in vector search: combining vector search with filtering. In real-world applications, queries often include constraints like price ranges, ratings, or categories; traditional vector search approaches, such as pgvector’s HNSW, rely on multiple-step retrieval and post-filtering, which can make search extremely slow. DiskANN Advanced Filtering solves this by combining filter and search into one operation: while the graph of vectors is traversed during the vector search, each vector is also checked for a filter predicate match, ensuring that only the correct vectors are retrieved.
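To make the idea concrete, here is a toy Python sketch of single-pass filtered nearest-neighbor search: the predicate is checked as each candidate vector is visited, so non-matching rows are never retrieved and then discarded afterwards. This is a brute-force illustration of the concept only, not DiskANN's graph traversal, and the catalog data is invented.

```python
# Toy single-pass filtered k-NN: the filter predicate is evaluated
# while scanning candidates, so there is no post-filtering step.
# Brute-force illustration of the concept, not DiskANN's algorithm.
import math

def filtered_knn(vectors, query, predicate, k):
    """`vectors` is a list of (attributes, embedding) pairs; return the
    attributes of the k nearest embeddings whose attributes match."""
    scored = []
    for attrs, emb in vectors:
        if not predicate(attrs):      # filter applied during the scan
            continue
        dist = math.dist(query, emb)  # Euclidean distance
        scored.append((dist, attrs))
    scored.sort(key=lambda t: t[0])
    return [attrs for _, attrs in scored[:k]]

# Invented catalog: items with a price attribute and a 2-D embedding.
catalog = [
    ({"id": 1, "price": 10}, (0.1, 0.1)),
    ({"id": 2, "price": 95}, (0.1, 0.2)),   # very close, but filtered out
    ({"id": 3, "price": 30}, (0.5, 0.4)),
    ({"id": 4, "price": 20}, (0.9, 0.9)),
]
hits = filtered_knn(catalog, (0.1, 0.15), lambda a: a["price"] <= 50, k=2)
print([h["id"] for h in hits])  # prints [1, 3]
```

A post-filtering approach would first fetch the nearest vectors (including item 2) and then discard the ones failing the price filter, potentially returning too few results; checking the predicate inside the search avoids that, which is the essence of the Advanced Filtering design.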
Under the hood, it works in a three-step process: first creating a bitmap of relevant rows using indexes on attributes such as price or rating, then performing a filter-aware graph traversal against the bitmap, and finally validating and ordering the results for accuracy. This integrated approach delivers dramatically faster and more efficient filtered vector searches. Initial benchmarks show that enabling Advanced Filtering on DiskANN reduces query latency by up to 3x, depending on filter selectivity.

AI Model Management

Another exciting feature of HorizonDB is AI Model Management. This feature automates Microsoft Foundry model provisioning during database deployment and instantly activates database semantic operators. This eliminates tens of setup and configuration steps and simplifies the development of new AI apps and agents. AI Model Management elevates the experience of using semantic operators within PostgreSQL. When activated, it provisions key models for embedding, semantic ranking, and generation via Foundry; installs and configures the azure_ai extension to enable the operators; establishes secure connections; and integrates model management, monitoring, and cost management within HorizonDB. What would otherwise require significant manual effort and context-switching between Foundry and PostgreSQL for configuration, management, and monitoring is now possible with just a few clicks, all without leaving the PostgreSQL environment. You can also continue to bring your own Foundry models, with a simplified and enhanced process for registering your custom model endpoints in the azure_ai extension.

Microsoft Foundry Integration

Microsoft Foundry offers a comprehensive technology stack for building AI apps and agents. But building modern agents capable of reasoning, acting, and collaborating is impossible without a connection to data. To facilitate that connection, we are excited to announce a new PostgreSQL connector in Microsoft Foundry.
The connector is built on a new standard in data connectivity, the Model Context Protocol (MCP). It enables Foundry agents to interact with HorizonDB securely and intelligently, using natural language instead of SQL, and leverages Microsoft Entra ID to secure the connection. In addition to HorizonDB, the connector also supports Azure Database for PostgreSQL (ADP).

This integration allows Foundry agents to perform tasks like:

Exploring database schemas
Retrieving records and insights
Performing analytical queries
Executing vector similarity searches for semantic search use cases

All through natural language, without compromising enterprise security or compliance. To get started with the Foundry integration, follow these setup steps to deploy your own HorizonDB (requires participation in the Private Preview) or ADP and connect it to Foundry in just a few steps.

PostgreSQL extension for VS Code is Generally Available

We're excited to announce that the PostgreSQL extension for Visual Studio Code is now Generally Available. The extension has garnered significant popularity within the PostgreSQL community since its preview in May 2025, reaching more than 200K installs. It is the easiest way to connect to a PostgreSQL database from your favorite editor, manage your databases, and take advantage of built-in AI capabilities without ever leaving VS Code. The extension works with any PostgreSQL server, whether on-premises or in the cloud, and also supports unique features of Azure HorizonDB and Azure Database for PostgreSQL (ADP).

One of the key new capabilities is Metrics Intelligence, which uses Copilot and real-time telemetry from HorizonDB or ADP to help you diagnose and fix performance issues in seconds. Instead of digging through logs and query plans, you can open the Performance Dashboard, see a CPU spike, and ask Copilot to investigate. The extension sends a rich prompt that tells Copilot to analyze live metrics, identify the root cause, and propose an actionable fix.
For example, Copilot might find a full table scan on a large table, recommend a composite index on the filter columns, create that index, and confirm that the query plan now uses it. The result is dramatic: you can investigate and resolve the CPU spike in seconds, with no manual scripting or guesswork, and no prior PostgreSQL expertise required.

The extension also makes it easier to work with graph data. HorizonDB and ADP support the open-source graph extension Apache AGE, which turns these services into fully managed graph databases. You can run graph queries against HorizonDB and immediately visualize the results as an interactive graph inside VS Code. This helps you understand relationships in your data faster, whether you're exploring customer journeys, network topologies, or knowledge graphs, all without switching tools.

In Conclusion

Azure HorizonDB brings together everything teams need to build, run, and manage modern, AI-powered applications on PostgreSQL. With DiskANN Advanced Filtering, you can deliver low-latency, filtered vector search at scale. With built-in AI Model Management and Microsoft Foundry integration, you can provision models, wire up semantic operators, and connect agents to your data with far fewer steps and far less complexity. And with the PostgreSQL extension for Visual Studio Code, you get an intuitive, AI-assisted experience for performance tuning and graph visualization, right inside the tools you already use.

HorizonDB is now available in private preview. If you're interested in building AI apps and agents on a fully managed, PostgreSQL-compatible service with built-in AI and rich developer tooling, sign up for the Private Preview: https://aka.ms/PreviewHorizonDB

Building brighter futures: How YES tackles youth unemployment with Azure Database for MySQL
YES leverages Azure Database for MySQL to power South Africa’s largest youth employment initiative, delivering scalable, reliable systems that connect thousands of young people to jobs and learning opportunities.200Views0likes0Comments