Building a Restaurant Management System with Azure Database for MySQL
In this hands-on tutorial, we'll build a Restaurant Management System using Azure Database for MySQL. This project is perfect for beginners looking to understand cloud databases while creating something practical.

Tutorial: Building AI Agents That Talk to Your Azure Database for MySQL

What if you could ask your database a question in plain English and get the answer instantly, without writing a single line of SQL? In this tutorial, you'll build a Python-based AI agent that connects to an Azure Database for MySQL server and uses OpenAI's function calling to translate natural language questions into SQL queries, execute them, and return human-readable answers. The agent can explore your schema, answer business questions, and even self-correct when it writes invalid SQL.

What you'll build:

- An Azure Database for MySQL server with sample data
- A Python AI agent with three tools: list_tables, describe_table, and run_sql_query
- An interactive chat interface where you ask questions and the agent auto-generates and runs SQL

In the context of AI agents, tools are functions the agent can call to interact with external systems, such as querying a database, fetching a file, or calling an API. Here, our agent has three tools that let it explore and query your MySQL database.

Prerequisites

Before you begin, make sure you have:

- An Azure account — Sign up for free (includes 12 months of free MySQL hosting)
- An OpenAI API key — Get one here (you'll need a few dollars of credit)
- Python 3.10+ — Download here (check "Add to PATH" during install)
- A code editor — VS Code recommended

Optional: You can download the complete project from this GitHub repository, or follow the step-by-step instructions below to build it from scratch.
Step 1 — Create the Azure Database for MySQL server

1. Go to the Azure Portal.
2. Search for "Azure Database for MySQL server" and click + Create.
3. Configure the following settings:

| Setting | Value |
|---|---|
| Resource group | rg-mysql-ai-agent (create new) |
| Server name | mysql-ai-agent (or any unique name) |
| Region | Your nearest region |
| MySQL version | 8.4 |
| Workload type | Dev/Test (Burstable B1ms — free for 12 months) |
| Admin username | mysqladmin |
| Password | A strong password — save it! |

4. ✅ Check "Add firewall rule for current IP address".

⚠️ Important: If you skip the firewall settings, you won't be able to connect from Cloud Shell or your local machine.

5. Click Review + create → Create and wait 3–5 minutes.

Once deployment finishes, navigate to your server and note the hostname from the Connection details:

mysql-ai-agent.mysql.database.azure.com
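If you prefer scripting over clicking through the portal, the same server and firewall rule can also be created with the Azure CLI. The sketch below mirrors the settings in the table above; treat it as a starting point and verify the flag names and supported --version values against the current az mysql flexible-server documentation before running it:

```bash
# Sketch only: values mirror the portal settings above; verify flags and
# supported --version values in the az CLI docs before running.
az group create --name rg-mysql-ai-agent --location eastus   # pick your nearest region

az mysql flexible-server create \
  --resource-group rg-mysql-ai-agent \
  --name mysql-ai-agent \
  --tier Burstable \
  --sku-name Standard_B1ms \
  --version 8.4 \
  --admin-user mysqladmin \
  --admin-password '<your-strong-password>' \
  --public-access <your-public-ip>   # adds a firewall rule for this IP
```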
Step 2 — Load Sample Data

Open Azure Cloud Shell by clicking the >_ icon in the portal's top toolbar. Select Bash if prompted.

Connect to your MySQL server. You can copy the exact connection command from the "Connect from browser or locally" section on your server's overview page in the Azure portal:

```bash
mysql -h mysql-ai-agent.mysql.database.azure.com -u mysqladmin -p
```

Enter your password when prompted (the cursor won't move — just type and press Enter).

Now paste the following SQL to create a sample sales database:

```sql
CREATE DATABASE demo_sales;
USE demo_sales;

CREATE TABLE customers (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    city VARCHAR(50),
    signup_date DATE
);

CREATE TABLE orders (
    id INT AUTO_INCREMENT PRIMARY KEY,
    customer_id INT,
    product VARCHAR(100),
    amount DECIMAL(10,2),
    order_date DATE,
    FOREIGN KEY (customer_id) REFERENCES customers(id)
);

INSERT INTO customers (name, email, city, signup_date) VALUES
('Sara Ahmed', 'sara@example.com', 'Cairo', '2024-06-15'),
('John Smith', 'john@example.com', 'London', '2024-08-22'),
('Priya Patel', 'priya@example.com', 'Mumbai', '2025-01-10');

INSERT INTO orders (customer_id, product, amount, order_date) VALUES
(1, 'Azure Certification Voucher', 150.00, '2025-03-01'),
(2, 'MySQL Workbench Pro License', 99.00, '2025-03-10'),
(1, 'Power BI Dashboard Template', 45.00, '2025-04-05'),
(3, 'Data Analysis Course', 200.00, '2025-05-20');
```

Verify the data:

```sql
SELECT * FROM customers;
SELECT * FROM orders;
```

Type exit to leave MySQL.

Step 3 — Set Up the Python Project

Open a terminal on your local machine and create the project:

```bash
mkdir mysql-ai-agent
cd mysql-ai-agent
python -m venv venv
```

Activate the virtual environment:

Windows (PowerShell):

```powershell
venv\Scripts\Activate.ps1
```

macOS/Linux:

```bash
source venv/bin/activate
```

Install the required packages:

```bash
pip install openai mysql-connector-python python-dotenv
```

Step 4 — Configure Environment Variables

Create a file named .env in your project folder:

```
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxx
MYSQL_HOST=mysql-ai-agent.mysql.database.azure.com
MYSQL_USER=mysqladmin
MYSQL_PASSWORD=YourPasswordHere
MYSQL_DATABASE=demo_sales
```

🔒 Security: Never commit this file to Git. Add .env to your .gitignore.

Step 5 — Build the Agent

Open VS Code, create a new file called mysql_agent.py in your mysql-ai-agent folder, and paste the following code. Let's walk through each section.

5.1 — Imports and Database Connection

```python
import os
import json
import mysql.connector
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

def get_db_connection():
    return mysql.connector.connect(
        host=os.getenv("MYSQL_HOST"),
        user=os.getenv("MYSQL_USER"),
        password=os.getenv("MYSQL_PASSWORD"),
        database=os.getenv("MYSQL_DATABASE"),
        ssl_disabled=False
    )
```

This loads your secrets from .env and creates a reusable MySQL connection function with SSL encryption.
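Before moving on, it can be worth confirming that the values in .env and your firewall rule actually work. The optional snippet below is a throwaway script (the filename is just a suggestion, not part of the tutorial's project) that connects with the same settings the agent will use:

```python
# check_connection.py - optional sanity check, not part of the final agent.
# Confirms the .env values and the firewall rule before building the agent.
import os

import mysql.connector
from dotenv import load_dotenv

load_dotenv()

conn = mysql.connector.connect(
    host=os.getenv("MYSQL_HOST"),
    user=os.getenv("MYSQL_USER"),
    password=os.getenv("MYSQL_PASSWORD"),
    database=os.getenv("MYSQL_DATABASE"),
    ssl_disabled=False,
)
cursor = conn.cursor()
cursor.execute("SELECT DATABASE(), VERSION()")
print(cursor.fetchone())  # expect something like ('demo_sales', '8.4.x')
cursor.close()
conn.close()
```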
5.2 — Define the Three Tools

These are the functions the AI agent can call:

```python
def list_tables():
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute("SHOW TABLES")
    tables = [row[0] for row in cursor.fetchall()]
    cursor.close()
    conn.close()
    return json.dumps({"tables": tables})

def describe_table(table_name):
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute(f"DESCRIBE `{table_name}`")
    columns = []
    for row in cursor.fetchall():
        columns.append({
            "field": row[0],
            "type": row[1],
            "null": row[2],
            "key": row[3]
        })
    cursor.close()
    conn.close()
    return json.dumps({"table": table_name, "columns": columns})

def run_sql_query(query):
    if not query.strip().upper().startswith("SELECT"):
        return json.dumps({"error": "Only SELECT queries are allowed."})
    conn = get_db_connection()
    cursor = conn.cursor()
    try:
        cursor.execute(query)
        columns = [desc[0] for desc in cursor.description]
        rows = cursor.fetchall()
        results = []
        for row in rows:
            results.append(dict(zip(columns, row)))
        return json.dumps({"columns": columns, "rows": results}, default=str)
    except mysql.connector.Error as e:
        return json.dumps({"error": str(e)})
    finally:
        cursor.close()
        conn.close()
```

A few things to note:

- run_sql_query only allows SELECT statements — this is a safety guardrail that prevents the AI from modifying data.
- The try/except block is critical — if the AI generates invalid SQL (e.g., a bad GROUP BY), the error message is returned to OpenAI, and the model automatically corrects its query and retries. Without this, the script would crash.

5.3 — Register Tools with OpenAI

This tells OpenAI what tools the agent has access to:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "list_tables",
            "description": "List all tables in the connected MySQL database.",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    },
    {
        "type": "function",
        "function": {
            "name": "describe_table",
            "description": "Get the schema (columns and types) of a specific table.",
            "parameters": {
                "type": "object",
                "properties": {
                    "table_name": {"type": "string", "description": "Name of the table to describe"}
                },
                "required": ["table_name"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "run_sql_query",
            "description": "Execute a read-only SQL SELECT query and return results.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "The SQL SELECT query to execute"}
                },
                "required": ["query"]
            }
        }
    }
]

def call_tool(name, args):
    if name == "list_tables":
        return list_tables()
    elif name == "describe_table":
        return describe_table(args["table_name"])
    elif name == "run_sql_query":
        return run_sql_query(args["query"])
    else:
        return json.dumps({"error": f"Unknown tool: {name}"})
```
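Because the three tools and call_tool are plain Python functions, you can exercise them without involving OpenAI at all. The optional check below (our suggestion, not part of the original walkthrough) can be run from a Python shell in the project folder, assuming sections 5.1–5.3 are already saved in mysql_agent.py:

```python
# Optional: call the tools directly, without the agent loop.
# Assumes sections 5.1-5.3 are saved in mysql_agent.py.
from mysql_agent import list_tables, describe_table, run_sql_query

print(list_tables())            # expect something like {"tables": ["customers", "orders"]}
print(describe_table("orders"))
print(run_sql_query("SELECT COUNT(*) AS order_count FROM orders"))
print(run_sql_query("DELETE FROM orders"))  # rejected by the SELECT-only guardrail
```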
5.4 — The Agent Loop

This is the core logic. It sends the user's message to OpenAI, processes any tool calls, and loops until the model produces a final text response:

```python
def chat(user_message, conversation_history):
    client = OpenAI()
    conversation_history.append({"role": "user", "content": user_message})
    print(f"\n{'='*60}")
    print(f"🧑 You: {user_message}")
    print(f"{'='*60}")

    while True:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=conversation_history,
            tools=tools,
            tool_choice="auto"
        )
        assistant_message = response.choices[0].message

        if assistant_message.tool_calls:
            conversation_history.append(assistant_message)
            for tool_call in assistant_message.tool_calls:
                fn_name = tool_call.function.name
                fn_args = json.loads(tool_call.function.arguments)
                print(f"  🔧 Calling tool: {fn_name}({json.dumps(fn_args)})")
                result = call_tool(fn_name, fn_args)
                print(f"  ✅ Tool returned: {result[:200]}...")
                conversation_history.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": result
                })
        else:
            final_answer = assistant_message.content
            conversation_history.append({"role": "assistant", "content": final_answer})
            print(f"\n🤖 Agent:\n{final_answer}")
            return conversation_history
```

The while True loop is what makes self-correction possible. When a tool returns an error, the model sees it in the conversation and generates a corrected tool call in the next iteration.

5.5 — Main Entry Point

```python
if __name__ == "__main__":
    print("\n" + "=" * 60)
    print("  🤖 MySQL AI Agent")
    print("  Powered by OpenAI + Azure Database for MySQL")
    print("  Type 'quit' to exit")
    print("=" * 60)

    system_message = {
        "role": "system",
        "content": (
            "You are a helpful data analyst agent connected to an Azure Database for MySQL. "
            "You have 3 tools: list_tables, describe_table, and run_sql_query. "
            "ALWAYS start by listing tables and describing their schema before writing queries. "
            "Only generate SELECT statements. Never write INSERT, UPDATE, DELETE, or DROP. "
            "Present query results in clean, readable tables. "
            "If the user asks a question, figure out the right SQL to answer it."
        )
    }
    conversation_history = [system_message]

    while True:
        user_input = input("\n🧑 You: ").strip()
        if user_input.lower() in ("quit", "exit", "q"):
            print("\n👋 Goodbye!")
            break
        if not user_input:
            continue
        conversation_history = chat(user_input, conversation_history)
```

Your final project folder should contain the venv folder, the .env file, and mysql_agent.py.

Step 6 — Run and Test the Agent

```bash
python mysql_agent.py
```

Test prompt: Which product generated the most revenue and who bought it?

How Self-Correction Works

One of the most powerful aspects of this agent is its ability to recover from SQL errors automatically. Azure Database for MySQL has sql_mode=only_full_group_by enabled by default, which rejects queries where non-aggregated columns aren't in the GROUP BY clause. When the AI generates an invalid query, here's what happens:

1. The run_sql_query function catches the MySQL error
2. It returns the error message as the tool result
3. OpenAI sees the error in the conversation context
4. The model generates a corrected query automatically
5. The agent retries — and succeeds

Without the try/except error handling, the entire script would crash. This is a key design pattern for production AI agents.
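To make the failure-and-retry cycle concrete, here is an illustrative pair of queries (our example, not output captured from the agent): the first violates only_full_group_by and would come back to the model as an error, and the second is the kind of corrected query it typically retries with:

```sql
-- Rejected under only_full_group_by: c.name is neither aggregated nor in GROUP BY,
-- so MySQL returns error 1055, which run_sql_query passes back to the model.
SELECT c.name, c.city, SUM(o.amount) AS revenue
FROM customers c JOIN orders o ON o.customer_id = c.id
GROUP BY c.city;

-- A corrected retry: every non-aggregated column now appears in GROUP BY.
SELECT c.name, c.city, SUM(o.amount) AS revenue
FROM customers c JOIN orders o ON o.customer_id = c.id
GROUP BY c.name, c.city;
```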
Security Best Practices

When building AI agents that interact with databases, security is critical:

- Read-only enforcement — The run_sql_query function rejects anything that isn't a SELECT statement.
- SSL encryption — All connections use ssl_disabled=False, ensuring data in transit is encrypted.
- Environment variables — Credentials are stored in .env, never hardcoded.
- Principle of least privilege — For production, create a dedicated MySQL user with SELECT-only permissions:

```sql
CREATE USER 'ai_agent'@'%' IDENTIFIED BY 'AgentPass123!';
GRANT SELECT ON demo_sales.* TO 'ai_agent'@'%';
FLUSH PRIVILEGES;
```

- Network isolation — For production workloads, consider using Azure Private Link instead of public access.

Conclusion

In this tutorial, you built a Python AI agent that connects to Azure Database for MySQL and answers natural language questions by auto-generating SQL - complete with self-correction and security guardrails. Clone the GitHub repo, spin up your own server, and start experimenting!

If you'd like to connect to Azure Database for MySQL using the Model Context Protocol (MCP), see Unlocking AI-Driven Data Access: Azure Database for MySQL Support via the Azure MCP Server.

If you have any feedback or questions about the information provided above, please leave a comment below. Thank you!

Export backups on-demand in Azure Database for MySQL - Flexible Server
We're pleased to announce the public preview of the On-demand backup and export feature for Azure Database for MySQL - Flexible Server, which you can use to export a physical backup of your flexible server to an Azure storage account (Azure Blob Storage) whenever the need arises.

February 2026 Recap: Azure Database for MySQL
We're excited to share a summary of the Azure Database for MySQL updates from the last couple of months.

Extended Support Timeline Update

Based on customer feedback requesting additional time to complete major version upgrades, we have extended the grace period before extended support billing begins for Azure Database for MySQL:

- MySQL 5.7: Extended support billing start date moved from April 1, 2026 to August 1, 2026.
- MySQL 8.0: Extended support billing start date moved from June 1, 2026 to January 1, 2027.

This update provides customers additional time to plan, validate, and complete upgrades while maintaining service continuity and security. We continue to recommend upgrading to a supported MySQL version as early as possible to avoid extended support charges and benefit from the latest improvements. Learn more about performing a major version upgrade in Azure Database for MySQL.

When upgrading using a read replica, you can optionally use the Rename Server feature to promote the replica and avoid application connection-string updates after the upgrade completes. Rename Server is currently in Private Preview and is expected to enter Public Preview around the April 2026 timeframe.

Private Preview - Fabric Mirroring for Azure Database for MySQL

This capability enables real-time replication of MySQL data into Microsoft Fabric with a zero-ETL experience, allowing data to land directly in OneLake in analytics-ready formats. Customers can seamlessly analyse mirrored data using Microsoft Fabric experiences, while isolating analytical workloads from their operational MySQL databases.

Stay Connected

We welcome your feedback and invite you to share your experiences or suggestions at AskAzureDBforMySQL@service.microsoft.com.

Stay up to date by visiting What's new in Azure Database for MySQL, and follow us on YouTube | LinkedIn | X for ongoing updates. Thank you for choosing Azure Database for MySQL!

Guide to Upgrade Azure Database for MySQL from 8.0 to 8.4
A practical, end-to-end guide for safely upgrading Azure Database for MySQL to 8.4 LTS, covering prerequisites, breaking changes, upgrade paths, downtime considerations, and rollback strategies based on real-world experience.
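As a small illustration of the kind of prerequisite check such an upgrade involves (our example, not taken from the linked guide): MySQL 8.4 disables the legacy mysql_native_password authentication plugin by default, so it is worth finding accounts that still use it before upgrading:

```sql
-- Illustrative pre-upgrade check (not from the linked guide):
-- list accounts still using the legacy mysql_native_password plugin,
-- which MySQL 8.4 disables by default.
SELECT user, host, plugin
FROM mysql.user
WHERE plugin = 'mysql_native_password';
```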
Supporting ChatGPT on PostgreSQL in Azure

Affan Dar, Vice President of Engineering, PostgreSQL at Microsoft
Adam Prout, Partner Architect, PostgreSQL at Microsoft
Panagiotis Antonopoulos, Distinguished Engineer, PostgreSQL at Microsoft

The OpenAI engineering team recently published a blog post describing how they scaled their databases by 10x over the past year to support 800 million monthly users. To do so, OpenAI relied on Azure Database for PostgreSQL to support important services like ChatGPT and the Developer API.

Collaborating with a customer experiencing rapid user growth has been a remarkable journey. One key observation is that PostgreSQL works out of the box even at very large scale. As many in the public domain have noted, ChatGPT grew to 800M+ users before OpenAI started moving new and shardable workloads to Azure Cosmos DB. Nevertheless, supporting the growth of one of the largest Postgres deployments was a great learning experience for both of our teams. Our OpenAI friends did an incredible job of reacting fast and adjusting their systems to handle the growth. Similarly, the Postgres team at Azure worked to further tune the service to support the increasing OpenAI workload. The changes we made were not limited to OpenAI, so all our Azure Database for PostgreSQL customers with demanding workloads have benefited. A few of the enhancements, and the work that led to them, are described below.

Changing the network congestion protocol to reduce replication lag

Azure Database for PostgreSQL used the default CUBIC congestion control algorithm for replication traffic to replicas both within and outside the region. Leading up to one of the OpenAI launch events, we observed that several geo-distributed read replicas occasionally experienced replication lag. Replication from the primary server to the read replicas would typically operate without issues; however, at times, the replicas would unexpectedly begin falling behind the primary for reasons that were not immediately clear. This lag would not recover on its own and would grow to the point where, eventually, automation would restart the read replica. Once restarted, the read replica would once again catch up, only to repeat this cycle again within a day or less.

After an extensive debugging effort, we traced the root cause to how the TCP congestion control algorithm handled a higher rate of packet drops. These drops were largely a result of high point-to-point traffic between the primary server and its replicas, compounded by the existing TCP window settings. Packet drops across regions are not unexpected; however, the default congestion control algorithm (CUBIC) treats packet loss as a sign of congestion and backs off aggressively. In comparison, the Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control algorithm is less sensitive to packet drops. Switching to BBR, adding SKU-specific TCP window settings, and switching to a fair queuing network discipline (which can control the pacing of outgoing packets at the hardware level) resolved this issue. We'll also note that one of our seasoned PostgreSQL committers provided invaluable insights during this process, helping us pinpoint the issue more effectively.
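For readers who want to see what this kind of change looks like in practice, the snippet below shows the generic Linux knobs for switching to BBR with fair queuing. It is illustrative only; it is not the actual configuration used inside Azure Database for PostgreSQL:

```bash
# Illustrative only - not the Azure-internal configuration.
# On a recent Linux kernel (4.9+), switch congestion control to BBR and
# enable fair queuing so outgoing packets are paced by the qdisc.
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

# Persist across reboots
echo "net.core.default_qdisc = fq" | sudo tee -a /etc/sysctl.d/99-bbr.conf
echo "net.ipv4.tcp_congestion_control = bbr" | sudo tee -a /etc/sysctl.d/99-bbr.conf

# Verify the active algorithm
sysctl net.ipv4.tcp_congestion_control
```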
Scaling out with Read replicas

PostgreSQL primaries, if configured properly, work amazingly well in supporting a large number of read replicas. In fact, as noted in the OpenAI engineering blog, a single primary has been able to power around 50+ replicas across multiple regions. However, going beyond this increases the chance of impacting the primary. For this reason, we added cascading replica support to scale out reads even further. But this brings in a number of additional failure modes that need to be handled. The system must carefully orchestrate repairs around lagging and failing intermediary nodes, safely repointing replicas to new intermediary nodes while performing catch-up or rewind in a mission-critical setup. Furthermore, disaster recovery (DR) scenarios can require a fast rebuild of a replica, and because data movement across regions is a costly and time-consuming operation, we developed the ability to create a geo replica from a snapshot of another replica in the same region. This feature avoids the traditional full data copy process, which may take hours or even days depending on the size of the data, by leveraging data for that cluster that already exists in that region. This feature will soon be available for all our customers as well.

Scaling out Writes

These improvements solved the read replica lag problems and addressed read scale, but they did not help with the growing write scale for OpenAI. At some point, the balance tipped and it was obvious that the IOPS limits of a single PostgreSQL primary instance would not cut it anymore. As a result, OpenAI decided to move new and shardable workloads to Azure Cosmos DB, which is our default recommended NoSQL store for fully elastic workloads. However, some workloads, as noted in the OpenAI blog, are much harder to shard. While OpenAI is using Azure Database for PostgreSQL flexible server, several of the write scaling requirements that came up have been baked into our new Azure HorizonDB offering, which entered private preview in November 2025. Some of the architectural innovations are described in the following sections.

Azure HorizonDB scalability design

To better support more demanding workloads, Azure HorizonDB introduces a new storage layer for Postgres that delivers significant performance and reliability enhancements:

- More efficient read scale-out. Postgres read replicas no longer need to maintain their own copy of the data; they can read pages from the single copy maintained by the storage layer.
- Lower latency Write-Ahead Logging (WAL) writes and higher throughput page reads via two purpose-built storage services designed for WAL storage and Page storage.
- Durability and high availability responsibilities are shifted from the Postgres primary to the storage layer, allowing Postgres to dedicate more resources to executing transactions and queries.
- Postgres failovers are faster and more reliable.

To understand how Azure HorizonDB delivers these capabilities, let's look at its high-level architecture as shown in Figure 1. It follows a log-centric storage model, where the PostgreSQL write-ahead log (WAL) is the sole mechanism used to durably persist changes to storage. PostgreSQL compute nodes never write data pages to storage directly in Azure HorizonDB. Instead, pages and other on-disk structures are treated as derived state and are reconstructed and updated from WAL records by the data storage fleet. Azure HorizonDB storage uses two separate storage services for WAL and data pages. This separation allows each to be designed and optimized for the very different patterns of reads and writes PostgreSQL does against WAL files in contrast to data pages. The WAL server is optimized for very low latency writes to the tail of a sequential WAL stream, and the Page server is designed for random reads and writes across potentially many terabytes of pages.
These two separate services work together to enable Postgres to handle IO-intensive OLTP workloads like OpenAI's. The WAL server can durably write a transaction across 3 availability zones using a single network hop. The typical PostgreSQL replication setup with a hot standby (Figure 2) requires 4 hops to do the same work. Each hop is a component that can potentially fail or slow down and delay a commit. The Azure HorizonDB page service can scale out page reads to many hundreds of thousands of IOPS for each Postgres instance. It does this by sharding the data in Postgres data files across a fleet of page servers, which spreads the reads across many high-performance NVMe disks on each page server.

Figure 2 - WAL Writes in HorizonDB

Another key design principle for Azure HorizonDB was to move durability and high availability related work off PostgreSQL compute, allowing it to operate as a stateless compute engine for queries and transactions. This approach gives Postgres more CPU, disk, and network to run your application's business logic. Table 1 summarizes the different tasks that community PostgreSQL has to do, which Azure HorizonDB moves to its storage layer. Work like dirty page writing and checkpointing is no longer done by a Postgres primary. The work of sending WAL files to read replicas is also moved off the primary and into the storage layer; having many read replicas puts no load on the Postgres primary in Azure HorizonDB. Backups are handled by Azure Storage via snapshots; Postgres isn't involved.

| Task | Resource Savings | Postgres Process Moved |
|---|---|---|
| WAL sending to Postgres replicas | Disk IO, Network IO | Walsender |
| WAL archiving to blob storage | Disk IO, Network IO | Archiver |
| WAL filtering | CPU, Network IO | Shared Storage Specific (*) |
| Dirty Page Writing | Disk IO | background writer |
| Checkpointing | Disk IO | checkpointer |
| PostgreSQL WAL recovery | Disk IO, CPU | startup recovering |
| PostgreSQL read replica redo | Disk IO, CPU | startup recovering |
| PostgreSQL read replica shared storage | Disk IO | background, checkpointer |
| Backups | Disk IO | pg_dump, pg_basebackup, pg_backup_start, pg_backup_stop |
| Full page writes | Disk IO | Backends doing WAL writing |
| Hot standby feedback | Vacuum accuracy | walreceiver |

Table 1 - Summary of work that the Azure HorizonDB storage layer takes over from PostgreSQL

The shared storage architecture of Azure HorizonDB is the fundamental building block for delivering exceptional read scalability and elasticity, which are critical for many workloads. Users can spin up read replicas instantly without requiring any data copies. Page Servers are able to scale and serve requests from all replicas without any additional storage costs. Since WAL replication is entirely handled by the storage service, the primary's performance is not impacted as the number of replicas changes. Each read replica can scale independently to serve different workloads, allowing for workload isolation. Finally, this architecture allows Azure HorizonDB to substantially improve the overall experience around high availability (HA). HA replicas can now be added without any data copying or storage costs. Since the data is shared between the replicas and continuously updated by Page Servers, secondary replicas only replay a portion of the WAL and can easily keep up with the primary, reducing failover times. The shared storage also guarantees that there is a single source of truth and that the old primary never diverges after a failover. This prevents the need for expensive reconciliation using pg_rewind or other techniques, and further improves availability.
Azure HorizonDB was designed from the ground up with learnings from large-scale customers to meet the requirements of the most demanding workloads. The improved performance, scalability, and availability of the Azure HorizonDB architecture make Azure a great destination for Postgres workloads.

New series of monthly Live Webinars on Azure Database for MySQL!
Today we are announcing a new series of monthly Live Webinars about Azure Database for MySQL! These sessions will showcase newly released features and capabilities, technical deep-dives, and demos. The product group will also be addressing your questions about the service in real time!

Microsoft Azure innovation powers leading price-performance for MySQL database in the cloud
As part of our commitment to ensuring that Microsoft Azure is the best place to run MySQL workloads, Microsoft is excited to announce that Azure Database for MySQL - Flexible Server just achieved a new, faster performance benchmark.

Deploying Moodle on Azure – things you should know
Moodle is one of the most popular open-source learning management platforms, empowering educators and researchers across the world to disseminate their work efficiently. It is also one of the most mature and robust OSS applications, one that the community has developed and improved over the years. We have seen customers ranging from small, medium, and large enterprises to schools, public sector, and government organizations deploying Moodle on Azure. In this blog post, I'll share some best practices and tips for deploying Moodle on Azure, based on our experience working with several of our customers.