August 2025 Recap: Azure Database for MySQL
We're excited to share a summary of the Azure Database for MySQL updates for the month of August 2025. Join us live on our YouTube channel on September 11, 2025, for an exclusive webinar where we'll dive deeper into these updates and answer your questions! Watch it live here.

Azure Database for MySQL 8.4 - General Availability

We're excited to announce that Azure Database for MySQL now supports MySQL 8.4 in General Availability (GA). This means you can create new MySQL 8.4 servers on Azure, fully supported for production workloads. MySQL 8.4 is a long-term supported release from the MySQL community, bringing the latest features and improvements while emphasizing stability. With Azure's managed service, you get these new capabilities backed by Azure's enterprise-grade reliability and support. In short, MySQL 8.4 GA opens the door for you to upgrade your databases and future-proof your MySQL environment on Azure. Learn more.

Cross subscription and cross resource-group placement in restore/replica provisioning workflow

You can now restore a server or create a read replica in a different subscription and resource group in Azure Database for MySQL – Flexible Server. This enhancement offers greater flexibility for cross-environment restores, resource organization, and subscription-level separation, helping meet governance and operational requirements. Learn more.

Ability to delete on-demand backup

You can now delete on-demand backups in Azure Database for MySQL – Flexible Server, giving you greater control over backup management and storage costs. This feature allows you to remove on-demand backups that are no longer needed, helping maintain a cleaner backup inventory and optimize resource usage. Learn more.

Unlocking Regional Insights with the Location Based Capabilities REST API

When managing MySQL Flexible Server deployments across Azure regions, choosing the right region for each deployment is critical. The new Location-Based Capability Set – List API helps you:

- Retrieve real-time, region-specific capabilities.
- Compare SKUs, storage options, backup retention, and HA configurations.
- Integrate insights into automation pipelines for smarter deployments.

This API empowers architects and developers to make informed decisions, reduce misconfigurations, and accelerate deployment cycles. Learn more.

Stay Connected

We look forward to your feedback as you explore these enhancements and continue building with Azure Database for MySQL. If you have any suggestions or queries about our service, please let us know by emailing us at AskAzureDBforMySQL@service.microsoft.com. You can also submit product ideas and feedback at the Azure Database for MySQL Community forum. To learn more about what's new with Flexible Server, see What's new in Azure Database for MySQL - Flexible Server. Stay tuned for more updates and announcements by following us on social media: YouTube | LinkedIn | X. Take care, and thanks for being part of our community!

Azure Database for MySQL 8.4 Now Generally Available
We're excited to announce that Azure Database for MySQL – Flexible Server now supports MySQL 8.4 in General Availability (GA). This means you can create new MySQL 8.4 servers on Azure, fully supported for production workloads. MySQL 8.4 is a long-term supported (LTS) release from the MySQL community, bringing the latest features and improvements while emphasizing stability. With Azure's managed service, you get these new capabilities backed by Azure's enterprise-grade reliability and support. In short, MySQL 8.4 GA opens the door for you to upgrade your databases and future-proof your MySQL environment on Azure.

Why Upgrade to MySQL 8.4?

Avoid End-of-Support Deadlines: If you're running MySQL 5.7 or 8.0 on Azure, planning an upgrade is crucial. MySQL 5.7's community support ended on October 31, 2023, and MySQL 8.0's end-of-life is April 30, 2026. Azure's standard support for these versions extends slightly beyond those dates (until March 31, 2026 for 5.7, and May 31, 2026 for 8.0). After those points, servers on 5.7 or 8.0 enter Extended Support, a paid support phase that provides critical fixes for up to three years (through 2029). Running your database in Extended Support means additional costs. Upgrading to MySQL 8.4 now ensures your database stays within standard support for years to come, sparing you the hassle of last-minute upgrades or extended support fees.

Benefits of MySQL 8.4: MySQL 8.4 is essentially an evolution of 8.0, so it brings the numerous performance enhancements, security patches, and new SQL features introduced since 8.0. Because it's an LTS release, MySQL 8.4 is designed for stability, making it an ideal target for enterprises. Most applications that work on MySQL 8.0 will be compatible with 8.4 with little to no changes, but with 8.4 you gain improvements in areas like replication, query optimization, and JSON handling (among others) that can boost your application's efficiency. Moreover, by standardizing on 8.4, you align with the version that will receive updates well into the future. In summary, upgrading means better reliability, availability, and security now, plus assured support longevity.

Upgrading from MySQL 8.0 (In-Place Upgrade)

For current Azure Database for MySQL 8.0 users, moving to 8.4 is straightforward, thanks to the in-place major version upgrade capability. You can upgrade your existing 8.0 server to 8.4 on the same server instance, without dumping and restoring data. Here's how it works:

Upgrade Availability: If you create a new MySQL 8.0 server today (post-GA), the option to upgrade to 8.4 is available immediately in the Azure portal or CLI. For existing 8.0 servers (those created before this GA release), the upgrade capability will become available after your next scheduled maintenance window; the September 2025 platform update is enabling this feature across all regions. Note: Azure will not auto-upgrade your server during that maintenance; it only enables the new version as an option. You remain in control of when to perform the major version upgrade.

Performing the Upgrade: Once the feature is enabled for your server, you can initiate the upgrade via the Azure portal, Azure CLI, or PowerShell. The process involves downtime (the server is taken offline and restarted on MySQL 8.4), so plan to execute it during a maintenance window or low-traffic period. We strongly recommend taking a backup or snapshot before upgrading, as a precaution.
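Before you start, a quick sanity check from any MySQL client can also help. The following is a minimal sketch, not an official pre-upgrade checklist; it assumes your administrative login can read the mysql system schema:

-- Confirm the engine version before and after the upgrade
SELECT VERSION();

-- Find accounts still using mysql_native_password, which is deprecated
-- and disabled by default in MySQL 8.4; migrate such accounts (for
-- example, to caching_sha2_password) before upgrading
SELECT user, host, plugin
FROM mysql.user
WHERE plugin = 'mysql_native_password';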
For a step-by-step guide and best practices (including how to minimize downtime by using read replicas for the upgrade), refer to the official Azure documentation at https://learn.microsoft.com/azure/mysql/flexible-server/how-to-upgrade. In most cases, upgrading from 8.0 to 8.4 completes within several minutes. After the upgrade, your server retains the same endpoints, configuration, and data – just running on the new MySQL 8.4 engine.

Upgrading from MySQL 5.7 (Two-Step Path)

Upgrading from MySQL 5.7 to 8.4 requires a two-step approach, since a direct jump is not supported:

First, upgrade 5.7 to 8.0: Azure MySQL Flexible Server supports in-place major upgrade from 5.7 to 8.0. This moves your server to a supported major version and is a necessary intermediate step (you cannot skip major versions in one go). MySQL 8.0 introduced some changes from 5.7 (for example, stricter SQL modes and a new default authentication plugin), so after upgrading to 8.0, test your application and fix any compatibility issues. Azure's standard support for 5.7 runs until March 31, 2026, so aim to complete this step before then.

Then, upgrade 8.0 to 8.4: With your server now on 8.0, you can use the in-place upgrade to 8.4 as described above. All Azure 8.0 servers will have the 8.4 upgrade option by the end of the next maintenance cycle (after the feature rollout in September 2025). Plan to perform the 8.0 → 8.4 upgrade at a convenient time, ideally well before MySQL 8.0's support winds down in 2026. This final step ensures you're on the latest GA version and out of the legacy support cycle.

Some customers may choose to migrate from 5.7 to 8.4 by creating a new 8.4 server and importing data (using dump and restore, or Azure Database Migration Service). This approach can be useful if you want to reorganize your environment or test in parallel. However, it will likely involve more downtime than the sequential in-place upgrades. Evaluate which method fits your needs – either way, now is the time to start, given that free support for 5.7 ends in less than two years.

Support Timeline Summary and Next Steps

To recap the support timelines and why upgrading matters:

- MySQL 5.7: Community EOL: Oct 31, 2023. Azure standard support until March 31, 2026. After that, servers enter extended support (critical fixes only, with additional charges) until March 31, 2029. Action: plan to upgrade off 5.7 before Q1 2026 to stay within standard support.
- MySQL 8.0: Community EOL: Apr 30, 2026. Azure standard support until May 31, 2026. Extended support then runs to May 31, 2029. Action: begin upgrading 8.0 instances to 8.4 in the coming months rather than waiting until the last moment. The upgrade feature is available now (or after one maintenance cycle for older servers).
- MySQL 8.4: GA start: Sep 2025 (now). This is the recommended target for all MySQL deployments on Azure going forward. It will be fully supported on Azure well beyond 2026, receiving regular updates and improvements as part of the Azure service. Action: deploy new databases on 8.4 and upgrade existing 5.7/8.0 databases to 8.4 when feasible, to benefit from the latest features and long-term support.

Next Steps: Getting started with MySQL 8.4 on Azure is easy. You can create a new Azure Database for MySQL 8.4 server from the Azure portal or via the CLI today. For existing servers, review the upgrade guide at https://learn.microsoft.com/azure/mysql/flexible-server/how-to-upgrade to choose your upgrade method (in-place with some downtime vs. the replica method for minimal downtime) and schedule a time for the upgrade.
By moving to Azure Database for MySQL 8.4, you're investing in a more stable, performant, and future-proof foundation for your applications. We're thrilled to see customers embrace MySQL 8.4, and we're committed to making your upgrade process as smooth as possible. Upgrade with confidence, and leverage the power of MySQL 8.4 in Azure to drive your business forward! For more information or to provide feedback, contact Ask Azure Database For MySQL.

Managing bloat in PostgreSQL using pgstattuple on Azure Database for PostgreSQL flexible server
Bloat refers to the unused space within database objects like tables and indexes, caused by accumulated dead tuples that have not been reclaimed by the storage engine. It typically results from frequent updates, deletions, or insertions, and it leads to inefficient storage and performance issues. Addressing bloat is crucial for maintaining optimal database performance: it impacts storage efficiency, increases I/O operations, reduces cache efficiency, prolongs vacuum times, and slows down index scans.

In this blog post, I will walk you through how to use the pgstattuple extension in PostgreSQL to analyze and understand the physical storage of your database objects. By leveraging pgstattuple, you can identify and quantify the unused space within your tables and indexes. We will walk through analyzing bloat, interpreting the results, and addressing the bloat to optimize your database and improve its performance.

I will be using the pg_repack extension as an alternative to VACUUM FULL and REINDEX. pg_repack is a PostgreSQL extension that removes bloat from tables and indexes and reorganizes them more efficiently. It works by creating a new copy of the target table or index, applying any changes that occurred during the process, and then swapping the old and new versions atomically. pg_repack doesn't require downtime or exclusive access locks on the processed table or index, except for a brief period at the beginning and at the end of the operation. Performing a full table repack requires free disk space about twice the size of the target table(s) and its indexes. For example, if the total size of the tables and indexes to be reorganized is 1 GB, an additional 2 GB of disk space is required. For pg_repack to run successfully on a table, the table must have either a PRIMARY KEY or a UNIQUE index on a NOT NULL column.

Let us dive in and see how you can make the most of this powerful tool. To get more details on the bloat on an Azure Database for PostgreSQL flexible server, follow these steps:

1. Installing pgstattuple

Add pgstattuple to the azure.extensions server parameter. You must install the extension on the database in which you want to analyze the bloat. To do so, connect to the database of interest and run the following command:

CREATE EXTENSION pgstattuple;

2. Analyze the table/index

Once the extension is installed, you can use the pgstattuple function to gather detailed statistics for analyzing bloat on a test table. The function provides information such as the number of live tuples, the number of dead tuples, the percentage of bloat, and the free space within the pages.

To showcase the pgstattuple extension's features, I used a 4 vCore SKU running PostgreSQL version 16, created a test table with an index, loaded it with 24 GB of data, and then generated bloat by running update/delete commands against the table.

Create the test table using the script below:

CREATE TABLE test_table (
    id bigserial PRIMARY KEY,
    column1 text,
    column2 text,
    column3 text,
    column4 text,
    column5 text,
    column6 text,
    column7 text,
    column8 text,
    column9 text,
    column10 text
);

Load the table with 24 GB of data using the script below.
INSERT INTO test_table (column1, column2, column3, column4, column5, column6, column7, column8, column9, column10)
SELECT
    md5(random()::text), md5(random()::text), md5(random()::text), md5(random()::text), md5(random()::text),
    md5(random()::text), md5(random()::text), md5(random()::text), md5(random()::text), md5(random()::text)
FROM generate_series(1, 22000000);

Create an index on this table to demonstrate using the pgstatindex function to analyze bloat on an index:

CREATE INDEX idx_column1 ON test_table (column1);

Run pgstattuple on the table and pgstatindex on the index before any bloat has accumulated.

pgstattuple on the unbloated table: you can observe that dead_tuple_percent is 0 and free_percent is 4.3, reflecting a healthy table.

pgstatindex on the unbloated index: you can observe that avg_leaf_density is 89.07% and leaf_fragmentation is 8.1, reflecting a healthy index.

Using the command below, I generate bloat on the table:

UPDATE test_table
SET column1 = md5(random()::text),
    column2 = md5(random()::text),
    column3 = md5(random()::text),
    column4 = md5(random()::text),
    column5 = md5(random()::text),
    column6 = md5(random()::text)
WHERE id % 5 = 0;

You can analyze a table using the query below:

SELECT * FROM pgstattuple('your_table_name');

What it does: performs a full scan of the table to gather detailed statistics.
Performance: slower on large tables; it can take seconds to minutes depending on table size, due to the full table scan.
Use case: diagnosing bloat or planning vacuuming strategies (VACUUM/VACUUM FULL).

For faster estimates you can use:

SELECT * FROM pgstattuple_approx('your_table_name');

What it does: uses sampling to estimate statistics about the table.
Accuracy: results are close but not exact.
Performance: faster on large tables, as it only reads a sample (subset) of the pages.
Use case: quick insights.

The pgstattuple function provides the following information about the table:

- table_len: table length in bytes (tuple_len + dead_tuple_len + free_space); the overhead accounts for padding (for tuple alignment) and the page header and per-page tuple pointers
- tuple_count: number of live tuples
- tuple_len: length of live tuples in bytes
- tuple_percent: percentage of live tuples
- dead_tuple_count: number of dead tuples
- dead_tuple_len: length of dead tuples in bytes
- dead_tuple_percent: percentage of dead tuples
- free_space: total free space in bytes within the pages
- free_percent: percentage of free space within the pages

Concentrate mainly on the following three columns to understand table bloat and unused space.

The dead_tuple_percent column tells us the percentage of dead tuples in the table. It is calculated as:

dead_tuple_percent = dead_tuple_len / table_len * 100

This can be reduced by running VACUUM on the table. However, VACUUM does not reclaim the space, so free_space and free_percent increase after a VACUUM.

free_space and free_percent represent the unused/wasted space within the pages. That space can be reclaimed only by performing a VACUUM FULL on the table; a high free_percent indicates the table needs a VACUUM FULL (or, instead, pg_repack) to reclaim the space.

As a rule of thumb, if you observe a dead_tuple_percent greater than 20%, run VACUUM on the table; if you observe a free_percent greater than 50%, run VACUUM FULL on the table.
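To apply these thresholds across a whole database rather than one table at a time, a query along the following lines can rank candidates. This is a hedged sketch, assuming pgstattuple is installed and your role is permitted to scan the tables (superuser or a member of pg_stat_scan_tables); note that pgstattuple_approx reports its free-space estimates in the approx_free_percent column:

-- Estimate bloat for every permanent user table and suggest an action
-- based on the thresholds discussed above
SELECT c.oid::regclass AS table_name,
       round(s.dead_tuple_percent::numeric, 2) AS dead_tuple_pct,
       round(s.approx_free_percent::numeric, 2) AS approx_free_pct,
       CASE
           WHEN s.approx_free_percent > 50 THEN 'VACUUM FULL / pg_repack'
           WHEN s.dead_tuple_percent > 20 THEN 'VACUUM'
           ELSE 'OK'
       END AS suggested_action
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
CROSS JOIN LATERAL pgstattuple_approx(c.oid::regclass) s
WHERE c.relkind = 'r'
  AND c.relpersistence = 'p'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY s.approx_free_percent DESC;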
The three snapshots below show the pgstattuple output on a bloated table, on the table after VACUUM, and on the table after pg_repack:

- pgstattuple run on the bloated table.
- pgstattuple run on the table after vacuuming it. (Note: you will also see a difference in tuple_count here, because I performed some delete statements that are not captured in this document.)
- pgstattuple run after pg_repack on the table.

Summary of changes after VACUUM and pg_repack on a bloated table:

- dead_tuple_count: reduced to 0 after VACUUM
- dead_tuple_len: reduced to 0 after VACUUM
- dead_tuple_percent: reduced from 21% to 0 after VACUUM
- free_space: increased after VACUUM, but significantly reduced after pg_repack
- free_percent: increased after VACUUM, but drastically reduced after pg_repack

Similarly, to analyze an index, you can run:

SELECT * FROM pgstatindex('your_index_name');

The function provides the following information about the index:

- version: B-tree version number
- tree_level: tree level of the root page
- index_size: total index size in bytes
- root_block_no: location of the root block
- internal_pages: number of "internal" (upper-level) pages
- leaf_pages: number of leaf pages
- empty_pages: number of empty pages
- deleted_pages: number of deleted pages
- avg_leaf_density: average density of leaf pages
- leaf_fragmentation: leaf page fragmentation

A low avg_leaf_density implies underutilized pages; it denotes the percentage of good data in the index's leaf pages. After a VACUUM this value drops further, because cleaning the dead tuples in the index reduces leaf density, pointing to an increase in unused/wasted space. To reclaim that space and keep the index performant, a REINDEX is needed. A high leaf_fragmentation implies poor data locality within the pages; again, a REINDEX helps. If you see an avg_leaf_density below 20%, you should perform a REINDEX.

The three snapshots below show the pgstatindex output on the bloated table's index, after vacuuming the table, and after pg_repack:

- pgstatindex run on the bloated index.
- pgstatindex run on the index after vacuuming the table.
- pgstatindex run on the index after pg_repack.

Summary of changes after VACUUM and pg_repack on a bloated index:

- index_size: remained high after VACUUM, but significantly reduced after pg_repack, as the unused/wasted space is reclaimed
- avg_leaf_density: reduced significantly after VACUUM, from 80% to 14%, indicating less good data on the leaf pages; increased to 89% after pg_repack
- leaf_fragmentation: unchanged after VACUUM; reduced to 0 after pg_repack, indicating no page fragmentation

Note: the index's pages comprise leaf_pages, empty_pages, deleted_pages, and internal_pages. Also note that pgstattuple acquires only a read lock on the object (table or index), so the results do not reflect an instantaneous snapshot; concurrent updates will affect them.

For more information on the pgstatginindex and pgstathashindex functions, refer to the PostgreSQL documentation here. For more insights on pgstattuple with respect to TOAST tables, please refer to the relevant documentation here.
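As with tables, you can check index health across a whole schema rather than one index at a time. The following is a minimal sketch, assuming pgstattuple is installed and sufficient privileges; it restricts itself to B-tree indexes, since pgstatindex supports only those:

-- Report density and fragmentation for every B-tree index in "public",
-- worst leaf density first
SELECT i.indexrelid::regclass AS index_name,
       s.avg_leaf_density,
       s.leaf_fragmentation
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_am am ON am.oid = c.relam
CROSS JOIN LATERAL pgstatindex(i.indexrelid::regclass) s
WHERE am.amname = 'btree'
  AND n.nspname = 'public'
ORDER BY s.avg_leaf_density ASC;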
3. Analyze bloat on partition tables

The pgstattuple extension provides detailed statistics such as dead-tuple percentage and free space, but it cannot be executed directly on a partitioned table. Instead, it must be run individually on each partition to gather meaningful insights. To streamline this process, especially when dealing with a large number of partitions, you can use a PL/pgSQL script that iterates through all partitions of a parent table. The script below executes pgstattuple on each partition and stores the resulting statistics, such as dead-tuple percentage and free space, in a summary table for easy review and analysis. This approach not only simplifies the task of identifying bloat across partitions but also enables proactive monitoring and optimization of storage efficiency in partitioned PostgreSQL environments.

-- Create a temporary table to store bloat statistics
DROP TABLE IF EXISTS bloat_summary;
CREATE TEMP TABLE bloat_summary (
    partition_name TEXT,
    table_len BIGINT,
    tuple_count BIGINT,
    tuple_len BIGINT,
    tuple_percent NUMERIC,
    dead_tuple_count BIGINT,
    dead_tuple_len BIGINT,
    dead_tuple_percent NUMERIC,
    free_space BIGINT,
    free_percent NUMERIC
);

-- DO block to iterate over all partitions and collect statistics
DO $$
DECLARE
    part RECORD;
    stats RECORD;
BEGIN
    FOR part IN
        SELECT inhrelid::regclass AS partition_name
        FROM pg_inherits
        WHERE inhparent = 'your_parent_table'::regclass
    LOOP
        EXECUTE format('SELECT * FROM pgstattuple(%L)', part.partition_name) INTO stats;
        INSERT INTO bloat_summary VALUES (
            part.partition_name,
            stats.table_len,
            stats.tuple_count,
            stats.tuple_len,
            stats.tuple_percent,
            stats.dead_tuple_count,
            stats.dead_tuple_len,
            stats.dead_tuple_percent,
            stats.free_space,
            stats.free_percent
        );
    END LOOP;
END $$;

-- Output the summary
SELECT * FROM bloat_summary ORDER BY dead_tuple_percent DESC;

4. Addressing the bloat

Once you have identified bloat, you can address it using the following common approaches:

- VACUUM: clears dead tuples without reclaiming the space.
- pg_repack: performs the equivalent of VACUUM FULL and REINDEX online and efficiently reorganizes the data (a minimal enablement sketch follows at the end of this post).

Note: other unused space, like the space left in heap or index pages due to a configured fill factor lower than 100, or because the remaining available space on a page cannot accommodate a row given its minimum size, is not considered bloat, even though it is also unused space.

pgstattuple will help you address bloat! This blog post guided you through understanding and managing bloat in PostgreSQL using the pgstattuple extension. By leveraging this tool, you can gain detailed insights into the extent of bloat within your tables and indexes; those insights are valuable in maintaining efficient storage and ensuring optimal server performance.
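To round out the remediation workflow, here is a minimal sketch of enabling pg_repack on a flexible server; the server, user, and database names shown in the comments are illustrative assumptions:

-- On Azure Database for PostgreSQL flexible server, first allow-list the
-- extension by adding pg_repack to the azure.extensions server parameter,
-- then enable it in the target database:
CREATE EXTENSION pg_repack;

-- The reorganization itself is driven by the pg_repack client utility,
-- run from a machine that can reach the server, for example:
--   pg_repack --host=<server-name>.postgres.database.azure.com \
--             --username=<admin-user> --dbname=<database> --table=test_table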
Data integration with Azure Logic Apps and MySQL Flexible Server

Data integration allows applications to move, process, or transform data across multiple systems as part of their microservice architecture. While you can accomplish data integration in several ways, you can use Logic Apps to move data to an Azure Database for MySQL flexible server and automate data integration tasks that you perform in response to API calls.

Boost your MySQL apps: why enterprises are migrating MySQL databases to Azure
MySQL is one of the world's most popular open-source databases, and for good reason. It's cost-effective, scalable, and familiar to millions of developers. But if your enterprise is running MySQL on-premises, you might be missing out on huge benefits in cost savings, performance, and agility. Recent findings from an ESG Economic Validation report, commissioned by Microsoft, reveal just how advantageous it is to migrate on-premises MySQL databases to Azure's fully managed database-as-a-service platform. Read the full MySQL report.

TL;DR – benefits of migrating to Azure Database for MySQL

Managing data security, quality, and privacy, as well as general database management, are among the most significant challenges facing developer teams. Some even report that database technology is evolving faster than they're able to keep up with. Migrating to Azure's fully managed service offloads these responsibilities so teams can focus fully on projects that move their business forward. Benefits of moving from on-premises to Azure Database for MySQL highlighted in the report include:

- 54% lower total cost of ownership
- 86% lower MySQL admin costs
- 25% increase in development velocity

"Our developers can now focus on their core job: creating code. We now put out 8 releases per year compared to 2 when we were on premises. This gets features and fixes in the hands of our customers sooner."

Review the Azure Database for MySQL Economic Validation Infographic.

Zooming in – how fully managed open-source databases on Azure deliver economic wins for the enterprise

More than 50% lower costs and total cost of ownership

Azure's fully managed MySQL service delivers the same (or better) database capabilities for almost half the cost. By one estimate, a company could save millions over a few years and even see an ROI above 90% from the migration when factoring in both cost savings and new revenue opportunities. These savings come from a few key areas:

- No more hardware and maintenance expenses: On-premises MySQL deployments require investing in servers, storage, and networking gear, plus ongoing power, cooling, and data center space. Azure Database for MySQL eliminates those needs entirely. You don't buy or maintain hardware; Microsoft handles the infrastructure.
- Drastically reduced admin overhead: Companies in the study reported an 86% reduction in the cost of MySQL administration after migrating to Azure. All the routine tasks, such as installing updates, patching the OS and MySQL versions, tuning performance, taking backups, and managing high availability, are handled by Azure as part of the service.
- Pay-as-you-go efficiency: On-premises setups often over-provision resources to handle peak loads, which wastes money during lulls. Azure can scale resources on demand, so you're never stuck paying for idle capacity. You can also use cost controls like burstable instances for dev/test, stop/start to pause servers, and reserved instances for discounts.
- Included extras: Many capabilities that would incur extra cost on-premises (or require separate licenses) are bundled into Azure Database for MySQL. For example, security features, monitoring and performance tuning tools, backup storage, and high availability options come built in at no extra charge in Azure's service tiers.

Improved performance and scalability

Beyond cost, Azure Database for MySQL helps your applications run faster and scale seamlessly to meet demand. In on-premises environments, you might need to tune configs, add hardware, or handle sharding as usage grows.
Azure takes much of that burden away and offers cloud-scale performance out of the box:

- Better throughput: Azure's managed MySQL runs on high-performance cloud infrastructure (with fast SSD storage, plenty of memory, and low-latency networking). Microsoft has also added capabilities like accelerated I/O logs and intelligent caching that improve MySQL's throughput and response times compared to typical on-premises setups.
- Elastic scaling on demand: With Azure, scaling up a MySQL server is as simple as a configuration change; no new hardware is required. You can scale vertically (a bigger machine) or horizontally (adding read replicas) in minutes. Azure even supports autoscaling of IOPS and storage based on set thresholds. This flexibility means your database can handle traffic spikes or growth without manual intervention.
- No wasted capacity: On-premises, you often must deploy extra servers "just in case" future demand increases, and that hardware sits underutilized most of the time. Azure's model avoids this waste. Enterprises reported that Azure's ability to fine-tune resources helped them avoid overprovisioning and maintain a better price/performance mix.
- Reliable high performance at scale: Whether you have 10 users or 10 million, Azure's global infrastructure can scale to meet your needs. One customer in the study found that after migrating, they could provide MySQL services 5× faster than when they were on-premises.

Faster development cycles and greater developer productivity

For developers, one of the biggest wins of moving MySQL to Azure is time back to innovate. When you no longer have to babysit your database infrastructure, you can focus on building features and improving your applications. The ESG report highlighted that companies saw significantly improved developer productivity and agility after migrating.

- Faster development: Organizations reported that their development cycles became 25% faster on average once on Azure. One customer shared that "our developers can now focus on their core job: creating code," and went from shipping two software releases per year to eight after migrating to Azure.
- Elimination of toil: Azure's fully managed platform lifts the burden of routine database administration from your team; no more worrying about backups, failover, or OS patches. The service continuously applies best practices and optimizations automatically, so your team doesn't have to. Teams can be more agile because they're not bogged down by lengthy processes or constant maintenance.
- Faster time-to-market: ESG modeled that by releasing features earlier and more often with Azure Database for MySQL, a mid-sized software company could realize an additional $15 million in revenue over three years, because being first to market with new capabilities can capture customers and market share.

Stronger security and high availability

Running MySQL in Azure doesn't just make life easier and cheaper; it also makes your databases more secure and resilient. Enterprises often struggle to keep up with patches, security threats, and high availability when managing databases on-premises. Azure Database for MySQL is hardened with enterprise-grade security and reliability features that can significantly reduce your risk.
- Fully managed security updates: In the ESG survey, nearly half of organizations (46%) said they needed outside expertise to help manage database platforms on-premises, often because of the complexity of securing and tuning them. Azure takes care of patching the MySQL engine and underlying OS for you, ensuring you're always on a supported, secure version.
- Enhanced data protection: By default, Azure Database for MySQL encrypts data at rest and in transit. It also offers network isolation options to lock down access to your database. Many customers found that after migrating, their security posture was stronger than before: they could easily implement role-based access control via Azure AD, set up threat detection alerts, and use Azure's monitoring to audit activity with just a few clicks.
- High availability and disaster recovery built in: Azure Database for MySQL can be deployed in high availability configurations across availability zones. In case of an outage, it can switch to a standby replica, usually in under 60 seconds, dramatically reducing downtime for your apps. Companies in the study experienced 70% fewer downtime incidents after moving to Azure.
- Compliance and governance: Azure Database for MySQL is compliant with a broad set of industry standards and certifications, easing the audit burden. Many organizations found that moving to Azure made it simpler to adhere to internal security policies and compliance standards because so many controls are built into the platform.

Enterprise-ready MySQL is just a few clicks away

Azure provides a hardened, enterprise-ready environment for MySQL that most companies would be hard-pressed to build on their own. By entrusting MySQL to Azure's managed service, you benefit from Microsoft's investments in security and infrastructure resiliency. The result is peace of mind: your data is safer, your databases are more stable, and your team has far fewer 3 AM incidents to deal with.

Read the full report for more details about the quantified benefits and customer testimonials. If you're ready to start your journey, check out our migration guides. With Azure's fully managed open-source databases, you can supercharge your data strategy, empower your developers, and ultimately accelerate your path to an AI-driven future.

Model Context Protocol (MCP) Server for Azure Database for MySQL
We are excited to introduce a new MCP server for integrating your AI models with data hosted in Azure Database for MySQL. By utilizing this server, you can effortlessly connect any AI application that supports MCP to your MySQL flexible server (using either MySQL password-based authentication or Microsoft Entra authentication), enabling you to provide your business data as meaningful context in a standardized and secure manner.

Azure Database for MySQL bindings for Azure Functions (General Availability)
We're thrilled to announce the general availability (GA) of Azure Database for MySQL input and output bindings for Azure Functions: a powerful way to build event-driven, serverless applications that seamlessly integrate with your MySQL databases.

Key Capabilities

With this GA release, your applications can use:

- Input bindings, which allow your function to retrieve data from a MySQL database without writing any connection or query logic.
- Output bindings, which allow your function to insert or update data in a MySQL table without writing explicit SQL commands.

In addition, you can use both the input and output bindings in the same function for read-modify-write data patterns. For example, retrieve a record, update a field, and write it back, all without managing connections or writing SQL. These bindings are fully supported for both the in-process and isolated worker models, giving you flexibility in how you build and deploy your Azure Functions.

How It Works

Azure Functions bindings abstract away the boilerplate code required to connect to external services. With the MySQL input and output bindings, you can now declaratively connect your serverless functions to your Azure Database for MySQL database with minimal configuration. You can configure these bindings using attributes in C#, decorators in Python, or annotations in JavaScript/Java. The bindings use the MySql.Data.MySqlClient library under the hood and support Azure Database for MySQL Flexible Server.

Getting Started

To use the bindings, install the appropriate NuGet or npm package:

# For isolated worker model (C#)
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.MySql

# For in-process model (C#)
dotnet add package Microsoft.Azure.WebJobs.Extensions.MySql

Then, configure your function with a connection string and binding metadata. Full samples for all the supported programming frameworks are available in our GitHub repository.

Here is a sample C# in-process function that retrieves a user by ID, increments their login count, and saves the updated record back to the MySQL database. This pattern suits lightweight data transformations, such as modifying status fields or updating counters and timestamps.

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int LoginCount { get; set; }
}

public static class UpdateLoginCountFunction
{
    [FunctionName("UpdateLoginCount")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "user/{id}/login")] HttpRequest req,
        // Input binding: fetches the user row matching the route's {id}
        [MySql("SELECT * FROM users WHERE id = @id",
            CommandType = System.Data.CommandType.Text,
            Parameters = "@id={id}",
            ConnectionStringSetting = "MySqlConnectionString")] User user,
        // Output binding: writes User objects back to the "users" table
        [MySql("users", ConnectionStringSetting = "MySqlConnectionString")] IAsyncCollector<User> userCollector,
        ILogger log)
    {
        if (user == null)
        {
            return new NotFoundObjectResult("User not found.");
        }

        // Modify the user object
        user.LoginCount += 1;

        // Write the updated user back to the database
        await userCollector.AddAsync(user);

        return new OkObjectResult($"Login count updated to {user.LoginCount} for user {user.Name}.");
    }
}
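For context, a users table compatible with the example above might look like the following sketch. The column names and types are illustrative assumptions chosen to match the User class, and, to my understanding, the output binding requires the table to have a primary key:

-- Hypothetical MySQL schema backing the binding example above;
-- the primary key on Id lets the output binding locate existing rows
CREATE TABLE users (
    Id INT AUTO_INCREMENT PRIMARY KEY,
    Name VARCHAR(255) NOT NULL,
    LoginCount INT NOT NULL DEFAULT 0
);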
Learn More

- Azure Functions MySQL Bindings
- Azure Functions

Conclusion

With input and output bindings for Azure Database for MySQL now generally available, building serverless apps on Azure with MySQL has never been simpler or more efficient. By eliminating the need for manual connection management and boilerplate code, these bindings empower you to focus on what matters most: building scalable, event-driven applications with clean, maintainable code. Whether you're building real-time dashboards, automating workflows, or syncing data across systems, these bindings unlock new levels of productivity and performance. We can't wait to see what you'll build with them.

If you have any feedback or questions about the information provided above, please leave a comment below or email us at AskAzureDBforMySQL@service.microsoft.com. Thank you!

Just published: What's new with Postgres at Microsoft, 2025 edition
If you're using Postgres on Azure, or just curious about what the Postgres team at Microsoft has been up to during the past 12 months, this annual update might be worth a look. The blog post covers:

- New features in Azure Database for PostgreSQL – Flexible Server
- Open source code contributions to Postgres 18 (including async I/O)
- Work on the Citus extension to Postgres
- Community efforts like POSETTE, helping with PGConf.dev, our monthly Talking Postgres podcast, and more

There's also a hand-made infographic that maps out the different Postgres workstreams at Microsoft over the past year. It's a lot to take in, but the infographic captures so much of the work across the team; I think it's kind of a work of art.

📝 Read the full post here: https://techcommunity.microsoft.com/blog/adforpostgresql/whats-new-with-postgres-at-microsoft-2025-edition/4410710

And, I'd love to hear your thoughts or questions.

Deploying Moodle on Azure – things you should know
Moodle is one of the most popular open-source learning management platforms, empowering educators and researchers across the world to disseminate their work efficiently. It is also one of the most mature and robust OSS applications, one that the community has developed and improved over the years. We have seen customers ranging from small, medium, and large enterprises to schools, the public sector, and government organizations deploying Moodle on Azure. In this blog post, I'll share some best practices and tips for deploying Moodle on Azure, based on our experience working with several of our customers.