Optimally tuning your workload on Azure Database for MySQL

 

Overview 

When troubleshooting Azure Database for MySQL performance, it can be hard to know which server parameters to adjust.  As our test environment, we used General Purpose 2 vCore and 4 vCore MySQL servers running a workload of ~90% read queries across 150 concurrent connections. The recommendations and insights below were generated and tested specifically on Azure Database for MySQL instances.  

 

Symptom 1: Queries exceed temporary table sizing 

In high-read scenarios, queries executed against the server can at times exceed the in-memory temporary table size.  If your workload exceeds the default temporary table size, the MySQL server switches to writing the temporary tables to disk, which can affect server performance. So how do you know if your server is writing to disk because it has exceeded the temporary table size? 

 

Let's take a look at our metrics, specifically for 'created_tmp_disk_tables' and 'created_tmp_tables'. 

show global status like 'created_tmp_disk_tables'; 
show global status like 'created_tmp_tables'; 

If the server creates the table on disk either initially or by converting an in-memory table, it will increment the 'created_tmp_disk_tables' status variable.  Similarly, queries performing a join or sort without the benefit of an index require the creation of a temporary table in memory, which will increment the 'created_tmp_tables' variable. 

 

Note: To determine whether running a specific query will use temporary tables, run EXPLAIN on the query. If the query will run using temporary tables, the 'extra' column will indicate 'Using temporary'. 
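
As a minimal sketch (the table and column here are hypothetical, not from the original tests), a GROUP BY on an unindexed column typically surfaces this:

-- 'status' has no index, so MySQL needs an internal temporary table for the grouping
explain select status, count(*) from orders group by status;
-- extra column: "Using temporary; Using filesort"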

 

To calculate the percentage of your workload whose queries spill to disk, use your metric values in the formula below: 

(Created_tmp_disk_tables / (Created_tmp_disk_tables + Created_tmp_tables)) * 100 

Ideally, this percentage should be less than 25%. If the percentage is 25% or greater, you can modify two server parameters: 'tmp_table_size' and 'max_heap_table_size'. 
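
One way to compute this percentage directly on the server (a sketch assuming MySQL 5.7 or later, where status counters are exposed through performance_schema.global_status):

select (disk.val / (disk.val + mem.val)) * 100 as pct_tmp_tables_on_disk
from (select variable_value as val from performance_schema.global_status
      where variable_name = 'Created_tmp_disk_tables') disk,
     (select variable_value as val from performance_schema.global_status
      where variable_name = 'Created_tmp_tables') mem;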

 

To calculate the values required, look at the total per-connection memory values and the base memory. The sum of the per-connection memory parameters, excluding 'tmp_table_size', combined with the base memory accounts for the total memory of the server.   

 

The formula to calculate the largest possible size of 'tmp_table_size' and 'max_heap_table_size' is: 

(total memory - (base memory + (sum of per-connection memory * # of connections))) / # of connections 
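
To make the arithmetic concrete, here is a sketch with purely illustrative figures (none of these numbers come from the tests above):

-- illustrative only: 16384 MB total memory, 3072 MB base memory,
-- 20 MB of summed per-connection buffers, 150 connections
select (16384 - (3072 + (20 * 150))) / 150 as upper_bound_mb;
-- ~68.7 MB per connection available for 'tmp_table_size' / 'max_heap_table_size'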

 

Symptom 2: Heavy contention for tables 

If a user is running several queries against tables, causing the number of opened tables to increase and potentially heavy contention on the same table, then query performance can be impacted.  This is particularly true for a heavy-read scenario in which many connections are concurrently opening different tables.  You can monitor table open trends by looking at the 'opened_tables' metric. 

Let's also look at the 'table_open_cache' parameter, which is the total number of open tables allowed for all threads. Based on our performance testing, a heavy-read scenario can benefit from tuning this parameter. 

 

To optimize the value of the 'table_open_cache' parameter, configure it so that approximately 50% of its value is being used by the workload. You can calculate this fairly easily using the following formula: 

Minimum table_open_cache = (total # of tables) x (# of active connections) / 2 

It's also important, however, to include the base tables within this calculation. For example, for 150 active connections and around 250 tables total, the 'table_open_cache' parameter should be at least 18,750 (250 x 150 / 2). 
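
To see how close the workload is running to the configured cache, you can check the standard status counters:

show global variables like 'table_open_cache';
show global status like 'Open_tables';    -- tables currently open
show global status like 'Opened_tables';  -- cumulative opens; rapid growth suggests the cache is too small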

 

Similarly, for 'table_open_cache_instances', increase the value if a single table is experiencing heavy contention from queries against it.  For 150 active connections and 250 total tables, we found a benefit in increasing 'table_open_cache_instances' from 1 to 4, especially while testing a high-read scenario.   
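
Before changing it, you can confirm how the cache is currently partitioned (on Azure Database for MySQL, the change itself is made through the server parameters blade or CLI):

show global variables like 'table_open_cache_instances';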

 

Symptom 3: Per-thread buffers 

Understanding a user's workload is critical to improving performance on a server.  For example, for a workload with 150 active connections and complex queries, it can be useful to modify certain per-thread buffers for Azure Database for MySQL. 

 

To understand whether your queries are using per-thread buffers, run EXPLAIN on a query and review the description under the 'extra' column to see what kind of buffer your query may be using. 
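
For example, on MySQL 5.7 a join with no usable index on the join column reports the join buffer in 'extra' (the tables and columns here are hypothetical):

explain select o.id, c.name
from orders o join customers c on o.customer_ref = c.ref;
-- extra column: "Using join buffer (Block Nested Loop)"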

In most cases, the tables are already set up with indexes, so any complex join and sort queries should already be using the given indexes to work optimally.  However, these queries can potentially be further optimized by running them through the Performance Recommendation feature, which is available as part of Intelligent Performance. If your queries are already optimized for JOINs and ORDER BYs, then you can benefit from 'preload_buffer_size'.   

 

Join_buffer_size 

The 'join_buffer_size' parameter is used for all index scans, as well as for joins that do not use indexes.  Thus, for a read-heavy workload with tables that don't have indexes, joins will do full table scans that will use quite a bit of the buffer.  If join queries are heavy, then 'join_buffer_size' could be increased to allow for full table scans within memory.  Note that 'join_buffer_size' is not only a per-thread buffer but also a per-join-per-thread buffer. 

 

On the flip side, setting the value of the 'join_buffer_size' parameter too high can cause significant performance drops, because most of the allocated buffer will be neither used nor required.  Thus, setting this value appropriately depends on a user's indexes and queries. 
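
Because 'join_buffer_size' is allocated per join per thread, one low-risk approach is to raise it only for the session running the heavy join rather than server-wide (the value below is illustrative):

set session join_buffer_size = 8 * 1024 * 1024;  -- 8 MB, for this session only
-- run the heavy join here; the setting disappears when the session ends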

 

Sort_buffer_size 

Similarly, 'sort_buffer_size' is used for ORDER BY and GROUP BY operations.  For a read-heavy workload (again, in which tables don't have indexes), sort queries require memory to complete a full sort operation.  For queries that can't be further optimized or can't use indexes, you can increase 'sort_buffer_size' to make them more performant.  If the 'sort_merge_passes' metric increases quickly, a user's queries may benefit from increasing 'sort_buffer_size' beyond its 256K default.  However, if the value is set too high, it will affect performance for a smaller workload in which queries do not use all of the 'sort_buffer_size'. 
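
A quick way to watch for undersized sort buffers, sketched with the standard counter and a session-scoped (illustrative) increase:

show global status like 'Sort_merge_passes';
-- if this climbs steadily, sorts are spilling to temporary files on disk
set session sort_buffer_size = 2 * 1024 * 1024;  -- illustrative 2 MB, for this session only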

 

Preload_buffer_size 

'Preload_buffer_size' allocates the size of the buffer used when preloading indexes, which is useful when your workload relies on indexes, as an average WordPress workload does.  It's especially helpful to modify this parameter if indexes are used often, as it is better to read indexes from the buffer. 
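
For reference, this buffer applies when indexes are explicitly preloaded into the key cache, which is a MyISAM mechanism (the table name below is hypothetical):

show global variables like 'preload_buffer_size';
load index into cache wp_posts;  -- hypothetical MyISAM table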

 

Symptom 4: Improving write transactions 

There are times when a user's workload may be write-heavy or rely upon committing and flushing of certain transactions.  In this case, you can tune certain parameters to assist with the flushing process. 

 

Innodb_lru_scan_depth 

This parameter specifies how far into the LRU list of the buffer pool the flush operation should look for dirty pages.  This flush operation runs once per second and is performed by the page cleaner thread doing the scan. For most workloads, you can set this parameter to a low value, as searching too far into the buffer pool could impact performance.  In our performance tests, setting this value lower than the default of 1024 (the unit is pages, not bytes) neither improved nor degraded the performance of a read-heavy workload; for write-heavy workloads, however, there was an improvement. 
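
To inspect the current depth before tuning (on Azure Database for MySQL, the change is made through the server parameters blade or CLI):

show global variables like 'innodb_lru_scan_depth';  -- default 1024 pages per buffer pool instance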

 

Innodb_purge_threads 

'Innodb_purge_threads' is a useful parameter when there are many purge operations ongoing in your instance.  Since delete-marked rows and their undo records are only physically removed when InnoDB runs a purge operation, increasing 'innodb_purge_threads' can be beneficial, especially when there are many tables and threads are less likely to contend for the same tables. 
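
To gauge whether purge is falling behind, the InnoDB history list length is a useful signal (a sketch using standard commands):

show global variables like 'innodb_purge_threads';
show engine innodb status;  -- a "History list length" that grows over time suggests purge lag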

 

Innodb_autoinc_lock_mode 

This parameter sets the locking mechanism for insert-like statements that add values to a table's AUTO_INCREMENT columns. Typically, this value is set to 1, indicating "consecutive" lock mode.  For workloads with many insert-like statements, there is a performance improvement in setting 'innodb_autoinc_lock_mode' to 2, which indicates "interleaved" lock mode.  This allows insert-like statements to run in a concurrent and scalable manner because InnoDB no longer holds a table-level AUTO-INC lock.  

 

You should only set this value if you have no dependency on AUTO_INCREMENT values being consecutive.  Further, setting the lock mode to an interleaved mechanism may not help all write-heavy scenarios, such as those with simple inserts. It is also not safe, and therefore not recommended, for statement-based replication scenarios.  
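
Before changing the mode, confirm the binary log format, since interleaved mode is only safe with row-based logging (note that 'innodb_autoinc_lock_mode' is static, so on Azure Database for MySQL it is changed through server parameters and requires a restart):

show global variables like 'binlog_format';             -- should be ROW before using lock mode 2
show global variables like 'innodb_autoinc_lock_mode';  -- current mode: 0, 1, or 2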

 

Leave a comment below if you have more server parameters you recommend tuning!

 

Thanks for reading!

Mollee Jain 

Software Engineer, Azure OSS Databases team
