Customers who have read our previous blogs frequently raise an important question: How should we configure ProxySQL in a highly available setup to get maximum performance and scale from our Azure Database for MySQL service? In this post, I hope to provide detail that will help address this question.
Important: This post is a part of a multi-post blog series related to configuring ProxySQL with Azure Database for MySQL. We highly recommend that you read the previous posts in this series (shown below) to have more context about use cases for ProxySQL with Azure Database for MySQL servers.
Most of the time, you configure ProxySQL on a separate server, and all connections are routed from the application to the ProxySQL server. ProxySQL in turn directs the connections to the database server depending on the configured query rules (illustrated in figure 1). However, there are some concerns with this approach:
One way to address these concerns is to configure ProxySQL on the application side (illustrated in figure 2). By collocating ProxySQL with the application VMs, you also reduce cost. One downside of this solution is the need to manually synchronize the ProxySQL configuration across application VMs or pods. While this works for many customers when ProxySQL configuration changes are infrequent (as is the case in most scenarios), it can become a challenge in a complex setup. You can mitigate this problem and also build a highly available and scalable architecture with ProxySQL by using ProxySQL native clustering (as illustrated in figure 3).
Important:
This feature is experimental in ProxySQL and subject to change. We strongly recommend thinking carefully before using it in production.
This blog post shows how to configure native ProxySQL clustering as shown in Figure 3.
Note:
ProxySQL is an open-source community tool. It is supported by Microsoft on a best-effort basis. To get production support with authoritative guidance, you can evaluate and reach out to ProxySQL product support.
For the example in this post, we’ll be setting up a cluster with three nodes and configuring the ProxySQL server to split the read and write workloads to designated Azure Database for MySQL servers. Prerequisites for this effort are outlined in the post Load balance read replicas using ProxySQL in Azure Database for MySQL, the difference being that we’ll use three Linux VMs running Ubuntu as three nodes for ProxySQL cluster.
Perform the steps below on all three Linux VMs.
To install ProxySQL, refer to the detail in the section “Installing ProxySQL on Ubuntu VM” in the post Load balance read replicas using ProxySQL in Azure Database for MySQL.
admin_variables=
{
admin_credentials="admin:admin;cluster1:secret1pass"
# mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
mysql_ifaces="0.0.0.0:6032"
# refresh_interval=2000
# debug=true
cluster_username="cluster1"
cluster_password="secret1pass"
cluster_check_interval_ms=200
cluster_check_status_frequency=100
cluster_mysql_query_rules_save_to_disk=true
cluster_mysql_servers_save_to_disk=true
cluster_mysql_users_save_to_disk=true
cluster_proxysql_servers_save_to_disk=true
cluster_mysql_query_rules_diffs_before_sync=3
cluster_mysql_servers_diffs_before_sync=3
cluster_mysql_users_diffs_before_sync=3
cluster_proxysql_servers_diffs_before_sync=3
}
proxysql_servers =
(
{
hostname="VM_1_Public_IP"
port=6032
comment="proxysql100"
},
{
hostname="VM_2_Public_IP"
port=6032
comment="proxysql200"
},
{
hostname="VM_3_Public_IP"
port=6032
comment="proxysql300"
}
)
Important:
2. Execute the step below to re-initialize ProxySQL from the config file (after the first startup, the database file is used instead of the proxysql.cnf config file):
service proxysql initial
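After re-initializing all three nodes, you can check that each node sees its peers and that configuration checksums are being exchanged. The sketch below uses ProxySQL's standard admin tables; exact columns may vary slightly by version.

```sql
-- Run on the ProxySQL admin interface (port 6032) of any node.
-- List the peers this node knows about:
SELECT hostname, port, comment FROM runtime_proxysql_servers;

-- Confirm that configuration checksums are being pulled from peers:
SELECT hostname, name, checksum, updated_at
FROM stats_proxysql_servers_checksums;
```

If the checksum table stays empty, re-check the cluster_username/cluster_password values and that port 6032 is reachable between the VMs.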
Note:
In the previous section, we configured ProxySQL native clustering. This ensures that changes made on one ProxySQL node will be propagated to all nodes in the cluster. From now on, perform the ProxySQL-specific changes on only one of the nodes in the ProxySQL cluster.
mysql -u admin -padmin -h 127.0.0.1 -P 6032
insert into mysql_servers(hostgroup_id,hostname,port,weight,comment) values(1,'mydemomasterserver.mysql.database.azure.com',3306,1,'Write Group');
insert into mysql_servers(hostgroup_id,hostname,port,weight,comment) values(2,'mydemoreplicaserver.mysql.database.azure.com',3306,1,'Read Group');
UPDATE mysql_servers SET use_ssl=1 WHERE hostgroup_id=1;
UPDATE mysql_servers SET use_ssl=1 WHERE hostgroup_id=2;
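Before moving on, it can help to sanity-check the server entries you just inserted. A quick verification query on the admin interface (the hostnames shown will be the example server names used in this post):

```sql
SELECT hostgroup_id, hostname, port, use_ssl, comment
FROM mysql_servers;
```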
In ProxySQL, the user connects to ProxySQL, which in turn passes the connection to the MySQL node. To allow ProxySQL to access the MySQL database, we need to create a user on the MySQL database with the same credentials as on the ProxySQL server.
CREATE USER 'mydemouser'@'%' IDENTIFIED BY 'secretpassword';
GRANT ALL PRIVILEGES ON *.* TO 'mydemouser'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Note: Perform the action below on only one of the nodes in the ProxySQL cluster.
Allow the ‘mydemouser’ user to connect to ProxySQL server.
insert into mysql_users(username,password,default_hostgroup,transaction_persistent)values('mydemouser','secretpassword',1,1);
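Because native clustering is enabled, this user definition should propagate to the other nodes. As a quick check (a sketch using the standard mysql_users admin table), connect to the admin interface of a different node and confirm the user appears there:

```sql
-- On another node's admin interface (port 6032):
SELECT username, default_hostgroup, transaction_persistent
FROM mysql_users
WHERE username = 'mydemouser';
```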
Configure ProxySQL to monitor the nodes, and then create the monitoring user on the Master server.
CREATE USER 'monitoruser'@'%' IDENTIFIED BY 'secretpassword';
GRANT SELECT ON *.* TO 'monitoruser'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
On the server running ProxySQL, set the mysql-monitor username and password to the credentials of the new account.
set mysql-monitor_username='monitoruser';
set mysql-monitor_password='secretpassword';
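Once the monitor credentials are loaded to runtime (the load/save statements are covered below), ProxySQL's monitor module starts probing the backends. You can check for connection problems in the monitor log tables; this is a sketch using ProxySQL's standard monitor schema:

```sql
-- Empty connect_error values indicate successful monitor connections.
SELECT hostname, port, time_start_us, connect_error
FROM monitor.mysql_server_connect_log
ORDER BY time_start_us DESC
LIMIT 6;
```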
insert into mysql_query_rules(rule_id,active,match_digest,destination_hostgroup,apply)values(1,1,'^SELECT.*FOR UPDATE$',1,1);
insert into mysql_query_rules(rule_id,active,match_digest,destination_hostgroup,apply)values(2,1,'^SELECT',2,1);
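These two rules are evaluated in ascending rule_id order: queries matching ^SELECT.*FOR UPDATE$ stay on the write host group (1), the remaining SELECTs go to the read host group (2), and anything that matches no rule falls through to the user's default_hostgroup (1, as set when we created mydemouser). You can inspect the configured rules on the admin interface:

```sql
SELECT rule_id, active, match_digest, destination_hostgroup, apply
FROM mysql_query_rules
ORDER BY rule_id;
```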
In ProxySQL's configuration system, the changes we've made so far exist only in memory. To apply them and make them persist across restarts, you must load the settings to runtime and save them to disk.
load mysql users to runtime;
load mysql servers to runtime;
load mysql query rules to runtime;
load mysql variables to runtime;
load admin variables to runtime;
save mysql users to disk;
save mysql servers to disk;
save mysql query rules to disk;
save mysql variables to disk;
save admin variables to disk;
After successfully completing the above steps, ProxySQL is configured with native clustering and is ready to split the read and write workloads. Use the following steps to test whether reads and writes are being routed properly:
mysql -u mydemouser -psecretpassword -h 127.0.0.1 -P 6033
SELECT *
FROM mydemotable;
UPDATE mydemotable
SET mydemocolumn=value
WHERE condition;
To verify that ProxySQL has routed the above read and write correctly:
3. Connect to the ProxySQL administration interface on one of the VMs with the default password ‘admin’.
mysql -u admin -padmin -h 127.0.0.1 -P 6032
4. Execute the following query:
SELECT * FROM stats_mysql_query_digest;
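The hostgroup column in the digest output shows where each query was routed: the SELECT above should report hostgroup 2 (read) and the UPDATE hostgroup 1 (write). A narrowed-down version of the query, using column names from ProxySQL's stats schema:

```sql
SELECT hostgroup, digest_text, count_star
FROM stats_mysql_query_digest
ORDER BY count_star DESC;
```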
If you have trouble setting up ProxySQL on Azure Database for MySQL, please contact the Azure Database for MySQL team at AskAzureDBforMySQL@service.microsoft.com.
Thank you!
Amol Bhatnagar
Program Manager - Microsoft