In the first post of this series (Testing ReadySet as a Query Cacher for PostgreSQL (Plus ProxySQL and HAproxy) Part 1: How-To), I presented my test environment and methodology and explained how to install ReadySet, ProxySQL, and HAproxy and configure them to work with PostgreSQL. In this final part, I present the different test scenarios […]
Testing ReadySet as a Query Cacher for PostgreSQL (Plus ProxySQL and HAproxy) Part 2: Test Results
Testing ReadySet as a Query Cacher for PostgreSQL (Plus ProxySQL and HAproxy) Part 1: How-To
A couple of weeks ago, I attended a PGDay event in Blumenau, a city not far away from where I live in Brazil. Opening the day were former Percona colleagues Marcelo Altmann and Wagner Bianchi, showcasing ReadySet’s support for PostgreSQL. Readyset is a source-available database cache service that differs from other solutions by not relying […]
Is MySQL Router 8.2 Any Better?
In my previous article, Comparisons of Proxies for MySQL, I showed that MySQL Router was the worst-performing proxy in the comparison. Since then, we have had several MySQL releases and, of course, some new MySQL Router releases as well. Most importantly, MySQL Router has also gone back to being a level 7 proxy […]
The Various Methods to Backup and Restore ProxySQL

ProxySQL is a high-performance SQL proxy that runs as a daemon watched by a monitoring process, which restarts it in case of a crash to minimize downtime.
The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.
The proxy is designed to run continuously without needing to be restarted. Most configurations can be done at runtime using queries similar to SQL statements in the ProxySQL admin interface. These include runtime parameters, server grouping, and traffic-related settings.

In this blog, we discuss various methods to back up and restore ProxySQL. The following methods can be used to back up and restore a ProxySQL server:
- Config file backup
- Mysqldump
- Physical snapshot
Here, we will consider ProxySQL configured for asynchronous replication with a primary and a replica, as shown in the topology below.
ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 192.0.0.1 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
| 2 | 192.0.0.2 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)
Config file backup:
A ProxySQL backup can be performed using the config file; this method is also useful for creating multiple instances.
The config backup file can be created with the command below and can later be used to start new ProxySQL instances.
ProxySQLAdmin> select config into outfile /var/lib/proxysql/proxybkp.cnf;
Query OK, 9920 rows affected (0.00 sec)
File /var/lib/proxysql/proxybkp.cnf is saved.
proxysql $ sudo cat /var/lib/proxysql/proxybkp.cnf |grep -wi mysql_servers -A 35
mysql_servers:
(
{
hostgroup_id=1
hostname="192.0.0.1"
port=3306
gtid_port=0
status="ONLINE"
weight=1
compression=0
max_connections=1000
max_replication_lag=600
use_ssl=0
max_latency_ms=0
},
{
hostgroup_id=2
hostname="192.0.0.2"
port=3306
gtid_port=0
status="ONLINE"
weight=1
compression=0
max_connections=1000
max_replication_lag=600
use_ssl=0
max_latency_ms=0
}
)
mysql_replication_hostgroups:
(
{
writer_hostgroup=1
reader_hostgroup=2
check_type="read_only"
}
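If you later need to bootstrap a new instance from this backup, the saved file can be passed to proxysql at startup. A minimal sketch (assuming the exported config is complete for your needs; the datadir path below is just an example, and --initial makes ProxySQL rebuild its internal database from the config file):
$ sudo proxysql --initial -c /var/lib/proxysql/proxybkp.cnf -D /var/lib/proxysql_new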
Mysqldump:
A ProxySQL backup and restore can be performed using mysqldump, which can dump all (or some) of the tables of ProxySQL's admin database.
mysqldump must be run with --no-tablespaces, --skip-triggers, --skip-add-locks, and --skip-column-statistics.
Backup:
$ mysqldump -u admin -p -h 127.0.0.1 -P6032 --no-tablespaces --no-create-info --no-create-db --skip-triggers --skip-column-statistics main mysql_servers > proxy_mysql_servers_bkp.sql
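The same approach can be extended to the other commonly backed-up configuration tables; for example (a sketch, dumping several tables of the main database into a single file):
$ mysqldump -u admin -p -h 127.0.0.1 -P6032 --no-tablespaces --no-create-info --no-create-db --skip-triggers --skip-column-statistics main mysql_servers mysql_users mysql_query_rules > proxy_config_bkp.sql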
Restoration:
ProxySQLAdmin> select * from mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 192.0.0.1 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
| 2 | 192.0.0.2 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.00 sec)
ProxySQLAdmin> delete from mysql_servers;
Query OK, 2 rows affected (0.01 sec)
ProxySQLAdmin> select * from mysql_servers;
Empty set (0.00 sec)
$ mysql -uadmin -p -h 127.0.0.1 -P 6032 -vvv < proxy_mysql_servers_bkp.sql
Enter password:
Query OK, 0 rows affected (0.00 sec)
Bye
ProxySQLAdmin> select * from mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 192.0.0.1 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
| 2 | 192.0.0.2 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.00 sec)
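Keep in mind that restoring into mysql_servers only updates the in-memory admin table; for the restored configuration to take effect and survive a restart, it still has to be loaded to runtime and saved to disk:
ProxySQLAdmin> LOAD MYSQL SERVERS TO RUNTIME;
ProxySQLAdmin> SAVE MYSQL SERVERS TO DISK;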
If mysqldump version 8.0 is used:
- mysqldump also needs the --skip-column-statistics option
- It only works from ProxySQL version 2.0.8 and above
Physical snapshot:
A ProxySQL backup can be performed using the physical snapshot method. Make sure to stop the ProxySQL service first to get a consistent backup. This method consists of the following steps:
- Stop ProxySQL
- Create a copy of the data directory
- Start ProxySQL
Backup:
$ sudo service proxysql stop
Redirecting to /bin/systemctl stop proxysql.service
$ sudo cp /var/lib/proxysql/* /proxysql_snapshot/
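Since ProxySQL stores its configuration in a SQLite database (proxysql.db), the snapshot can be sanity-checked without starting ProxySQL; a quick sketch, assuming the sqlite3 client is installed:
$ sudo sqlite3 /proxysql_snapshot/proxysql.db "select hostgroup_id, hostname, port, status from mysql_servers;"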
Restoration:
$ sudo rm -rf /var/lib/proxysql/*
$ sudo cp /proxysql_snapshot/* /var/lib/proxysql/
$ sudo chown -R proxysql:proxysql /var/lib/proxysql/
$ sudo service proxysql start
Redirecting to /bin/systemctl start proxysql.service
ProxySQLAdmin> select * from mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 192.0.0.1 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
| 2 | 192.0.0.2 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.00 sec)
Conclusion
These methods provide solutions for ProxySQL backups and restores, which play a pivotal role in safeguarding the integrity of your data and providing defense against various disasters, hardware malfunctions, data loss, and corruption. Without reliable backups, the consequences of data loss can be severe, as businesses risk operational disruptions, financial losses, damage to their reputation, and could even result in compliance violations.
If you are interested in more hands-on learning of ProxySQL, Percona’s professional training department offers a one-day, hands-on intensive tutorial covering all things ProxySQL. In the tutorial, we cover the basics of installation and configuration, read-write splitting, async/GR/PXC-based high availability, query firewalling, query whitelisting, results caching, and much, much more. Reach out to us today to schedule your instructor-led class!
Learn more about Percona Training
Related links:
How to Install ProxySQL From the Percona Repository
Install Percona Server for MySQL
How to set up a replica for replication in 6 simple steps with Percona XtraBackup
Restrict MySQL Connections to Broken Replica in ProxySQL

ProxySQL is a high-performance SQL proxy that runs as a daemon watched by a monitoring process, which restarts it in case of a crash to minimize downtime.
The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.
The proxy is designed to run continuously without needing to be restarted. Most configurations can be done at runtime using queries similar to SQL statements in the ProxySQL admin interface. These include runtime parameters, server grouping, and traffic-related settings.

Here, we will consider ProxySQL configured for async replication. Even when a replica is broken or stopped, ProxySQL still routes connections to it. This can be overcome by setting an appropriate value for mysql-monitor_slave_lag_when_null.
mysql-monitor_slave_lag_when_null
When the replication check returns that Seconds_Behind_Master=NULL, the value of mysql-monitor_slave_lag_when_null (in seconds) is assumed to be the current replication lag. This allows us to either shun or keep online a server where replication is broken/stopped.
ProxySQL stops routing connections to a replica whenever its lag is greater than the max_replication_lag value defined in ProxySQL. When a replica is broken or stopped, the replication check returns Seconds_Behind_Master=NULL, and the value of mysql-monitor_slave_lag_when_null (in seconds) is then assumed to be the current replication lag.
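To see what the monitor module actually measured for each replica, you can query its replication lag log; a sketch (the monitor schema and column names below are as found in recent ProxySQL versions and may differ slightly in yours):
ProxySQLAdmin> select hostname, port, repl_lag, error from monitor.mysql_server_replication_lag_log order by time_start_us desc limit 5;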
MySQL connections to broken/stopped replicas can be restricted by setting the value of mysql-monitor_slave_lag_when_null greater than max_replication_lag. Let's see this in the scenario below.
max_replication_lag is set to 600, so ProxySQL stops routing connections to replicas when the lag is greater than 600.
ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 127.0.0.1 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
| 2 | 127.0.0.2 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)
Let’s stop replication on replica 127.0.0.2:
Replica_127.0.0.2>stop replica;
Query OK, 0 rows affected (0.01 sec)
ProxySQL still routes connections to the replica, even though replication is stopped.
ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 127.0.0.1 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
| 2 | 127.0.0.2 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)
Now consider setting mysql-monitor_slave_lag_when_null = 610 as it is greater than the current value of max_replication_lag.
ProxySQLAdmin> select * from runtime_global_variables where variable_name='mysql-monitor_slave_lag_when_null';
+-----------------------------------+----------------+
| variable_name | variable_value |
+-----------------------------------+----------------+
| mysql-monitor_slave_lag_when_null | 60 |
+-----------------------------------+----------------+
1 row in set (0.002 sec)
ProxySQLAdmin> UPDATE global_variables SET variable_value=610 WHERE variable_name='mysql-monitor_slave_lag_when_null';
Query OK, 1 row affected (0.002 sec)
ProxySQLAdmin> load mysql variables to runtime;
Query OK, 0 rows affected (0.002 sec)
ProxySQLAdmin> save mysql variables to disk;
Query OK, 158 rows affected (0.006 sec)
ProxySQLAdmin> select * from runtime_global_variables where variable_name='mysql-monitor_slave_lag_when_null';
+-----------------------------------+----------------+
| variable_name | variable_value |
+-----------------------------------+----------------+
| mysql-monitor_slave_lag_when_null | 610 |
+-----------------------------------+----------------+
1 row in set (0.002 sec)
Let’s stop replication once again on replica 127.0.0.2:
Replica_127.0.0.2>stop replica;
Query OK, 0 rows affected (0.01 sec)
As soon as replication is stopped on replica 127.0.0.2, the replication check returns Seconds_Behind_Master=NULL. Since mysql-monitor_slave_lag_when_null is set to 610, ProxySQL assumes a replication lag of 610, which exceeds max_replication_lag (600), so it shuns the replica and stops routing connections to it.
ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 127.0.0.1 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
| 2 | 127.0.0.2 | 3306 | 0 | SHUNNED | 1 | 0 | 1000 | 600 | 0 | 0 | |
+--------------+---------------+------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)
When replication is started on replica 127.0.0.2:
Replica_127.0.0.2>start replica;
Query OK, 0 rows affected (0.01 sec)
ProxySQL will start routing connections to the replica again.
ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 127.0.0.1 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
| 2 | 127.0.0.2 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 600 | 0 | 0 | |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)
Conclusion
By setting an appropriate value for mysql-monitor_slave_lag_when_null in ProxySQL, MySQL connections to broken/stopped replicas can be restricted.
Related links:
How to Install ProxySQL From the Percona Repository
Install Percona Server for MySQL
How to set up a replica for replication in 6 simple steps with Percona XtraBackup
How to Persist a Hashed Format Password Inside ProxySQL

In this blog post, we will see how to persist passwords inside the ProxySQL mysql_users table in hashed format only. We will also see how easily passwords that were stored in clear text can be converted into hashed format.
We are highlighting a scenario we ran into while working on a client environment, where the ProxySQL mysql_users table had more than 100 user entries: some of them had been inserted with clear text passwords, whereas others had been inserted properly as hashed entries.
Before explaining the simple commands used to quickly convert those clear text entries into hashed ones, let's look at some more information about the ProxySQL mysql_users table and the password formats.
Password formats inside ProxySQL
ProxySQL is capable of storing passwords in two different formats within the mysql_users.password field, whether in-memory or on-disk. These formats include plain text and hashed passwords.
Plain text passwords are extremely vulnerable to unauthorized access, as anyone with access to the database or configuration files can easily read them. While storing these files in a secure location can mitigate some security concerns, there is still a risk of data breaches. Hashed passwords, on the other hand, are stored in the same format as passwords in the MySQL server’s “mysql.user.password” (before MySQL 8.0 version) or “mysql.user.authentication_string” column (since MySQL 8.0 version using the mysql_native_password plugin), providing an added layer of security.
In ProxySQL, any password that begins with an asterisk (*) is considered to be a hashed password.
The Admin interface of ProxySQL lacks a PASSWORD() function. Therefore, any passwords stored within ProxySQL are preserved in the format in which they were originally inserted. This format may either be plain text or a hashed value.
Note: In general, ProxySQL does not support users created with the caching_sha2_password plugin when the corresponding mysql.user.authentication_string value is stored in the mysql_users.password column. There is a workaround for using accounts created in the database with the caching_sha2_password plugin: inserting the clear text password into ProxySQL's mysql_users.password column. However, keeping clear text passwords inside ProxySQL is not recommended as a security best practice. Hence, we could say that ProxySQL-to-MySQL communication works better with users created with the mysql_native_password plugin in the database.
For more details, please check the blog post ProxySQL Support for MySQL caching_sha2_password and the official ProxySQL documentation, Information about MySQL 8.0 – ProxySQL.
So, to explain this scenario, here we created four different test DB user accounts inside the database with the mysql_native_password plugin.
From a database node:
| Username | Password (In Clear Text) |
| test1 | test1 |
| test2 | test2 |
| test3 | test3 |
| test4 | test4 |
mysql [localhost:8028] {msandbox} ((none)) > select user,host,authentication_string,plugin from mysql.user where user like 'test%'\G
*************************** 1. row ***************************
user: test1
host: localhost
authentication_string: *06C0BF5B64ECE2F648B5F048A71903906BA08E5C
plugin: mysql_native_password
*************************** 2. row ***************************
user: test2
host: localhost
authentication_string: *7CEB3FDE5F7A9C4CE5FBE610D7D8EDA62EBE5F4E
plugin: mysql_native_password
*************************** 3. row ***************************
user: test3
host: localhost
authentication_string: *F357E78CABAD76FD3F1018EF85D78499B6ACC431
plugin: mysql_native_password
*************************** 4. row ***************************
user: test4
host: localhost
authentication_string: *D159BBDA31273BE3F4F00715B4A439925C6A0F2D
plugin: mysql_native_password
4 rows in set (0.00 sec)
From ProxySQL:
Here we will insert the user accounts into the mysql_users table in mixed formats: some in clear text and some hashed.
ProxySQL_Admin> INSERT INTO mysql_users(username,password) VALUES ('test1','test1'), ('test2','*7CEB3FDE5F7A9C4CE5FBE610D7D8EDA62EBE5F4E'),('test3','test3'), ('test4','*D159BBDA31273BE3F4F00715B4A439925C6A0F2D');
Query OK, 4 rows affected (0.00 sec)
ProxySQL_Admin> select distinct username,password from mysql_users where username like 'test%';
+----------+-------------------------------------------+
| username | password |
+----------+-------------------------------------------+
| test1 | test1 |
| test2 | *7CEB3FDE5F7A9C4CE5FBE610D7D8EDA62EBE5F4E |
| test3 | test3 |
| test4 | *D159BBDA31273BE3F4F00715B4A439925C6A0F2D |
+----------+-------------------------------------------+
4 rows in set (0.00 sec)
ProxySQL_Admin> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
ProxySQL_Admin> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.01 sec)
ProxySQL_Admin> select distinct username,password from mysql_users where username like 'test%';
+----------+-------------------------------------------+
| username | password |
+----------+-------------------------------------------+
| test1 | test1 |
| test2 | *7CEB3FDE5F7A9C4CE5FBE610D7D8EDA62EBE5F4E |
| test3 | test3 |
| test4 | *D159BBDA31273BE3F4F00715B4A439925C6A0F2D |
+----------+-------------------------------------------+
4 rows in set (0.00 sec)
Below is some additional ProxySQL configuration used to verify that the passwords in ProxySQL's mysql_users table work for establishing a connection with the MySQL database.
ProxySQL_Admin> select hostgroup_id,hostname,port,status from mysql_servers;
+--------------+-----------+------+--------+
| hostgroup_id | hostname | port | status |
+--------------+-----------+------+--------+
| 10 | localhost | 8028 | ONLINE |
+--------------+-----------+------+--------+
1 row in set (0.00 sec)
ProxySQL_Admin> select rule_id,active,proxy_port,match_digest,destination_hostgroup,retries,apply from mysql_query_rules\G
*************************** 1. row ***************************
rule_id: 1048
active: 1
proxy_port: 6033
match_digest: ^SELECT.*FOR UPDATE
destination_hostgroup: 10
retries: 3
apply: 1
*************************** 2. row ***************************
rule_id: 1050
active: 1
proxy_port: 6033
match_digest: ^SELECT.*$
destination_hostgroup: 10
retries: 3
apply: 1
2 rows in set (0.00 sec)
Let’s check the database connectivity via ProxySQL using these DB user accounts.
for i in {1..4}; do mysql -h 127.0.0.1 -utest$i -ptest$i -P6033 -e"select current_user(),version();";done
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------+-----------+
| current_user() | version() |
+-----------------+-----------+
| test1@localhost | 8.0.28 |
+-----------------+-----------+
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------+-----------+
| current_user() | version() |
+-----------------+-----------+
| test2@localhost | 8.0.28 |
+-----------------+-----------+
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------+-----------+
| current_user() | version() |
+-----------------+-----------+
| test3@localhost | 8.0.28 |
+-----------------+-----------+
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------+-----------+
| current_user() | version() |
+-----------------+-----------+
| test4@localhost | 8.0.28 |
+-----------------+-----------+
As seen above, the main problem is that our mysql_users table has plain text passwords visible for some user accounts (test1 and test3), which we do not want to keep in clear text; instead, all password entries should be stored in hashed format.
One way to fix this problem is to drop those clear text entries, generate the password hash for each affected user from the MySQL prompt (for example, with the PASSWORD() function where it is still available), and then insert the actual hashed entries into the mysql_users table.
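Note that the PASSWORD() function was removed in MySQL 8.0. Where it is not available, the mysql_native_password hash can be computed manually; a sketch using the '*' + uppercase SHA1(SHA1(password)) format (the result should match the test1 hash shown earlier):
mysql> SELECT CONCAT('*', UPPER(SHA1(UNHEX(SHA1('test1'))))) AS native_password_hash;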
But, as mentioned earlier, if our mysql_users table has a lot of entries (>100), fixing those passwords manually can be a tedious job.
Note: Here it is assumed we are not using the percona-scheduler-admin client, which can sync user accounts from the database nodes directly into the ProxySQL mysql_users table.
So, let's move to the next section, where we will see how the admin-hash_passwords variable helps solve this problem and persist only hashed password entries inside ProxySQL's mysql_users table.
ProxySQL’s admin-hash_passwords variable
ProxySQL version 1.2.3 has included a new global boolean variable called admin-hash_passwords, which is enabled by default to support hashed passwords. If admin-hash_passwords=true, passwords will be automatically hashed during runtime when executing the LOAD MYSQL USERS TO RUNTIME command. However, passwords stored in the mysql_users tables will not be automatically hashed.
Nevertheless, it is possible to hash these passwords both in-memory and on-disk by copying users from RUNTIME using commands such as “SAVE MYSQL USERS FROM RUNTIME” after executing LOAD MYSQL USERS TO RUNTIME and then saving the updated information using SAVE MYSQL USERS TO DISK.
Let’s persist the hashed password inside ProxySQL
ProxySQL_Admin> select @@admin-hash_passwords;
+------------------------+
| @@admin-hash_passwords |
+------------------------+
| true |
+------------------------+
1 row in set (0.00 sec)
ProxySQL_Admin> select distinct username,password from mysql_users where username like 'test%';
+----------+-------------------------------------------+
| username | password |
+----------+-------------------------------------------+
| test1 | test1 |
| test2 | *7CEB3FDE5F7A9C4CE5FBE610D7D8EDA62EBE5F4E |
| test3 | test3 |
| test4 | *D159BBDA31273BE3F4F00715B4A439925C6A0F2D |
+----------+-------------------------------------------+
4 rows in set (0.00 sec)
ProxySQL_Admin> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.01 sec)
ProxySQL_Admin> select distinct username,password from mysql_users where username like 'test%';
+----------+-------------------------------------------+
| username | password |
+----------+-------------------------------------------+
| test1 | test1 |
| test2 | *7CEB3FDE5F7A9C4CE5FBE610D7D8EDA62EBE5F4E |
| test3 | test3 |
| test4 | *D159BBDA31273BE3F4F00715B4A439925C6A0F2D |
+----------+-------------------------------------------+
4 rows in set (0.00 sec)
Currently, passwords are hashed at RUNTIME, but they are not yet hashed in the mysql_users table. To hash them inside the mysql_users table as well, we need to run the “SAVE MYSQL USERS FROM RUNTIME” command.
ProxySQL_Admin> SAVE MYSQL USERS FROM RUNTIME;
Query OK, 0 rows affected (0.00 sec)
ProxySQL_Admin> select distinct username,password from mysql_users where username like 'test%';
+----------+-------------------------------------------+
| username | password |
+----------+-------------------------------------------+
| test1 | *06C0BF5B64ECE2F648B5F048A71903906BA08E5C |
| test2 | *7CEB3FDE5F7A9C4CE5FBE610D7D8EDA62EBE5F4E |
| test3 | *F357E78CABAD76FD3F1018EF85D78499B6ACC431 |
| test4 | *D159BBDA31273BE3F4F00715B4A439925C6A0F2D |
+----------+-------------------------------------------+
4 rows in set (0.00 sec)
The command “SAVE MYSQL USERS TO DISK” can now be used to store/persist the hashed passwords on disk.
ProxySQL_Admin> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.01 sec)
ProxySQL_Admin> select distinct username,password from mysql_users where username like 'test%';
+----------+-------------------------------------------+
| username | password |
+----------+-------------------------------------------+
| test1 | *06C0BF5B64ECE2F648B5F048A71903906BA08E5C |
| test2 | *7CEB3FDE5F7A9C4CE5FBE610D7D8EDA62EBE5F4E |
| test3 | *F357E78CABAD76FD3F1018EF85D78499B6ACC431 |
| test4 | *D159BBDA31273BE3F4F00715B4A439925C6A0F2D |
+----------+-------------------------------------------+
4 rows in set (0.00 sec)
Let’s verify the database connectivity via ProxySQL using these DB user accounts.
for i in {1..4}; do mysql -h 127.0.0.1 -utest$i -ptest$i -P6033 -e"select current_user(),version();";done
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------+-----------+
| current_user() | version() |
+-----------------+-----------+
| test1@localhost | 8.0.28 |
+-----------------+-----------+
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------+-----------+
| current_user() | version() |
+-----------------+-----------+
| test2@localhost | 8.0.28 |
+-----------------+-----------+
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------+-----------+
| current_user() | version() |
+-----------------+-----------+
| test3@localhost | 8.0.28 |
+-----------------+-----------+
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------+-----------+
| current_user() | version() |
+-----------------+-----------+
| test4@localhost | 8.0.28 |
+-----------------+-----------+
Conclusion
Using the admin-hash_passwords feature can be extremely beneficial when there are mixed-format password entries in the mysql_users table. By saving the RUNTIME entries, which consist of hashed passwords, to disk and persisting only these entries in the mysql_users table of ProxySQL, we can easily simplify the management of hashed password entries. Furthermore, to ensure that only hashed password entries are stored within ProxySQL, it is imperative to create database user accounts using the mysql_native_password plugin.
Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!
ProxySQL for Short-Term Application Fixes

When talking about the benefits and use cases of ProxySQL with clients, one feature I generally reference is the query rewrite engine. This is a great feature that is often used for sharding (I’ve written about this in the past at Horizontal Scaling in MySQL – Sharding Followup). Another use case I reference is “temporary application fixes.” While this is definitely a valid use case, I hadn’t personally come across an issue in the wild where the application fix wasn’t trivial.
Recently, a client hit a case where pt-archiver wasn’t able to archive rows from a table that had a bit column as part of a primary key. This is certainly an edge case, but we had hoped the fix was trivial. Unfortunately, the root of the issue was around how the Perl DBI library quotes and handles the bit data type by default.
When executing the query, Perl was trying to quote an empty string instead of the bit representation of 0, causing the tool to fail (note this is output from the locally reproduced dummy schema/data):
2022-10-18T14:19:02 0 0 DBD::mysql::st execute failed: Truncated incorrect DOUBLE value: '' [for Statement "DELETE FROM `myd_test`.`gerald` WHERE (`id` = ? AND `val` = ?)" with ParamValues: 0=2, 1='.'] at /usr/bin/pt-archiver line 6795.
2022-10-18T14:19:02.698906Z 3001 Query DELETE FROM `myd_test`.`gerald` WHERE (`id` = '2' AND `val` = '')
The bug was reported, and engineering began working on a fix, but the table was still growing and needed to be purged (in a non-locking manner) quickly.
Looking at the query, we could identify that a simple CAST around the bit column had achieved the result we were looking for in this edge case (since BIT(1) is either 0 or 1). Unfortunately, the version of MySQL running didn’t have the ability to rewrite DML queries (this was introduced in 8.0.12: https://dev.mysql.com/doc/refman/8.0/en/rewriter-query-rewrite-plugin.html). Finally, a case where the application fix wasn’t trivial, and the query rewrite engine in ProxySQL was needed to solve an immediate issue!
To quickly set up the environment, ProxySQL was installed locally on the host running pt-archiver and set up with a single backend (the target testbed server). With this setup, you simply connect pt-archiver to the localhost and allow the query rewrites to flow through ProxySQL to the backend servers. Please note – there was nothing elaborate or special done in this setup – just a single server backend in a single hostgroup.
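For reference, a single-backend setup of this kind can be as simple as the following in the ProxySQL admin interface (a sketch with placeholder hostname, port, and credentials, not the exact values used in this case):
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '10.0.0.10', 3306);
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('archiver', 'archiver_pwd', 0);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;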
Once ProxySQL was up and running, I ran pt-archiver, waited for it to fail, and then reviewed the query digest IDs recorded in the stats_mysql_query_digest table. Here were the main queries and the associated digest hashes for my testbed:
- Get first chunk: 0x0025B6A9E5F76E31
SELECT SQL_NO_CACHE `id`,`val` FROM `test`.`gerald` FORCE INDEX(`PRIMARY`) WHERE ((val) in (select a.val from test.gerald a where id > ?)) ORDER BY `id`,`val` LIMIT ?
- Get next chunk: 0x3128ED8C7A53F401
SELECT SQL_NO_CACHE `id`,`val` FROM `test`.`gerald` FORCE INDEX(`PRIMARY`) WHERE ((val) in (select a.val from test.gerald a where id > ?)) AND ((`id` > ?) OR (`id` = ? AND `val` >= ?)) ORDER BY `id`,`val` LIMIT ?
- Delete chunk: 0x3CE028A9657E611F
DELETE FROM `test`.`gerald` WHERE (`id` = ? AND `val` = ?)
With the digest hash for each, I then added three query rewrite rules to cast the bit column to an unsigned int like this:
INSERT INTO mysql_query_rules (rule_id, active, digest, match_pattern, replace_pattern, apply) VALUES (125, 1, "0x0025B6A9E5F76E31", "`id`,`val` FROM", "`id`,cast(`val` as unsigned) as `val` FROM", 1);
INSERT INTO mysql_query_rules (rule_id, active, digest, match_pattern, replace_pattern, apply) VALUES (126, 1, "0x3128ED8C7A53F401", "`id`,`val` FROM", "`id`,cast(`val` as unsigned) as `val` FROM", 1);
INSERT INTO mysql_query_rules (rule_id, active, digest, match_pattern, replace_pattern, apply) VALUES (127, 1, "0x3CE028A9657E611F", "AND `val`", "AND cast(`val` as unsigned)", 1);
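As with any ProxySQL configuration change, the new rules only take effect once they are loaded to runtime (and saved to disk if they should persist):
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;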
Notice the key columns here – the match_pattern and the replace_pattern. This is where we can dynamically rewrite the problematic portion of the query. For the delete chunk query form, we are rewriting the DELETE query to look like this:
DELETE FROM `test`.`gerald` WHERE (`id` = ? AND cast(`val` as unsigned) = ?)
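A quick way to confirm the rules are actually matching is to check their hit counters in the stats schema (a sketch):
SELECT rule_id, hits FROM stats_mysql_query_rules WHERE rule_id IN (125, 126, 127);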
With the new rules in place, re-running pt-archiver through the local ProxySQL cleaned up the table correctly! While this was not a long-term solution (and the pt-archiver fix has since been released to handle this edge case), it shows a real-world example of using ProxySQL to fix an application that can't be updated in a timely manner.
Percona Distribution for MySQL is the most complete, stable, scalable, and secure open-source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!
Comparisons of Proxies for MySQL

With a special focus on Percona Operator for MySQL
Overview
HAProxy, ProxySQL, MySQL Router (AKA MySQL Proxy): in the last few years, I have had to answer multiple times the question of which proxy to use and in what scenario. When designing an architecture, many components need to be considered before deciding on the best solution.
When deciding what to pick, there are many things to consider, like where the proxy needs to be, whether it “just” needs to redirect connections or whether more features are needed, like caching and filtering, or whether it needs to be integrated with some MySQL-embedded automation.
Given that, there never was a single straight answer. Instead, an analysis needs to be done. Only after a better understanding of the environment, the needs, and the evolution that the platform needs to achieve is it possible to decide what will be the better choice.
However, recently we have seen an increase in the usage of MySQL on Kubernetes, especially with the adoption of Percona Operator for MySQL. In this case, we have a quite well-defined scenario that can resemble the image below:
In this scenario, the proxies must sit inside Pods, balancing the incoming traffic from the Service LoadBalancer connecting with the active data nodes.
Their role is merely to be sure that any incoming connection is redirected to nodes that can serve them, which includes having a separation between Read/Write and Read Only traffic, a separation that can be achieved, at the service level, with automatic recognition or with two separate entry points.
In this scenario, it is also crucial to be efficient in resource utilization and to scale with frugality. In this context, features like filtering, firewalling, or caching are redundant and may consume resources that could be allocated to scaling. Those features also work better outside the K8s/Operator cluster, since the closer they are located to the application, the better they serve it.
About that, we must always remember the concept that each K8s/Operator cluster needs to be seen as a single service, not as a real cluster. In short, each cluster is, in reality, a single database with high availability and other functionalities built in.
Anyhow, we are here to talk about Proxies. Once we have defined that we have one clear mandate in mind, we need to identify which product allows our K8s/Operator solution to:
- Scale to the maximum number of incoming connections
- Serve the requests with the highest efficiency
- Consume as few resources as possible
The environment
To identify the above points, I have simulated a possible K8s/Operator environment, creating:
- One powerful application node, where I run sysbench read-only tests, scaling from 2 to 4096 threads (Type c5.4xlarge)
- Three mid-size data nodes with several gigabytes of data each, running MySQL with Group Replication (Type m5.xlarge)
- One proxy node running on a resource-limited box (Type t2.micro)
The tests
We will have very simple test cases. The first one aims to define the baseline, identifying the moment when we hit the first level of saturation due to the number of connections. In this case, we will increase the number of connections while keeping a low number of operations.
The second test will define how well the increasing load is served inside the previously identified range.
For documentation, the sysbench commands are:
Test1
sysbench ./src/lua/windmills/oltp_read.lua --db-driver=mysql --tables=200 --table_size=1000000 --rand-type=zipfian --rand-zipfian-exp=0 --skip_trx=true --report-interval=1 --mysql-ignore-errors=all --mysql_storage_engine=innodb --auto_inc=off --histogram --stats_format=csv --db-ps-mode=disable --point-selects=50 --reconnect=10 --range-selects=true --rate=100 --threads=<#Threads from 2 to 4096> --time=1200 run
Test2
sysbench ./src/lua/windmills/oltp_read.lua --mysql-host=<host> --mysql-port=<port> --mysql-user=<user> --mysql-password=<pw> --mysql-db=<schema> --db-driver=mysql --tables=200 --table_size=1000000 --rand-type=zipfian --rand-zipfian-exp=0 --skip_trx=true --report-interval=1 --mysql-ignore-errors=all --mysql_storage_engine=innodb --auto_inc=off --histogram --table_name=<tablename> --stats_format=csv --db-ps-mode=disable --point-selects=50 --reconnect=10 --range-selects=true --threads=<#Threads from 2 to 4096> --time=1200 run
Results
Test 1
As indicated here, I was looking to identify when the first proxy would reach a point where the load could no longer be managed. The load is all in creating and serving the connections, while the number of operations is capped at 100.
As you can see, and as I was expecting, the three Proxies were behaving more or less the same, serving the same number of operations (they were capped, so why not) until they weren’t.
MySQL Router, after 2048 connections, could not serve anything more.
NOTE: MySQL Router actually stopped working at 1024 threads, but using version 8.0.32, I enabled the feature: connection_sharing. That allows it to go a bit further.
Let us also take a look at the latency:
Here the situation starts to be a little bit more complicated. MySQL Router is the one that has the higher latency no matter what. However, HAProxy and ProxySQL have interesting behavior. HAProxy performs better with a low number of connections, while ProxySQL performs better when a high number of connections is in place.
This is due to the multiplexing and the very efficient way ProxySQL uses to deal with high load.
Everything has a cost:
HAProxy is definitely using fewer user CPU resources than ProxySQL or MySQL Router. We can also notice that HAProxy barely reaches, on average, a CPU load of 1.5, while ProxySQL is at 2.5 and MySQL Router is around 2.
To be honest, I was expecting something like this, given ProxySQL's need to handle the connections and the other basic routing. What was instead a surprise was MySQL Router: why does it have a higher load?
Brief summary
This test highlights that HAProxy and ProxySQL can handle a higher number of connections than the slowest runner in the game (MySQL Router). It is also clear that traffic is better served under a high number of connections by ProxySQL, but it requires more resources.
Test 2
When the going gets tough, the tough get going
Let’s remove the --rate limitation and see what will happen.
The scenario changes drastically under load. We can see how HAProxy serves the connections and allows more operations to be executed throughout the whole test. ProxySQL comes immediately after it and behaves quite well up to 128 threads; then it just collapses.
MySQL Router never takes off; it always stays below the 1k reads/second, while HAProxy served 8.2k and ProxySQL 6.6k.
Looking at the latency, we can see that HAProxy gradually increased as expected, while ProxySQL and MySQL Router just went up from the 256 threads on.
Note that both ProxySQL and MySQL Router could not complete the tests with 4096 threads.
Why? HAProxy always stays below 50% CPU, no matter the increasing number of threads/connections, scaling the load very efficiently. MySQL Router reached the saturation point almost immediately, being affected by both the number of threads/connections and the number of operations. That was unexpected, given that MySQL Router does not have level 7 capabilities.
Finally, ProxySQL, which was working fine up to a certain limit, reached its saturation point and could not serve the load. I am saying load because ProxySQL is a level 7 proxy and is aware of the content of the load. Given that, on top of multiplexing, additional resource consumption was expected.
Here we just have a clear confirmation of what was already said above, with 100% CPU utilization reached by MySQL Router with just 16 threads, and ProxySQL way after at 256 threads.
Brief summary
HAProxy comes up as the champion in this test; there is no doubt that it could scale the increasing load in connection without being affected significantly by the load generated by the requests. The lower consumption in resources also indicates the possible space for even more scaling.
ProxySQL was penalized by the limited resources, but that was the game: we had to get the most out of the few resources available. This test indicates that it is not optimal to use ProxySQL inside the Operator; it is the wrong choice if low resource consumption and scalability are a must.
MySQL Router was never in the game. Barring a serious refactoring, MySQL Router is designed for very limited scalability; as such, the only way to adopt it is to run many instances at the application node level. Utilizing it close to the data nodes in a centralized position is a mistake.
Conclusions
I started by showing an image of how the MySQL service is organized and want to close by showing the variation that, for me, should be considered the default approach:
This highlights that we must always choose the right tool for the job.
The Proxy in architectures involving MySQL/Percona Server for MySQL/Percona XtraDB Cluster is a crucial element for the scalability of the cluster, no matter if using K8s or not. Choosing the one that serves us better is important, which can sometimes be ProxySQL over HAProxy.
However, when talking about K8s and Operators, we must recognize the need to optimize resource usage for the specific service. In that context, there is no discussion about it: HAProxy is the best solution and the one we should go with.
My final observation is about MySQL Router (aka MySQL Proxy).
Unless there is a significant refactoring of the product, at the moment it is not even close to what the other two can do. From the tests done so far, it requires a complete reshaping, starting with identifying why it is so affected by the load coming from the queries even more than by the load coming from the connections.
Great MySQL to everyone.
High Memory Usage on ProxySQL Server

ProxySQL is a very useful tool for gaining high availability, load balancing, query routing, query caching, query rewriting, multiplexing, and data masking. It is a proven tool and is widely used in production. Sometimes, a ProxySQL instance can use a lot of memory on the server. This post covers one of the causes of high memory usage on a ProxySQL server and how to handle it.
Explanation
How do we know where the memory is getting used?
ProxySQL has a stats database, which we can view with an admin login. This database contains metrics gathered by ProxySQL concerning its internal functioning.
This post mostly talks about the MySQL query digest stats in ProxySQL.
The memory used by the query digest can grow without limit, and you can check the memory usage with the query below using an admin login. The memory usage is shown in bytes.
MySQL [(none)]> select * from stats_memory_metrics;
+------------------------------+----------------+
| Variable_Name | Variable_Value |
+------------------------------+----------------+
| SQLite3_memory_bytes | 3732504 |
| jemalloc_resident | 171573248 |
| jemalloc_active | 123863040 |
| jemalloc_allocated | 21793784 |
| jemalloc_mapped | 237305856 |
| jemalloc_metadata | 10041824 |
| jemalloc_retained | 57868288 |
| Auth_memory | 835 |
| query_digest_memory | 177248 |
| mysql_query_rules_memory | 8410 |
| mysql_firewall_users_table | 0 |
| mysql_firewall_users_config | 0 |
| mysql_firewall_rules_table | 0 |
| mysql_firewall_rules_config | 329 |
| stack_memory_mysql_threads | 67108864 |
| stack_memory_admin_threads | 8388608 |
| stack_memory_cluster_threads | 0 |
+------------------------------+----------------+
This is a test instance, so you don't see it growing fast under the idle load. But in a production environment, where tons of unique queries run through ProxySQL, it can grow quickly and become troublesome. Once ProxySQL restarts, it clears the memory and empties the stats_mysql_query_digest table.
You can check this memory usage using Percona Monitoring and Management (PMM) as well. Let’s see where it shows.
mysql> select * from stats_memory_metrics where variable_name = 'query_digest_memory';
+---------------------+----------------+
| Variable_Name | Variable_Value |
+---------------------+----------------+
| query_digest_memory | 3944 |
+---------------------+----------------+
Under PMM, you can check “Memory used to store data” under the ProxySQL instance summary -> Memory usage pane.

Here is how restarting ProxySQL released the memory and also emptied the query digest.
[root@yunus-shaikh-node3 ~]# systemctl restart proxysql
[root@yunus-shaikh-node3 ~]# mysql -h 127.0.0.1 -u admin -P 6032 -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.30 (ProxySQL Admin Module)
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> select * from stats_mysql_query_digest;
Empty set (0.00 sec)
MySQL [(none)]> select * from stats_memory_metrics where variable_name='query_digest_memory';
+---------------------+----------------+
| Variable_Name | Variable_Value |
+---------------------+----------------+
| query_digest_memory | 0 |
+---------------------+----------------+
But restarting ProxySQL is not a good solution for a production environment, so how do we release the memory?
Ways to do it without restarting ProxySQL
1) You can disable the query digest if you don't need it. However, keeping it enabled has the benefit of showing what queries your application is running, and the stats_mysql_query_digest table can reduce the overhead of analyzing slow logs and sorting slow queries. So we don't recommend disabling it, as the memory usage can be handled in better ways.
Below are the steps that can be used to disable it.
set mysql-query_digests = 0;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
2) ProxySQL provides a table called stats_mysql_query_digest_reset. If we run a select on it, it displays the contents of stats_mysql_query_digest and clears the table. This can be used if you do not need the history of stats_mysql_query_digest after reading it once. It is similar to restarting ProxySQL, as it does not back anything up and just deletes the data in stats_mysql_query_digest.
We had two entries in the digest, and when we selected from stats_mysql_query_digest_reset, it cleared the stats_mysql_query_digest table and also released the corresponding memory.
MySQL [(none)]> select count(*) from stats_mysql_query_digest;
+----------+
| count(*) |
+----------+
| 2 |
+----------+
1 row in set (0.00 sec)
MySQL [(none)]> select count(*) from stats_mysql_query_digest_reset;
+----------+
| count(*) |
+----------+
| 2 |
+----------+
1 row in set (0.00 sec)
MySQL [(none)]> select count(*) from stats_mysql_query_digest;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (0.01 sec)
MySQL [(none)]> select * from stats_memory_metrics where variable_name='query_digest_memory';
+---------------------+----------------+
| Variable_Name | Variable_Value |
+---------------------+----------------+
| query_digest_memory | 0 |
+---------------------+----------------+
NOTE – This does not save your mysql_query_digest. It just deletes everything.
3) There is a variable, admin-stats_mysql_query_digest_to_disk, which allows you to save the query digest into the history table history_mysql_query_digest every N seconds, where N is the value (in seconds) you assign to the variable.
set admin-stats_mysql_query_digest_to_disk = <No of seconds in which you want to move from memory to disk>;
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
Every N seconds, it will move the contents from stats_mysql_query_digest to history_mysql_query_digest, so the digest will then be stored on disk and the memory will be released.
4) The command below will directly store the MySQL digest on disk. This stops using the stats_mysql_query_digest table and stores all the data only in history_mysql_query_digest.
SAVE MYSQL DIGEST TO DISK
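Once the digest has been flushed, it can be read back from the history table through the admin interface; a sketch (depending on the ProxySQL version, the table may need to be addressed with the stats_history schema prefix):
MySQL [(none)]> select count(*) from history_mysql_query_digest;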
Other variables that can be used to reduce the memory footprint of the query digest include:
1) mysql-query_digests_keep_comment – This is false by default, which helps reduce the digest text and therefore the size of the query digest. You can enable it if you need the comments.
2) mysql-query_digests_max_digest_length – This is set to 2048 by default. You can reduce it to a minimum of 16, which truncates digests, for example:
select * from test to select * from te
The digest text will be limited to 16 characters, counting spaces and comments in the query, so queries are cut down to some extent. Again, this saves memory.
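To apply this, the same variable-handling pattern used earlier in this post can be followed (a sketch, assuming you really want the 16-character minimum):
set mysql-query_digests_max_digest_length = 16;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;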
3) mysql-query_digests_grouping_limit – This groups long IN lists in the digest. For example, after setting it:
set mysql-query_digests_grouping_limit=2;
LOAD MYSQL VARIABLES TO RUNTIME;
a query like select * from test2 where id IN (1,2,3,4); is digested as select * from test2 where id IN (?,?,...)
4) mysql-query_digests_normalize_digest_text
When set to FALSE (default), ProxySQL caches the SQL digest and related information in the stats.stats_mysql_query_digest table per schema.
When this variable is TRUE, query statistics and digest_text are stored in separate internal hash tables. In this way, ProxySQL can normalize the data: digest_text is stored only once internally, and the data is deduplicated.
When you query stats_mysql_query_digest, the data is merged. This drastically reduces memory usage on setups with many schemas but similar query patterns.
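To enable it, the same pattern applies (a sketch):
set mysql-query_digests_normalize_digest_text = 1;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;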
These options save us from restarting ProxySQL every time memory usage is high due to the query digest.
Conclusion
If you see ProxySQL memory usage going high, you should look at the memory stats table to determine where the memory is being used. If you find that stats_mysql_query_digest is using the memory, you can implement one of the above-mentioned options to release it. You might also want to check why your application generates so many unique queries that fill the query digest; that can also help you minimize the memory used by the query digest.
The query digest is not always the cause of the problem. If the memory metrics table shows something else, then you will have to troubleshoot in that direction. If you think there is a memory leak, you can refer to the page below on detecting memory leaks.
In general, the recommended way of setting up the query digest in ProxySQL is to let it store the data in memory, which is faster than storing it on disk. So, at some point, you will need to clear that memory before it grows too large. Based on your requirements, if you need to keep the query digest for a longer period, you can move it to disk. The query digest does not repeat unique queries: it only stores distinct query patterns while the data is kept in one place.
When you move it from memory to disk (i.e., into history_mysql_query_digest), ProxySQL empties the stats_mysql_query_digest table, so it starts recording again queries that were previously recorded and moved to the history table. When you move that data to the history table again, those digests will be duplicated there. So plan accordingly.
It is best to decide how you want to set up your query digest according to your requirements. It is always better to review it first and move it to disk later; if it is not needed after a certain time, you can remove it from disk as well.
https://proxysql.com/documentation/memory-leak-detection/
References:
https://proxysql.com/documentation
Proxysql Major Version 2.0
ProxySQL Support for MySQL caching_sha2_password

Every day, we use dozens if not hundreds of applications that connect to some kind of data repository. This simple step is normally executed over the network and is therefore subject to possible sniffing, with all the related consequences.
Given that, it is normally better to protect your connection using data encryption such as SSL or, at a minimum, to make the information you pass when connecting harder to intercept.
At the same time, it is a best practice to not store connection credentials in clear text, not even inside a table in your database. Doing that is the equivalent of writing your password on a sticky note on your desk. Not a good idea.
Instead, the main options are either transforming the passwords to be less identifiable via hashing or storing the information in an external centralized vault.
In MySQL, the passwords are transformed to not be clear text, and several different plugins are used to authenticate the user. From version 8, MySQL uses caching_sha2_password as the default authentication plugin. The caching_sha2_password and sha256_password authentication plugins provide more secure password encryption than the mysql_native_password plugin, and caching_sha2_password provides better performance than sha256_password. Due to these superior security and performance characteristics of caching_sha2_password, it is the preferred authentication plugin as of MySQL 8.0 and is also the default authentication plugin rather than mysql_native_password.
In this regard, I recently got the same question again, “Can we use ProxySQL with the MySQL 8 authentication mechanism?”, and decided it was time to write this short blog post.
The short answer is “yes, you can”; however, do not expect full caching_sha2_password support.
ProxySQL does not fully support the caching_sha2_password mechanism internally, so a “trick” must be used.
So, what should we do when using MySQL 8 and ProxySQL?
In the text below we will see what can be done to continue to use ProxySQL with MySQL and Percona Server for MySQL 8.
Note that I have used the Percona proxysql_admin tool to manage the users, except in the last case.
It is a handy tool that helps you manage ProxySQL and, with regard to users, it also synchronizes the accounts from your Percona Server for MySQL or MySQL instance.
In the following examples:
ProxySQL is on 192.168.4.191
User name/password is msandbox/msandbox
Using hashing
By default, MySQL comes with caching_sha2_password, so if I create the user msandbox I will have:
DC1-1(root@localhost) [(none)]>select user,host, authentication_string,plugin from mysql.user order by 1,2;
+----------+------+--------------------------------------------------------------------+-----------------------+
| user     | host | authentication_string                                              | plugin                |
+----------+------+--------------------------------------------------------------------+-----------------------+
| msandbox | %    | $A$005$Z[z@l'O%[Q5t^ EKJDgxjWXJjDpDEUv91oL7Hoh/0NydTeCzpV.aI06C9.  | caching_sha2_password |  <---- this user
+----------+------+--------------------------------------------------------------------+-----------------------+
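For reference, a user like this would typically have been created with something along these lines (the GRANT shown is an assumption for this test, not taken from the original setup):

-- With MySQL 8.0 defaults, IDENTIFIED BY implicitly uses caching_sha2_password
CREATE USER 'msandbox'@'%' IDENTIFIED BY 'msandbox';
GRANT SELECT ON *.* TO 'msandbox'@'%';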
Then I use percona-scheduler-admin to sync the users:
./percona-scheduler-admin --config-file=config.toml --syncusers
Syncing user accounts from PXC(192.168.4.205:3306) to ProxySQL
Removing existing user from ProxySQL: msandbox
Adding user to ProxySQL: msandbox
Synced PXC users to the ProxySQL database!

mysql> select * from mysql_users ;
+----------+-------------------------------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+
| username | password                                                          | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections | attributes | comment |
+----------+-------------------------------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+
| msandbox | $A$005$Z[z@l'O%[Q5t^ EKJDgxjWXJjDpDEUv91oL7Hoh/0NydTeCzpV.aI06C9  | 1      | 0       | 100               | NULL           | 0             | 1                      | 0            | 1       | 1        | 10000           |            |         |
+----------+-------------------------------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+
And set the query rules:
insert into mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) values(1048,6033,'msandbox',100,1,3,'^SELECT.*FOR UPDATE',1);
insert into mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) values(1050,6033,'msandbox',101,1,3,'^SELECT.*$',1);
load mysql query rules to runtime;
save mysql query rules to disk;
Now I try to connect, passing by ProxySQL:
# mysql -h 192.168.4.191 -P6033 -umsandbox -pmsandbox
ERROR 1045 (28000): ProxySQL Error: Access denied for user 'msandbox'@'192.168.4.191' (using password: YES)
The account fails to connect because authentication fails.
To fix this, I need to drop the user and recreate it with a different authentication plugin on my MySQL server:
drop user msandbox@'%';
create user 'msandbox'@'%' identified with mysql_native_password BY 'msandbox';
grant select on *.* to 'msandbox'@'%';

select user,host, authentication_string,plugin from mysql.user order by 1,2;
+----------+------+-------------------------------------------+-----------------------+
| user     | host | authentication_string                     | plugin                |
+----------+------+-------------------------------------------+-----------------------+
| msandbox | %    | *6C387FC3893DBA1E3BA155E74754DA6682D04747 | mysql_native_password |
+----------+------+-------------------------------------------+-----------------------+
At this point, I can re-run:
./percona-scheduler-admin --config-file=config.toml --syncusers
And if I try to connect again:
# mysql -h 192.168.4.191 -P6033 -umsandbox -pmsandbox
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6708563
Server version: 8.0.28 (ProxySQL).   <---------------------------- Connecting to proxysql

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show global variables like 'version%';
+-------------------------+------------------------------------------------------------------------------------+
| Variable_name           | Value                                                                              |
+-------------------------+------------------------------------------------------------------------------------+
| version                 | 8.0.25-15.1   <--- Percona/MySQL version                                           |
| version_comment         | Percona XtraDB Cluster binary (GPL) 8.0.25, Revision 8638bb0, WSREP version 26.4.3 |
| version_compile_machine | x86_64                                                                             |
| version_compile_os      | Linux                                                                              |
| version_compile_zlib    | 1.2.11                                                                             |
| version_suffix          | .1                                                                                 |
+-------------------------+------------------------------------------------------------------------------------+
6 rows in set (0.02 sec)
This is the only way to keep the password hashed in both MySQL and ProxySQL.
Not using hashing
What if you cannot use mysql_native_password for the password in your MySQL server?
There is still a way to connect; however, I do not recommend it because it is highly insecure. For completeness, I am going to illustrate it anyway.
First of all, disable password hashing in ProxySQL:
update global_variables set Variable_Value='false' where Variable_name='admin-hash_passwords';
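As with any admin-* variable, the change then has to be loaded to runtime (and optionally persisted) before it takes effect; a minimal sketch:

-- Apply the admin variable change and persist it
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;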
At this point, instead of syncing the users, you can create the user locally, like this:
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent,comment) values ('msandbox','msandbox',1,100,'mysql',1,'generic test for security');
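For the entry to appear in the runtime table shown below, the users also have to be loaded to runtime (and saved to disk if the change should survive a restart):

-- Activate the new user definition and persist it
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;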
mysql> select * from runtime_mysql_users where username ='msandbox';
+----------+----------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------------------------+
| username | password | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections | attributes | comment |
+----------+----------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------------------------+
| msandbox | msandbox | 1 | 0 | 100 | mysql | 0 | 1 | 0 | 1 | 0 | 10000 | | generic test for security |
| msandbox | msandbox | 1 | 0 | 100 | mysql | 0 | 1 | 0 | 0 | 1 | 10000 | | generic test for security |
+----------+----------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------------------------+
As you can see, doing that prevents the password from being hashed; it is stored in clear text instead.
At this point, you will be able to connect to MySQL 8 using caching_sha2_password, but the password is visible in clear text in ProxySQL.
Let me repeat, I DO NOT recommend using it this way, because for me it is highly insecure.
Conclusion
While it is still possible to configure your MySQL users so that they can connect through ProxySQL, there is clearly a gap in the way ProxySQL supports authentication security.
The hope is that the ProxySQL development team will close this gap soon; judging by the existing issues, it has been pending for years.
References
https://proxysql.com/documentation/mysql-8-0/
https://github.com/sysown/proxysql/issues/2580
Upgrade your Libraries: Authentication Plugin ‘caching_sha2_password’ Cannot be Loaded