Aug 14, 2019

How to Manage ProxySQL Cluster with Core and Satellite Nodes

In this post, we will manage a ProxySQL Cluster with “core” and “satellite” nodes. What does that mean? In a cluster of hundreds of nodes, a single change on any one node replicates to the entire cluster.

Any mistake or change will replicate to all nodes, and this can make it difficult to find the most recently updated node, the source of truth.

Before continuing, you need to install and configure a ProxySQL Cluster; you can check my previous blog posts for more information.

The idea behind “core” and “satellite” nodes is to designate only a few nodes as masters (aka core) and the rest of the nodes as slaves (aka satellite). Any change on a “core” node will be replicated to all core and satellite nodes, but any change on a “satellite” node will not be replicated. This is useful for managing a large number of nodes because it minimizes manual errors and false-positive changes, and it spares us the difficult task of hunting for the problematic node across all the nodes in the cluster.

This works in ProxySQL versions 1.4 and 2.0.

How does it work?

When you configure a classic ProxySQL Cluster, every node listens to every other node. With this feature, each node listens only to the few nodes you choose as “core” nodes.

Any change made on one or more nodes not listed in the proxysql_servers table will not be replicated, simply because no node is connected to their admin port waiting for changes.

Each node opens one thread per server listed in the proxysql_servers table and connects to that IP on the admin port (6032 by default), waiting for any change in four tables: mysql_servers, mysql_users, proxysql_servers, and mysql_query_rules. The only relationship between a core and a satellite node is that the satellite node connects to the core node and waits for changes.
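As a quick sanity check (a sketch, not part of the original setup), you can ask any node from its admin interface which peers it is watching and whether it has detected configuration differences, using the same stats table we will query later in this post:

-- Peers this node polls: only the hosts listed here are monitored
SELECT hostname, port, comment FROM runtime_proxysql_servers;

-- Per-peer, per-module checksum status; a growing diff_check means a sync is pending
SELECT hostname, port, name, version, FROM_UNIXTIME(epoch) epoch, diff_check
FROM stats_proxysql_servers_checksums
ORDER BY name, hostname;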

How to configure

It’s easy: we configure only the IPs of the core nodes in the proxysql_servers table on every node of the cluster, both core and satellite. If you read my previous posts and configured a ProxySQL Cluster, we will first clean the previous configuration from the following tables so we can test from scratch:

delete from mysql_query_rules;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

delete from mysql_servers;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

delete from mysql_users;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

delete from proxysql_servers;
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

Suppose, for example, we have 100 ProxySQL nodes. Here is the list of hostnames and IPs of our instances:

proxysql_node1 = 10.0.0.1
proxysql_node2 = 10.0.0.2
...
proxysql_node100 = 10.0.0.100

And we want to configure and use only 3 core nodes, so we select the first 3 nodes from the cluster:

proxysql_node1 = 10.0.0.1
proxysql_node2 = 10.0.0.2
proxysql_node3 = 10.0.0.3

And the rest of the nodes will be the satellite nodes:

proxysql_node4 = 10.0.0.4
proxysql_node5 = 10.0.0.5
...
proxysql_node100 = 10.0.0.100

We will use the IPs above to configure the proxysql_servers table on all nodes with only those 3 IPs. That way, every ProxySQL node (from proxysql_node1 to proxysql_node100) will listen for changes only on those 3 core nodes.

DELETE FROM proxysql_servers;

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.1',6032,0,'proxysql_node1');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.2',6032,0,'proxysql_node2');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.3',6032,0,'proxysql_node3');

LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

Now all nodes from proxysql_node4 to proxysql_node100 are the satellite nodes.

We can see something like this in the proxysql.log file:

2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.1:6032
2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.2:6032
2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.3:6032
...
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.1:6032
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.2:6032
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.3:6032

How to Test

I’ll create a new entry in the mysql_users table on a core node to test whether replication from core to satellite is working correctly.

Connect to proxysql_node1 and run the following queries:

INSERT INTO mysql_users(username,password, active, default_hostgroup) VALUES ('user1','123456', 1, 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

Now, from any satellite node, for example proxysql_node4, check the ProxySQL log file to see whether there are updates. If this is working correctly, we will see something like this:

2019-08-09 18:49:24 [INFO] Cluster: detected a new checksum for mysql_users from peer 10.0.1.113:6032, version 3, epoch 1565376564, checksum 0x5FADD35E6FB75557 . Not syncing yet ...
2019-08-09 18:49:26 [INFO] Cluster: detected a peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564, diff_check 3. Own version: 2, epoch: 1565375661. Proceeding with remote sync
2019-08-09 18:49:27 [INFO] Cluster: detected a peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564, diff_check 4. Own version: 2, epoch: 1565375661. Proceeding with remote sync
2019-08-09 18:49:27 [INFO] Cluster: detected peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564
2019-08-09 18:49:27 [INFO] Cluster: Fetching MySQL Users from peer 10.0.1.113:6032 started
2019-08-09 18:49:27 [INFO] Cluster: Fetching MySQL Users from peer 10.0.1.113:6032 completed
2019-08-09 18:49:27 [INFO] Cluster: Loading to runtime MySQL Users from peer 10.0.1.113:6032
2019-08-09 18:49:27 [INFO] Cluster: Saving to disk MySQL Query Rules from peer 10.0.1.113:6032

Then check if the previous update exists in the mysql_users table on proxysql_node4 or any other satellite node. These updates should exist in the mysql_users and runtime_mysql_users tables.

admin ((none))>select * from runtime_mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+

admin ((none))>select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+

Now the final test is to create a new MySQL user on a satellite node. Connect to proxysql_node4 and run the following queries to create a new user:

INSERT INTO mysql_users(username,password, active, default_hostgroup) VALUES ('user2','123456', 1, 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

From the ProxySQL log on proxysql_node4, we see the following output:

[root@ip-10-0-1-10 ~]# tail -f /var/lib/proxysql/proxysql.log -n100
...
2019-08-09 18:59:12 [INFO] Received LOAD MYSQL USERS TO RUNTIME command
2019-08-09 18:59:12 [INFO] Received SAVE MYSQL USERS TO DISK command

The last thing to check is the ProxySQL log file on the core node, to see whether there are any updates to the mysql_users table. Below is the output from proxysql_node1:

[root@proxysql proxysql]# tail /var/lib/proxysql/proxysql.log -n100
...
2019-08-09 19:09:21 [INFO] ProxySQL version 1.4.14-percona-1.1
2019-08-09 19:09:21 [INFO] Detected OS: Linux proxysql 4.14.77-81.59.amzn2.x86_64 #1 SMP Mon Nov 12 21:32:48 UTC 2018 x86_64

As you can see, there are no updates, because the core nodes do not listen for changes from satellite nodes. Core nodes only listen for changes on other core nodes.
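If you prefer to confirm this from SQL rather than the log, a quick check (just a sketch using the test user created above) is to query mysql_users on a core node: it should return no rows for user2, while the same query on proxysql_node4 returns the new user.

-- On a core node this should return an empty set; on proxysql_node4 it returns user2
SELECT username, active, default_hostgroup FROM mysql_users WHERE username='user2';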

Finally, this feature is really useful when you have many servers to manage. I hope you get a chance to test it!

Aug 12, 2019

ProxySQL 2.0.5 and proxysql-admin tool Now Available

ProxySQL 2.0.5, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.5, which fixes many bugs and introduces a number of features and enhancements. The proxysql-admin tool has been enhanced to support the following new options (a brief usage sketch follows the list):

  • The --add-query-rule option creates query rules for synced MySQL users. This option is only applicable for the singlewrite mode and works together with the --syncusers and --sync-multi-cluster-users options.
  • The --force option skips existing configuration checks in the mysql_servers, mysql_users and mysql_galera_hostgroups tables. This option only works together with the --enable option: proxysql-admin --enable --force.
  • The --update-mysql-version option updates the mysql-server_version variable in ProxySQL with the version from a node in Percona XtraDB Cluster.
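For illustration only, here is roughly how these new options might be combined on the command line. This is a sketch based on the option names above, not output from a real run; check proxysql-admin --help on your installation for the exact syntax:

# Re-run enable while skipping the existing-configuration checks
proxysql-admin --enable --force

# (singlewrite mode) Sync MySQL users and also create the singlewrite query rules for them
proxysql-admin --syncusers --add-query-rule

# Copy the server version from a Percona XtraDB Cluster node into mysql-server_version
proxysql-admin --update-mysql-version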

The ProxySQL 2.0.5 source and binary packages available from the Percona download page for ProxySQL include proxysql-admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.5 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL 2.0.5 Improvements

  • PSQLADM-49: Create rules for --syncusers. When running with --syncusers or --sync-multi-cluster-users, the --add-query-rule option can now be specified to add the singlewriter query rules for the new users.
  • PSQLADM-51: Update mysql-server_version variable. The --update-mysql-version command has been added to set the mysql-server_version global variable in ProxySQL. This will take the version from a node in the cluster and set it in ProxySQL.

Bugs Fixed

  • PSQLADM-190: The --remove-all-servers option did not work on enable. When running with proxysql-cluster, the galera hostgroups information was not replicated which could result in failing to run --enable on a different ProxySQL node.  The --force option was added for --enable to be able to ignore any errors and always configure the cluster.
  • PSQLADM-199: query-rules removed during proxysql-cluster creation with PXC operator. When using the PXC operator for Kubernetes and creating a proxysql-cluster, the query rules could be removed.  The code was modified to merge the query rules (rather than deleting and recreating).  If the --force option was specified, then a warning was issued in case any existing rules were found; otherwise an error was issued. The --disable-updates option was added to ensure that ProxySQL cluster updates did not interfere with the current command.
  • PSQLADM-200: Users were not created for --syncusers with PXC operator. When using the PXC operator for Kubernetes, the --syncusers command was run but the mysql_users table was not updated. The fix for PSQLADM-199, which suggested using --disable-updates, also applies here.

ProxySQL is available under Open Source license GPLv3.

Jun 26, 2019

Percona XtraDB Cluster 5.7.26-31.37 Is Now Available

Percona is glad to announce the release of Percona XtraDB Cluster 5.7.26-31.37 on June 26, 2019. Binaries are available from the downloads section or from our software repositories.

Percona XtraDB Cluster 5.7.26-31.37 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2480: In some cases, Percona XtraDB Cluster could not replicate CURRENT_USER() used in the ALTER statement. USER() and CURRENT_USER() are no longer allowed in any ALTER statement since they fail when replicated.
  • PXC-2487: The case when a DDL or DML action was in progress from one client and the provider was updated from another client could result in a race condition.
  • PXC-2490: Percona XtraDB Cluster could crash when binlog_space_limit was set to a value other than zero during wsrep_recover mode.
  • PXC-2491: SST could fail if the donor had encrypted undo logs.
  • PXC-2497: The user can set the preferred donor by setting the wsrep_sst_donor variable. An IP address is not valid as the value of this variable. If the user still used an IP address, an error message was produced that did not provide sufficient information. The error message has been improved to suggest that the user check the value of the wsrep_sst_donor for an IP address.
  • PXC-2537: Nodes could crash after an attempt to set a password using mysqladmin.

Other bugs fixed: PXC-2276, PXC-2292, PXC-2476, PXC-2560.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Jun 19, 2019

Percona XtraDB Cluster 5.6.44-28.34 Is Now Available

Percona is glad to announce the release of Percona XtraDB Cluster 5.6.44-28.34 on June 19, 2019. Binaries are available from the downloads section or from our software repositories.

Percona XtraDB Cluster 5.6.44-28.34 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2480: In some cases, Percona XtraDB Cluster could not replicate CURRENT_USER() used in the ALTER statement. USER() and CURRENT_USER() are no longer allowed in any ALTER statement since they fail when replicated.
  • PXC-2487: The case when a DDL or DML action was in progress from one client and the provider was updated from another client could result in a race condition.
  • PXC-2490: Percona XtraDB Cluster could crash when binlog_space_limit was set to a value other than zero during wsrep_recover mode.
  • PXC-2497: The user can set the preferred donor by setting the wsrep_sst_donor variable. An IP address is not valid as the value of this variable. If the user still used an IP address, an error message was produced that did not provide sufficient information. The error message has been improved to suggest that the user check the value of the wsrep_sst_donor for an IP address.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

May 28, 2019

ProxySQL 2.0.4 and proxysql-admin tool Now Available

ProxySQL 2.0.4, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.4 which fixes many bugs and introduces a number of features and enhancements. The proxysql-admin tool has not changed since the previous release.

The ProxySQL 2.0.4 source and binary packages available from the Percona download page for ProxySQL include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.4 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL is available under Open Source license GPLv3.

May 07, 2019

ProxySQL 2.0.3 and updated proxysql-admin tool

ProxySQL 2.0.3, released by ProxySQL, is now available for download in the Percona Repository along with an updated version of Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

The ProxySQL 2.0.3 source and binary packages available from the Percona download page for ProxySQL include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.3 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

With ProxySQL 2.0.3, the proxysql-admin tool now uses the native ProxySQL support for Percona XtraDB Cluster and does not require custom bash scripts to keep track of PXC status.  As a result, proxysql_galera_checker and proxysql_node_monitor have been removed.

Improvements

  • The proxysql-admin tool is MySQL 8.0 compatible

Added Features

  • New option --use-ssl to use SSL for connections between ProxySQL and the backend database servers
  • New option --max-transactions-behind to determine the maximum number of writesets that can be queued before the node is SHUNNED to avoid stale reads. The default value is 100
  • New operation --update-cluster to update the cluster membership by adding server nodes as found. (Note that nodes are added but not removed).  The --writer-hg option may be used to specify which galera hostgroup to update. The --remove-all-servers option instructs to remove all servers from the mysql_servers table before updating the cluster.
  • Hostgroups can now be specified on the command-line: --writer-hg, --reader-hg, --backup-writer-hg, and --offline-hg.
    Previously, these host groups were only read from the configuration file.
  • The --enable and --update-cluster options used simultaneously have special meaning. If the cluster has not been enabled, then --enable is run.  If the cluster has already been enabled, then --update-cluster is run.
  • New command --is-enabled to see if a cluster has been enabled. This command checks for the existence of a row in the mysql_galera_hostgroups table.  The --writer-hg option may be used to specify the writer hostgroup used to search the mysql_galera_hostgroups table.
  • New command --status to display galera hostgroup information. This command lists all rows in the current mysql_galera_hostgroups table as well as all servers that belong to these hostgroups.  With the --writer-hg option, only the information for the galera hostgroup with that writer hostgroup is displayed.
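As a rough illustration of the new operations listed above (a sketch only; the hostgroup value 10 is just an example, and the exact syntax may vary by version):

# Check whether the cluster for writer hostgroup 10 has already been enabled
proxysql-admin --is-enabled --writer-hg=10

# Show the galera hostgroups and the servers that belong to them
proxysql-admin --status --writer-hg=10

# Refresh cluster membership, removing all servers from mysql_servers first
proxysql-admin --update-cluster --writer-hg=10 --remove-all-servers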

Changed Features

  • Setting --node-check-interval now changes the ProxySQL global variable mysql-monitor_galera_healthcheck_interval
    Note that this is a global variable, not a per-cluster variable.
  • The option --write-node now takes only a single address as a parameter. In the singlewrite mode we only set the weight if --write-node specifies address:port.  A priority list of addresses is no longer accepted.
  • The option --writers-as-readers option now accepts a different set of values. Due to changes in the behavior of ProxySQL between version 1.4 and version 2.0 related to Galera support, the values of --writers-as-readers have been changed.  This option now accepts the following values: yes, no, and backup.
    yes: writers, backup-writers, and read-only nodes can act as readers.
    no: only read-only nodes can act as readers.
    backup: only backup-writers can act as readers.
  • The commands --syncusers, --sync-multi-cluster-users, --adduser, and --disable can now use the --writer-hg option.
  • The command --disable removes all users associated with the galera cluster hostgroups. Previously, this command only removed the users with the CLUSTER_APP_USERNAME.
  • The command --disable now accepts the --writer-hg option to disable the Galera cluster associated with that hostgroup overriding the value specified in the configuration file.

Removed Features

  • Asynchronous slave reader support has been removed: the --include-slaves option is not supported.
  • A list of nodes in the priority order is no longer supported. Only a single node is supported at this time.
  • Since the galera_proxysql_checker and galera_node_monitor scripts are no longer run in the scheduler, automatic cluster membership updates are not supported.
  • Checking the pxc_maint_mode variable is no longer supported.
  • Using desynced nodes if no other nodes are available is no longer supported.
  • The server status is no longer maintained in the mysql_servers table.

Limitations

  • With --writers-as-readers=backup read-only nodes are not allowed.
    This is a limitation of ProxySQL 2.0. Note that backup is the default value of --writers-as-readers when --mode=singlewrite.

ProxySQL is available under Open Source license GPLv3.

Apr 10, 2019

How to Add More Nodes to an Existing ProxySQL Cluster

In my previous post, some time ago, I wrote about the new cluster feature of ProxySQL. In that post we were working with three nodes; now we’ll work with even more! If you’ve installed one ProxySQL per application instance and would like to scale up to more, then this post is for you. If this is new to you, though, read my earlier post first for more context.

Check the image below to understand the structure of “one ProxySQL per application”. This means you have ProxySQL and your application (Java, PHP, Apache server, etc.) installed in the same VM (virtual machine).

Schematic of a ProxySQL cluster

Having taken a look at that, you probably have a few questions, such as:

  • What happens if you have 20 nodes synced and you now need to add 100 or more nodes?
  • How can I sync the new nodes without introducing errors?

Don’t be scared, it’s a simple process.

Remember there are only four tables which can be synced over the cluster. These tables are:

  • mysql_query_rules
  • mysql_servers
  • mysql_users
  • proxysql_servers

From a new ProxySQL cluster installation

After a fresh installation, all four of those tables are empty. That means that if we configure the instance as a new node in a cluster, all rows in those tables will be copied over instantly.

Generally the installation is straightforward.

Now ProxySQL is up, and all the configuration is at its default settings.

Connect to the ProxySQL console:

mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt='\u (\d)>'

Now there are two steps remaining:

  • Configure global_variables table
  • Configure proxysql_servers table

How to configure the global_variables table

Below is an example showing the minimal parameters to set – you can change the username and password according to your needs.

You can copy and paste the usernames and passwords used by the current cluster and by the monitoring process by running the following query on a node of the current cluster:

select * from global_variables where variable_name in ('admin-admin_credentials', 'admin-cluster_password', 'mysql-monitor_password', 'admin-cluster_username', 'mysql-monitor_username');

You can update the parameters of the current node by using this as a template:

update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-admin_credentials';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-cluster_username';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-cluster_password';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='mysql-monitor_username';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='mysql-monitor_password';
update global_variables set variable_value=1000 where variable_name='admin-cluster_check_interval_ms';
update global_variables set variable_value=10 where variable_name='admin-cluster_check_status_frequency';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_query_rules_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_servers_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_users_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_proxysql_servers_save_to_disk';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_query_rules_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_servers_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_users_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_proxysql_servers_diffs_before_sync';
load admin variables to RUNTIME;
save admin variables to disk;

Configure “proxysql_servers” table

At this point, keep in mind that you will need to INSERT into this table “all” the IPs of the other ProxySQL nodes.

Why so? Because this table will have a new epoch time, and when it syncs, its contents will overwrite the proxysql_servers table on the rest of the nodes.

In our example, let’s assume the IP of the new node is 10.0.1.3 (i.e. node 3, below)

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.1',6032,0,'p1');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.2',6032,0,'p2');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.3',6032,0,'p3');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

If you already have many ProxySQL servers in the cluster, you can use mysqldump, as this will help speed up the process.

If that’s the case, you need to find the most up-to-date node and use mysqldump to export the data from the proxysql_servers table.

How would you find this node? There are a few stats tables in ProxySQL, and in this case we can use two of these to help identify the right node.

SELECT stats_proxysql_servers_checksums.hostname, stats_proxysql_servers_metrics.Uptime_s, stats_proxysql_servers_checksums.port, stats_proxysql_servers_checksums.name, stats_proxysql_servers_checksums.version, FROM_UNIXTIME(stats_proxysql_servers_checksums.epoch) epoch, stats_proxysql_servers_checksums.checksum, stats_proxysql_servers_checksums.diff_check FROM stats_proxysql_servers_metrics JOIN stats_proxysql_servers_checksums ON stats_proxysql_servers_checksums.hostname = stats_proxysql_servers_metrics.hostname WHERE stats_proxysql_servers_metrics.Uptime_s > 0 ORDER BY epoch DESC

Here’s an example of the output:

+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+
| hostname   | Uptime_s | port | name              | version | epoch               | checksum           | diff_check |
+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+
| 10.0.1.1   | 1190     | 6032 | mysql_users       | 2       | 2019-04-04 12:04:21 | 0xDB07AC7A298E1690 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_users       | 2       | 2019-04-04 12:04:18 | 0xDB07AC7A298E1690 | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_query_rules | 1       | 2019-04-04 12:00:07 | 0xBC63D734643857A5 | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_servers     | 1       | 2019-04-04 12:00:07 | 0x0000000000000000 | 0          |
| 10.0.1.1   | 1190     | 6032 | proxysql_servers  | 1       | 2019-04-04 12:00:07 | 0x233638C097DE6190 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_query_rules | 1       | 2019-04-04 11:43:13 | 0xBC63D734643857A5 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_servers     | 1       | 2019-04-04 11:43:13 | 0x0000000000000000 | 0          |
| 10.0.1.2   | 2210     | 6032 | proxysql_servers  | 1       | 2019-04-04 11:43:13 | 0x233638C097DE6190 | 0          |
| 10.0.1.2   | 2210     | 6032 | admin_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.1   | 1190     | 6032 | admin_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+

For each table, we can see different versions, each related to table changes. In this case, we need to look for the latest epoch time for the proxysql_servers table. In the example above, we can see that the server with the IP address 10.0.1.1 has the latest version. We can now run the following command to get a backup of all the IP data from the current cluster:

mysqldump --host=127.0.0.1 --port=6032 --skip-opt --no-create-info --no-tablespaces --skip-triggers --skip-events main proxysql_servers > proxysql_servers.sql

Now, copy the output to the new node and import it into the proxysql_servers table. Here’s an example: the data has been exported to the file proxysql_servers.sql, which we’ll now load on the new node:

source proxysql_servers.sql
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

You can run the following SELECTs to verify that the new node has the data from the current cluster, ensuring that the nodes are in sync as expected:

select * from mysql_query_rules;
select * from mysql_servers;
select * from mysql_users ;
select * from proxysql_servers;

How can we check if there are errors in the synchronization process?

Run the following query to fetch data from the stats_proxysql_servers_checksums table:

SELECT hostname, port, name, version, FROM_UNIXTIME(epoch) epoch, checksum, diff_check FROM stats_proxysql_servers_checksums  ORDER BY epoch;

Using our example, here’s the data saved in stats_proxysql_servers_checksums for the proxysql_servers table:

admin ((none))>SELECT hostname, port, name, version, FROM_UNIXTIME(epoch) epoch, checksum, diff_check, DATETIME('NOW') FROM stats_proxysql_servers_checksums  ORDER BY epoch;
+------------+------+-------------------+---------+---------------------+--------------------+------------+---------------------+
| hostname   | port | name              | version | epoch               | checksum           | diff_check | DATETIME('NOW')     |
+------------+------+-------------------+---------+---------------------+--------------------+------------+---------------------+
...
| 10.0.1.1   | 6032 | proxysql_servers  | 2       | 2019-03-25 13:36:17 | 0xC7D7443B96FC2A94 | 92         | 2019-03-25 13:38:31 |
| 10.0.1.2   | 6032 | proxysql_servers  | 2       | 2019-03-25 13:36:34 | 0xC7D7443B96FC2A94 | 92         | 2019-03-25 13:38:31 |
...
| 10.0.1.3   | 6032 | proxysql_servers  | 2       | 2019-03-25 13:37:00 | 0x233638C097DE6190 | 0          | 2019-03-25 13:38:31 |
...
+------------+------+-------------------+---------+---------------------+--------------------+------------+---------------------+

As we can see in the diff_check column, there are differences in the checksums across the nodes: our new node (IP 10.0.1.3) has the checksum 0x233638C097DE6190, whereas nodes 1 and 2 both show 0xC7D7443B96FC2A94.

This indicates that these nodes have different data. Is this correct? Well, yes, in this case, since in our example nodes 1 and 2 were created on the current ProxySQL cluster.

How do you fix this?

Well, you connect to node 1 and INSERT the data for node 3 (the new node).

This change will propagate over the current cluster – to node 2 and node 3 – and will update the proxysql_servers table.

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.3',6032,0,'p3');
LOAD PROXYSQL SERVERS TO RUNTIME;

What happens if the new nodes already exist in the proxysql_servers table of the current cluster?

You can add the new ProxySQL node IPs to the current cluster before you start installing and configuring the new nodes.

In fact, this works fine and without any issues. The steps described above are the same, but when you check for differences in the stats_proxysql_servers_checksums table you should find there are none, and the diff_check column should show 0.
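As a quick check (a sketch using the same stats table shown earlier), the following query should report diff_check as 0 on the proxysql_servers row for every node once everything is in sync:

SELECT hostname, port, name, version, diff_check
FROM stats_proxysql_servers_checksums
WHERE name = 'proxysql_servers'
ORDER BY hostname;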

How does ProxySQL monitor other ProxySQL nodes to sync locally?

Here’s a link to the explanation from the ProxySQL wiki page https://github.com/sysown/proxysql/wiki/ProxySQL-Cluster

Proxies monitor each other, so they immediately recognize that when a checksum of a configuration changes, the configuration has changed. It’s possible that the remote peer’s configuration and its own configuration were changed at the same time, or within a short period of time. So the proxy must also check its own status.

When proxies find differences:

  • If its own version is 1, find the peer with version > 1 and with the highest epoch, and sync immediately
  • If its own version is greater than 1, it starts counting for how many checks they differ
    • when the number of checks in which they differ is greater than cluster_name_diffs_before_sync (where name is the module, for example mysql_users), and cluster_name_diffs_before_sync itself is greater than 0, find the peer with version > 1 and with the highest epoch, and sync immediately.

Note: it is possible that a difference is detected against one node, but the sync is performed against a different node. Since ProxySQL bases the sync on the node with the highest epoch, it is expected that all the nodes will converge.

To perform the syncing process:

These next steps run automatically when other nodes detect a change; this is an example for the mysql_users table.

The same connection that’s used to perform the health check is used to execute a series of SELECT statements of the form SELECT <list of columns> FROM runtime_<module>:

SELECT username, password, active, use_ssl, default_hostgroup, default_schema, schema_locked, transaction_persistent, fast_forward, backend, frontend, max_connections FROM runtime_mysql_users;

On the node that detected a checksum difference, the following statements will run automatically:

DELETE FROM mysql_users;
INSERT INTO mysql_users(username, password, active, use_ssl, default_hostgroup, default_schema, schema_locked, transaction_persistent, fast_forward, backend, frontend, max_connections) VALUES ('user_app','secret_password', 1, 0, 10, '', 0, 1, 0, 0, 1, 1000);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

Be careful:

After you’ve added the new ProxySQL node to the cluster, all four tables will sync immediately, using the node with the highest epoch time as the basis for the sync. Any change made to these tables on nodes already in the cluster will propagate and overwrite the whole cluster.

Summary

ProxySQL cluster is a powerful tool, and it’s being used more and more in production environments. If you haven’t tried it before, I’d recommend that you start testing ProxySQL cluster with a view to improving your application’s service. Hope you found this post helpful!

 

Apr 02, 2019

Simple STONITH with ProxySQL and Orchestrator

3 DC Orchestrator ProxySQL

Distributed systems are hard – I just want to echo that. In MySQL, we have quite a number of options to run highly available systems. However, real fault tolerant systems are difficult to achieve.

Take for example a common use case of multi-DC replication where Orchestrator is responsible for managing the topology, while ProxySQL takes care of the routing/proxying to the correct server, as illustrated below. A rare case you might encounter is that the primary MySQL node01 on DC1 might have a blip of a couple of seconds. Because Orchestrator uses an adaptive health check – it not only checks the node itself but also consults its replicas – it can react really fast and promote the node in DC2.

Why is this problematic?

The problem occurs when node01 resolves its temporary issue. A race condition could occur within ProxySQL that could mark it back as read-write. You can increase an “offline” period within ProxySQL to make sure Orchestrator rediscovers the node first. Hopefully, it will set it to read-only immediately, but what we want is an extra layer of predictable behavior. This normally comes in the form of STONITH – by taking the other node out of action, we practically reduce the risk of conflict close to zero.
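If you do want to lengthen that “offline” window, one knob that can help (my assumption about which variable the author has in mind, so treat this as a sketch) is mysql-shun_recovery_time_sec, which controls how long ProxySQL keeps a shunned backend out of rotation:

-- Keep a shunned backend out of rotation for longer (value in seconds; 60 is just an example)
UPDATE global_variables SET variable_value=60 WHERE variable_name='mysql-shun_recovery_time_sec';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;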

The solution

Orchestrator supports hooks to do this, but we can also do it easily with ProxySQL using its built-in scheduler. In this case, we create a script where Orchestrator is consulted frequently for any nodes recently marked as downtimed, and we also mark them as such in ProxySQL. The script proxy-oc-tool.sh can be found on Github.

What does this script do? In the case of our topology above:

  • If for any reason connections to MySQL on node01 fail, Orchestrator will pick node02 as the new primary.
  • Since node01 is unreachable – it cannot modify read_only nor update replication – it will be marked as downtimed with lost-in-recovery as the reason.
  • If node01 comes back online, and ProxySQL sees it before the next Orchestrator check, it can rejoin the pool. Then it’s possible that you have two writeable nodes in the hostgroup.
  • To prevent the condition above, as soon as the node is marked with downtime from Orchestrator, the script proxy-oc-tool.sh will mark it OFFLINE_SOFT so it never rejoins the writer_hostgroup in ProxySQL.
  • Once an operator fixes node01, i.e. reattaches it as a replica and removes the downtimed mark, the script proxy-oc-tool.sh will mark it back ONLINE automatically.
  • Additionally, if DC1 gets completely disconnected from DC2 and AWS, the script will not be able to reach Orchestrator’s raft-leader and will set all nodes to OFFLINE_SOFT, preventing isolated writes on DC1.

Adding the script to ProxySQL is simple. First you download it and set permissions. I placed the script in /usr/bin/, but you can put it anywhere accessible by the ProxySQL process.

wget https://gist.githubusercontent.com/dotmanila/1a78ef67da86473c70c7c55d3f6fda89/raw/b671fed06686803e626c1541b69a2a9d20e6bce5/proxy-oc-tool.sh
chmod 0755 proxy-oc-tool.sh
mv proxy-oc-tool.sh /usr/bin/

Note, you will need to edit some variables in the script, i.e. ORCHESTRATOR_PATH.

Then load into the scheduler:

INSERT INTO scheduler (interval_ms, filename)
  VALUES (5000, '/usr/bin/proxy-oc-tool.sh');
LOAD SCHEDULER TO RUNTIME;
SAVE SCHEDULER TO DISK;

I’ve set the interval to five seconds since, inside ProxySQL, a shunned node will need about 10 seconds before the next read-only check is done. This way, this script is still ahead of ProxySQL and is able to mark the dead node as OFFLINE_SOFT.

Because this is the simple version, there are obvious additional improvements to be made in the script, like using scheduler args to specify ORCHESTRATOR_PATH and implementing error checking.

Feb 28, 2019

Percona XtraDB Cluster 5.6.43-28.32 Is Now Available

Percona is glad to announce the release of Percona XtraDB Cluster 5.6.43-28.32 on February 28, 2019. Binaries are available from the downloads section or from our software repositories.

This release of Percona XtraDB Cluster includes the support of Ubuntu 18.10 (Cosmic Cuttlefish). Percona XtraDB Cluster 5.6.43-28.32 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2388: In some cases, DROP FUNCTION function_name was not replicated.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Feb 28, 2019

Percona XtraDB Cluster 5.7.25-31.35 Is Now Available

Percona is glad to announce the release of Percona XtraDB Cluster 5.7.25-31.35 on February 28, 2019. Binaries are available from the downloads section or from our software repositories.

This release of Percona XtraDB Cluster includes the support of Ubuntu 18.10 (Cosmic Cuttlefish). Percona XtraDB Cluster 5.7.25-31.35 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2346: mysqld could crash when executing mysqldump --single-transaction while the binary log is disabled. This problem was also reported in PXC-1711, PXC-2371, PXC-2419.
  • PXC-2388: In some cases, DROP FUNCTION function_name was not replicated.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!
