Oct
23
2019
--

ProxySQL 2.0.7 and proxysql-admin Tool Now Available

ProxySQL 2.0.7, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.7 which introduces many new features and enhancements and also fixes a number of bugs.

The proxysql-admin tool now supports MariaDB 10.4.

New Features

Improvements

  • PSQLADM-195: A new option --with-stats-reset has been added to the proxysql-status script to display the *_reset tables from the stats database. If this option is not specified, these tables are not displayed.
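
For example, a typical invocation might look like this (hypothetical credentials, host, and port shown; adjust them to your setup):

$ proxysql-status --with-stats-reset admin admin 127.0.0.1 6032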

Bugs Fixed

  • PSQLADM-157: In some cases, the proxysql-status script used the cat command to display a file without checking if the file existed and was readable.
  • PSQLADM-181: When run with --update-cluster --write-node=<node_name>, the proxysql-admin tool now verifies that the writer nodes are not read-only.
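
For example, with a hypothetical writer node address (substitute one of your own nodes):

$ proxysql-admin --update-cluster --write-node=10.0.0.10:3306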

The ProxySQL 2.0.7 source and binary packages available from the Percona download page for ProxySQL include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.7 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL is available under Open Source license GPLv3.

Oct
03
2019
--

Percona XtraDB Cluster 8.0 New Feature: wsrep_sst_auth Removal

The problem

In PXC 5.6 and 5.7, when using xtrabackup-v2 as the SST method, the DBA must create a user with the appropriate privileges for use by Percona XtraBackup (PXB). The username and password of this backup user are specified in the wsrep_sst_auth variable.

This is a problem because the username and password are stored in plaintext, which requires that the configuration file be secured.

The PXC 8.0 solution

(This feature is still under development and may change before PXC 8.0 GA)

Because wsrep_sst_auth is only needed on the donor side to take a backup, PXC 8.0 uses an internal user (created specifically for use by PXC) with a randomly generated password. Since this user is only needed on the donor, the plaintext password is not needed on the joiner node.

This password consists of 32 characters generated at random. A new password is generated for each SST request. The plaintext of the password is never saved and never leaves the node. The username/password is sent to the SST script via unnamed pipes (stdin).
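
The idea can be sketched in shell (illustrative only, not PXC’s actual implementation; the SST script invocation is shown as a comment because its exact form is internal to PXC):

PASS=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)   # random 32-character password
# The donor would pipe the credentials to the SST script via stdin, e.g.:
#   printf '%s:%s\n' 'mysql.pxc.sst.user' "$PASS" | <sst-script> ...
# The plaintext password never appears on the command line or on disk.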

New PXC internal user accounts

mysql.pxc.internal.session

The mysql.pxc.internal.session user account provides the appropriate security context to create and set up the other PXC accounts. This account has a limited set of privileges, just enough to create the mysql.pxc.sst.user account.

This account is locked and cannot be used to log in (the password field will not allow login).

mysql.pxc.sst.user

The mysql.pxc.sst.user is used by XtraBackup to perform the backup. This account has the full set of privileges needed by XtraBackup. It is created for an SST and is dropped at the end of the SST, as well as when the PXC node is shut down. The creation/provisioning of this user account is not written to the binlog and is not replicated to other nodes. However, this account is sent with the backup to the joiner node, so the joiner node also has to drop this user after the SST has finished.
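
To verify that no internal SST user lingers after an SST has finished, you can check on either node; mysql.pxc.internal.session should be the only internal account remaining:

SELECT user, host FROM mysql.user WHERE user LIKE 'mysql.pxc%';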

mysql.pxc.sst.role

The mysql.pxc.sst.role is the MySQL role that provides the privileges needed for XtraBackup. This allows for easy addition/removal of privileges needed for an SST.

The experimental release of PXC is based on MySQL 8.0.15, and we have not implemented the role-based support due to issues found with MySQL 8.0.15. This will be revisited in future versions of PXC 8.0.

Program flow

  1. DONOR node receives SST request from the JOINER
  2. DONOR node generates a random password and creates the internal SST user
    SET SESSION sql_log_bin = OFF;
    DROP USER IF EXISTS 'mysql.pxc.sst.user'@localhost;
    CREATE USER 'mysql.pxc.sst.user'@localhost IDENTIFIED WITH 'mysql_native_password' BY 'XXXXXXXX' ACCOUNT LOCK;
    GRANT 'mysql.pxc.sst.role'@localhost TO 'mysql.pxc.sst.user'@localhost;
    SET DEFAULT ROLE 'mysql.pxc.sst.role'@localhost to 'mysql.pxc.sst.user'@localhost;
    ALTER USER 'mysql.pxc.sst.user'@localhost ACCOUNT UNLOCK;

    The code that uses roles is not used in the current release due to issues with MySQL 8.0.15. Currently, we create the user with all the needed permissions explicitly.

  3. Launch the SST script (passing the username/password via stdin)
  4. SST uses the username/password to perform the backup
  5. SST script exits
  6. The DONOR node drops the user.
  7. The JOINER node receives the backup and drops the user. Note that the JOINER node also contains the internal SST user!

As a precaution, the user is also dropped when the server is shut down.

Oct
01
2019
--

Experimental Binary of Percona XtraDB Cluster 8.0

Percona is happy to announce the first experimental binary of Percona XtraDB Cluster 8.0 on October 1, 2019. This is a major step in making Percona XtraDB Cluster more cloud- and user-friendly. This release combines the updated and feature-rich Galera 4 with substantial improvements made by our development team.

Improvements and New Features

Galera 4, included in Percona XtraDB Cluster 8.0, has many new features. Here is a list of the most essential improvements:

  • Streaming replication supports large transactions
  • The synchronization functions allow action coordination (wsrep_last_seen_gtid, wsrep_last_written_gtid, wsrep_sync_wait_upto_gtid)
  • More granular and improved error logging. wsrep_debug is now a multi-valued variable to assist in controlling the logging, and logging messages have been significantly improved.
  • Some DML and DDL errors on a replicating node can either be ignored or suppressed. Use the wsrep_ignore_apply_errors variable to configure.
  • Multiple system tables help you find out more about the state of the cluster.
  • The wsrep infrastructure of Galera 4 is more robust than that of Galera 3. It features faster execution of code with better state handling, improved predictability, and better error handling.

Percona XtraDB Cluster 8.0 has been reworked in order to improve security and reliability as well as to provide more information about your cluster:

  • There is no need to create a backup user or maintain the credentials in plain text (a security flaw). An internal SST user is created with a random password for making a backup, and this user is discarded immediately once the backup is done.
  • Percona XtraDB Cluster 8.0 now automatically launches the upgrade as needed (even for minor releases). This avoids manual intervention and simplifies the operation in the cloud.
  • SST (State Snapshot Transfer) rolls back or fixes an unwanted action. It is no longer just a block-copy operation but a smart operation that makes the best use of the copy phase.
  • Additional visibility statistics are introduced in order to obtain more information about Galera internal objects. This enables easy tracking of the state of execution and flow control.

Installation

This release can only be installed from a tarball; it cannot be installed through a package management system such as apt or yum. Note that this release is not ready for use in any production environment.

Percona XtraDB Cluster 8.0 is based on the following:

Please be aware that this release will not be supported in the future, and as such, neither the upgrade to this release nor the downgrade from higher versions is supported.

This release is also packaged with Percona XtraBackup 8.0.5. All Percona software is open-source and free.

In order to experiment with Percona XtraDB Cluster 8.0 in your environment, download and unpack the tarball for your platform.

Note

Be sure to check your system and make sure that the packages Percona XtraDB Cluster 8.0 depends on are installed.

For Debian or Ubuntu:

$ sudo apt-get install -y \
socat libdbd-mysql-perl \
rsync libaio1 libc6 libcurl3 libev4 libgcc1 libgcrypt20 \
libgpg-error0 libssl1.1 libstdc++6 zlib1g libatomic1

For Red Hat Enterprise Linux or CentOS:

$ sudo yum install -y openssl socat \
procps-ng chkconfig coreutils shadow-utils \
grep libaio libev libcurl perl-DBD-MySQL perl-Digest-MD5 \
libgcc rsync libstdc++ libgcrypt libgpg-error zlib glibc openssl-libs

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Sep
27
2019
--

Multiplexing (Mux) in ProxySQL: Use Case

Multiplexing Background

Historically, multiplexing is a technique used in networking to combine multiple analog or digital signals over a shared medium. The goal of multiplexing over the network was to transmit signals more efficiently over a given communication channel, thus achieving cost efficiency.

Since the term multiplexing comes from telecommunications, the industry has heavily used devices called multiplexers, aka mux. There is even a verb, muxing, for combining analog or digital signals into a single line.

The technique was developed in the early 1870s, with its origins in telegraphy, and it became a standard for digital telecommunications in the 20th century.

The following multiplexing methods are currently available:

  • Frequency Division Multiplexing (FDM)
  • Time Division Multiplexing (TDM)
  • Wavelength Division Multiplexing (WDM)

The reverse operation in telco, which we won’t get into too much here, is demultiplexing: a demultiplexer (demux) separates a composite signal back into its component signals.

How can we achieve Multiplexing for MySQL database connections?

We would basically need a mux between the application server and the database. This means a proxy layer that can combine communication channels to the backend database. Again, the goal is to reduce the overhead of opening several connections and to maintain a minimal number of open connections, reducing the memory footprint.

Thread pooling is available in the following MySQL distributions:

  • MySQL Enterprise Edition 
  • Percona Server 
  • MariaDB 

The way they work is that the listener accepts connections and hands them over to a group of threads in the pool, so each client connection is handled by the thread pool. These implementations aren’t as efficient as ProxySQL’s.

ProxySQL comes to help when we need to mux our communication channels to a database server that can otherwise be flooded with connections.

Main Use Cases

  • Any application with a persistent connection to the database
  • Java applications specifically, with their built-in connection pools

Goals to Achieve 

  • Reduce the number of connections, similar to Aurora
  • Reduce the huge number of connections at the hardware level
  • Reduce context switching
  • Reduce mutex contention
  • Reduce CPU cache usage

ProxySQL Techniques Used

Threading Models 

  • One thread per connection
    • Easier to develop
    • Blocking I/O
  • Thread Pooling
    • Non-blocking I/O
    • Scalable architecture

Here’s what we can control using ProxySQL

Pros

  • Reduced number of connections and overhead to the backend database  
  • Control over database connections
  • Collect metrics of all database connections and monitor 
  • Can be enabled and disabled. 

Cons

  • Certain conditions and limitations apply to use
  • Can cause unexpected behavior in application logic.

How it works:  

Ad-hoc enable/disable of multiplexing

The multiplex field in mysql_query_rules allows you to enable or disable multiplexing based on matching criteria.

The field currently accepts these values:

  • 0: disable multiplexing
  • 1: enable multiplexing
  • 2: do not disable multiplexing for this specific query, even if it contains @
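
For example, a rule that keeps multiplexing enabled for one specific query pattern containing @ (hypothetical pattern, shown for illustration):

INSERT INTO mysql_query_rules (active, match_digest, multiplex, apply)
VALUES (1, 'SELECT @@max_allowed_packet', 2, 0);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;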

Also, ProxySQL has some default behavior that can lead to unexpected results. In fact, ProxySQL deliberately disables multiplexing on a connection in the following cases:

  • When a transaction is active
  • When you issue commands like LOCK TABLES, GET_LOCK(), and others
  • When you create a temporary table using CREATE TEMPORARY TABLE
  • When a query uses session or user variables starting with the @ symbol

In the majority of cases, ProxySQL can return the connection to the pool, re-enabling multiplexing. But in the last case only, when session or user variables starting with @ are used, the connection never returns to the pool.

In a recent investigation on one of our client’s systems, we discovered a high number of connections and threads in use, despite the ProxySQL layer. The problem was related to queries containing the @ symbol, issued by the Java connector when establishing a new connection. Within seconds of the application connecting, multiplexing was disabled for all the connections. The effect was no multiplexing at all: for example, 2000 connections from the front ends were reflected in the same number of connections to the back ends. We needed a way to avoid the automatic disabling of multiplexing.

To find, in the ProxySQL stats, all the executed queries containing the @ symbol:

SELECT DISTINCT digest, digest_text FROM stats_mysql_query_digest WHERE digest_text LIKE '%@%';
*************************** 1. row ***************************
digest_text: select @@version_comment limit ?
*************************** 2. row ***************************
digest_text: SELECT @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@collation_server AS collation_server, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_buffer_length AS net_buffer_length, @@net_write_timeout AS net_write_timeout, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS tx_isolation, @@wait_timeout AS wait_timeout
*************************** 3. row ***************************
digest_text: SELECT @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_buffer_length AS net_buffer_length, @@net_write_timeout AS net_write_timeout, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS tx_isolation, @@wait_timeout AS wait_timeout

To create rules on ProxySQL:

INSERT INTO mysql_query_rules(active,digest,multiplex,apply) SELECT DISTINCT 1,digest,2,0 FROM stats_mysql_query_digest WHERE digest_text LIKE '%@%';

LOAD MYSQL QUERY RULES TO RUNTIME;

SAVE MYSQL QUERY RULES TO DISK;

And the result revealed huge savings in connections and thread pool usage, directly captured from PMM.

Conclusion

In conclusion, ProxySQL’s multiplexing feature is something every high-traffic site would benefit from. What we always observe is that a lower number of connections to the backend lowers the overhead of both memory utilization and context switching. In order to benefit fully from multiplexing, be aware of the above limitations and keep investigating connection types even after implementing it.

References 

https://www.allaboutcircuits.com/technical-articles/an-intro-to-multiplexing-basis-of-telecommunications/

https://www.techopedia.com/definition/8472/multiplexing

Understanding Multiplexing in Telecommunications

https://medium.com/searce/reduce-mysql-memory-utilization-with-proxysql-multiplexing-cbe09da7921c

https://en.wikipedia.org/wiki/Multiplexing

Thanks to the following people for their valuable input:

  • Rene Cannao
  • Marco Tusa
  • Daniel Guzman Burgos
  • Corrado Pandiani
  • Tom De Comman

The Anome, Multiplexing diagram, modified, CC BY-SA 3.0

Sep
18
2019
--

Percona XtraDB Cluster 5.7.27-31.39 Is Now Available

Percona is happy to announce the release of Percona XtraDB Cluster 5.7.27-31.39 on September 18, 2019. Binaries are available from the downloads section or from our software repositories.

Percona XtraDB Cluster 5.7.27-31.39 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2432: PXC was not updating the information_schema user/client statistics properly.
  • PXC-2555: SST initialization delay: fixed a bug where the SST process took too long to detect if a child process was running.
  • PXC-2557: Fixed a crash when a node goes NON-PRIMARY and SHOW STATUS is executed.
  • PXC-2592: PXC restarting automatically on data inconsistency.
  • PXC-2605: PXC could crash when log_slow_verbosity included InnoDB. Fixed in upstream PS-5820.
  • PXC-2639: Fixed an issue where a SQL admin command (like OPTIMIZE) could cause a deadlock.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Sep
17
2019
--

Percona XtraDB Cluster 5.6.45-28.36 Is Now Available


Percona is glad to announce the release of Percona XtraDB Cluster 5.6.45-28.36 on September 17, 2019. Binaries are available from the downloads section or from our software repositories.

Percona XtraDB Cluster 5.6.45-28.36 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2432: PXC was not updating the information schema user/client statistics properly.
  • PXC-2555: SST initialization delay: fixed a bug where the SST process took too long to detect if a child process was running.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Aug
21
2019
--

ProxySQL 2.0.6 and proxysql-admin tool Now Available


ProxySQL 2.0.6, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.6 which introduces many new features and enhancements and also fixes a number of bugs. The proxysql-admin tool has not changed since the previous release.

The ProxySQL 2.0.6 source and binary packages available from the Percona download page for ProxySQL include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.6 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL is available under Open Source license GPLv3.

Aug
14
2019
--

How to Manage ProxySQL Cluster with Core and Satellite Nodes

In this post, we will manage a ProxySQL Cluster with “core” and “satellite” nodes. What does that mean? Well, with hundreds of nodes, a single change on any one of those nodes will replicate to the entire cluster.

Any mistake or change will replicate to all nodes, and this can make it difficult to find the most recently updated node, the source of truth.

Before continuing, you need to install and configure ProxySQL Cluster. You can check my previous blogs for more information:

The idea of using “core” and “satellite” nodes is to designate only a few nodes as masters (aka core) and the rest of the nodes as slaves (aka satellite). Any change in the “core” nodes will be replicated to all core/satellite nodes, but any change in a “satellite” node will not be replicated. This is useful for managing a big number of nodes because it minimizes manual errors and unwanted changes, avoiding the difficult task of finding the problematic node among all the nodes in the cluster.

This works in ProxySQL versions 1.4 and 2.0.

How does it work?

When you configure a classic ProxySQL Cluster, all nodes listen to all nodes. With this feature, all nodes will listen only to a couple of nodes, the ones you want to use as “core” nodes.

Any change in a node not listed in the proxysql_servers table will not be replicated, because no nodes are listening on its admin port waiting for changes.

Each node opens one thread per server listed in the proxysql_servers table and connects to that IP on the admin port (default 6032), waiting for any change in four tables: mysql_servers, mysql_users, proxysql_servers, and mysql_query_rules. The only relationship between a core and a satellite node is that the satellite node connects to the core node and waits for changes.
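
You can observe this polling from any node by querying the stats_proxysql_servers_checksums table on the admin interface, which shows the version and checksum each configured peer advertises for those tables:

SELECT hostname, port, name, version, epoch, diff_check
FROM stats_proxysql_servers_checksums
ORDER BY hostname, name;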

How to configure

It’s easy: we will configure only the IPs of the core nodes in the proxysql_servers table on all cluster nodes, both “core” and “satellite”. If you read my previous posts and configured a ProxySQL Cluster, we will clean the previous config from the following tables to test from scratch:

delete from mysql_query_rules;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

delete from mysql_servers;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

delete from mysql_users;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

delete from proxysql_servers;
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

Suppose we have 100 ProxySQL nodes, for example, and here is the list of hostnames and IPs from our instances:

proxysql_node1 = 10.0.0.1
proxysql_node2 = 10.0.0.2
...
proxysql_node100 = 10.0.0.100

And we want to configure and use only 3 core nodes, so we select the first 3 nodes from the cluster:

proxysql_node1 = 10.0.0.1
proxysql_node2 = 10.0.0.2
proxysql_node3 = 10.0.0.3

And the rest of the nodes will be the satellite nodes:

proxysql_node4 = 10.0.0.4
proxysql_node5 = 10.0.0.5
...
proxysql_node100 = 10.0.0.100

We will use the above IPs to configure the proxysql_servers table, with only those 3 IPs over all nodes. So all ProxySQL nodes (from proxysql_node1 to proxysql_node100) will listen for changes only on those 3 nodes.

DELETE FROM proxysql_servers;

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.1',6032,0,'proxysql_node1');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.2',6032,0,'proxysql_node2');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.3',6032,0,'proxysql_node3');

LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

Now all nodes from proxysql_node4 to proxysql_node100 are the satellite nodes.

We can see something like this in the proxysql.log file:

2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.1:6032
2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.2:6032
2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.3:6032
...
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.1:6032
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.2:6032
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.3:6032

How to Test

I’ll create a new entry in the mysql_users table on a core node to test whether the replication from core to satellite is working.

Connect to proxysql_node1 and run the following queries:

INSERT INTO mysql_users(username,password, active, default_hostgroup) VALUES ('user1','123456', 1, 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

Now, from any satellite node, for example proxysql_node4, check the ProxySQL log file to see if there are updates. If this is working, we see something like this:

2019-08-09 18:49:24 [INFO] Cluster: detected a new checksum for mysql_users from peer 10.0.1.113:6032, version 3, epoch 1565376564, checksum 0x5FADD35E6FB75557 . Not syncing yet ...
2019-08-09 18:49:26 [INFO] Cluster: detected a peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564, diff_check 3. Own version: 2, epoch: 1565375661. Proceeding with remote sync
2019-08-09 18:49:27 [INFO] Cluster: detected a peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564, diff_check 4. Own version: 2, epoch: 1565375661. Proceeding with remote sync
2019-08-09 18:49:27 [INFO] Cluster: detected peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564
2019-08-09 18:49:27 [INFO] Cluster: Fetching MySQL Users from peer 10.0.1.113:6032 started
2019-08-09 18:49:27 [INFO] Cluster: Fetching MySQL Users from peer 10.0.1.113:6032 completed
2019-08-09 18:49:27 [INFO] Cluster: Loading to runtime MySQL Users from peer 10.0.1.113:6032
2019-08-09 18:49:27 [INFO] Cluster: Saving to disk MySQL Query Rules from peer 10.0.1.113:6032

Then check whether the previous update exists in the mysql_users table on proxysql_node4 or any other satellite node. The update should exist in both the mysql_users and runtime_mysql_users tables.

admin ((none))>select * from runtime_mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+

admin ((none))>select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+

Now the final test is to create a new MySQL user on a satellite node. Connect to proxysql_node4 and run the following queries to create a new username:

INSERT INTO mysql_users(username,password, active, default_hostgroup) VALUES ('user2','123456', 1, 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

From the ProxySQL log on proxysql_node4, we see the following output:

[root@ip-10-0-1-10 ~]# tail -f /var/lib/proxysql/proxysql.log -n100
...
2019-08-09 18:59:12 [INFO] Received LOAD MYSQL USERS TO RUNTIME command
2019-08-09 18:59:12 [INFO] Received SAVE MYSQL USERS TO DISK command

The last thing to check is the ProxySQL log file on a core node, to see whether there are updates from the mysql_users table. Below is the output from proxysql_node1:

[root@proxysql proxysql]# tail /var/lib/proxysql/proxysql.log -n100
...
2019-08-09 19:09:21 [INFO] ProxySQL version 1.4.14-percona-1.1
2019-08-09 19:09:21 [INFO] Detected OS: Linux proxysql 4.14.77-81.59.amzn2.x86_64 #1 SMP Mon Nov 12 21:32:48 UTC 2018 x86_64

As you can see there are no updates, because the core nodes are not listening for changes from satellite nodes. Core nodes only listen for changes in other core nodes.

And finally, this feature is really useful when you have many servers to manage. I hope you can test this!

Aug
12
2019
--

ProxySQL 2.0.5 and proxysql-admin tool Now Available


ProxySQL 2.0.5, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.5 which fixes many bugs and introduces a number of features and enhancements. The proxysql-admin tool has been enhanced to support the following new options:

  • The --add-query-rule option creates query rules for synced MySQL users. This option is only applicable for the singlewrite mode and works together with the --syncusers and --sync-multi-cluster-users options.
  • The --force option skips existing configuration checks in mysql_servers, mysql_users and mysql_galera_hostgroups tables. This option will only work together with the --enable option: proxysql-admin --enable --force.
  • The --update-mysql-version option updates the mysql-server_version variable in ProxySQL with the version from a node in Percona XtraDB Cluster.
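
For example (hypothetical invocations, run from a node where proxysql-admin is already configured):

$ proxysql-admin --syncusers --add-query-rule
$ proxysql-admin --enable --force
$ proxysql-admin --update-mysql-version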

The ProxySQL 2.0.5 source and binary packages available from the Percona download page for ProxySQL include proxysql-admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.5 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL 2.0.5 Improvements

  • PSQLADM-49: Create rules for --syncusers. When running with --syncusers or --sync-multi-cluster-users, the --add-query-rule option can now be specified to add the singlewriter query rules for the new users.
  • PSQLADM-51: Update mysql-server_version variable. The --update-mysql-version command has been added to set the mysql-server_version global variable in ProxySQL. This will take the version from a node in the cluster and set it in ProxySQL.

Bugs Fixed

  • PSQLADM-190: The --remove-all-servers option did not work on enable. When running with proxysql-cluster, the galera hostgroups information was not replicated which could result in failing to run --enable on a different ProxySQL node.  The --force option was added for --enable to be able to ignore any errors and always configure the cluster.
  • PSQLADM-199: query-rules removed during proxysql-cluster creation with PXC operator. When using the PXC operator for Kubernetes and creating a proxysql-cluster, the query rules could be removed.  The code was modified to merge the query rules (rather than deleting and recreating).  If the --force option was specified, then a warning was issued in case any existing rules were found; otherwise an error was issued. The --disable-updates option was added to ensure that ProxySQL cluster updates did not interfere with the current command.
  • PSQLADM-200: Users were not created for --syncusers with PXC operator. When using the PXC operator for Kubernetes, the --syncusers command was run but the mysql_users table was not updated. The fix for PSQLADM-199 that suggested to use --disable-updates also applies here.

ProxySQL is available under Open Source license GPLv3.

Jun
26
2019
--

Percona XtraDB Cluster 5.7.26-31.37 Is Now Available


Percona is glad to announce the release of Percona XtraDB Cluster 5.7.26-31.37 on June 26, 2019. Binaries are available from the downloads section or from our software repositories.

Percona XtraDB Cluster 5.7.26-31.37 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2480: In some cases, Percona XtraDB Cluster could not replicate CURRENT_USER() used in the ALTER statement. USER() and CURRENT_USER() are no longer allowed in any ALTER statement since they fail when replicated.
  • PXC-2487: The case when a DDL or DML action was in progress from one client and the provider was updated from another client could result in a race condition.
  • PXC-2490: Percona XtraDB Cluster could crash when binlog_space_limit was set to a value other than zero during wsrep_recover mode.
  • PXC-2491: SST could fail if the donor had encrypted undo logs.
  • PXC-2497: The user can set the preferred donor by setting the wsrep_sst_donor variable. An IP address is not valid as the value of this variable. If the user still used an IP address, an error message was produced that did not provide sufficient information. The error message has been improved to suggest that the user check the value of the wsrep_sst_donor for an IP address.
  • PXC-2537: Nodes could crash after an attempt to set a password using mysqladmin.

Other bugs fixed: PXC-2276, PXC-2292, PXC-2476, PXC-2560.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!
