Oct 07, 2019

Achieving Disaster Recovery with Percona XtraDB Cluster

One thing that comes up often in working with a variety of clients at Percona is: "How can I achieve a Disaster Recovery (DR) solution with Percona XtraDB Cluster (PXC)?"  Unfortunately, decisions with far-reaching consequences are sometimes made by individuals who do not fully understand the architecture and its limitations.  As a Technical Account Manager (TAM), I am often engaged to help clients find better solutions, or at least to mitigate as many issues as possible.  In a perfect world, we would get the right experts involved in these discussions to ensure appropriate solutions, but we all know this is not a perfect world.

One such example involves the idea that if we take a PXC cluster and split it into two datacenters with two nodes in a primary datacenter and one node in a separate datacenter, we will have a hot standby node at all times.  In this case, the application can be pointed to the third node in the event of something catastrophic.  This sounds great…in theory.  The problem is latency.

Latency can cripple a PXC cluster

By design, PXC is meant to work with nodes that can communicate with one another quickly.  The underlying cluster technology, known as Galera, is considered "virtually synchronous" in nature.  In this architecture, writesets are replicated to all active nodes in the cluster at transaction commit and go into a queue.  Each node then performs a certification of the writeset, which is deterministic in nature: a bug notwithstanding, every node will accept or reject the writeset in the same manner.  So, either the writeset is applied on all nodes or it is rolled back by all nodes.  What matters for this discussion is the write queue.

As writes come in on one of the nodes, the writesets are replicated to each of the other nodes.  In the above three-node cluster, the writes are certified quickly with the two nodes in the same datacenter.  However, the third node is located in a different datacenter some distance away.  In this case, the writeset must travel across the WAN and will go into a queue (wsrep_local_recv_queue).

So, what’s the problem?

To ensure that one node does not fall too far behind the rest of the cluster, any node can send a flow control message to the cluster.  This instructs the cluster to stop replicating new events until the slow node catches up to within some number of writesets, as defined by gcs.fc_limit in the configuration.  Essentially, when the number of transactions in the queue exceeds gcs.fc_limit, flow control messages are sent and the cluster stops replicating new writesets.  Unless you have changed it, this limit is 16 writesets.
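
If you suspect flow control is throttling the cluster, you can check the flow-control counters on any node and, with care, raise the limit at runtime. A minimal sketch (the value 100 is purely illustrative, not a recommendation):

$ mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%';"
$ mysql -e "SET GLOBAL wsrep_provider_options = 'gcs.fc_limit=100';"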

Remember that PXC is virtually synchronous

When replication stops, all nodes momentarily stop accepting writes.  The system stalls until the local recv queue frees up enough space for new writesets, at which point replication continues.  To the application, this looks like a stall and can lead to huge performance issues.

So, what is a better solution for the situation above?  It is preferable to use an asynchronous slave server replicating from the PXC cluster for failover.  This uses the standard replication built into Percona Server, not Galera.  While this may mean adding another server, the cost can be mitigated by using garbd, the Galera arbitrator.  garbd is a lightweight process that maintains quorum for the cluster without storing data, so it can run on an app server or some other existing machine, decreasing the number of full data nodes needed in PXC.  This keeps the server count down in cases that are cost-sensitive.
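
As a rough sketch, garbd can be started on any small host that can reach both data nodes; the addresses, cluster name, and log path below are placeholders:

$ garbd --group=pxc-cluster \
        --address="gcomm://10.0.0.1:4567,10.0.0.2:4567" \
        --log=/var/log/garbd.log --daemon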

The asynchronous nature means no sending of flow controls from the node in the remote datacenter.  Replication will lag and catch up as needed with the PXC cluster none the wiser.  Because all nodes in the PXC cluster are local to one another, ideally latency is minimal and writesets can be applied much more quickly and stalls minimized.  Of course, this does come with a few challenges.

One challenge is that, due to the nature of asynchronous replication, there can be significant lag on the DR node.  This is not always an issue, however, as the only time you use this server is during a disaster.  The writes are sent over immediately by the master, so it is reasonable to expect that the DR node will eventually catch up, and you can hope not to lose any data, although there are no guarantees.
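
A simple way to keep an eye on that lag is to check the slave status on the DR node (the hostname is a placeholder):

$ mysql -h dr-node -e "SHOW SLAVE STATUS\G" | \
    grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'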

This brings us to another concern.  Simple asynchronous replication has no consistency guarantee like PXC provides.  In PXC, there are controls in place to guarantee consistency; asynchronous replication provides none.  There are, therefore, cases where data drift can occur between master and slave.  To mitigate this risk, you can use pt-table-checksum from the Percona Toolkit to detect inconsistency between the master node of the PXC cluster and the slave, and rectify it with pt-table-sync from the same toolkit.  This, of course, requires that you run the process often.  If it is done as part of an automated job, the job should also be monitored to ensure it actually runs and to track whether data drift is occurring.
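
A sketch of what such a periodic job might look like (host, user, and password are placeholders): run the checksum from the PXC node acting as the async master, review the differences, and only then repair:

$ pt-table-checksum h=pxc-master,u=checksum_user,p=secret --replicate percona.checksums
$ pt-table-sync --replicate percona.checksums h=pxc-master,u=checksum_user,p=secret --print
$ pt-table-sync --replicate percona.checksums h=pxc-master,u=checksum_user,p=secret --execute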

You will also want to monitor that the Master node does not go down, as there is no built-in process of failing the DR node over to a new Master within the PXC cluster.  Our very own Yves Trudeau wrote a utility to manage this, and more information can be found here: https://github.com/y-trudeau/Mysql-tools/tree/master/PXC

Improving Performance

While this solution presents some additional complexity, it provides a more performant PXC cluster.  As a TAM, I have seen geographically-distributed PXC result in countless production incidents of systems going down.  Even when the system doesn't go down, it often slows down due to latency.  Why take that performance impact on every transaction for a DR solution you hope never to use, when there is an alternative like the one proposed here?  Instead, you could benefit from an approach that provides an acceptable failover solution while improving performance day in and day out.

Oct 04, 2019

Percona XtraDB Cluster 8.0 (experimental release) : SST Improvements

Starting with the experimental release of Percona XtraDB Cluster 8.0, we have made changes to the SST process to make it more robust and easier to use.

  • mysqldump and rsync are no longer supported SST methods.

    Support for mysqldump was deprecated starting with PXC 5.7 and has now been completely removed.

    MySQL 8.0 introduced a new redo log format that limits the use of rsync when upgrading from PXC 5.7 to 8.0. In addition, the new Galera-4 introduced changes that further limit the use of rsync.

    The only supported SST method is xtrabackup-v2.

  • A separate Percona XtraBackup installation is no longer required.

    The required Percona XtraBackup (PXB) binaries are now shipped as part of PXC 8.0, but they are not installed for general use. So if you want to use PXB outside of an SST, you will have to install PXB separately (see the first example after this list).

  • SST logging now uses MySQL error logging

    Previously, the SST script would write directly to the error log file. Now, the SST script uses MySQL error logging. A side effect of this change is that the SST logs are not immediately visible, because the logging subsystem is initialized only after the SST has completed (see the second example after this list).

  • The wsrep_sst_auth variable has been removed.

    PXC 8.0 now creates an internal user (mysql.pxc.sst.user) with a random password for use by PXB to take the backup. The cleartext of the password is not saved and the user is deleted after the SST has completed.

    (This feature is still in development and may change before PXC 8.0 GA)

  • PXC SST auto-upgrade

    When PXC 8.0 detects that the SST came from a lower version, mysql_upgrade is automatically invoked. "RESET SLAVE ALL" is also run on the new node if needed. This is invoked when receiving an SST from either PXC 5.7 or PXC 8.0.

    (This feature is still in development and may change before PXC 8.0 GA)
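
For the separate PXB install, on a Red Hat-based system with the Percona repository enabled it might look like the following (the package name is assumed from Percona's usual naming and may differ):

$ sudo yum install percona-xtrabackup-80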
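
And since SST messages now go to the regular MySQL error log, a quick check like this works once the node is up (the log path depends on your log_error setting):

$ grep -i sst /var/log/mysqld.log | tail -n 20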

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Oct 03, 2019

Percona XtraDB Cluster 8.0 New Feature: wsrep_sst_auth Removal

The problem

In PXC 5.6 and 5.7, when using xtrabackup-v2 as the SST method, the DBA must create a user with the appropriate privileges for use by Percona XtraBackup (PXB). The username and password of this backup user are specified in the wsrep_sst_auth variable.

This is a problem because the username and password were stored in plaintext, which required that the configuration file be secured.
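
For reference, the typical PXC 5.7-style setup that this replaces looked like the following: a dedicated backup user created by the DBA, plus its credentials in plain text in my.cnf (the user name and password here are just examples):

CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';

[mysqld]
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd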

The PXC 8.0 solution

(This feature is still under development and may change before PXC 8.0 GA)

Because wsrep_sst_auth is only needed on the donor side to take a backup, PXC 8.0 uses an internal user (created specifically for use by PXC) with a randomly generated password. Since this user is only needed on the donor, the plaintext password is never needed on the joiner node.

This password consists of 32 characters generated at random. A new password is generated for each SST request. The plaintext of the password is never saved and never leaves the node. The username/password is sent to the SST script via unnamed pipes (stdin).

New PXC internal user accounts

mysql.pxc.internal.session

The mysql.pxc.internal.session user account provides the appropriate security context to create and set up the other PXC accounts. This account has a limited set of privileges: just enough to create the mysql.pxc.sst.user account.

This account is locked and cannot be used to log in (the password field will not allow login).

mysql.pxc.sst.user

The mysql.pxc.sst.user account is used by XtraBackup to perform the backup. This account has the full set of privileges needed by XtraBackup. It is created for an SST and is dropped at the end of the SST and also when the PXC node is shut down. The creation/provisioning of this user account is not written to the binlog and is not replicated to other nodes. However, the account is sent with the backup to the joiner node, so the joiner node also has to drop this user after the SST has finished.

mysql.pxc.sst.role

The mysql.pxc.sst.role is the MySQL role that provides the privileges needed for XtraBackup. This allows for easy addition/removal of privileges needed for an SST.

The experimental release of PXC is based on MySQL 8.0.15, and we have not implemented the role-based support due to issues found with MySQL 8.0.15. This will be revisited in future versions of PXC 8.0.

Program flow

  1. DONOR node receives SST request from the JOINER
  2. DONOR node generates a random password and creates the internal SST user
    SET SESSION sql_log_bin = OFF;
    DROP USER IF EXISTS 'mysql.pxc.sst.user'@localhost;
    CREATE USER 'mysql.pxc.sst.user'@localhost IDENTIFIED WITH 'mysql_native_password' BY 'XXXXXXXX' ACCOUNT LOCK;
    GRANT 'mysql.pxc.sst.role'@localhost TO 'mysql.pxc.sst.user'@localhost;
    SET DEFAULT ROLE 'mysql.pxc.sst.role'@localhost TO 'mysql.pxc.sst.user'@localhost;
    ALTER USER 'mysql.pxc.sst.user'@localhost ACCOUNT UNLOCK;

    The code that uses roles is not active in the current release due to issues with MySQL 8.0.15. Currently, we create the user with all the needed permissions granted explicitly.

  3. Launch the SST script (passing the username/password via stdin)
  4. SST uses the username/password to perform the backup
  5. SST script exits
  6. The DONOR node drops the user.
  7. The JOINER node receives the backup and drops the user. Note that the JOINER node also contains the internal SST user!

As a precaution, the user is also dropped when the server is shutdown.
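
If you want to verify that no internal SST user has been left behind (for example, after a crash mid-SST), a quick check like the following should work; the account_locked column is part of the mysql.user table in MySQL 8.0:

$ mysql -e "SELECT user, host, account_locked FROM mysql.user WHERE user LIKE 'mysql.pxc%';"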

Oct 01, 2019

Experimental Binary of Percona XtraDB Cluster 8.0

Percona is happy to announce the first experimental binary of Percona XtraDB Cluster 8.0 on October 1, 2019. This is a major step in tuning Percona XtraDB Cluster to be more cloud- and user-friendly. This release combines the updated and feature-rich Galera 4 with substantial improvements made by our development team.

Improvements and New Features

Galera 4, included in Percona XtraDB Cluster 8.0, has many new features. Here is a list of the most essential improvements:

  • Streaming replication supports large transactions
  • The synchronization functions allow action coordination (wsrep_last_seen_gtid, wsrep_last_written_gtid, wsrep_sync_wait_upto_gtid)
  • More granular and improved error logging. wsrep_debug is now a multi-valued variable to assist in controlling the logging, and logging messages have been significantly improved.
  • Some DML and DDL errors on a replicating node can either be ignored or suppressed. Use the wsrep_ignore_apply_errors variable to configure.
  • Multiple system tables help you find out more about the state of the cluster.
  • The wsrep infrastructure of Galera 4 is more robust than that of Galera 3. It features a faster execution of code with better state handling, improved predictability, and error handling.

Percona XtraDB Cluster 8.0 has been reworked in order to improve security and reliability as well as to provide more information about your cluster:

  • There is no need to create a backup user or maintain the credentials in plain text (a security flaw). An internal SST user is created, with a random password for making a backup, and this user is discarded immediately once the backup is done.
  • Percona XtraDB Cluster 8.0 now automatically launches the upgrade as needed (even for minor releases). This avoids manual intervention and simplifies the operation in the cloud.
  • SST (State Snapshot Transfer) can roll back or fix an unwanted action. It is no longer just a "copy only" block but a smart operation that makes the best use of the copy phase.
  • Additional visibility statistics are introduced in order to obtain more information about Galera internal objects. This enables easy tracking of the state of execution and flow control.

Installation

You can only install this release from a tarball; it cannot be installed through a package management system such as apt or yum. Note that this release is not ready for use in any production environment.

Percona XtraDB Cluster 8.0 is based on MySQL 8.0.15 and the new Galera 4 replication library.

Please be aware that this release will not be supported in the future, and as such, neither the upgrade to this release nor the downgrade from higher versions is supported.

This release is also packaged with Percona XtraBackup 8.0.5. All Percona software is open-source and free.

In order to experiment with Percona XtraDB Cluster 8.0 in your environment, download and unpack the tarball for your platform.
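
The unpacking itself is the usual tarball routine; the file name below is only illustrative, so substitute the tarball you actually downloaded:

$ tar xzf Percona-XtraDB-Cluster-8.0-experimental.tar.gz
$ cd Percona-XtraDB-Cluster-8.0-experimental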

Note

Be sure to check your system and make sure that the packages Percona XtraDB Cluster 8.0 depends on are installed.

For Debian or Ubuntu:

$ sudo apt-get install -y \
socat libdbd-mysql-perl \
rsync libaio1 libc6 libcurl3 libev4 libgcc1 libgcrypt20 \
libgpg-error0 libssl1.1 libstdc++6 zlib1g libatomic1

For Red Hat Enterprise Linux or CentOS:

$ sudo yum install -y openssl socat \
procps-ng chkconfig coreutils shadow-utils \
grep libaio libev libcurl perl-DBD-MySQL perl-Digest-MD5 \
libgcc rsync libstdc++ libgcrypt libgpg-error zlib glibc openssl-libs

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Sep 26, 2019

Running Percona XtraDB Cluster on Raspberry PI 3

In a previous post, I showed you how to compile Percona Server for MySQL 5.7 on Raspberry Pi 3. Now, I'll show you how to compile and run the latest version of Percona XtraDB Cluster, 5.7.26.

We will need at least three Raspberry Pi 3 boards, and I recommend using an external SSD drive both for compiling and as MySQL's "datadir", to avoid the stalls associated with the microSD card, which would cause PXC to run slowly.

In this post, we are going to run many OS commands and configure PXC, so I recommend at least basic familiarity with PXC and Linux commands.

How to install CentOS

Download the CentOS image from this link: http://mirror.ufro.cl/centos-altarch/7.6.1810/isos/armhfp/CentOS-Userland-7-armv7hl-RaspberryPI-Minimal-1810-sda.raw.xz

I’m using this open source software https://www.balena.io/etcher/ to flash/burn the microSD card because it is very simple and works fine. I recommend using a 16GB card or larger.

If the previous process finished OK, remove the microSD card, put it into the Raspberry Pi, and boot it. Wait until the boot process finishes and log in with this user and password:

user: root
password: centos

By default, the root partition is very small and you need to expand it. Run the following command; this process is fast.

/usr/bin/rootfs-expand

Finally, to finish the installation process, install the many packages needed to compile PXC and percona-xtrabackup:

yum install -y cmake wget automake bzr gcc make telnet gcc-c++ vim libcurl-devel readline-devel ncurses-devel screen zlib-devel bison locate libaio libaio-devel autoconf socat libtool  libgcrypt-devel libev-devel vim-common check-devel openssl-devel boost-devel glib2-devel

Repeat all the previous steps on all three db nodes, because some of these libs are needed to start mysql and to work with percona-xtrabackup.

How to compile PXC + Galera + Xtrabackup

I am using an external SSD disk because it is faster than the microSD. I mounted it on the /mnt directory, so we will use this partition to download, compile, and create the datadir.

For the latest PXC version, download and decompress it using the following commands:

cd /mnt
wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-57/Percona-XtraDB-Cluster-5.7.26-31.37/source/tarball/Percona-XtraDB-Cluster-5.7.26-31.37.tar.gz
tar zxf Percona-XtraDB-Cluster-5.7.26-31.37.tar.gz
cd Percona-XtraDB-Cluster-5.7.26-31.37

Raspberry Pi 3 boards are not x86_64 architecture; they use ARMv7, which forces us to compile PXC from scratch. For me, this process took around six hours. I recommend running the next commands in a screen session.

screen -SDRL compile_pxc
cmake -DINSTALL_LAYOUT=STANDALONE -DCMAKE_INSTALL_PREFIX=/mnt/pxc5726 -DDOWNLOAD_BOOST=ON -DWITH_BOOST=/mnt/sda1/my_boost -DWITH_WSREP=1 .
make
make install

Create a new OS user to be used by the mysql process.

useradd --no-create-home mysql

Now, to continue, it is necessary to create the datadir directory. I recommend using the attached SSD disk and setting the mysql permissions on the directory:

mkdir /mnt/pxc5726/data/
chown mysql.mysql /mnt/pxc5726/data/

Configure my.cnf with the minimal params to start mysql.

vim /etc/my.cnf

[client]
socket=/mnt/pxc5726/data/mysql.sock

[mysqld]
datadir = /mnt/pxc5726/data
binlog-format = row
log_bin = /mnt/pxc5726/data/binlog
innodb_buffer_pool_size = 128M
socket=/mnt/pxc5726/data/mysql.sock
symbolic-links=0

wsrep_provider=/mnt/pxc5726/libgalera_smm.so

wsrep_on=ON
wsrep_cluster_name="pxc-cluster"
wsrep_cluster_address="gcomm://192.168.88.134"

wsrep_node_name="pxc1"
wsrep_node_address=192.168.88.134

wsrep_debug=1

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd

pxc_strict_mode=ENFORCING

[mysqld_safe]
log-error=/mnt/pxc5726/data/mysqld.log
pid-file=/mnt/pxc5726/data/mysqld.pid

So far we've only compiled PXC. Before we can start the mysql process, we also need to compile the Galera library; without this library, mysql will start as a standalone db node and will not have any "cluster" capabilities.

To build libgalera_smm.so you need to install scons.

Download and install it using the following commands:

cd /mnt
wget http://prdownloads.sourceforge.net/scons/scons-3.1.0.tar.gz
tar zxf scons-3.1.0.tar.gz
cd scons-3.1.0
python setup.py install

Now we will proceed to compile libgalera. The source code for this library is inside the Percona XtraDB Cluster source directory we downloaded earlier:

cd /mnt/Percona-XtraDB-Cluster-5.7.26-31.37/percona-xtradb-cluster-galera
scons -j4 libgalera_smm.so debug=3 psi=1 BOOSTINCLUDE=/mnt/my_boost/boost_1_59_0/boost BOOST_LIBS=/mnt/my_boost/boost_1_59_0/libs

If the compile process finishes ok, it will create a new file like this:

/mnt/Percona-XtraDB-Cluster-5.7.26-31.37/percona-xtradb-cluster-galera/libgalera_smm.so

I recommend copying libgalera_smm.so into the installed PXC directory so that it is not lost if you later delete the source directory.

cp /mnt/Percona-XtraDB-Cluster-5.7.26-31.37/percona-xtradb-cluster-galera/libgalera_smm.so /mnt/pxc5726

This is also useful because, after compiling and installing all the packages, we can create a compressed file and copy it to the rest of the db nodes to avoid having to compile everything again.

The last step is to download and compile percona-xtrabackup. I used the latest version:

screen -SDRL compile_xtrabackup

cd /mnt
wget https://www.percona.com/downloads/Percona-XtraBackup-2.4/Percona-XtraBackup-2.4.15/source/tarball/percona-xtrabackup-2.4.15.tar.gz
tar zxf percona-xtrabackup-2.4.15.tar.gz
cd percona-xtrabackup-2.4.15
cmake -DBUILD_CONFIG=xtrabackup_release -DWITH_MAN_PAGES=OFF -DCMAKE_INSTALL_PREFIX=/mnt/xtrabackup .
make
make install

If all the previous steps were successful, we will add those binary directories to the PATH variable:

vim /etc/profile.d/percona.sh

export PATH=$PATH:/mnt/xtrabackup/bin:/mnt/pxc5726/bin

Save and exit and run the “export” command manually to set it in the active session:

export PATH=$PATH:/mnt/xtrabackup/bin:/mnt/pxc5726/bin

Well, so far we have PXC, Galera-lib, and percona-xtrabackup compiled for the ARMv7 architecture, which is all that is needed to work with PXC, so now we can start playing.

We will call the first host where we compiled “pxc1”. We will proceed to set the hostname:

$ vim /etc/hostname
pxc1

$ hostname pxc1
$ exit

It is necessary to reconnect over ssh to refresh the hostname, after which you will see something like this:

$ ssh ip
[root@pxc1 ~]#

Now we are ready to open the ports needed by PXC: mysql (3306), galera (4567), and xtrabackup/SST (4444).

$ firewall-cmd --permanent --add-port=3306/tcp
$ firewall-cmd --permanent --add-port=4567/tcp
$ firewall-cmd --permanent --add-port=4444/tcp

$ firewall-cmd --reload

$ firewall-cmd --list-ports
22/tcp 3306/tcp 4567/tcp 4444/tcp

Repeat the above procedures to set the hostname and open ports in the other db nodes.

Finally, to start the first node you'll need to initialize the system databases and then launch the mysql process with the --wsrep-new-cluster option. This will bootstrap the cluster so that it starts with one node. Run the following from the command line:

$ mysqld --initialize-insecure --user=mysql --basedir=/mnt/pxc5726 --datadir=/mnt/pxc5726/data
$ mysqld_safe --defaults-file=/etc/my.cnf --user=mysql --wsrep-new-cluster &

Check the mysql error log for any errors:

$ tailf /mnt/pxc5726/data/mysqld.log

...
2019-09-08T23:23:17.415991Z 0 [Note] /mnt/pxc5726/bin/mysqld: ready for connections.
...

Create the new mysql user that xtrabackup will use to sync another db node during the IST or SST process.

$ mysql

CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
FLUSH PRIVILEGES;

Lastly, I recommend checking if the galera library was loaded successfully:

$ mysql

show global status where Variable_name='wsrep_provider_name' 
or Variable_name='wsrep_provider_vendor' 
or Variable_name='wsrep_local_state' 
or Variable_name='wsrep_local_state_comment' 
or Variable_name='wsrep_cluster_size' 
or Variable_name='wsrep_cluster_status' 
or Variable_name='wsrep_connected' 
or Variable_name='wsrep_ready' 
or Variable_name='wsrep_evs_state' 
or Variable_name like 'wsrep_flow_control%';

You will see something like this:

+----------------------------------+-----------------------------------+
| Variable_name                    | Value                             |
+----------------------------------+-----------------------------------+
| wsrep_flow_control_paused_ns     | 0                                 |
| wsrep_flow_control_paused        | 0.000000                          |
| wsrep_flow_control_sent          | 0                                 |
| wsrep_flow_control_recv          | 0                                 |
| wsrep_flow_control_interval      | [ 100, 100 ]                      |
| wsrep_flow_control_interval_low  | 100                               |
| wsrep_flow_control_interval_high | 100                               |
| wsrep_flow_control_status        | OFF                               |
| wsrep_local_state                | 4                                 |
| wsrep_local_state_comment        | Synced                            |
| wsrep_evs_state                  | OPERATIONAL                       |
| wsrep_cluster_size               | 1                                 | <----
| wsrep_cluster_status             | Primary                           |
| wsrep_connected                  | ON                                |
| wsrep_provider_name              | Galera                            |
| wsrep_provider_vendor            | Codership Oy <info@codership.com> |
| wsrep_ready                      | ON                                |
+----------------------------------+-----------------------------------+

As you can see this is the first node and the cluster size is 1.

How to copy the previous compiled source code to the rest of the db nodes

You don't need to compile again on the rest of the db nodes; just compress the PXC and xtrabackup directories and copy them to the other nodes, nothing else.

In the first step, we compiled and installed into the following directories:

/mnt/pxc5726
/mnt/xtrabackup

We will compress both directories and copy them to the other servers (pxc2 and pxc3):

$ cd /mnt
$ tar czf pxc5726.tgz pxc5726
$ tar czf xtrabackup.tgz xtrabackup

Let’s copy to each db node:

$ cd /mnt
$ scp pxc5726.tgz xtrabackup.tgz IP_PXC2:/mnt

Now connect to each db node, decompress the archives, configure my.cnf, and start mysql.

$ ssh IP_PXC2
$ cd /mnt
$ tar zxf pxc5726.tgz
$ tar zxf xtrabackup.tgz

From here we are going to repeat several steps that we did previously.

Connect to IP_PXC2.

$ ssh IP_PXC2

Add the following directories to the PATH variable:

vim /etc/profile.d/percona.sh

export PATH=$PATH:/mnt/xtrabackup/bin:/mnt/pxc5726/bin

Save and exit and run the “export” command manually to set it in the active session:

export PATH=$PATH:/mnt/xtrabackup/bin:/mnt/pxc5726/bin

Create a new OS user to be used by the mysql process:

useradd --no-create-home mysql

Now, to continue, it is necessary to create the datadir directory. I recommend using the attached SSD disk and setting the mysql permissions on the directory.

mkdir /mnt/pxc5726/data/
chown mysql.mysql /mnt/pxc5726/data/

Configure my.cnf with the minimal params to start mysql.

vim /etc/my.cnf

[client]
socket=/mnt/pxc5726/data/mysql.sock

[mysqld]
datadir = /mnt/pxc5726/data
binlog-format = row
log_bin = /mnt/pxc5726/data/binlog
innodb_buffer_pool_size = 128M
socket=/mnt/pxc5726/data/mysql.sock
symbolic-links=0

wsrep_provider=/mnt/pxc5726/libgalera_smm.so

wsrep_on=ON
wsrep_cluster_name="pxc-cluster"
wsrep_cluster_address="gcomm://IP_PXC1,IP_PXC2"

wsrep_node_name="pxc2"
wsrep_node_address=IP_PXC2

wsrep_debug=1

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd

pxc_strict_mode=ENFORCING

[mysqld_safe]
log-error=/mnt/pxc5726/data/mysqld.log
pid-file=/mnt/pxc5726/data/mysqld.pid

Now we need to start the second db node. This time you don't need to add the "--wsrep-new-cluster" param (it is not needed for the second and subsequent nodes); just start mysql with the next command:

$ mysqld_safe --defaults-file=/etc/my.cnf --user=mysql &

Check the mysql error log for any errors:

$ tailf /mnt/pxc5726/data/mysqld.log

...
2019-09-08T23:53:17.415991Z 0 [Note] /mnt/pxc5726/bin/mysqld: ready for connections.
...

Check if the galera library was loaded successfully:

$ mysql

show global status where Variable_name='wsrep_provider_name' 
or Variable_name='wsrep_provider_vendor' 
or Variable_name='wsrep_local_state' 
or Variable_name='wsrep_local_state_comment' 
or Variable_name='wsrep_cluster_size' 
or Variable_name='wsrep_cluster_status' 
or Variable_name='wsrep_connected' 
or Variable_name='wsrep_ready' 
or Variable_name='wsrep_evs_state' 
or Variable_name like 'wsrep_flow_control%';

You will see something like this:

+----------------------------------+-----------------------------------+
| Variable_name                    | Value                             |
+----------------------------------+-----------------------------------+
| wsrep_flow_control_paused_ns     | 0                                 |
| wsrep_flow_control_paused        | 0.000000                          |
| wsrep_flow_control_sent          | 0                                 |
| wsrep_flow_control_recv          | 0                                 |
| wsrep_flow_control_interval      | [ 100, 100 ]                      |
| wsrep_flow_control_interval_low  | 100                               |
| wsrep_flow_control_interval_high | 100                               |
| wsrep_flow_control_status        | OFF                               |
| wsrep_local_state                | 4                                 |
| wsrep_local_state_comment        | Synced                            |
| wsrep_evs_state                  | OPERATIONAL                       |
| wsrep_cluster_size               | 2                                 | <---
| wsrep_cluster_status             | Primary                           |
| wsrep_connected                  | ON                                |
| wsrep_provider_name              | Galera                            |
| wsrep_provider_vendor            | Codership Oy <info@codership.com> |
| wsrep_ready                      | ON                                |
+----------------------------------+-----------------------------------+

Excellent, we have the second node up; now it's part of the cluster and the cluster size is 2.

Repeat the previous steps to configure IP_PXC3 and any additional nodes.

Summary

We are ready to start playing and doing a lot of tests. I recommend running sysbench to test how many transactions this environment supports, and killing some nodes to test SST and IST. I also recommend using pt-online-schema-change and checking the blog post "How to Perform Compatible Schema Changes in Percona XtraDB Cluster" to learn how this tool works in PXC.
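
As a starting point, a small sysbench OLTP run against the first node might look like this (the socket path comes from the my.cnf above; the user, table count, and duration are arbitrary choices):

$ mysql -e "CREATE DATABASE IF NOT EXISTS sbtest;"
$ sysbench oltp_read_write --db-driver=mysql --mysql-socket=/mnt/pxc5726/data/mysql.sock \
  --mysql-user=root --tables=4 --table-size=10000 prepare
$ sysbench oltp_read_write --db-driver=mysql --mysql-socket=/mnt/pxc5726/data/mysql.sock \
  --mysql-user=root --tables=4 --table-size=10000 --threads=4 --time=60 run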

I hope you enjoyed this guide, and if you want to compile other PXC 5.7.X versions, please go ahead, the steps will be very similar.

Sep 18, 2019

Percona XtraDB Cluster 5.7.27-31.39 Is Now Available

Percona is happy to announce the release of Percona XtraDB Cluster 5.7.27-31.39 on September 18, 2019. Binaries are available from the downloads section or from our software repositories.

Percona XtraDB Cluster 5.7.27-31.39 is now the current release, based on Percona Server for MySQL 5.7.27 and the Galera replication library.

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2432: PXC was not updating the information_schema user/client statistics properly.
  • PXC-2555: SST initialization delay: fixed a bug where the SST process took too long to detect if a child process was running.
  • PXC-2557: Fixed a crash when a node goes NON-PRIMARY and SHOW STATUS is executed.
  • PXC-2592: PXC restarting automatically on data inconsistency.
  • PXC-2605: PXC could crash when log_slow_verbosity included InnoDB.  Fixed upstream PS-5820.
  • PXC-2639: Fixed an issue where a SQL admin command (like OPTIMIZE) could cause a deadlock.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Sep 17, 2019

Percona XtraDB Cluster 5.6.45-28.36 Is Now Available

Percona is glad to announce the release of Percona XtraDB Cluster 5.6.45-28.36 on September 17, 2019. Binaries are available from the downloads section or from our software repositories.

Percona XtraDB Cluster 5.6.45-28.36 is now the current release, based on Percona Server for MySQL 5.6.45 and the Galera replication library.

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2432: PXC was not updating the information schema user/client statistics properly.
  • PXC-2555: SST initialization delay: fixed a bug where the SST process took too long to detect if a child process was running.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Aug 21, 2019

ProxySQL 2.0.6 and proxysql-admin tool Now Available

ProxySQL 2.0.6, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.6 which introduces many new features and enhancements and also fixes a number of bugs. The proxysql-admin tool has not changed since the previous release.

The ProxySQL 2.0.6 source and binary packages available from the Percona download page for ProxySQL include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.6 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL is available under Open Source license GPLv3.

Aug 12, 2019

ProxySQL 2.0.5 and proxysql-admin tool Now Available

ProxySQL 2.0.5, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.5 which fixes many bugs and introduces a number of features and enhancements. The proxysql-admin tool has been enhanced to support the following new options:

  • The --add-query-rule option creates query rules for synced MySQL users. This option is only applicable for the singlewrite mode and works together with the --syncusers and --sync-multi-cluster-users options.
  • The --force option skips existing configuration checks in the mysql_servers, mysql_users and mysql_galera_hostgroups tables. This option will only work together with the --enable option: proxysql-admin --enable --force.
  • The --update-mysql-version option updates the mysql-server_version variable in ProxySQL with the version from a node in Percona XtraDB Cluster.

The ProxySQL 2.0.5 source and binary packages available from the Percona download page for ProxySQL include proxysql-admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.5 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL 2.0.5 Improvements

  • PSQLADM-49: Create rules for --syncusers. When running with --syncusers or --sync-multi-cluster-users, the --add-query-rule option can now be specified to add the singlewriter query rules for the new users.
  • PSQLADM-51: Update mysql-server_version variable. The --update-mysql-version command has been added to set the mysql-server_version global variable in ProxySQL. This will take the version from a node in the cluster and set it in ProxySQL.

Bugs Fixed

  • PSQLADM-190: The --remove-all-servers option did not work on enable. When running with proxysql-cluster, the galera hostgroups information was not replicated which could result in failing to run --enable on a different ProxySQL node.  The --force option was added for --enable to be able to ignore any errors and always configure the cluster.
  • PSQLADM-199: query-rules removed during proxysql-cluster creation with PXC operator. When using the PXC operator for Kubernetes and creating a proxysql-cluster, the query rules could be removed.  The code was modified to merge the query rules (rather than deleting and recreating).  If the --force option was specified, then a warning was issued in case any existing rules were found; otherwise an error was issued. The --disable-updates option was added to ensure that ProxySQL cluster updates did not interfere with the current command.
  • PSQLADM-200: Users were not created for --syncusers with the PXC operator. When using the PXC operator for Kubernetes, the --syncusers command was run but the mysql_users table was not updated. The fix for PSQLADM-199, which suggested using --disable-updates, also applies here.

ProxySQL is available under Open Source license GPLv3.

Jul 16, 2019

Percona Kubernetes Operator for Percona XtraDB Cluster 1.1.0 Now Available

We are glad to announce the 1.1.0 release of the Percona Kubernetes Operator for Percona XtraDB Cluster.

The Percona Kubernetes Operator for Percona XtraDB Cluster automates the lifecycle of a Percona XtraDB Cluster and keeps instances consistent. The Operator can be used to create a new Percona XtraDB Cluster or to scale an existing one, and it contains the necessary Kubernetes settings.

The Operator simplifies the deployment and management of the Percona XtraDB Cluster in Kubernetes-based environments. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.

The Operator source code is available in our GitHub repository. All of Percona's software is open-source and free.

New features and improvements

  • Now the Percona Kubernetes Operator allows upgrading Percona XtraDB Cluster to newer versions, either in semi-automatic or in manual mode.
  • Also, two modes are implemented for updating the Percona XtraDB Cluster my.cnf configuration file: in automatic configuration update mode Percona XtraDB Cluster Pods are immediately re-created to populate changed options from the Operator YAML file, while in manual mode changes are held until Percona XtraDB Cluster Pods are re-created manually.
  • A separate service account is now used by the Operator's containers which need special privileges, and all other Pods run on the default service account with limited permissions.
  • User secrets are now generated automatically if they don't exist: this feature especially helps reduce work in repeated development environment testing and reduces the chance of accidentally pushing predefined development passwords to production environments.
  • The Operator is now able to generate TLS certificates itself, which removes the need for manual certificate generation.
  • The list of officially supported platforms now includes Minikube, which provides an easy way to test the Operator locally on your own machine before deploying it on a cloud.
  • Also, Google Kubernetes Engine 1.14 and OpenShift Platform 4.1 are now supported.

Percona XtraDB Cluster is an open source, cost-effective and robust clustering solution for businesses. It integrates Percona Server for MySQL with the Galera replication library to produce a highly-available and scalable MySQL® cluster complete with synchronous multi-master replication, zero data loss and automatic node provisioning using Percona XtraBackup.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.
