Sep
12
2014
--

Percona XtraBackup 2.2.4 is now available

href="http://www.percona.com/blog/wp-content/uploads/2013/01/Percona_XtraBackup.jpg"> class="alignright size-full wp-image-12668" src="http://www.percona.com/blog/wp-content/uploads/2013/01/Percona_XtraBackup.jpg" alt="Percona XtraBackup for MySQL" width="229" height="87" /> Percona is glad to announce the release of Percona XtraBackup 2.2.4 on September 12th 2014. Downloads are available from our download site href="http://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.2.4/">here and href="http://www.percona.com/doc/percona-xtrabackup/2.2/installation.html">Percona Software Repositories.

href="http://www.percona.com/software/percona-xtrabackup">Percona XtraBackup enables backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backup.

New Features:

  • Percona XtraBackup has implemented support for Galera GTID auto-recovery. When backing up a server with backup locks support, Percona XtraBackup retrieves the GTID information from the InnoDB trx header during recovery and creates the xtrabackup_galera_info file at that stage.

Bugs Fixed:

  • Percona XtraBackup is now built with the system zlib library instead of the older bundled one. Bug fixed #1108016.
  • apt-get source was downloading an older version of Percona XtraBackup. Bug fixed #1363259.
  • innobackupex would ignore the --databases option unless the --stream option was also specified, and back up all databases. Bug fixed #569387.
  • The rsync package wasn't a dependency although it is required for the innobackupex --rsync option. Bug fixed #1259436.
  • innobackupex --galera-info was checking only for non-capitalized wsrep_* status variables, which was incompatible with MariaDB Galera Cluster 10.0. Bug fixed #1306875.
  • Percona XtraBackup would crash when trying to remove an absent table from the InnoDB data dictionary while preparing a partial backup. Bug fixed #1340717.
  • Percona XtraBackup now supports MariaDB GTID. Bugs fixed #1329539 and #1326967 (Nirbhay Choubey).
  • MariaDB 10.1 has been added to the list of supported servers. Bug fixed #1364398.
  • Percona XtraBackup would fail to restore (copy-back) tables that have partitions with their own tablespace location. Bug fixed #1322658.

Other bugs fixed: rel="nofollow" href="https://bugs.launchpad.net/percona-xtrabackup/+bug/1333570" rel="nofollow">#1333570, rel="nofollow" href="https://bugs.launchpad.net/percona-xtrabackup/+bug/1326224" rel="nofollow">#1326224, and rel="nofollow" href="https://bugs.launchpad.net/percona-xtrabackup/+bug/1181171" rel="nofollow">#1181171.

Release notes with all the bug fixes for Percona XtraBackup 2.2.4 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post rel="nofollow" href="http://www.percona.com/blog/2014/09/12/percona-xtrabackup-2-2-4-now-available/">Percona XtraBackup 2.2.4 is now available appeared first on rel="nofollow" href="http://www.percona.com/blog">MySQL Performance Blog.

Sep
08
2014
--

How to calculate the correct size of Percona XtraDB Cluster’s gcache

href="http://www.percona.com/blog/wp-content/uploads/2014/09/XtraDB-Cluster1.jpg"> class="alignright size-full wp-image-25599" src="http://www.percona.com/blog/wp-content/uploads/2014/09/XtraDB-Cluster1.jpg" alt="How to calculate the correct size of Percona XtraDB Cluster's gcache" width="237" height="82" />When a write query is sent to href="http://www.percona.com/software/percona-xtradb-cluster" >Percona XtraDB Cluster all the nodes store the writeset on a file called gcache. By default the name of that file is galera.cache and it is stored in the MySQL datadir. This is a very important file, and as usual with the most important variables in MySQL, the default value is not good for high-loaded servers. Let’s see why it’s important and how can we calculate a correct value for the workload of our cluster.

What's the gcache?

When a node goes out of the cluster (crash or maintenance) it obviously stops receiving changes. When you try to reconnect the node to the cluster, its data will be outdated. The joiner node then needs to ask a donor to send the changes that happened during the downtime.

The donor will first try an incremental state transfer (IST), that is, sending the writesets the cluster received while the node was down. The donor checks the last writeset received by the joiner and then checks its local gcache file. If all the needed writesets are in that cache, the donor sends them to the joiner. The joiner applies them and that's it: it is up to date and ready to join the cluster. Therefore, an IST is only possible if all the changes missed by the node that went away are still in the donor's gcache file.

On the other hand, if the writesets are not there, a full state snapshot transfer (SST) is needed, using one of the supported methods: XtraBackup, rsync, or mysqldump.

In summary, the difference between an IST and an SST is the time a node needs to rejoin the cluster. The difference could be from seconds to hours, and in the case of WAN connections and large datasets, maybe days.

That's why having a correctly sized gcache is important. It works as a circular log: when it is full, it starts to overwrite the writesets at the beginning. With a larger gcache a node can be out of the cluster for more time without requiring an SST. My colleague Jay Janssen explains in more detail how IST works and how to find the right server to use as a donor.
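
For reference, newer Galera versions (Galera 3 / Percona XtraDB Cluster 5.6) expose the lowest sequence number still held in the gcache through a status variable, which helps when checking whether a donor can still serve an IST (a quick sketch):

mysql> show global status like 'wsrep_local_cached_downto';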

Calculating the correct size

The trick is pretty similar to the one used to calculate the correct InnoDB log file size. We need to check how many bytes are written every minute. The variables to check are:

wsrep_replicated_bytes: Total size (in bytes) of writesets sent to other nodes.

wsrep_received_bytes: Total size (in bytes) of writesets received from other nodes.

mysql> show global status like 'wsrep_received_bytes';
mysql> show global status like 'wsrep_replicated_bytes';
mysql> select sleep(60);
mysql> show global status like 'wsrep_received_bytes';
mysql> show global status like 'wsrep_replicated_bytes';
+----------------------+----------+
| Variable_name        | Value    |
+----------------------+----------+
| wsrep_received_bytes | 83976571 |
+----------------------+----------+
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| wsrep_replicated_bytes | 0     |
+------------------------+-------+
[...]
+----------------------+----------+
| Variable_name        | Value    |
+----------------------+----------+
| wsrep_received_bytes | 90576957 |
+----------------------+----------+
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| wsrep_replicated_bytes | 800   |
+------------------------+-------+

Therefore:

Bytes per minute:

(second wsrep_received_bytes - first wsrep_received_bytes) + (second wsrep_replicated_bytes - first wsrep_replicated_bytes)

(90576957 - 83976571) + (800 - 0) = 6601186 bytes, or roughly 6 MB, per minute.

Bytes per hour:

6 MB * 60 minutes = 360 MB per hour of writesets received by the cluster.

If you want to allow one hour of maintenance (or downtime) of a node, you need to increase the gcache to that size. If you want more time, just make it bigger.
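
The gcache size is set through the Galera provider options. A minimal sketch of what that could look like in my.cnf, assuming the roughly 360 MB per hour measured above and a one-hour maintenance window (512M is simply that value rounded up for headroom; gcache.size is the actual provider option):

[mysqld]
# sizes galera.cache; the new value applies on node restart
wsrep_provider_options = "gcache.size=512M"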

The post rel="nofollow" href="http://www.percona.com/blog/2014/09/08/calculate-correct-size-percona-xtradb-clusters-gcache/">How to calculate the correct size of Percona XtraDB Cluster’s gcache appeared first on rel="nofollow" href="http://www.percona.com/blog">MySQL Performance Blog.

Sep
05
2014
--

Percona XtraDB Cluster 5.5.39-25.11 is now available

href="http://www.percona.com/blog/wp-content/uploads/2013/02/XtraDB-Cluster.jpg"> class="alignright size-full wp-image-12836" src="http://www.percona.com/blog/wp-content/uploads/2013/02/XtraDB-Cluster.jpg" alt="Percona XtraDB Cluster 5.5.39-25.11" width="237" height="82" />Percona is glad to announce the new release of href="http://www.percona.com/software/percona-xtradb-cluster">Percona XtraDB Cluster 5.5 on September 5th 2014. Binaries are available from href="http://www.percona.com/downloads/">downloads area or from our href="http://www.percona.com/doc/percona-xtradb-cluster/5.5/installation.html">software repositories.

Based on Percona Server 5.5.39-36.0, including all the bug fixes in it, Galera Replicator 2.11, and Codership wsrep API 25.11, Percona XtraDB Cluster 5.5.39-25.11 is now the current 5.5 General Availability release. All of Percona's software is open-source and free, and all the details of the release can be found in the 5.5.39-25.11 milestone at Launchpad.

New Features:

  • A new session variable, wsrep_sync_wait, has been implemented to control the causality check; a brief example follows this list. The old session variable wsrep_causal_reads is deprecated but is kept for backward compatibility (#1277053).
  • systemd integration with RHEL/CentOS 7 is now available for Percona XtraDB Cluster from our testing repository (#1342223).

Bugs Fixed:

  • Percona XtraDB Cluster has implemented threadpool scheduling fixes. Bug fixed #1333348.
  • When gmcast.listen_addr was configured to a certain address, the local connection point for outgoing connections was not bound to the listen address. If the OS had multiple interfaces with IP addresses in the same subnet, it could pick the wrong IP for the local connection point, and other nodes would see connections originating from an IP address that was not being listened on. Bug fixed #1240964.
  • Client connections were closed unconditionally before generating the SST request. Fixed by avoiding closing connections when wsrep is initialized before the storage engines. Bug fixed #1258658.
  • An issue with re-setting the galera provider (in wsrep_provider_options) has been fixed. Bug fixed #1260283.
  • The variable wsrep_provider_options couldn't be set at runtime if no provider was loaded. Bug fixed #1260290.
  • Node consistency issues with foreign keys have been fixed. This fix introduces two new variables: wsrep_slave_FK_checks and wsrep_slave_UK_checks, set to TRUE and FALSE respectively by default. They control whether foreign key and unique key checking is done for applier threads. Bug fixed #1260713.
  • When FLUSH TABLES WITH READ LOCK was used on a node with wsrep_causal_reads set to 1 while there was DML on other nodes, subsequent SELECTs/SHOW STATUS statements did not hang and provided non-causal output; that has been fixed. Bug fixed #1271177.
  • The lowest group communication layer (evs) would fail to handle the situation properly when a large number of nodes suddenly started to see each other. Bugs fixed #1271918 and #1249805.
  • Updating a unique key value could cause a server hang if a slave node had parallel slave threads enabled. Bug fixed #1280896.
  • Fixed events replication inconsistencies. Bug fixed #1312618.
  • Truncating the sorted version of a multi-byte character conversion could lead to wsrep certification failures. Bug fixed #1314854.
  • wsrep_slave_threads was counted towards max_connections, which could cause an ERROR 1040 (HY000): Too many connections error. Bug fixed #1315588.
  • A leaving node was not set non-operational if the processed leave message originated from a different view than the current one, which could cause other nodes to crash. Bug fixed #1323412 (#41).
  • garbd couldn't be started with the init script on RHEL 6.5. Bug fixed #1323652.
  • SST would fail when binlogs were in a dedicated directory located inside the datadir. This bug was a regression introduced by the bug fix for #1273368. Bug fixed #1326012.
  • The GTID of TOI operations is now also synced to the InnoDB tablespace in order to get consistent backups. Bug fixed #1329055.
  • mysql-debug (UNIV_DEBUG) is now distributed with the binary tar.gz along with the RPM and DEB packages. Bug fixed #1332073.
  • The restart sequence in scripts/mysql.server would fail to capture and return an error if the start call failed to start the server, so a restart could fail on start-up while the script still returned 0 as if everything worked. Bug fixed #1339894.
  • The wsrep consistency check is now enabled for REPLACE ... SELECT as well. This was implemented because pt-table-checksum uses REPLACE ... SELECT during checksumming. Bug fixed #1343209.
  • A memory leak in the wsrep_mysql_parse function has been fixed. Bug fixed #1345023.
  • SHOW STATUS was generating debug output in the error log. Bug fixed #1347818.
  • The percona-xtradb-cluster-garbd-3.x package was installed incorrectly on Debian/Ubuntu. Bugs fixed #1360633 and #1334530.

Release notes for Percona XtraDB Cluster 5.5.39-25.11 are available in our online documentation, along with the installation instructions.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

The post rel="nofollow" href="http://www.percona.com/blog/2014/09/05/percona-xtradb-cluster-5-5-39-25-11-now-available/">Percona XtraDB Cluster 5.5.39-25.11 is now available appeared first on rel="nofollow" href="http://www.percona.com/blog">MySQL Performance Blog.

Sep
01
2014
--

Percona XtraDB Cluster 5.6.20-25.7 is now available

href="http://www.percona.com/blog/wp-content/uploads/2013/02/XtraDB-Cluster.jpg"> class="alignright size-full wp-image-12836" src="http://www.percona.com/blog/wp-content/uploads/2013/02/XtraDB-Cluster.jpg" alt="Percona XtraDB Cluster 5.6.20-25.7" width="237" height="82" />Percona is glad to announce the new release of href="http://www.percona.com/software/percona-xtradb-cluster">Percona XtraDB Cluster 5.6 on September 1st 2014. Binaries are available from href="http://www.percona.com/downloads/">downloads area or from our href="http://www.percona.com/doc/percona-xtradb-cluster/5.6/installation.html">software repositories.

Based on Percona Server 5.6.20-68.0, including all the bug fixes in it, Galera Replicator 3.7, and Codership wsrep API 25.7, Percona XtraDB Cluster 5.6.20-25.7 is now the current General Availability release. All of Percona's software is open-source and free, and all the details of the release can be found in the 5.6.20-25.7 milestone at Launchpad.

New Features:

  • A new session variable, wsrep_sync_wait, has been implemented to control the causality check. The old session variable wsrep_causal_reads is deprecated but is kept for backward compatibility (#1277053).
  • systemd integration with RHEL/CentOS 7 is now available for Percona XtraDB Cluster from our testing repository (#1342223).

Bugs Fixed:

  • Running START TRANSACTION WITH CONSISTENT SNAPSHOT, mysqldump with --single-transaction, or mydumper with the binlog disabled would lead to a server crash. Bug fixed #1353644.
  • The percona-xtradb-cluster-garbd-3.x package was installed incorrectly on Debian/Ubuntu. Bug fixed #1360633.
  • Fixed the netcat invocation in the SST script for CentOS 7's nmap-ncat. Bug fixed #1359767.
  • TO isolation was run even when the wsrep plugin was not loaded. Bug fixed #1358681.
  • An error from a network read was not handled in native MySQL mode. This could cause a duplicate key error if there was an unfinished transaction at the time of shutdown, because it would be committed during startup recovery. Bug fixed #1358264.
  • The netcat in the garbd init script has been replaced with nmap for compatibility with CentOS 7. Bug fixed #1349384.
  • SHOW STATUS was generating debug output in the error log. Bug fixed #1347818.
  • An incorrect source string length could lead to a server crash. This fix allows a maximum of 3500 bytes of key material to be populated; longer keys will be truncated. Bug fixed #1347768.
  • The wsrep consistency check is now enabled for REPLACE ... SELECT as well. This was implemented because pt-table-checksum uses REPLACE ... SELECT during checksumming. Bug fixed #1343209.
  • Client connections were closed unconditionally before generating the SST request. Fixed by avoiding closing connections when wsrep is initialized before the storage engines. Bug fixed #1258658.
  • A session-level binlog_format change to STATEMENT is now allowed, to support pt-table-checksum; a minimal example follows this list. A warning (not to use it otherwise) is also written to the error log.
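
A minimal sketch of the now-permitted session-level change (this is what pt-table-checksum issues on its own connection; using it for anything else will trigger the warning mentioned above):

mysql> SET SESSION binlog_format = 'STATEMENT';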

Other bug fixes: rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1280270">#1280270.

Release notes for Percona XtraDB Cluster 5.6.20-25.7 are available in our online documentation, along with the installation and upgrade instructions.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

The post rel="nofollow" href="http://www.percona.com/blog/2014/09/01/percona-xtradb-cluster-5-6-20-25-7-now-available/">Percona XtraDB Cluster 5.6.20-25.7 is now available appeared first on rel="nofollow" href="http://www.percona.com/blog">MySQL Performance Blog.

Aug
29
2014
--

Percona Server 5.6.20-68.0 is now available

href="http://www.percona.com/blog/wp-content/uploads/2014/05/percona_server.jpeg"> class="alignright size-thumbnail wp-image-22759" src="http://www.percona.com/blog/wp-content/uploads/2014/05/percona_server-150x150.jpeg" alt="Percona Server" width="150" height="150" />Percona is glad to announce the release of href="http://www.percona.com/software/percona-server">Percona Server 5.6.20-68.0 on August 29, 2014. Download the latest version from the title="Percona Server 5.6" href="http://www.percona.com/downloads/Percona-Server-5.6/Percona-Server-5.6.20-68.0/" >Percona web site or from the Percona href="http://www.percona.com/doc/percona-server/5.6/installation.html#using-percona-software-repositories">Software Repositories.

Based on MySQL rel="nofollow" href="http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-20.html" rel="nofollow">5.6.20, including all the bug fixes in it, Percona Server 5.6.20-68.0 is the current GA release in the Percona Server 5.6 series. All of Percona’s software is open-source and free. Complete details of this release can be found in the rel="nofollow" href="https://launchpad.net/percona-server/+milestone/5.6.20-68.0" rel="nofollow">5.6.20-68.0 milestone on Launchpad.

New Features:

  • Percona Server has implemented the MySQL 5.7 SHOW SLAVE STATUS NONBLOCKING syntax for the Lock-Free SHOW SLAVE STATUS feature. The existing SHOW SLAVE STATUS NOLOCK is kept as a deprecated alias and will be removed in Percona Server 5.7. There were no functional changes to the feature.
  • The Percona Server Audit Log Plugin now supports JSON and CSV formats. The format choice is controlled by the audit_log_format variable; see the sketch after this list.
  • The Percona Server Audit Log Plugin now supports streaming the audit log to syslog.
  • The TokuDB storage engine package has been updated to version 7.1.8.
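
As a brief illustration, the log format could be selected at startup in my.cnf (a sketch; audit_log_format is the real plugin variable, and JSON is one of the newly supported values):

[mysqld]
# requires the audit_log plugin to be installed
audit_log_format = JSON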

Bugs Fixed:

  • Querying the INNODB_CHANGED_PAGES table with a range condition START_LSN > x AND END_LSN < y would lead to a server crash if the range was empty (x greater than y). Bug fixed #1202252 (Jan Lindström and Sergei Petrunia).
  • SQL statements of other connections were missing in the output of SHOW ENGINE INNODB STATUS, in the LATEST DETECTED DEADLOCK and TRANSACTIONS sections. This bug was introduced by the Statement Timeout patch in Percona Server 5.6.13-61.0. Bug fixed #1328824.
  • Some TokuDB distribution files were missing from the TokuDB binary tarball. Bug fixed #1338945.
  • With the XtraDB changed page tracking feature enabled, queries from the INNODB_CHANGED_PAGES table could read bitmap data whose write was still in progress. This would cause the query to fail with an ER_CANT_FIND_SYSTEM_REC error and a warning printed to the server error log. The workaround was to add an appropriate END_LSN-limiting condition to the query. Bug fixed #1193332.
  • mysqld-debug was missing from the Debian packages. This regression was introduced in Percona Server 5.6.16-64.0. Bug fixed #1290087.
  • Fixed a memory leak in Slow Query Log Rotation and Expiration. Bug fixed #1314138.
  • The audit log plugin would write logs with XML syntax errors when the OLD and NEW formats were used. Bug fixed #1320879.
  • The combination of Log Archiving for XtraDB, XtraDB changed page tracking, and small InnoDB logs could hang the server on the bootstrap shutdown. Bug fixed #1326379.
  • --tc-heuristic-recover option values were broken. Bug fixed #1334330 (upstream #70860).
  • If the bitmap directory had a bitmap file sequence where the start LSN of one file was less than the start LSN of the previous file, a debug build would assert when queries were run on the INNODB_CHANGED_PAGES table. Bug fixed #1342494.

Other bugs fixed: rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1337247">#1337247, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1350386">#1350386, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1208371">#1208371, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1261341">#1261341, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1151723">#1151723, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1182050">#1182050, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1182068">#1182068, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1182072">#1182072, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1184287">#1184287, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1280875">#1280875, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1338937">#1338937, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1334743">#1334743, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1349394">#1349394, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1182046">#1182046, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1182049">#1182049, and rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1328482">#1328482 (upstream rel="nofollow" rel="nofollow" href="http://bugs.mysql.com/bug.php?id=73418">#73418).

Release notes for Percona Server 5.6.20-68.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

The post rel="nofollow" href="http://www.percona.com/blog/2014/08/29/percona-server-5-6-20-68-0-now-available/">Percona Server 5.6.20-68.0 is now available appeared first on rel="nofollow" href="http://www.percona.com/blog">MySQL Performance Blog.

Aug
29
2014
--

Percona Server 5.5.39-36.0 is now available

href="http://www.percona.com/blog/wp-content/uploads/2014/05/percona_server.jpeg"> class="alignright size-thumbnail wp-image-22759" src="http://www.percona.com/blog/wp-content/uploads/2014/05/percona_server-150x150.jpeg" alt="Percona Server" width="150" height="150" />Percona is glad to announce the release of  href="http://www.percona.com/software/percona-server">Percona Server 5.5.39-36.0 on August 29, 2014 (Downloads are available  href="http://www.percona.com/downloads/">here and from the  href="http://www.percona.com/doc/percona-server/5.5/installation.html">Percona Software Repositories). Based on  rel="nofollow" href="http://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-39.html" rel="nofollow">MySQL 5.5.39, including all the bug fixes in it, Percona Server 5.5.39-36.0 is now the current stable release in the 5.5 series. All of Percona‘s software is open-source and free, all the details of the release can be found in the  rel="nofollow" href="https://launchpad.net/percona-server/+milestone/5.5.39-36.0" rel="nofollow">5.5.39-36.0 milestone at Launchpad.

New Features:

  • The Percona Server Audit Log Plugin now supports JSON and CSV formats. The format choice is controlled by the audit_log_format variable.
  • The Percona Server Audit Log Plugin now supports streaming the audit log to syslog.

Bugs Fixed:

  • Querying INNODB_CHANGED_PAGES with a range condition START_LSN > x AND END_LSN < y would lead to a server crash if the range was empty (x greater than y). Bug fixed #1202252 (Jan Lindström and Sergei Petrunia).
  • With the XtraDB changed page tracking feature enabled, queries from the INNODB_CHANGED_PAGES table could read bitmap data whose write was still in progress. This would cause the query to fail with an ER_CANT_FIND_SYSTEM_REC error and a warning printed to the server error log. The workaround was to add an appropriate END_LSN-limiting condition to the query. Bug fixed #1346122.
  • mysqld-debug was missing from the Debian packages. This regression was introduced in Percona Server 5.5.36-34.0. Bug fixed #1290087.
  • Fixed a memory leak in Slow Query Log Rotation and Expiration. Bug fixed #1314138.
  • The audit log plugin would write logs with XML syntax errors when the OLD and NEW formats were used. Bug fixed #1320879.
  • A server built with system OpenSSL support, such as the distributed Percona Server binaries, had SSL-related memory leaks. Bug fixed #1334743 (upstream #73126).
  • If the bitmap directory had a bitmap file sequence where the start LSN of one file was less than the start LSN of the previous file, a debug build would assert when queries were run on the INNODB_CHANGED_PAGES table. Bug fixed #1342494.

Other bugs fixed: rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1337324">#1337324, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1151723">#1151723, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1182050">#1182050, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1182072">#1182072, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1280875">#1280875, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1182046">#1182046, rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1328482">#1328482 (upstream rel="nofollow" rel="nofollow" href="http://bugs.mysql.com/bug.php?id=73418">#73418), and rel="nofollow" rel="nofollow" href="https://bugs.launchpad.net/percona-server/+bug/1334317">#1334317 (upstream rel="nofollow" rel="nofollow" href="http://bugs.mysql.com/bug.php?id=73111">#73111).

Release notes for Percona Server 5.5.39-36.0 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post rel="nofollow" href="http://www.percona.com/blog/2014/08/29/percona-server-5-5-39-36-0-now-available/">Percona Server 5.5.39-36.0 is now available appeared first on rel="nofollow" href="http://www.percona.com/blog">MySQL Performance Blog.

Aug
25
2014
--

OpenStack’s Trove: The benefits of this database as a service (DBaaS)

In a previous post, my colleague Dimitri Vanoverbeke discussed at a high level the concepts of database as a service (DBaaS), OpenStack, and OpenStack's implementation of a DBaaS, Trove. Today I'd like to delve a bit further into Trove and discuss where it fits in and who benefits.

Just to recap, Trove is OpenStack's implementation of a database as a service for its cloud infrastructure as a service (IaaS). As its mission statement declares, the Trove project seeks to provide a scalable and reliable cloud database service with functionality for both relational and non-relational database engines. With the current release, Icehouse, the technology has begun to show maturity, providing both stability and a rich feature set.

In my opinion, there are two primary markets that will benefit from Trove. The first is service providers, such as Rackspace, that offer cloud-based services similar to Amazon's AWS: companies that wish to expand beyond the basic cloud services of storage and networking and provide their customer base with a richer cloud experience through higher-level services such as DBaaS functionality. The other players are companies that wish to "cloudify" their own internal systems. The reasons for this decision are varied, ranging from the desire to maintain complete control over all the architecture and the cloud components to legal constraints limiting the use of public cloud infrastructures.

With Trove, much of the management of your database system is taken care of for you by automating a significant portion of the configuration and initial setup steps required when launching a new server. This includes deployment, configuration, patching, backups, restores, and monitoring, all of which can be administered from a CLI, a RESTful API, or OpenStack's Horizon dashboard. At this point, what Trove doesn't provide is failover, replication, and clustering. This functionality is slated to be implemented in the Kilo release of OpenStack, due out in April 2015.

The process flow is relatively simple. The OpenStack administrator first configures the basic infrastructure by installing the database service. He or she then creates an image for each type of database to be supported, such as MySQL or MongoDB, imports the images, and offers them to the tenants. From the end user's perspective, only a few commands are necessary to get up and running: first trove create to create a database service instance, then trove list to get the ID of the instance, and finally trove show to get its IP address.

For example, to create a database, you first start off by creating a database instance. This is an isolated database environment with its own compute and storage resources, running in a single-tenant environment on a shared physical host machine. You can run a database instance with a variety of database engines, such as MySQL or MongoDB.

From the Trove client, I can issue the following command to create a database instance called PS_troveinstance, with a volume size of 2 GB, a user called PS_user, a password PS_password, and the MySQL datastore (or database engine):

$ trove create --size 2 --users PS_user:PS_password --datastore MySQL PS_troveinstance

Next I issue the following command to get the ID of the database instance:

$ trove list PS_troveinstance

And finally, to create a database called PS_trovedb, I execute:

$ trove database-create PS_troveinstance PS_trovedb

Alternatively, I could have just combined the above commands as:

$ trove create --size 2 --databases PS_trovedb --users PS_user:PS_password --datastore MySQL PS_troveinstance

And thus we now have a MySQL database server containing a database called PS_trovedb.
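
To round out the command flow mentioned earlier, the instance's IP address could then be fetched with trove show (a sketch, reusing the instance name from the examples above):

$ trove show PS_troveinstance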

In our next post on OpenStack/Trove, we’ll dig even further and discuss the software and hardware requirements, and how to actually set up Trove.

On a related note, Percona has several experts attending this week's OpenStack Operations Summit in San Antonio, Texas. One of them is Matt Griffin, director of product management, who pointed out in a recent post that many OpenStack operators use Percona open source software, including the MySQL drop-in compatible Percona Server and the Galera-based Percona XtraDB Cluster, as well as tools such as Percona XtraBackup and Percona Toolkit. "We see a need in the community to understand how to improve MySQL performance in OpenStack. As a result, Percona submitted 16 presentations for November's Paris OpenStack Summit," Matt said. So stay tuned for related news from him, too, on that front.

The post rel="nofollow" href="http://www.mysqlperformanceblog.com/2014/08/25/openstacks-trove-the-benefits-of-this-database-as-a-service-dbaas/">OpenStack’s Trove: The benefits of this database as a service (DBaaS) appeared first on rel="nofollow" href="http://www.mysqlperformanceblog.com/">MySQL Performance Blog.

Aug
19
2014
--

Measuring failover time for ScaleArc load balancer

ScaleArc hired Percona to benchmark failover times for the ScaleArc database traffic management software in different scenarios. We tested failover times for various clustered setups where ScaleArc itself was the load balancer for the cluster. These tests complement other performance tests on the ScaleArc software: sysbench testing for latency and testing for WordPress acceleration.

We tested failover times for Percona XtraDB Cluster (PXC) and MHA (any traditional MySQL replication-based solution works pretty much the same way).

In each case, we tested failover with a rate-limited sysbench benchmark. In this mode, sysbench generates roughly 10 transactions each second, even if not all 10 were completed in the previous second; in that case, the newly generated transactions are queued.

The sysbench command we used for testing is the following:

# while true ; do sysbench --test=sysbench/tests/db/oltp.lua \
                           --mysql-host=172.31.10.231 \
                           --mysql-user=root \
                           --mysql-password=admin \
                           --mysql-db=sbtest \
                           --oltp-table-size=10000 \
                           --oltp-tables-count=4 \
                           --num-threads=4 \
                           --max-time=0 \
                           --max-requests=0 \
                           --report-interval=1 \
                           --tx-rate=10 \
                           run ; sleep 1; done

The command is run in a loop because, typically, at the time of failover the application receives some kind of error while the virtual IP is moved or while the current node is declared dead. Well-behaved applications reconnect and retry in this case. Sysbench is not a well-behaved application from this perspective: after a failover it has to be restarted.

This is good for testing the duration of errors during a failover procedure, but not the number of errors. In a simple failover scenario (ScaleArc is just used as a regular load balancer), the number of errors won't be any higher than usual thanks to ScaleArc's queueing mechanism.

ScaleArc+MHA

In this test, we used MHA as the replication manager. The test results would be similar regardless of how the asynchronous replication is managed; only the ScaleArc-level checks would be different. In the case of MHA, we tested graceful and non-graceful failover. In the graceful case, we stopped the manager and performed a manual master switchover, after which we informed the ScaleArc software of the failover via an API call.

We ran two tests:

  • A manual switchover with the manager stopped, switching over manually, and informing the load balancer about the change.
  • An automatic failover where the master was killed, and we let MHA and the ScaleArc software discover it.

ScaleArc+MHA manual switchover

The manual switchover was performed with the following command:

# time (
          masterha_master_switch --conf=/etc/cluster_5.cnf \
          --master_state=alive \
          --new_master_host=10.11.0.107 \
          --orig_master_is_new_slave \
          --interactive=0 \
          && \
          curl -k -X PUT https://172.31.10.30/api/cluster/5/manual_failover \
          -d '{"apikey": "0c8aa9384ddf2a5756299a9e7650742a87bbb550"}' )
{"success":true,"message":"4214   Failover status updated successfully.","timestamp":1404465709,"data":{"apikey":"0c8aa9384ddf2a5756299a9e7650742a87bbb550"}}
real    0m8.698s
user    0m0.302s
sys     0m0.220s

The curl command calls ScaleArc’s API to inform the ScaleArc software about the master switchover.

During this time, the sysbench output was the following:

[  21s] threads: 4, tps: 13.00, reads/s: 139.00, writes/s: 36.00, response time: 304.07ms (95%)
[  21s] queue length: 0, concurrency: 1
[  22s] threads: 4, tps: 1.00, reads/s: 57.00, writes/s: 20.00, response time: 570.13ms (95%)
[  22s] queue length: 8, concurrency: 4
[  23s] threads: 4, tps: 19.00, reads/s: 237.99, writes/s: 68.00, response time: 976.61ms (95%)
[  23s] queue length: 0, concurrency: 2
[  24s] threads: 4, tps: 9.00, reads/s: 140.00, writes/s: 40.00, response time: 477.55ms (95%)
[  24s] queue length: 0, concurrency: 3
[  25s] threads: 4, tps: 10.00, reads/s: 105.01, writes/s: 28.00, response time: 586.23ms (95%)
[  25s] queue length: 0, concurrency: 1

Only a slight hiccup is visible at 22 seconds. In that second, only 1 transaction completed and 8 others were queued. These results show a sub-second failover time. The reason no errors were received is that ScaleArc itself queued the transactions during the failover process. If the transactions in question were issued from an interactive client, the queueing itself would be visible as increased response time: for example, a START TRANSACTION or an INSERT command takes longer than usual, but no error results. This is as good as it gets for graceful failover: ScaleArc knows about the failover (and in the case of a switchover initiated by a DBA, notifying the ScaleArc software can be part of the failover process). The queueing mechanism is quite configurable. Administrators can set the timeout for the queue; we set it to 60 seconds, so if the failover doesn't complete in that timeframe, transactions start to fail.

ScaleArc+MHA non-graceful failover

In the case of the non-graceful failover, MHA and the ScaleArc software have to figure out on their own that the node died. The sysbench output during the test was the following:

[  14s] threads: 4, tps: 11.00, reads/s: 154.00, writes/s: 44.00, response time: 1210.04ms (95%)
[  14s] queue length: 4, concurrency: 4
( sysbench restarted )
[   1s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   1s] queue length: 13, concurrency: 4
[   2s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   2s] queue length: 23, concurrency: 4
[   3s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   3s] queue length: 38, concurrency: 4
[   4s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   4s] queue length: 46, concurrency: 4
[   5s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   5s] queue length: 59, concurrency: 4
[   6s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   6s] queue length: 69, concurrency: 4
[   7s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   7s] queue length: 82, concurrency: 4
[   8s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   8s] queue length: 92, concurrency: 4
[   9s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   9s] queue length: 99, concurrency: 4
[  10s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  10s] queue length: 108, concurrency: 4
[  11s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  11s] queue length: 116, concurrency: 4
[  12s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  12s] queue length: 126, concurrency: 4
[  13s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  13s] queue length: 134, concurrency: 4
[  14s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  14s] queue length: 144, concurrency: 4
[  15s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  15s] queue length: 153, concurrency: 4
[  16s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  16s] queue length: 167, concurrency: 4
[  17s] threads: 4, tps: 5.00, reads/s: 123.00, writes/s: 32.00, response time: 16807.85ms (95%)
[  17s] queue length: 170, concurrency: 4
[  18s] threads: 4, tps: 18.00, reads/s: 255.00, writes/s: 76.00, response time: 16888.55ms (95%)
[  18s] queue length: 161, concurrency: 4

The failover time in this case was 18 seconds. Looking at the results, we found that a check the ScaleArc software performs (which involves opening an SSH connection to all nodes in MHA) takes 5 seconds, and ScaleArc declares a node dead after 3 consecutive failed checks (this parameter is configurable). Hence the long failover time. A much shorter one can be achieved with more frequent checks and a lighter check method – for example checking the read_only flag, or making MHA store its state in the database.
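
As an illustration of such a lighter check – this is a sketch of the idea only, not ScaleArc's actual health-check implementation, and the host and credentials are placeholders – a read_only-based probe can be as simple as:

#!/bin/bash
# Hypothetical master probe: the node counts as a writable master only
# if it answers and read_only is 0. Host and credentials are placeholders.
RO=$(mysql -h 10.11.0.107 -u monitor -pmonitorpass -N -B \
     -e 'SELECT @@global.read_only' 2>/dev/null)
if [ "$RO" = "0" ]; then
    exit 0   # master alive and writable
else
    exit 1   # master unreachable or read-only
fi

Run every second with the same three-failures threshold, a probe like this could shrink the detection window from 15 seconds to about 3.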

ScaleArc+Percona XtraDB Cluster tests

Percona XtraDB Cluster is special when it comes to high availability testing because of the many ways one can use it. Many people write only to one node to avoid rollback on conflicts, but we have also seen lots of folks using all the nodes for writes.

Also, graceful failover can be interpreted in more than one way.

  • Failover can be graceful from both Percona XtraDB Cluster’s and from the ScaleArc software’s perspective. First the traffic is switched to another node, or removed from the node in question, and then MySQL is stopped.
  • Failover can be graceful from Percona XtraDB Cluster’s perspective, but not from the ScaleArc software’s perspective. In this case the database is simply stopped with service mysql stop, and the load balancer figures out what happened. I will refer to this approach as semi-graceful from now on.
  • Failover can be completely non-graceful: if a node is killed, neither Percona XtraDB Cluster nor ScaleArc knows about its departure.

We did all the tests both using one node at a time and using all three nodes. What makes these test sets even more complex is that when only one node is used at a time, some tests (the semi-graceful and the non-graceful one) give different results depending on whether the removed node is the used one or an unused one. This adds up to a lot of tests, so for the sake of brevity I omit the actual sysbench output here – it looks like either the graceful or the non-graceful MHA case – and only present the results in tabular format. In the case of active/active setups, to remove a node gracefully we first have to lower the maximum number of connections on that node to 0.

class="table-responsive">
style="width:100%; " class="easy-table easy-table-cuscosky " > Failover type 1 node (active) 1 node (passive) all nodes Graceful sub-second (no errors) no effect at all sub-second (no errors) Semi-graceful 4 seconds (errors) no effect at all 3 seconds Non-graceful 4 seconds (errors) 6 seconds (no errors) 7 seconds (errors)

In the previous table, active means that the failed node did receive sysbench transactions and passive means that it didn’t.

All the graceful cases are similar to MHA’s graceful case.

If only one node is used and a passive node is removed from the cluster by stopping the database gracefully with the

# service mysql stop

command, it has no effect on the cluster. For the other graceful failover cases, switching over in ScaleArc enables queueing, similar to the MHA case. In the semi-graceful case, if the passive node (which receives no traffic) departs, there is no effect. Otherwise, the application gets errors (because of the unexpected mysql stop), and the failover time is around 3-4 seconds, both when only 1 node is active and when all 3 are. This makes sense, because ScaleArc was configured to check the nodes (using clustercheck) every second and to declare a node dead after three consecutive failed checks. Once the ScaleArc software determines that it should fail over and does so, the situation is practically the same as removing a passive node (by redirecting traffic, ScaleArc in effect makes the failed node passive).
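
For reference, clustercheck ships with Percona XtraDB Cluster and reports node health as an HTTP response; it is conventionally wired to xinetd on port 9200. A sketch of probing it manually – the host is a placeholder, and the port and check user are the conventional defaults rather than values from this test:

# Run the check script directly on the node (assumes the default
# clustercheckuser grants are in place)...
/usr/bin/clustercheck
# ...or probe it over the network through xinetd
curl -s -i http://10.11.0.107:9200/
# A synced node answers HTTP 200 with the body
# "Percona XtraDB Cluster Node is synced."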

The non-graceful case is tied to the suspect timeout (Galera's evs.suspect_timeout setting). Here, Percona XtraDB Cluster doesn't know that the node left the cluster, and the originating nodes keep trying to send write sets to it. Those write sets will never be certified while the node is unaccounted for, so writes stall until the timeout expires. The exception is the case when the active node fails: after ScaleArc figures out that the node died (three consecutive failed checks at one-second intervals), a new node is chosen, and because only the failed node was executing transactions, no write sets are waiting for certification in the remaining two nodes' queues, so there is no need to wait for the suspect timeout.
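
The timeout in question lives in the Galera provider options (evs.suspect_timeout, PT5S by default). Assuming a working mysql client login, inspecting and lowering it could look like the sketch below; note that not every provider option is changeable at runtime, so verify that the new value took effect:

# Show the current Galera provider options
mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G"
# Lower the suspect timeout so a silently dead node is evicted sooner
mysql -e "SET GLOBAL wsrep_provider_options='evs.suspect_timeout=PT3S'"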

Conclusion

ScaleArc has reasonably good failover times, especially in the cases when it doesn't have to interrupt transactions. Its promise of zero-downtime maintenance is fulfilled in the case of both MHA and XtraDB Cluster.

The post Measuring failover time for ScaleArc load balancer (http://www.mysqlperformanceblog.com/2014/08/19/measuring-failover-time-for-scalearc-load-balancer/) appeared first on MySQL Performance Blog.

Aug
08
2014
--

Release Candidate Packages for Red Hat Enterprise Linux 7 and CentOS 7 now available

The Percona team is pleased to announce the Release Candidate for Percona Software packages for Red Hat Enterprise Linux 7 and CentOS 7.

With more than 1 million downloads and thousands of production deployments at small, mid-size and large enterprises, Percona software is an industry-leading distribution that enhances the performance, scale, management and diagnosis of MySQL deployments.

These new packages bring the benefits of Percona software to developers and administrators who are using Red Hat Enterprise Linux 7 and CentOS 7.

The new packages are available from our testing repository. The packages included are:

  • Percona Server 5.5
  • Percona Server 5.6
  • Percona XtraDB Cluster 5.5
  • Percona XtraDB Cluster 5.6
  • Percona XtraBackup 2.2
  • Percona Toolkit 2.2

In addition, we have included systemd service scripts for Percona Server and Percona XtraDB Cluster.
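
With the systemd scripts in place, service management goes through systemctl. A quick sketch, assuming the unit keeps the mysql name used by the traditional init scripts (verify the unit name on your installation):

systemctl start mysql     # start the server
systemctl status mysql    # check that it is running
systemctl enable mysql    # start automatically at boot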

Please try out the Percona software for Red Hat Enterprise Linux 7 and CentOS 7 by downloading and installing the packages from our testing repository as follows:

yum install http://repo.percona.com/testing/centos/7/os/noarch/percona-testing-0.0-1.noarch.rpm
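
Once the testing repository is set up, individual packages install the usual way. For example (the package name below follows Percona's yum naming and is worth double-checking against the repository contents):

yum install Percona-Server-server-56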

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

NOTE: These packages aren’t Generally Available (GA) and thus are not recommended for production deployments.

The post Release Candidate Packages for Red Hat Enterprise Linux 7 and CentOS 7 now available (http://www.mysqlperformanceblog.com/2014/08/08/release-candidate-packages-for-red-hat-enterprise-linux-7-and-centos-7-are-now-available/) appeared first on MySQL Performance Blog.
