Aug 25, 2014
--

OpenStack’s Trove: The benefits of this database as a service (DBaaS)

In a previous post, my colleague Dimitri Vanoverbeke discussed at a high level the concepts of database as a service (DBaaS), OpenStack and OpenStack's implementation of a DBaaS, Trove. Today I'd like to delve a bit further into Trove and discuss where it fits in and who benefits.

Just to recap, Trove is OpenStack's implementation of a database as a service for its cloud infrastructure as a service (IaaS). And as its mission statement declares, the Trove project seeks to provide a scalable and reliable cloud database service supporting both relational and non-relational database engines. With the current Icehouse release, the technology has begun to show maturity, providing both stability and a rich feature set.

In my opinion, there are two primary markets that will benefit from Trove. The first is service providers such as Rackspace that offer cloud-based services similar to Amazon's AWS. These are companies that wish to expand beyond the basic cloud services of storage and networking and give their customer base a richer cloud experience through higher-level services such as DBaaS functionality. The other players are companies that wish to "cloudify" their own internal systems. The reasons for this decision are varied, ranging from the desire to maintain complete control over all the architecture and the cloud components to legal constraints limiting the use of public cloud infrastructures.

With Trove, much of the management of your database system is taken care of for you: it automates a significant portion of the configuration and initial setup steps required when launching a new server. This includes deployment, configuration, patching, backups, restores, and monitoring, all of which can be administered from a CLI, RESTful APIs, or OpenStack's Horizon dashboard. What Trove doesn't yet provide is failover, replication, and clustering. This functionality is slated for the Kilo release of OpenStack, due out in April 2015.

The process flow is relatively simple. The OpenStack administrator first configures the basic infrastructure by installing the database service. He or she then creates an image for each type of database they wish to support, such as MySQL or MongoDB, imports the images, and offers them to their tenants. From the end user's perspective, only a few commands are necessary to get up and running: first the trove create command to create a database service instance, followed by the trove list command to get the ID of the instance, and finally the trove show command to get its IP address.

For example, to create a database you first create a database instance. This is an isolated database environment with compute and storage resources, in a single-tenant environment, on a shared physical host machine. You can run a database instance with a variety of database engines such as MySQL or MongoDB.

From the Trove client I can issue the following command to create a database instance called PS_troveinstance, with a volume size of 2 GB, a user called PS_user with the password PS_password, and the MySQL datastore (or database engine):

$ trove create --size 2 --users PS_user:PS_password --datastore MySQL PS_troveinstance

Next I issue the following command to get the ID of the database instance:

$ trove list
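
As mentioned above, the trove show command then returns the instance's details, including its IP address. A quick sketch, where the instance ID is whatever trove list reported:

$ trove show <instance-ID>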

And finally, to create a database called PS_trovedb, I execute:

$ trove database-create PS_troveinstance PS_trovedb

Alternatively, I could have just combined the above commands as:

$ trove create --size 2 --databases PS_trovedb --users PS_user:PS_password --datastore MySQL PS_troveinstance

And thus we now have a MySQL database server containing a database called PS_trovedb.
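
From here, assuming a stock MySQL client is installed locally, connecting to the new database using the IP address reported by trove show is a one-liner (the host placeholder is ours):

$ mysql -h <instance-IP> -u PS_user -pPS_password PS_trovedb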

In our next post on OpenStack/Trove, we’ll dig even further and discuss the software and hardware requirements, and how to actually set up Trove.

On a related note, Percona has several experts attending this week's OpenStack Operations Summit in San Antonio, Texas. One of them is Matt Griffin, director of product management, who pointed out in a recent post that many OpenStack operators use Percona open source software, including the MySQL drop-in compatible Percona Server and Galera-based Percona XtraDB Cluster, as well as tools such as Percona XtraBackup and Percona Toolkit. "We see a need in the community to understand how to improve MySQL performance in OpenStack. As a result, Percona submitted 16 presentations for November's Paris OpenStack Summit," Matt said. So stay tuned for related news from him, too, on that front.


Aug 19, 2014
--

Measuring failover time for ScaleArc load balancer

ScaleArc hired Percona to benchmark failover times for the ScaleArc database traffic management software in different scenarios. We tested failover times for various clustered setups, where ScaleArc itself was the load balancer for the cluster. These tests complement other performance tests on the ScaleArc software – sysbench testing for latency and testing for WordPress acceleration.

We tested failover times for Percona XtraDB Cluster (PXC) and MHA (any traditional MySQL replication-based solution works pretty much the same way).

In each case, we tested failover with a rate-limited sysbench benchmark. In this mode, sysbench generates roughly 10 transactions each second, even if not all 10 completed in the previous second; in that case, the newly generated transactions are queued.

The sysbench command we used for testing is the following.

# while true ; do sysbench --test=sysbench/tests/db/oltp.lua
                           --mysql-host=172.31.10.231
                           --mysql-user=root
                           --mysql-password=admin
                           --mysql-db=sbtest
                           --oltp-table-size=10000
                           --oltp-tables-count=4
                           --num-threads=4
                           --max-time=0
                           --max-requests=0
                           --report-interval=1
                           --tx-rate=10
                           run ; sleep 1; done

The command is run in a loop because, typically at the time of failover, the application receives some kind of error while the virtual IP is moved or while the current node is declared dead. Well-behaved applications reconnect and retry in this case. Sysbench is not a well-behaved application from this perspective – after failover it has to be restarted.

This is good for testing the duration of errors during a failover procedure – but not the number of errors. In a simple failover scenario (ScaleArc used as a regular load balancer), the number of errors won't be any higher than usual thanks to ScaleArc's queueing mechanism.

ScaleArc+MHA

In this test, we used MHA as a replication manager. The test result would be similar regardless of how its asynchronous replication is managed – only the ScaleArc level checks would be different. In the case of MHA, we tested graceful and non-graceful failover. In the graceful case, we stopped the manager and performed a manual master switchover, after which we informed the ScaleArc software via an API call for failover.

We ran two tests:

  • A manual switchover with the manager stopped, switching over manually, and informing the load balancer about the change.
  • An automatic failover where the master was killed, and we let MHA and the ScaleArc software discover it.

ScaleArc+MHA manual switchover

The manual switchover was performed with the following command.

# time (
          masterha_master_switch --conf=/etc/cluster_5.cnf
          --master_state=alive
          --new_master_host=10.11.0.107
          --orig_master_is_new_slave
          --interactive=0
          &&
          curl -k -X PUT https://172.31.10.30/api/cluster/5/manual_failover
          -d '{"apikey": "0c8aa9384ddf2a5756299a9e7650742a87bbb550"}' )
{"success":true,"message":"4214   Failover status updated successfully.","timestamp":1404465709,"data":{"apikey":"0c8aa9384ddf2a5756299a9e7650742a87bbb550"}}
real    0m8.698s
user    0m0.302s
sys     0m0.220s

The curl command calls ScaleArc’s API to inform the ScaleArc software about the master switchover.

During this time, sysbench output was the following.

[  21s] threads: 4, tps: 13.00, reads/s: 139.00, writes/s: 36.00, response time: 304.07ms (95%)
[  21s] queue length: 0, concurrency: 1
[  22s] threads: 4, tps: 1.00, reads/s: 57.00, writes/s: 20.00, response time: 570.13ms (95%)
[  22s] queue length: 8, concurrency: 4
[  23s] threads: 4, tps: 19.00, reads/s: 237.99, writes/s: 68.00, response time: 976.61ms (95%)
[  23s] queue length: 0, concurrency: 2
[  24s] threads: 4, tps: 9.00, reads/s: 140.00, writes/s: 40.00, response time: 477.55ms (95%)
[  24s] queue length: 0, concurrency: 3
[  25s] threads: 4, tps: 10.00, reads/s: 105.01, writes/s: 28.00, response time: 586.23ms (95%)
[  25s] queue length: 0, concurrency: 1

Only a slight hiccup is visible at 22 seconds. In that second, only 1 transaction completed and 8 others were queued. These results show a sub-second failover time. The reason no errors were received is that ScaleArc itself queued the transactions during the failover process. If the transactions had been issued from an interactive client, the queuing would be visible as increased response time – for example, a START TRANSACTION or an INSERT command taking longer than usual – but no errors would result. This is as good as it gets for graceful failover: ScaleArc knows about the failover (and in the case of a switchover initiated by a DBA, notifying the ScaleArc software can be part of the failover process). The queueing mechanism is quite configurable. Administrators can set a timeout for the queue – we set it to 60 seconds, so if the failover doesn't complete in that timeframe, transactions start to fail.

ScaleArc+MHA non-graceful failover

In the case of a non-graceful failover, MHA and the ScaleArc software have to figure out on their own that the node died.

[  14s] threads: 4, tps: 11.00, reads/s: 154.00, writes/s: 44.00, response time: 1210.04ms (95%)
[  14s] queue length: 4, concurrency: 4
( sysbench restarted )
[   1s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   1s] queue length: 13, concurrency: 4
[   2s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   2s] queue length: 23, concurrency: 4
[   3s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   3s] queue length: 38, concurrency: 4
[   4s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   4s] queue length: 46, concurrency: 4
[   5s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   5s] queue length: 59, concurrency: 4
[   6s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   6s] queue length: 69, concurrency: 4
[   7s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   7s] queue length: 82, concurrency: 4
[   8s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   8s] queue length: 92, concurrency: 4
[   9s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[   9s] queue length: 99, concurrency: 4
[  10s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  10s] queue length: 108, concurrency: 4
[  11s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  11s] queue length: 116, concurrency: 4
[  12s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  12s] queue length: 126, concurrency: 4
[  13s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  13s] queue length: 134, concurrency: 4
[  14s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  14s] queue length: 144, concurrency: 4
[  15s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  15s] queue length: 153, concurrency: 4
[  16s] threads: 4, tps: 0.00, reads/s: 0.00, writes/s: 0.00, response time: 0.00ms (95%)
[  16s] queue length: 167, concurrency: 4
[  17s] threads: 4, tps: 5.00, reads/s: 123.00, writes/s: 32.00, response time: 16807.85ms (95%)
[  17s] queue length: 170, concurrency: 4
[  18s] threads: 4, tps: 18.00, reads/s: 255.00, writes/s: 76.00, response time: 16888.55ms (95%)
[  18s] queue length: 161, concurrency: 4

The failover time in this case was 18 seconds. We looked at the results and found that a check the ScaleArc software does (which involves opening an SSH connection to all nodes in MHA) takes 5 seconds, and ScaleArc declares the node dead after 3 consecutive failed checks (this parameter is configurable) – so roughly 15 of the 18 seconds are accounted for by detection alone. Hence the high failover time. A much lower one can be achieved with more frequent checks and a different check method – for example checking the read-only flag, or making MHA store its state in the databases.
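
As a sketch of such a lighter check (our example, not ScaleArc's built-in probe), a monitor could run a single statement against each node instead of opening an SSH connection:

mysql> SELECT @@global.read_only;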

ScaleArc+Percona XtraDB Cluster tests

Percona XtraDB Cluster is special when it comes to high availability testing because of the many ways one can use it. Many people write only to one node to avoid rollback on conflicts, but we have also seen lots of folks using all the nodes for writes.

Also, graceful failover can be interpreted differently.

  • Failover can be graceful from both Percona XtraDB Cluster’s and from the ScaleArc software’s perspective. First the traffic is switched to another node, or removed from the node in question, and then MySQL is stopped.
  • Failover can be graceful from Percona XtraDB Cluster’s perspective, but not from the ScaleArc software’s perspective. In this case the database is simply stopped with service mysql stop, and the load balancer figures out what happened. I will refer to this approach as semi-graceful from now on.
  • Failover can be completely non-graceful if a node is killed, where neither Percona XtraDB Cluster nor ScaleArc knows about its departure.

We did all the tests using one node at a time and using all three nodes. What makes these test sets even more complex is that when only one node at a time is used, some tests (the semi-graceful and the non-graceful ones) don't have the same result depending on whether the removed node is the used one or an unused one. This amounts to a lot of tests, so for the sake of brevity I omit the actual sysbench output here – it looks like either the graceful or the non-graceful MHA case – and only present the results in tabular format. In the case of active/active setups, to remove a node gracefully we first have to lower the maximum number of connections on that node to 0.

Failover type   | 1 node (active)        | 1 node (passive)      | all nodes
----------------|------------------------|-----------------------|------------------------
Graceful        | sub-second (no errors) | no effect at all      | sub-second (no errors)
Semi-graceful   | 4 seconds (errors)     | no effect at all      | 3 seconds
Non-graceful    | 4 seconds (errors)     | 6 seconds (no errors) | 7 seconds (errors)

In the previous table, active means that the failed node did receive sysbench transactions and passive means that it didn’t.

All the graceful cases are similar to MHA’s graceful case.

If only one node is used and a passive node is removed from the cluster by stopping the database gracefully with the

# service mysql stop

command, it doesn't have any effect on the cluster. For the other graceful failover cases, switching over on ScaleArc enables queuing, similar to MHA's case. In the semi-graceful case, if the passive node (which has no traffic) departs, it has no effect. Otherwise, the application will get errors (because of the unexpected mysql stop), and the failover time is around 3-4 seconds, both when only 1 node is active and when all 3 are. This makes sense, because ScaleArc was configured to do checks (using clustercheck) every second and declare a node dead after three consecutive failed checks. After the ScaleArc software determined that it should fail over, and did so, the case is practically the same as a passive node's removal from that point on (because ScaleArc removes traffic, in practice making that node passive).
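
For reference, clustercheck is a small script shipped with Percona XtraDB Cluster, typically exposed through xinetd so a load balancer can poll it over HTTP; a node answering 200 is considered healthy. Run by hand, it looks roughly like this (output abridged):

# /usr/bin/clustercheck
HTTP/1.1 200 OK
...
Percona XtraDB Cluster Node is synced.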

The non-graceful case is tied to the suspect timeout. Here, XtraDB Cluster doesn't know that the node has departed the cluster, and the originating nodes keep trying to send write sets to it. Those write sets will never be certified because the node is gone, so writes stall until the departed node is declared dead. The exception is the case when the active node failed: after ScaleArc figures out that the node died (three consecutive checks at one-second intervals), a new node is chosen, and because only the failed node was doing transactions, no write sets are waiting for certification in the remaining two nodes' queues, so there is no need to wait for the suspect timeout here.
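
The suspect timeout itself is the Galera evs.suspect_timeout provider option (PT5S by default). Assuming the option is dynamic in this build – it can always be set via wsrep_provider_options in my.cnf – shortening it would look like:

mysql> SET GLOBAL wsrep_provider_options="evs.suspect_timeout=PT2S";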

Conclusion

ScaleArc delivers reasonably good failover times, especially in the cases when it doesn't have to interrupt transactions. Its promise of zero-downtime maintenance is fulfilled both in the case of MHA and of XtraDB Cluster.


Aug 8, 2014
--

Release Candidate Packages for Red Hat Enterprise Linux 7 and CentOS 7 now available

The Percona team is pleased to announce the Release Candidate of Percona Software packages for Red Hat Enterprise Linux 7 and CentOS 7.

With more than 1 million downloads and thousands of production deployments at small, mid-size and large enterprises, Percona software is an industry-leading distribution that enhances the performance, scale, management and diagnosis of MySQL deployments.

These new packages bring the benefits of Percona software to developers and administrators who are using Red Hat Enterprise Linux 7 and CentOS 7.

The new packages are available from our testing repository. The packages included are:

  • Percona Server 5.5
  • Percona Server 5.6
  • Percona XtraDB Cluster 5.5
  • Percona XtraDB Cluster 5.6
  • Percona XtraBackup 2.2
  • Percona Toolkit 2.2

In addition, we have included systemd service scripts for Percona Server and Percona XtraDB Cluster.

Please try out the Percona software for Red Hat Enterprise Linux 7 and CentOS 7 by downloading and installing the packages from our testing repository as follows:

yum install http://repo.percona.com/testing/centos/7/os/noarch/percona-testing-0.0-1.noarch.rpm
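
Once the testing repository is set up, installing an individual product should be a single step. For example, for Percona Server 5.6 (package name per Percona's usual yum naming; assumed here):

yum install Percona-Server-server-56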

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

NOTE: These packages aren’t Generally Available (GA) and thus are not recommended for production deployments.


Aug 8, 2014
--

Percona Toolkit 2.2.10 is now available

Percona is glad to announce the release of Percona Toolkit 2.2.10 on August 8, 2014 (downloads are available here and from the Percona Software Repositories). This release is the current GA (Generally Available) stable release in the 2.2 series.

Bugs Fixed:

  • Fixed bug 1287253: pt-table-checksum would exit with an error if it encountered a deadlock while checksumming. This was fixed by retrying the command in case of a deadlock error.
  • Fixed bug 1311654: When used with Percona XtraDB Cluster, pt-table-checksum could show incorrect results if the --resume option was used. This was fixed by adding a new --replicate-check-retries command line parameter. If you are having resume problems, you can now set --replicate-check-retries N, where N is the number of times to retry a discrepant checksum (default 1, i.e. no retries). Setting a value of 3 is enough to completely eliminate spurious differences. See the example after this list.
  • Fixed bug 1299387: pt-query-digest didn't work correctly due to a changed logging format in which the field Thread_id was renamed to Id. Fixed by implementing support for the new format.
  • Fixed bug 1340728: In some cases where the index was of type "hash", pt-online-schema-change would refuse to run because MySQL reported it would not use an index for the SELECT. This check should have been skippable using the --nocheck-plan option, but it wasn't. The --nocheck-plan option now ignores the chosen index correctly.
  • Fixed bug 1253872: When running pt-table-checksum or pt-online-schema-change on an otherwise idle server, setting a 20% max load would fail because the tools rounded the value down. This has been fixed by rounding the value up.
  • Fixed bug 1340364: Due to an incompatibility between dash and bash syntax, some shell tools showed an error when queried for their version.
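
As a quick sketch of the new retry option (the hostname and credentials below are placeholders), a resumed checksum run that retries discrepant chunks three times would look like:

$ pt-table-checksum --resume --replicate-check-retries=3 h=pxc-node1,u=checksum_user,p=checksum_pass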

Percona Toolkit is free and open-source. Details of the release can be found in the release notes and the 2.2.10 milestone at Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.


Jul 31, 2014
--

Paris OpenStack Summit Voting – Percona Submits 16 MySQL Talks

MySQL plays a critical role in OpenStack. It serves as the host database supporting most components such as Nova, Glance, and Keystone, and is the most mature guest database in Trove. Many OpenStack operators use Percona open source software, including the MySQL drop-in compatible Percona Server and Galera-based Percona XtraDB Cluster, as well as tools such as Percona XtraBackup and Percona Toolkit. We see a need in the community to understand how to improve MySQL performance in OpenStack. As a result, Percona submitted 16 presentations for the Paris OpenStack Summit.

Paris OpenStack Summit presentations are chosen by OpenStack member voting. Please vote for our talks by clicking the titles below that interest you. You must be an OpenStack Foundation member to vote. If you aren't a member, sign up here – it's free and only takes a minute. The deadline to vote is Wednesday, August 6, 2014!

Paris OpenStack Summit MySQL Talks Submitted by Percona

OpenStack Operations

MySQL Database Operations in the OpenStack World
Speaker: Stéphane Combaudon

MySQL High Availability Options for OpenStack
Speaker: Stéphane Combaudon

Host and Guest Database Backup and Recovery for OpenStack Ops
Speakers: George Lorch, David Busby

Benchmarking the Different Cinder Storage Backends
Speaker: Peter Boros

MySQL and OpenStack Deep Dive
Speakers: Peter Boros, Jay Pipes (Mirantis)

Trove Performance Tuning for MySQL
Speaker: Alexander Rubin

Schema Management: Versioning and Automation with Puppet and MySQL Utilities
Speaker: Frederic Descamps

Deploying Databases for OpenStack
Speakers: Matt Griffin, Jay Pipes (Mirantis), Amrith Kumar (Tesora), Vinay Joosery (Severalnines)

Related Open Source Software Projects

Introduction to Percona XtraDB Cluster
Speaker: Kenny Gryp

Percona Server Features for OpenStack and Trove Ops
Speakers: George Lorch, Vipul Sabhaya (HP Cloud)

Products, Tools & Services

ClusterControl: Efficient and reliable MySQL Management, Monitoring, and Troubleshooting for OpenStack HA
Speakers: Peter Boros, Vinay Joosery (Severalnines)

Advanced MySQL Performance Monitoring for OpenStack Ops
Speaker: Daniel Nichter

Targeting Apps for OpenStack Clouds

Oars in the Cloud: Virtualization-aware Galera instances
Speaker: Raghavendra Prabhu

ACIDic Clusters: Review of contemporary ACID-compliant databases with synchronous replication
Speaker: Raghavendra Prabhu

Cloud Security

Security: It's more than just your database you should worry about
Speaker: David Busby

Planning Your OpenStack Project

Infrastructure at Scale
Speaker: Michael Coburn

The Paris OpenStack Summit will offer developers, operators, and service providers valuable insights into OpenStack. The Design Summit sessions will be filled with lively discussions driving OpenStack development, including sessions defining the future of Trove, the DBaaS (database as a service) component near and dear to Percona's heart. There will also be many valuable presentations in the main Paris OpenStack Summit conference about operating OpenStack, utilizing the latest features, complementary software and services, and real-world case studies.

Thank you for your support. We’re looking forward to seeing many Percona software users at the Paris OpenStack Summit in November.


Jul 31, 2014
--

Percona Server 5.1.73-14.12 is now available


Percona is glad to announce the release of Percona Server 5.1.73-14.12 on July 31st, 2014 (downloads are available here and from the Percona Software Repositories). Based on MySQL 5.1.73, including all the bug fixes in it, Percona Server 5.1.73-14.12 is now the current stable release in the 5.1 series. All of Percona's software is open-source and free; all the details of the release can be found in the 5.1.73-14.12 milestone at Launchpad.

NOTE: Packages for Debian 7.0 (wheezy), Ubuntu 13.10 (saucy) and Ubuntu 14.04 (trusty) are not available for this release due to a conflict with newer packages available in those distributions.

Bugs Fixed:

  • Percona Server couldn't be built with Bison 3.0. Bug fixed #1262439 (upstream #71250).
  • The Ignoring Query Cache Comments feature could cause a server crash. Bug fixed #705688.
  • The database administrator password could be seen in plain text when debconf-get-selections was executed. Bug fixed #1018291.
  • If the XtraDB variable innodb_dict_size was set, the server could attempt to remove a used index from the in-memory InnoDB data dictionary, resulting in a server crash. Bugs fixed #1250018 and #758788.
  • Ported a fix from MySQL 5.5 for upstream bug #71315 that could cause a server crash if a malformed GROUP_CONCAT function call was followed by another GROUP_CONCAT call. Bug fixed #1266980.
  • MTR tests from the binary tarball didn't work out of the box. Bug fixed #1158036.
  • InnoDB did not handle the cases of asynchronous and synchronous I/O requests completing partially or being interrupted. Bug fixed #1262500 (upstream #54430).
  • The Percona Server version was reported incorrectly in Debian/Ubuntu packages. Bug fixed #1319670.
  • Percona Server source files were referencing Maatkit instead of Percona Toolkit. Bug fixed #1174779.
  • The XtraDB version number in univ.i was incorrect. Bug fixed #1277383.

Other bug fixes: #1272732, #1167486, and #1314568.

Release notes for Percona Server 5.1.73-14.12 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.


Jul 25, 2014
--

Monitoring MySQL flow control in Percona XtraDB Cluster 5.6

Monitoring flow control in a Galera cluster is very important: if you do not, you will not understand why writes may sometimes be stalled. Percona XtraDB Cluster 5.6 provides two status variables for such monitoring: wsrep_flow_control_paused and wsrep_flow_control_paused_ns. Which one should you use?

What is flow control?

Flow control does not exist with regular MySQL replication; it exists only with Galera replication. It is simply the mechanism nodes use when they are not able to keep up with the write load: to keep replication synchronous, a node that is starting to lag instructs the other nodes to pause writes for some time so that it does not fall too far behind.

If you are not familiar with this notion, you should read this blog post.

Triggering flow control and graphing it

For this test, we’ll use a 3-node Percona XtraDB Cluster 5.6 cluster. On node 3, we will adjust gcs.fc_limit so that flow control is triggered very quickly and then we will lock the node:

pxc3> set global wsrep_provider_options="gcs.fc_limit=1";
pxc3> flush tables with read lock;

Now we will use sysbench to insert rows on node 1:

$ sysbench --test=oltp --oltp-table-size=50000 --mysql-user=root --mysql-socket=/tmp/pxc1.sock prepare

Because of flow control, writes will be stalled and sysbench will hang. So after some time, we will release the lock on node 3:

pxc3> unlock tables;

During the whole process, wsrep_flow_control_paused and wsrep_flow_control_paused_ns are recorded every second with mysqladmin ext -i1. We can then build a graph of the evolution of both variables:

(Graph: evolution of wsrep_flow_control_paused and wsrep_flow_control_paused_ns on node 3.)
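
For reference, the recording step boils down to something like the following (the grep pattern here is our own):

$ mysqladmin ext -i1 | grep wsrep_flow_control_paused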

While we can clearly see when flow control was triggered on both graphs, it is much easier to tell when flow control stopped by looking at wsrep_flow_control_paused_ns. The difference would be even more obvious if there had been several timeframes during which flow control was in effect.

Conclusion

Monitoring a server is obviously necessary if you want to be able to catch issues, but you need to look at the right metrics. So don't be scared if you see that wsrep_flow_control_paused is not 0: it simply means that flow control has been triggered at some point since the server started up. If you want to know what is happening right now, prefer wsrep_flow_control_paused_ns.


Jul 21, 2014
--

Percona XtraDB Cluster 5.6.19-25.6 is now available

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on July 21st, 2014. Binaries are available from the downloads area or from our software repositories. We're also happy to announce that Ubuntu 14.04 LTS users can now download, install, and upgrade Percona XtraDB Cluster 5.6 from Percona's software repositories.

Based on Percona Server 5.6.19-67.0 including all the bug fixes in it, Galera Replicator 3.6, and Codership wsrep API 25.6, Percona XtraDB Cluster 5.6.19-25.6 is now the current General Availability release. All of Percona's software is open-source and free, and all the details of the release can be found in the 5.6.19-25.6 milestone at Launchpad.

New Features:

  • Percona XtraDB Cluster now supports storing the Primary Component state to disk by setting the pc.recovery variable to true (see the configuration sketch after this list). The Primary Component can then recover automatically when all nodes that were part of the last saved state reestablish communications with each other. This feature can be used for automatic recovery from full cluster crashes, such as in the case of a data center power outage, and for graceful full cluster restarts without the need for explicitly bootstrapping a new Primary Component.
  • When joining the cluster, the state message exchange provides us with gcache seqno limits. That information is now used to choose a donor through IST first; only if that is not possible is SST attempted. The wsrep_sst_donor setting is still honored, and it is also segment aware.
  • An asynchronous replication slave thread was stopped when the node tried to apply the next replication event while the node was in a non-primary state, but it would then remain stopped after the node successfully re-joined the cluster. A new variable, wsrep_restart_slave, has been implemented which controls whether the MySQL slave should be restarted automatically when the node re-joins the cluster.
  • Handling of install messages and install state message processing has been improved to make group forming a more stable process in cases when many nodes are joining the cluster.
  • A new wsrep_evs_repl_latency status variable has been implemented which provides group communication replication latency information.
  • Node consistency issues with foreign key grammar have been fixed. This fix introduces two new variables: wsrep_slave_FK_checks and wsrep_slave_UK_checks, which default to TRUE and FALSE respectively. They control whether foreign key and unique key checking is done for applier threads.
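
As referenced in the first item above, a minimal my.cnf sketch for enabling Primary Component recovery might look like this (merge the option into whatever wsrep_provider_options line you already have):

[mysqld]
wsrep_provider_options="pc.recovery=true"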

Bugs Fixed:

  • Fixed a race condition in foreign key processing that could cause an assertion. Bug fixed #1342959.
  • The restart sequence in scripts/mysql.server would fail to capture and return an error if the start call failed to start the server. As a result, a restart could fail on start-up and the script would still return 0 as if it had worked without any issues. Bug fixed #1339894.
  • Updating a unique key value could cause the server to hang if a slave node had parallel slaves enabled. Bug fixed #1280896.
  • Percona XtraDB Cluster has implemented threadpool scheduling fixes. Bug fixed #1333348.
  • garbd was returning an incorrect return code, i.e. when garbd was already started, the return code was 0. Bug fixed #1308103.
  • rsync SST would silently fail on the joiner when the rsync server port was already taken. Bug fixed #1099783.
  • When gmcast.listen_addr was configured to a certain address, the local connection point for outgoing connections was not bound to the listen address. This would happen if the OS had multiple interfaces with IP addresses in the same subnet: the OS would pick the wrong IP for the local connection point, and other nodes would see connections originating from an IP address that was not listened on. Bug fixed #1240964.
  • An issue with re-setting the Galera provider (in wsrep_provider_options) has been fixed. Bug fixed #1260283.
  • The wsrep_provider_options variable couldn't be set at runtime if no provider was loaded. Bug fixed #1260290.
  • Percona XtraDB Cluster couldn't be built with Bison 3.0. Bug fixed #1262439.
  • MySQL wasn't correctly handling the wsrep error raised when the maximum writeset size was exceeded. Bug fixed #1270920.
  • Fixed an issue which caused a node to hang or fail when SELECTs or SHOW STATUS were run after FLUSH TABLES WITH READ LOCK was used on a node with wsrep_causal_reads set to 1 while there was DML on other nodes. Bug fixed #1271177.
  • The lowest group communication layer (evs) would fail to handle the situation properly when a large number of nodes suddenly started recognizing each other. Bugs fixed #1271918 and #1249805.
  • Percona XtraBackup SST would fail if the progress option was used with a large number of files. Bug fixed #1294431.

NOTE: When performing an upgrade from an older 5.6 version on Debian/Ubuntu systems, in order to upgrade the Galera package correctly you'll need to pin the Percona repository and run: apt-get install percona-xtradb-cluster-56. This is required because older Galera deb packages have an incorrect version number. The correct wsrep_provider_version after the upgrade is 3.6(r3a949e6).

This release contains 50 fixed bugs. The complete list of fixed bugs can be found in our release notes.

Release notes for Percona XtraDB Cluster 5.6.19-25.6 are available in our online documentation, along with the installation and upgrade instructions.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Percona XtraDB Cluster Errata can be found in our documentation.

[UPDATE 2014-07-24]: The package Percona-XtraDB-Cluster-client-56-5.6.19-25.6.824.el6.x86_64.rpm has been updated to resolve the conflict with the Percona-XtraDB-Cluster-devel package.


Jul 10, 2014
--

Percona Toolkit 2.2.9 is now available

Percona is glad to announce the release of Percona Toolkit 2.2.9 on July 10, 2014 (downloads are available here and from the Percona Software Repositories). This release is the current GA (Generally Available) stable release in the 2.2 series.

Bugs Fixed:

  • Fixed bug 1335960: pt-query-digest could not parse binlogs from MySQL 5.6 because the binlog format had changed.
  • Fixed bug 1315130: pt-online-schema-change did not find child tables as expected. It could incorrectly locate tables that reference a table with the same name in a different schema, and could miss tables referencing the altered table if they were in a different schema.
  • Fixed bug 1335322: pt-stalk would fail when a variable or threshold was a non-integer.
  • Fixed bug 1258135: pt-deadlock-logger was inserting older deadlocks into the deadlock table even if they were already there, thereby creating unnecessary noise. For example, if a deadlock happened 1 year ago and MySQL still kept it in memory, pt-deadlock-logger would INSERT it into the percona.deadlocks table every minute until the server was restarted. This was fixed by comparing against the last deadlock fingerprint before issuing the INSERT query.
  • Fixed bug 1329422: pt-online-schema-change with --alter-foreign-keys-method=none can break foreign key constraints in a way that is hard to recover from. Although this method of handling foreign key constraints is provided so that the database administrator can disable the tool's built-in functionality if desired, a warning and confirmation request have been added when --alter-foreign-keys-method is set to "none".

Percona Toolkit is free and open-source. Details of the release can be found in the release notes and the 2.2.9 milestone at Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.


Jul 9, 2014
--

TokuDB gotchas: slow INFORMATION_SCHEMA TABLES

We are using Percona Server with the TokuDB engine extensively in Percona Cloud Tools and are getting real operational experience with this engine. So I want to share some findings we came across, in the hope that they may help someone in their work with TokuDB.

One problem I faced is that SELECT * FROM INFORMATION_SCHEMA.TABLES is quite slow when I have thousands of tables in TokuDB. How slow? For example…

select * from information_schema.tables limit 1000;
...
1000 rows in set (18 min 31.93 sec)

This is very similar to what InnoDB faced a couple of years back. InnoDB solved it by adding the variable innodb_stats_on_metadata.

So what happens with TokuDB? There is an explanation from Rich Prohaska at Tokutek: “TokuDB has too much overhead for table opens. TokuDB does a calculation on the table when it is opened to see if it is empty. This calculation can be disabled when tokudb_empty_scan=disabled.”
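
Assuming the variable is dynamic in this build (it can also be set in my.cnf), disabling the empty-table scan looks like:

mysql> set global tokudb_empty_scan=disabled;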

So let's see what we get with tokudb_empty_scan=disabled:

select * from information_schema.tables limit 1000;
...
1000 rows in set (3 min 4.59 sec)

An impressive improvement, but still somewhat slow. Tokutek promises a fix to improve this further in the next TokuDB 7.2 release.

