Dec
30
2010
--

Percona Welcomes Sasha Pachev

Percona is pleased to officially (and belatedly) welcome Sasha Pachev to our team of consultants.

Before joining Percona, Sasha worked as an independent MySQL consultant. Sasha was the original implementer and maintainer of MySQL replication from 3.23 to 4.02. He is also the author of MySQL Enterprise Solutions and Understanding MySQL Internals.

Sasha, a big welcome – we are fortunate indeed you’re working with us!


Entry posted by Ryan Lowe |
No comment


Dec
29
2010
--

Percona Server 5.1.53-12.4

Percona Server version 5.1.53-12.4 is now available for download. It is now the current stable release version.

Functionality Added or Changed

  •  Percona Server 5.1.53-12.4 is based on MySQL 5.1.53.
  •  New Features Added:
    • Precompiled UDFs for Maatkit (FNV and MurmurHash hash functions to provide faster checksums) are now included in distributions. Fixes feature request #689992. (Aleksandr Kuzminsky)
  •  Other Changes:
    • innodb_doublewrite_file – It’s no longer necessary to recreate your database and InnoDB system files when a dedicated file to contain the doublewrite buffer is specified. (Yasufumi Kinoshita)
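
The Maatkit UDFs above speed up checksums because FNV-style hashes cost only an XOR and a multiply per byte. As a rough illustration of the idea (not the UDF code itself; the shipped fnv_64 and murmur_hash UDFs are C implementations and may use different constants or variants), here is a minimal FNV-1a 64-bit hash:

```python
# Minimal FNV-1a 64-bit hash, the general style of hash used by the
# Maatkit checksum UDFs. Illustration only.
FNV64_OFFSET = 0xcbf29ce484222325
FNV64_PRIME = 0x100000001b3
MASK64 = 0xffffffffffffffff

def fnv1a_64(data: bytes) -> int:
    h = FNV64_OFFSET
    for b in data:
        h ^= b                           # mix in one byte
        h = (h * FNV64_PRIME) & MASK64   # multiply by the FNV prime, mod 2^64
    return h

# Hashing a row's concatenated column values gives a cheap checksum:
print(hex(fnv1a_64(b"42,alice,2010-12-30")))
```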

Bugs Fixed

  • Bug #643149 – Slow query log entries were not written in the usual parsing format. (Alexey Kopytov)
  • Bug #671764 – innochecksum wasn’t distributed with RPM and .DEB packages. (Aleksandr Kuzminsky)
  • Bug #673426 – Use of these system variables as command-line options could cause a crash or undefined behavior: log_slow_timestamp_every, log_slow_sp_statements, slow_query_log_microseconds_timestamp, use_global_long_query_time. (Oleg Tsarev)
  • Bug #673567 – The compiler could produce spurious warnings when building on non-Linux platforms. A check is now made at the configure stage to see if clock_gettime() is present in librt. If it is, -lrt is added to LIBS. (Alexey Kopytov)
  • Bug #673929 – Query cache misses were being reported for some queries when hits were actually occurring. (Oleg Tsarev)
  • Bug #676146 – The development environment test of log_slow_verbosity=innodb on a slave for row-based replication was not working correctly. (Oleg Tsarev)
  • Bug #676147, Bug #676148 – The development environment tests of options log_slow_slave_statements and use_global_long_query_time work only on statement-based replication. They were failing when row-based replication was attempted. A check is now made for the replication type to test. (Oleg Tsarev)
  • Bug #676158 – Setting the query cache size to 512M caused test failure on low memory systems. (Aleksandr Kuzminsky)
  • Bug #677407 – The innodb.innodb_information_schema test could fail sporadically due to flawed logic in the INFORMATION_SCHEMA.INNODB_LOCKS caching mechanism. (contributed by Kristian Nielsen) (Alexey Kopytov)
  • Bug #681486 – A dependency between Debian packages libmysqlclient16 and percona-server-common was removed. (Aleksandr Kuzminsky)
  • Bug #693415 – The test percona_innodb_buffer_pool_shm was failing. It should be run with the --big-test option. As the buffer pool size used in the test is 128M, the shared memory segment should be increased appropriately in order to run the test successfully. (Aleksandr Kuzminsky)
  • Bug #693814, Bug #693815, Bug #693816, Bug #693817, Bug #693819 – Tests in the test environment were updated to reflect past INFORMATION_SCHEMA changes. (Aleksandr Kuzminsky)
  • Bug #693818 – Warning and error messages for stored routines could incorrectly report row numbers due to a change in the slow_extended patch. (Alexey Kopytov)
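
For the percona_innodb_buffer_pool_shm test mentioned above, the shared memory segment limit on Linux can be raised via sysctl. A sketch (the value is an example, chosen to exceed the 128M buffer pool the test allocates; the test invocation is illustrative):

```shell
# Allow a single shared memory segment of up to 256MB.
sysctl -w kernel.shmmax=$((256 * 1024 * 1024))

# Then run the test with the required option:
./mysql-test-run.pl --big-test percona_innodb_buffer_pool_shm
```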

The Release Notes for this and previous releases can be found in our Wiki.

Downloads are available here and from the Percona Software Repositories. The latest source code for Percona Server, including the development branch, can be found on Launchpad.

Please report any bugs found at Bugs in Percona Server.

For general questions, use our Percona Discussions Group, and for development questions our Percona Development Group.

For support, commercial, and sponsorship inquiries, contact Percona.


Entry posted by Fred Linhoss |
No comment


Dec
25
2010
--

Spreading .ibd files across multiple disks; the optimization that isn’t

Inspired by Baron’s earlier post, here is one I hear quite frequently –

“If you enable innodb_file_per_table, each table is its own .ibd file.  You can then relocate the heavy hit tables to a different location and create symlinks to the original location.”

There are a few things wrong with this advice:

  1. InnoDB does not support these symlinks.  If you run an ALTER TABLE command, what you will find is that a new temporary table is created (in the original location!), the symlink is destroyed, and the temporary table is renamed.  Your “optimization” is lost.
  2. Striping (with RAID) is usually a far better optimization.  Striping a table across multiple disks effectively balances the ‘heavy hit’ access across many more disks.  With one disk per table, you are more likely to end up unbalanced: one disk overloaded and many idle.
  3. You restrict your backup methods.  You can’t take an LVM snapshot across logical volumes.

Another common claim accompanying this recommendation is that it allows you to quickly add space when you are running out.  LVM actually allows you to add physical volumes and increase the size of logical volumes ;)   This is much easier than moving one large table around.
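
For the record, growing a logical volume is roughly the following sequence (a sketch: the device and volume names are placeholders, and it assumes an ext filesystem that supports online resize):

```shell
pvcreate /dev/sdc1                   # prepare the new disk as a physical volume
vgextend datavg /dev/sdc1            # add it to the existing volume group
lvextend -L +100G /dev/datavg/mysql  # grow the logical volume by 100GB
resize2fs /dev/datavg/mysql          # grow the ext filesystem to match
```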


Entry posted by Morgan Tocker |
22 comments


Dec
22
2010
--

How long is recovery from 8G innodb_log_file

In my previous posts I highlighted that one of the improvements in Percona Server is support for innodb_log_file_size > 4G. This test was done using Percona Server 5.5.7, but the same performance is expected for the InnoDB plugin and MySQL 5.5.

A valid question is how long recovery takes in this case, so let’s test it. I took the same tpcc-mysql 1000W workload with 52GB and 144GB innodb_buffer_pool_size, with data located on a Virident tachIOn card, and killed mysqld after 30 minutes of work.

The recovery time after start is:
for 52GB innodb_buffer_pool_size:

CODE:

  101220 21:54:31  InnoDB: Database was not shut down normally!
  ..
  101220 22:02:40  InnoDB: Rollback of non-prepared transactions completed

that is 8min 9sec

for 144GB innodb_buffer_pool_size:

CODE:

  101220 22:45:37  InnoDB: Database was not shut down normally!
  ..
  101220 22:55:00  InnoDB: Rollback of non-prepared transactions completed

that is 9min 23sec

and
for 144GB innodb_buffer_pool_size with data stored on RAID10:

CODE:

  101220 23:46:01  InnoDB: Database was not shut down normally!
  ..
  101221  0:00:58  InnoDB: Rollback of non-prepared transactions completed

that is 14min 57sec
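
The durations follow directly from the InnoDB log timestamps; recomputing them from the lines quoted above:

```python
from datetime import datetime

def recovery_seconds(start: str, end: str) -> int:
    """Seconds between two InnoDB log timestamps like '101220 21:54:31'."""
    fmt = "%y%m%d %H:%M:%S"
    return int((datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds())

for label, start, end in [
    ("52GB buffer pool",          "101220 21:54:31", "101220 22:02:40"),
    ("144GB buffer pool",         "101220 22:45:37", "101220 22:55:00"),
    ("144GB buffer pool, RAID10", "101220 23:46:01", "101221 0:00:58"),
]:
    s = recovery_seconds(start, end)
    print(f"{label}: {s // 60} min {s % 60} sec")
```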

I think this time is acceptable as a trade-off for the performance gain.

However, it should be taken into account if you use an HA solution like DRBD, since the recovery time effectively becomes your fail-over window, and your system will be down during this time.


Entry posted by Vadim |
11 comments


Dec
21
2010
--

MySQL 5.5.8 and Percona Server on Fast Flash card (Virident tachIOn)

This is to follow up on my previous post and show the results for MySQL 5.5.8 and Percona Server on the fastest hardware I have in our lab: a Cisco UCS C250 server with 384GB of RAM, powered by a Virident tachIOn 400GB SLC card.

To see different I/O patterns, I used different innodb_buffer_pool_size settings: 13G, 52G, and 144G, on a tpcc-mysql workload with 1000W (around 100GB of data). This combination of buffer pool sizes gives us different data/memory ratios (13G – an I/O-intensive workload; 52G – half of the data fits into the buffer pool; 144G – all of the data fits into memory). For the cases when the data fits into memory, it is especially important to have big transaction log files, as in these cases the main I/O pressure comes from checkpoint activity, and the smaller the log size, the more I/O per second InnoDB needs to perform.

So let me point out the optimizations I used for Percona Server:

  • innodb_log_file_size=4G (innodb_log_files_in_group=2)
  • innodb_flush_neighbor_pages=0
  • innodb_adaptive_checkpoint=keep_average
  • innodb_read_ahead=none

For MySQL 5.5.8, I used:

  • innodb_log_file_size=2000M (innodb_log_files_in_group=2), as the maximal available setting
  • innodb_buffer_pool_instances=8 (for the 13GB buffer pool); 16 (for the 52GB and 144GB buffer pools), as it seems this setting provides the best throughput in this configuration
  • innodb_io_capacity=20000; in contrast to the FusionIO case, this gives better results for MySQL 5.5.8.

For both servers I used:

  • innodb_flush_log_at_trx_commit=2
  • ibdata1 and innodb_log_files located on separate RAID10 partitions, InnoDB datafiles on the Virident tachIOn 400G card

The raw results, config, and script are in our Benchmarks Wiki.
Here are the graphs:

13G innodb_buffer_pool_size:

In this case, both servers show a straight line, and it seems having 8 innodb_buffer_pool_instances was helpful.

52G innodb_buffer_pool_size:

144G innodb_buffer_pool_size:

The final graph shows the difference between different settings of innodb_io_capacity for MySQL 5.5.8.

Small innodb_io_capacity values are really bad, while 20000 allows us to get a more stable line.

In summary, if we take the average NOTPM for the final 30 minutes of the runs (to avoid the warmup stage), we get the following results:

  • 13GB: MySQL 5.5.8 – 23,513 NOTPM, Percona Server – 30,436 NOTPM, advantage: 1.29x
  • 52GB: MySQL 5.5.8 – 71,774 NOTPM, Percona Server – 88,792 NOTPM, advantage: 1.23x
  • 144GB: MySQL 5.5.8 – 78,091 NOTPM, Percona Server – 109,631 NOTPM, advantage: 1.4x
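
The advantage figures are simply the ratio of the two NOTPM averages; recomputing them from the numbers above:

```python
# (MySQL 5.5.8 NOTPM, Percona Server NOTPM) per buffer pool size
results = {
    "13GB":  (23513, 30436),
    "52GB":  (71774, 88792),
    "144GB": (78091, 109631),
}
for pool, (mysql_notpm, percona_notpm) in results.items():
    print(f"{pool}: advantage {percona_notpm / mysql_notpm:.2f}x")
```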

This is actually the first case where I’ve seen NOTPM greater than 100,000 for a tpcc-mysql workload with 1000W.

The main factors that allow us to get a 1.4x improvement in Percona Server are:

  • Big log files. The total log size is 8GB: innodb_log_file_size=4G with innodb_log_files_in_group=2
  • Disabling flushing of neighborhood pages: innodb_flush_neighbor_pages=0
  • New adaptive checkpointing algorithm innodb_adaptive_checkpoint=keep_average
  • Disabled read-ahead logic: innodb_read_ahead=none
  • Buffer pool scalability fixes (different from innodb_buffer_pool_instances)

We recognize that hardware like the Cisco UCS C250 and the Virident tachIOn card may not be for the mass market yet, but it is a good choice if you are looking for high MySQL performance, and we tune Percona Server to get the most from such hardware. Actually, from my benchmarks, I see that the Virident card is not fully loaded, and we may benefit from running two separate instances of MySQL on a single card. That is a topic for another round.

(Edited by: Fred Linhoss)


Entry posted by Vadim |
10 comments


Dec
20
2010
--

MySQL 5.5.8 and Percona Server: being adaptive

As we can see, MySQL 5.5.8 comes with great improvements and scalability fixes. Adding up all the new features, you have a great release. However, there is one area I want to touch on in this post. At Percona, we consider it important not only to have the best peak performance, but also stable and predictable performance. I refer you to Peter’s post, Performance Optimization and Six Sigma.

In Percona Server (and actually even before that, in percona-patches builds for 5.0), we added adaptive checkpoint algorithms, and later the InnoDB plugin included an implementation of “adaptive flushing”. This post shows the differences between these approaches and stock MySQL.

The post also answers the question of whether we are going to have releases of Percona Server/XtraDB based on the MySQL 5.5 code line. The answer: Yes, we are. My benchmarks here are based on Percona Server 5.5.7. (You can get the source code from lp:~percona-dev/percona-server/5.5.7 , but it is very beta quality at the moment.)

For this post, I made tpcc-runs on our Dell PowerEdge R900 box, using RAID10 over 8 disks and a FusionIO 320GB MLC card.

First,  the results for tpcc-mysql, 500w (around 50GB of data) on RAID10. I used innodb_buffer_pool_size=24G, innodb_log_file_size=2000M (innodb_log_files_in_group=2), and innodb_flush_log_at_trx_commit=2. Also, innodb_adaptive_flushing (ON) / innodb_adaptive_checkpoint (estimate) were the default values.

The raw results, full config files, and scripts are in our Benchmarks Wiki.

The graphical result below shows the throughput on the server over 8 hours. (Yes, 8 hours, to show MySQL performance over a long time period. It is not a short, 5-minute exercise.)

Although it takes a decent amount of time for the Percona Server results to stabilize, MySQL 5.5.8 shows regular dips (3 times per hour) from 24,900 NOTPM to 17,700 NOTPM (drops of around 30%).

Next, the second run, on the FusionIO card. Here I should say that we were not able to get stable results with the existing adaptive_checkpoint or adaptive_flushing algorithms. So, Yasufumi invested a lot of research time and came up with the new innodb_adaptive_checkpoint=”keep_average” method. This method requires setting innodb_flush_neighbor_pages=0, to disable flushing of neighbor pages (not available in MySQL 5.5.8). The problem with flushing neighbor pages is that it makes an exact calculation of how many pages were handled impossible. The neighbor-page flushing feature was created as an optimization for hard drives, since InnoDB tries to combine writes of as many pages as possible into a single sequential write, which means that a single I/O may have a size of 32K, 64K, 96K, etc. That, again, makes predicting how many I/O operations will be needed impossible. Furthermore, this optimization is not needed for flash devices, like FusionIO or Virident cards.

An additional optimization we have for SSDs is big log files. For this run, I used innodb_log_file_size=4G (innodb_log_files_in_group=2) for Percona Server. That gives 8GB total log size (MySQL 5.5.8 has a 4GB limit). In addition to increasing the log size, we added the option innodb_log_block_size, which allows changing the I/O block size for log files. The default is 512 bytes; in the tests with FusionIO I used 4096 bytes, to align I/O with FusionIO’s internal block size.
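
The effect of the block size is easy to see: log writes are issued in whole multiples of innodb_log_block_size, so even a small commit record costs at least one block. A sketch of the rounding (illustrative, not InnoDB code):

```python
def aligned_write_size(nbytes: int, block_size: int) -> int:
    """Round a log write up to a whole number of I/O blocks."""
    return ((nbytes + block_size - 1) // block_size) * block_size

# A 700-byte log write spans two of the default 512-byte blocks, but fits
# in a single 4096-byte block, matching the card's internal page size.
print(aligned_write_size(700, 512))   # 1024
print(aligned_write_size(700, 4096))  # 4096
```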

You can see that MySQL 5.5.8 has periodic drops here, too. The margin between Percona Server and MySQL is about 2500-2800 NOTPM (~15% difference).

MySQL 5.5.8 now has features related to having several buffer pool instances that are supposed to fix the buffer pool scalability issue. Let’s see how MySQL performance changes for the last workload if we set innodb_buffer_pool_instances=8 or 16.

As you see, having several buffer pools makes the dips deeper and longer. It seems that for Percona Server the best choice is innodb_buffer_pool_instances=1, as we implemented buffer pool scalability in a different way.

UPDATE
By request from a commenter, I am also including results with different innodb_io_capacity settings for MySQL 5.5.8: 500 (which I used in the benchmarks above), 4000, and 20000.

As you can see, there is no improvement from a bigger innodb_io_capacity, and this concurs with my previous experience that with a bigger io_capacity you tend to get worse results.

For reference, here is the config file used for benchmarks on FusionIO:

CODE:

  [client]
  socket=/var/lib/mysql/mysql.sock
  [mysqld]
  core
  basedir=/usr/local/mysql
  user=root
  socket=/var/lib/mysql/mysql.sock
  skip-grant-tables
  server_id=1
  local_infile=1
  datadir=/mnt/fio320
  innodb_buffer_pool_size=24G
  innodb_data_file_path=ibdata1:10M:autoextend
  innodb_file_per_table=1
  innodb_flush_log_at_trx_commit=2
  innodb_log_buffer_size=8M
  innodb_log_files_in_group=2
  innodb_log_file_size=4G
  innodb_log_block_size=4096
  innodb_thread_concurrency=0
  innodb_flush_method = O_DIRECT
  innodb_read_ahead = none
  innodb_flush_neighbor_pages = 0
  innodb_write_io_threads=8
  innodb_read_io_threads=8
  innodb_io_capacity=500
  max_connections=3000
  query_cache_size=0
  skip-name-resolve
  table_cache=10000
  [mysql]
  socket=/tmp/mysql.sock

(post edited by Fred Linhoss)


Entry posted by Vadim |
23 comments


Dec
20
2010
--

Impact of the number of idle connections in MySQL (version 2)

Last Friday I published results of DBT2 performance while varying the number of idle connections here, but I had compiled MySQL with the debugging code enabled. That completely skewed my results, so be aware: debug options have a huge performance impact. I recompiled Percona Server 11.2 without the debug options and did another benchmark run. The result is shown below:

As you can see, the impact is more moderate and far less shocking. The performance loss is approximately 1% per 1,000 idle connections. Although that is something to keep in mind, these idle connections are not a big source of stress.
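
The measured loss can be summarized as a simple linear rule of thumb (a sketch of the result above, not a measured formula):

```python
def expected_notpm(base_notpm: float, idle_connections: int) -> float:
    """Linear model: ~1% throughput loss per 1,000 idle connections."""
    return base_notpm * (1.0 - 0.01 * idle_connections / 1000.0)

# With 1500 idle connections, expect roughly 1.5% below the idle-free baseline:
print(expected_notpm(25000, 1500))
```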


Entry posted by Yves Trudeau |
5 comments


Dec
17
2010
--

Impact of the number of idle connections in MySQL

Be careful with my findings: I appear to have compiled in debug mode, and I am redoing the benchmarks. An updated version is here.

I recently had to work with many customers having large numbers of connections open in MySQL, and although I told them this was not optimal, I had no solid arguments to present. More than once, I heard: “It is not a problem, those connections are Sleeping!” To remedy this situation, I decided to run a series of DBT2 benchmarks while varying the number of idle connections from 0 to 1500. I was expecting some influence, because those idle threads are present in internal MySQL list objects that need to be scanned, and because the socket select() call needs to build an FD bitmap after every network activity, with the size of the bitmap proportional to the number of active threads. What I found is not a small impact at all.
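
To see why the select() bitmap matters: an fd_set is a bitmap sized by the highest file descriptor in use, and every pass over the descriptors must build and scan it in full. A rough sketch of the sizes involved:

```python
def fdset_scan_bytes(highest_fd: int) -> int:
    """Bytes of fd_set bitmap covering descriptors 0..highest_fd (one bit each)."""
    return (highest_fd + 1 + 7) // 8

# With ~1500 idle connections each holding a socket, every pass over the
# descriptors touches ~190 bytes of bitmap instead of a handful:
print(fdset_scan_bytes(20))    # 3 bytes
print(fdset_scan_bytes(1500))  # 188 bytes
```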

For my daily work, I use 2 computers linked with a gigabit switch, so I decided to use them for benchmarking. On the desktop, I installed Percona Server 11.2, which I configured with the following InnoDB settings:

CODE:

  innodb_buffer_pool_size=1G
  innodb_log_file_size=64M
  innodb_log_buffer_size=8M
  innodb_flush_method=O_DIRECT
  innodb_flush_log_at_trx_commit=0
  max_connections = 10000
  open_files_limit = 32768

The desktop is running Ubuntu 10.04, has 8GB of RAM and a Core i5 CPU (dual core with HT enabled). Since my goal is to test concurrency, I decided to use only one warehouse for DBT2 which easily fits inside the buffer pool. Then, with innodb_flush_log_at_trx_commit=0, I ensured I was not benchmarking my disk.

On the laptop, I ran the following script:

CODE:

  #!/bin/bash
  for i in `seq 0 4`
  do
    numconn=$((i*20))
    echo "Doing $numconn idle connections"
    php ./make_conn.php $numconn &
    cat dbt2.sql | mysql -h 10.2.2.129 -u root dbt2
    sleep 120
    ./run_mysql.sh -h 10.2.2.129 -u root -c 4 -t 600 -w 1 -s 1 -m 100 -e yes
    killall php
    sleep 2
  done

  for i in `seq 1 15`
  do
    numconn=$((i*100))
    echo "Doing $numconn idle connections"
    php ./make_conn.php $numconn &
    cat dbt2.sql | mysql -h 10.2.2.129 -u root dbt2
    sleep 120
    ./run_mysql.sh -h 10.2.2.129 -u root -c 4 -t 600 -w 1 -s 1 -m 100 -e yes
    killall php
    sleep 2
  done

Here, 10.2.2.129 is the IP of the desktop. The script calls a PHP script, make_conn.php, to generate the idle connections:

CODE:

  <?php
  $connarray = array();
  // The empty-password argument and TRUE (new_link) force a separate
  // connection for every iteration instead of reusing the first one.
  for ($i = 0; $i < $argv[1]; $i++) {
      $connarray[$i] = mysql_connect('10.2.2.129', 'root', '', TRUE);
  }
  sleep(1000);
  ?>

Before each benchmark, the database is reinitialized to ensure consistent results. During all the benchmarks, the CPU load on the laptop never went above 10%. I also hacked the run_mysql.sh script a bit to allow more processing threads (100) instead of the maximum of 20. This is required for another series of benchmarks I’ll present soon, this time looking at the number of active connections. For these benchmarks, I used pools of 4 connections, basically to match the number of computing threads the Core i5 allows. I know from experience that this is about the maximum DBT2 NOTPM for a small number of threads.

So, here are the results:

As you can see, the performance drop is shocking. At 1500 idle connections, the performance is 1.3% of what it is with 0 idle connections. Even with as few as 20 idle connections, the drop is already 40%. I also verified that the laptop (running DBT2) was not slowed down by handling all those connections: I tried generating the idle connections from a third box, and the results were exactly the same. The conclusion is quite straightforward: idle connections hurt performance a lot!!! I am curious where MySQL is spending its time; I am planning to use profiling tools to identify the culprit, but I have not had time yet. I am afraid the same scalability issues affect active connections, although those are also affected by concurrency issues. With new servers having 24+ cores, this phenomenon seriously affects performance.


Entry posted by Yves Trudeau |
18 comments


Dec
16
2010
--

Percona Server 5.1.53-11.7

Percona Server version 5.1.53-11.7 is now available for download.

The main purpose of this release is to update the current Percona stable release to the latest version of MySQL 5.1.

Functionality Added or Changed

  •  Percona Server 5.1.53-11.7 is now based on MySQL 5.1.53.
  •  New Features Added: None
  •  Other Changes: None

Bugs Fixed

  • Bug #643149 – Slow query log entries were not written in the usual parsing format. (Alexey Kopytov)
  • Bug #677407 – The innodb.innodb_information_schema test could fail sporadically due to flawed logic in the INFORMATION_SCHEMA.INNODB_LOCKS caching mechanism. (Alexey Kopytov)

Release Notes for this and previous releases can be found in our Wiki.

Downloads are available here and from the Percona release repositories. The latest source code for Percona Server, including the development branch, can be found on Launchpad.

Please report any bugs found at Bugs in Percona Server.

For general questions, use our Percona Discussions Group, and for development questions our Percona Development Group.

For support, commercial, and sponsorship inquiries, contact Percona.


Entry posted by Fred Linhoss |
2 comments


Dec
14
2010
--

Percona Server now both SQL and NOSQL

Just yesterday we released Percona Server 5.1.52-12.3, which includes HandlerSocket. This is a third-party plugin, developed by Inada Naoki of DeNA Co., Ltd., and explained in Yoshinori Matsunobu’s blog post.

What is so special about it:

  • It provides NoSQL-like access to data stored in XtraDB, so you can access your data in both SQL and NoSQL ways at the same time. This is the first open source solution that allows that.
  • It has persistent storage (XtraDB is persistent)
  • It handles really high load. In my tests, using 2 dedicated web servers (running Perl clients), I reached 200,000 req/sec, and the clients were the real bottleneck, while Percona Server was only 5-7% busy. I did not have more clients in my lab to generate more load, but I have no doubt we could handle 1,000,000 req/sec with 5 separate web servers. The tests were done with Percona Server installed on a Cisco UCS C250 server with 12 cores/24 threads and 384GB of RAM.

How it can be used:

  • To provide a high request rate for simple requests: PK lookups, index lookups, or index ranges
  • To replace the caching layer (or remove it entirely). With the access rates described above, there is no need for memcached.
    This also solves the cache management and data invalidation problems
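
To give a flavor of the NoSQL access path, HandlerSocket speaks a simple tab-separated, line-based protocol (described in Yoshinori Matsunobu’s post). The sketch below only builds the request lines; the database, table, and column names are examples, and field details can vary between versions, so treat it as illustrative rather than a reference client:

```python
# Build HandlerSocket request lines (tab-separated, newline-terminated).

def open_index(index_id: int, db: str, table: str, index: str, columns: list) -> str:
    """'P' opens an index and binds it to a numeric id for later requests."""
    return "\t".join(["P", str(index_id), db, table, index, ",".join(columns)]) + "\n"

def find_eq(index_id: int, key: str, limit: int = 1, offset: int = 0) -> str:
    """Equality (PK/index) lookup on a previously opened index."""
    return "\t".join([str(index_id), "=", "1", key, str(limit), str(offset)]) + "\n"

print(repr(open_index(1, "test", "user", "PRIMARY", ["user_id", "name"])))
print(repr(find_eq(1, "42")))
```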

So give it a try, and send us your feedback.


Entry posted by Vadim |
18 comments

