Dec 21, 2010

MySQL 5.5.8 and Percona Server on Fast Flash card (Virident tachIOn)

This is to follow up on my previous post and show the results for MySQL 5.5.8 and Percona Server on the fastest hardware I have in our lab: a Cisco UCS C250 server with 384GB of RAM, powered by a Virident tachIOn 400GB SLC card.

To see different I/O patterns, I used different innodb_buffer_pool_size settings: 13G, 52G, and 144G on a tpcc-mysql workload with 1000W (around 100GB of data). This combination of buffer pool sizes gives us different data/memory ratios (for 13G, an I/O-intensive workload; for 52G, half of the data fits into the buffer pool; for 144G, the data fits entirely into memory). For the cases when the data fits into memory, it is especially important to have big transactional log files, as the main I/O pressure then comes from checkpoint activity, and the smaller the log size, the more I/O per second InnoDB needs to perform.

So let me point out the optimizations I used for Percona Server:

  • innodb_log_file_size=4G (innodb_log_files_in_group=2)
  • innodb_flush_neighbor_pages=0
  • innodb_adaptive_checkpoint=keep_average
  • innodb_read_ahead=none

For MySQL 5.5.8, I used:

  • innodb_log_file_size=2000M (innodb_log_files_in_group=2), the maximum available setting
  • innodb_buffer_pool_instances=8 (for the 13GB buffer pool); 16 (for the 52GB and 144GB buffer pools), as it seems this setting provides the best throughput in this configuration
  • innodb_io_capacity=20000; unlike in the FusionIO case, this gives better results for MySQL 5.5.8

For both servers I used:

  • innodb_flush_log_at_trx_commit=2
  • ibdata1 and innodb_log_files located on separate RAID10 partitions, InnoDB datafiles on the Virident tachIOn 400G card
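Put together, the Percona Server settings above correspond to a my.cnf fragment roughly like this (a sketch only; anything not listed above is a placeholder, and the exact configs are in our Benchmarks Wiki):

```ini
# Sketch of the Percona Server settings used in this benchmark.
# Values not listed in the post (paths, etc.) are placeholders.
[mysqld]
innodb_buffer_pool_size        = 13G    # also run with 52G and 144G
innodb_log_file_size           = 4G     # x2 files = 8GB of logs
innodb_log_files_in_group      = 2
innodb_flush_neighbor_pages    = 0
innodb_adaptive_checkpoint     = keep_average
innodb_read_ahead              = none
innodb_flush_log_at_trx_commit = 2
```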

The raw results, config, and script are in our Benchmarks Wiki.
Here are the graphs:

13G innodb_buffer_pool_size:

In this case, both servers show a straight line, and it seems having 8 innodb_buffer_pool_instances was helpful.

52G innodb_buffer_pool_size:

144G innodb_buffer_pool_size:

The final graph shows the difference between different settings of innodb_io_capacity for MySQL 5.5.8.

Small innodb_io_capacity values are really bad, while 20000 allows us to get a more stable line.
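Since innodb_io_capacity is a dynamic variable, different values can be tried on a running server without a restart; a quick sketch:

```sql
-- innodb_io_capacity is dynamic, so it can be changed at runtime:
SET GLOBAL innodb_io_capacity = 20000;
SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity';
```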

In summary, if we take the average NOTPM for the final 30 minutes of the runs (to avoid the warmup stage), we get the following results:

  • 13GB: MySQL 5.5.8 – 23,513 NOTPM, Percona Server – 30,436 NOTPM, advantage: 1.29x
  • 52GB: MySQL 5.5.8 – 71,774 NOTPM, Percona Server – 88,792 NOTPM, advantage: 1.23x
  • 144GB: MySQL 5.5.8 – 78,091 NOTPM, Percona Server – 109,631 NOTPM, advantage: 1.4x
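The advantage figures are simply the NOTPM ratios; here is a quick sanity check of the arithmetic on the numbers above (the post's rounding may differ in the last digit):

```python
# Recomputing the Percona Server / MySQL 5.5.8 advantage from the NOTPM
# figures quoted above (pure arithmetic; rounding in the post may differ
# slightly in the last digit).
notpm = {
    "13GB":  (23513, 30436),
    "52GB":  (71774, 88792),
    "144GB": (78091, 109631),
}
advantage = {size: percona / mysql for size, (mysql, percona) in notpm.items()}
for size, ratio in advantage.items():
    print(f"{size}: {ratio:.2f}x advantage")
```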

This is actually the first case where I’ve seen NOTPM greater than 100,000 for a tpcc-mysql workload with 1000W.

The main factors that allow us to get a 1.4x improvement in Percona Server are:

  • Big log files. With innodb_log_file_size=4G and innodb_log_files_in_group=2, the total log size is 8GB
  • Disabling flushing of neighborhood pages: innodb_flush_neighbor_pages=0
  • New adaptive checkpointing algorithm innodb_adaptive_checkpoint=keep_average
  • Disabled read-ahead logic: innodb_read_ahead=none
  • Buffer pool scalability fixes (different from innodb_buffer_pool_instances)

We recognize that hardware like the Cisco UCS C250 and the Virident tachIOn card may not be for the mass market yet, but
it is a good choice if you are looking for high MySQL performance, and we tune Percona Server to get the most from such hardware. Actually, from my benchmarks, I see that the Virident card is not fully loaded, and we might benefit from running two separate instances of MySQL on a single card. But this is a topic for another round.

(Edited by: Fred Linhoss)


Entry posted by Vadim |
10 comments


Dec 20, 2010

MySQL 5.5.8 and Percona Server: being adaptive

As we can see, MySQL 5.5.8 comes with great improvements and scalability fixes. Adding up all the new features, you have a great release. However, there is one area I want to touch on in this post. At Percona, we consider it important not only to have the best peak performance, but also stable and predictable performance. I refer you to Peter’s post, Performance Optimization and Six Sigma.

In Percona Server (and actually even before that, in the percona-patches builds for 5.0), we added adaptive checkpoint algorithms, and later the InnoDB plugin included an implementation of “adaptive flushing”. This post shows the differences between them and MySQL.

The post also answers the question of whether we are going to have releases of Percona Server/XtraDB based on the MySQL 5.5 code line. The answer: Yes, we are. My benchmarks here are based on Percona Server 5.5.7. (You can get the source code from lp:~percona-dev/percona-server/5.5.7 , but it is very beta quality at the moment.)

For this post, I ran tpcc-mysql benchmarks on our Dell PowerEdge R900 box, using RAID10 over 8 disks and a FusionIO 320GB MLC card.

First,  the results for tpcc-mysql, 500w (around 50GB of data) on RAID10. I used innodb_buffer_pool_size=24G, innodb_log_file_size=2000M (innodb_log_files_in_group=2), and innodb_flush_log_at_trx_commit=2. Also, innodb_adaptive_flushing (ON) / innodb_adaptive_checkpoint (estimate) were the default values.

The raw results, full config files, and scripts are in our Benchmarks Wiki.

The graphical result below shows the throughput on the server over 8 hours. (Yes, 8 hours, to show MySQL performance over a long time period. It is not a short, 5-minute exercise.)

Although it takes the Percona Server results a decent time to stabilize, MySQL 5.5.8 shows regular dips (3 times per hour) from 24,900 NOTPM to 17,700 NOTPM (dips of around 30%).

Next, the second run, on the FusionIO card. Here I should say that we were not able to get stable results with the existing adaptive_checkpoint or adaptive_flushing algorithms. So, Yasufumi invested a lot of research time and came up with the new innodb_adaptive_checkpoint=keep_average method. This method requires setting innodb_flush_neighbor_pages=0, to disable flushing of neighbor pages (not available in MySQL 5.5.8). The problem with flushing neighbor pages is that it makes an exact calculation of how many pages were flushed impossible. The neighbor-flushing feature was created as an optimization for hard drives: InnoDB tries to combine writes of as many pages as possible into a single sequential write, which means a single I/O may have a size of 32K, 64K, 96K, etc. That, again, makes predicting the number of I/O operations impossible. Furthermore, this optimization is not needed for flash devices like FusionIO or Virident cards.

An additional optimization we have for SSDs is big log files. For this run, I used innodb_log_file_size=4G (innodb_log_files_in_group=2) for Percona Server. That gives 8GB total size for log files (MySQL 5.5.8 has a 4GB limit). In addition to increasing the log size, we added the innodb_log_block_size option, which allows changing the I/O block size for log files. The default is 512 bytes; in the tests with FusionIO I used 4096 bytes, to align I/O with FusionIO's internal block size.

You can see that MySQL 5.5.8 has periodic drops here, too. The margin between Percona Server and MySQL is about 2500-2800 NOTPM (~15% difference).

MySQL 5.5.8 now has features related to having several buffer pool instances that are supposed to fix the buffer pool scalability issue. Let’s see how MySQL performance changes for the last workload if we set innodb_buffer_pool_instances=8 or 16.

As you see, having several buffer pools makes the dips deeper and longer. It seems that for Percona Server the best choice is innodb_buffer_pool_instances=1, as we implemented buffer pool scalability in a different way.

UPDATE: By request from a commenter, I have also included results with different innodb_io_capacity settings for MySQL 5.5.8: 500 (which I used in the benchmarks above), 4000, and 20000.

As you can see, there is no improvement from a bigger innodb_io_capacity, which agrees with my previous experience that with a bigger io_capacity you tend to get worse results.

For reference, here is the config file used for benchmarks on FusionIO:

CODE:

  [client]
  socket=/var/lib/mysql/mysql.sock
  [mysqld]
  core
  basedir=/usr/local/mysql
  user=root
  socket=/var/lib/mysql/mysql.sock
  skip-grant-tables
  server_id=1
  local_infile=1
  datadir=/mnt/fio320
  innodb_buffer_pool_size=24G
  innodb_data_file_path=ibdata1:10M:autoextend
  innodb_file_per_table=1
  innodb_flush_log_at_trx_commit=2
  innodb_log_buffer_size=8M
  innodb_log_files_in_group=2
  innodb_log_file_size=4G
  innodb_log_block_size=4096
  innodb_thread_concurrency=0
  innodb_flush_method = O_DIRECT
  innodb_read_ahead = none
  innodb_flush_neighbor_pages = 0
  innodb_write_io_threads=8
  innodb_read_io_threads=8
  innodb_io_capacity=500
  max_connections=3000
  query_cache_size=0
  skip-name-resolve
  table_cache=10000
  [mysql]
  socket=/tmp/mysql.sock

(post edited by Fred Linhoss)


Entry posted by Vadim |
23 comments


Dec 16, 2010

Percona Server 5.1.53-11.7

Percona Server version 5.1.53-11.7 is now available for download.

The main purpose of this release is to update the current Percona stable release to the latest version of MySQL 5.1.

Functionality Added or Changed

  •  Percona Server 5.1.53-11.7 is now based on MySQL 5.1.53.
  •  New Features Added: None
  •  Other Changes: None

Bugs Fixed

  • Bug #643149 – Slow query log entries were not being written in the usual format for parsing. (Alexey Kopytov)
  • Bug #677407 – The innodb.innodb_information_schema test could fail sporadically due to flawed logic in the INFORMATION_SCHEMA.INNODB_LOCKS caching mechanism. (Alexey Kopytov)

Release Notes for this and previous releases can be found in our Wiki.

Downloads are available here and from the Percona release repositories. The latest source code for Percona Server, including the development branch, can be found on Launchpad.

Please report any bugs found at Bugs in Percona Server.

For general questions, use our Percona Discussions Group, and for development questions our Percona Development Group.

For support, commercial, and sponsorship inquiries, contact Percona.


Entry posted by Fred Linhoss |
2 comments


Dec 14, 2010

Percona Server now both SQL and NOSQL

Just yesterday we released Percona Server 5.1.52-12.3, which includes HandlerSocket. This is a third-party plugin, developed by Inada Naoki of DeNA Co., Ltd., and explained in Yoshinori Matsunobu’s blog post.

What is so special about it:

  • It provides NoSQL-style access to data stored in XtraDB, so you can access your data both via SQL and via the NoSQL interface at the same time. This is the first open-source solution that allows that.
  • It has persistent storage (XtraDB is persistent)
  • It handles really high load. In my tests, using 2 dedicated web servers (running Perl clients), I reached 200,000 req/sec, and the clients were the real bottleneck, while Percona Server was only 5-7% busy. I did not have more clients in my lab to add more load, but I have no doubt it could handle 1,000,000 req/sec with 5 separate web applications. The tests were done with Percona Server installed on a Cisco UCS C250 server with 12 cores / 24 threads and 380GB of RAM.

How it can be used:

  • To provide a high request rate for simple operations: PK lookups, index lookups, or index range scans
  • To replace the caching layer (or remove it entirely). With the access rates shown above, there is no need for memcached.
    This also solves the cache management and data invalidation problems.
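For reference, enabling the plugin follows the upstream HandlerSocket instructions, roughly like this (a sketch; the ports and thread count are the commonly documented defaults, not necessarily what was used in these tests):

```ini
# my.cnf fragment for HandlerSocket (sketch; values follow the plugin's
# documentation, not necessarily this benchmark's configuration)
[mysqld]
loose_handlersocket_port    = 9998   # port for read requests
loose_handlersocket_port_wr = 9999   # port for write requests
loose_handlersocket_threads = 16     # number of reader worker threads
```

The plugin itself is loaded once with `INSTALL PLUGIN handlersocket SONAME 'handlersocket.so';`.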

So give it a try and provide us feedback.


Entry posted by Vadim |
18 comments


Dec 13, 2010

Percona Training in London and Frankfurt

For those that missed it – last week we added training in Frankfurt and London to our website.  This includes our new Operations Training announced in September.

(While in Europe, we’d love to speak at your meetup group on Percona Server and/or MySQL Optimization.  If you’re a meetup organizer in London or Paris, please feel free to get in touch.)


Entry posted by Morgan Tocker |
No comment


Dec 13, 2010

Percona Server 5.1.52-12.3

Percona Server version 5.1.52-12.3 is now available for download.

The main purpose of this release is to update the current Percona release candidate to the latest version of MySQL 5.1. The release also includes the HandlerSocket plugin, which provides NoSQL features in Percona Server.

Functionality Added or Changed

  • Percona Server 5.1.52-12.3 is now based on MySQL 5.1.52.
  • New Features Added:
    • HandlerSocket plugin support has been added to all RPM and Debian packages. This is an experimental feature. (Inada Naoki, DeNA Co., Ltd.)
    • show_slave_status_nolock – Allows examination of slave status even when a lock prevents SHOW SLAVE STATUS from operating. (Oleg Tsarev)
    •  innodb_fast_shutdown – Sleeping threads could cause a delay of up to 10 seconds during InnoDB shutdown. The sleeping state of threads was made interruptible to avoid this. (contributed by Kristian Nielsen) (Alexey Kopytov)
  • Other Changes: None

Bugs Fixed

  • Bug #640576 – The false error “innodb_extra_undoslots option is disabled but it was enabled before” was sometimes reported after upgrading from MySQL 5.0. (Yasufumi Kinoshita)
  • Bug #643463 – Shutting down XtraDB could take up to 10 seconds, even when there was no actual work to do. (Fix contributed by Kristian Nielsen) (Alexey Kopytov)
  • Bug #663757 – On the FreeBSD platform, the “gcc atomic built-in” function isn’t provided, so response_time_distribution now uses the native atomic API on FreeBSD. (Oleg Tsarev)
  • Bug #673426 – Use of some system variables as command-line options caused a crash or undefined behavior. (Oleg Tsarev)
  • Bug #673562 – Debug build was broken due to a failing compile-time assertion in mysqld.cc. (Alexey Kopytov)
  • Bug #673567 – Compiler could produce spurious warnings when building on non-Linux platforms. A check is now made to see if clock_gettime() is present in librt at the configure stage. If yes, -lrt is added to LIBS. (Alexey Kopytov)
  • Bug #673929 – Query cache misses were being reported for some queries when hits were actually occurring. (Oleg Tsarev)
  • Bug #676146 – The development environment test of log_slow_verbosity=innodb on a slave for row-based replication was not working correctly. (Oleg Tsarev)
  • Bug #676147 – The development environment test of option log_slow_slave_statements for row-based replication was not working correctly. (Oleg Tsarev)
  • Bug #676148 – Similar to Bug #676147. A check is now made for the replication type to test. (Oleg Tsarev)

The Release Notes for this and previous releases can be found in our Wiki.

Downloads are available here and from the Percona Release Candidates Repository. The latest source code for Percona Server, including the development branch, can be found on Launchpad.

Please report any bugs found at Bugs in Percona Server.

For general questions, use our Percona Discussions Group, and for development questions our Percona Development Group.

For support, commercial, and sponsorship inquiries, contact Percona.


Entry posted by Fred Linhoss |
2 comments


Dec 13, 2010

Percona XtraBackup 1.5-Beta

Percona XtraBackup 1.5-Beta is now available for download.

This release adds additional functionality to Percona XtraBackup 1.4, the current general availability version of XtraBackup.

This is a beta release.

Functionality Added or Changed

  • Support for MySQL 5.5 databases has been implemented. (Yasufumi Kinoshita)
  • XtraBackup can now be built from the MySQL 5.1.52, MySQL 5.5.7, or Percona Server 5.1.53-12 code bases (fixes bug #683507). (Alexey Kopytov)
  • The program is now distributed as three separate binaries: 
    • xtrabackup – for use with Percona Server with the built-in InnoDB plugin
    • xtrabackup_51 – for use with MySQL 5.0 & 5.1 with built-in InnoDB
    • xtrabackup_55 – for use with MySQL 5.5 (this binary is not provided for the FreeBSD platform)
  • Backing up only specific tables can now be done by specifying them in a file, using the --tables-file option. (Yasufumi Kinoshita & Daniel Nichter)
  • Additional checks were added to monitor the rate at which the log file is being overwritten, to determine whether XtraBackup is keeping up. If the log file is being overwritten faster than XtraBackup can keep up, a warning is given that the backup may be inconsistent. (Yasufumi Kinoshita)
  • The XtraBackup binaries are now compiled with the -O3 gcc option, which may improve backup speed in stream mode in some cases.
  • It is now possible to copy multiple data files concurrently in parallel threads when creating a backup, using the --parallel option. See the xtrabackup Option Reference and Parallel Backups. (Alexey Kopytov)

Bugs Fixed

  • Bug #683507 – xtrabackup has been updated to build from the MySQL 5.1.52, MySQL 5.5.7, or Percona Server 5.1.53-12 code bases. (Alexey Kopytov)

Release Notes for this and previous releases of Percona Xtrabackup can be found in our Wiki.

The latest downloads are available on our website. The latest source code can be found on Launchpad.

Please report any bugs found at Bugs in Percona XtraBackup.

For general questions, use our Percona Discussions Group, and for development questions our Percona Development Group.

For support, commercial, and sponsorship inquiries, contact Percona.


Entry posted by Fred Linhoss |
4 comments


Dec 09, 2010

How well does your table fit in the InnoDB buffer pool?

Understanding how well your tables and indexes fit in the buffer pool is often very helpful for understanding why some queries are I/O bound and others are not – it may be because the tables and indexes they access are not in the cache, for example having been washed away by other queries. The MySQL server does not provide any information of this type; Percona Server, however, adds a number of tables to INFORMATION_SCHEMA which make this information available. It is just a few queries away:

SQL:

  mysql> SELECT `schema` AS table_schema, innodb_sys_tables.name AS table_name,
             innodb_sys_indexes.name AS index_name, cnt, dirty, hashed,
             ROUND(cnt*100/index_size,2) AS fit_pct
         FROM (SELECT index_id, COUNT(*) cnt, SUM(dirty=1) dirty, SUM(hashed=1) hashed
               FROM innodb_buffer_pool_pages_index GROUP BY index_id) bp
         JOIN innodb_sys_indexes ON id=index_id
         JOIN innodb_sys_tables ON table_id=innodb_sys_tables.id
         JOIN innodb_index_stats
           ON innodb_index_stats.table_name=innodb_sys_tables.name
          AND innodb_sys_indexes.name=innodb_index_stats.index_name
          AND innodb_index_stats.table_schema=innodb_sys_tables.`schema`
         ORDER BY cnt DESC LIMIT 20;
  +--------------+--------------+--------------+------+-------+--------+---------+
  | table_schema | table_name   | index_name   | cnt  | dirty | hashed | fit_pct |
  +--------------+--------------+--------------+------+-------+--------+---------+
  | test         | a            | c            | 7976 |     0 |      0 |   13.73 |
  | test         | a            | PRIMARY      |   59 |     0 |      0 |    0.08 |
  | sbtest       | sbtest#P#p1  | PRIMARY      |   22 |     0 |      0 |   22.68 |
  | sbtest       | sbtest#P#p0  | PRIMARY      |   22 |     0 |      0 |   22.68 |
  | sbtest       | sbtest#P#p2  | PRIMARY      |   21 |     0 |      0 |   21.65 |
  | sbtest       | sbtest#P#p3  | PRIMARY      |   18 |     0 |      0 |   18.56 |
  | sbtest       | sbtest#P#p3  | k            |    4 |     0 |      0 |  100.00 |
  | sbtest       | sbtest#P#p2  | k            |    4 |     0 |      0 |  100.00 |
  | sbtest       | sbtest#P#p1  | k            |    4 |     0 |      0 |  100.00 |
  | sbtest       | sbtest#P#p0  | k            |    4 |     0 |      0 |  100.00 |
  | stats        | TABLES       | PRIMARY      |    2 |     0 |      0 |   66.67 |
  | stats        | TABLES       | TABLE_SCHEMA |    1 |     0 |      0 |  100.00 |
  | percona      | transactions | PRIMARY      |    1 |     0 |      0 |  100.00 |
  +--------------+--------------+--------------+------+-------+--------+---------+
  13 rows in set (0.04 sec)

This query shows how many pages are in the buffer pool for a given index (cnt), how many of them are dirty (dirty), and what percentage of the index fits in memory (fit_pct).
For illustration purposes I created a partitioned table, to show that you get the real “physical” table name, which identifies the table down to the partition. This is very helpful for analyzing your
access to partitions – you can actually check whether your “hot” partitions really end up in the cache and the “cold” ones stay out of it, or whether something is pushing them out of the cache.
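If you only want a per-table summary rather than per-index detail, the same INFORMATION_SCHEMA tables can be aggregated one level higher; a sketch (using the same Percona Server tables as the query above):

```sql
-- Sketch: pages cached per table, built on the same I_S tables as above
SELECT innodb_sys_tables.name AS table_name,
       COUNT(*)     AS pages_cached,
       SUM(dirty=1) AS pages_dirty
FROM innodb_buffer_pool_pages_index
JOIN innodb_sys_indexes ON innodb_sys_indexes.id = index_id
JOIN innodb_sys_tables  ON innodb_sys_tables.id  = table_id
GROUP BY innodb_sys_tables.name
ORDER BY pages_cached DESC;
```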

You can use this feature to tune the buffer pool invalidation strategy – for example, play with innodb_old_blocks_pct and innodb_old_blocks_time while actually observing the data stored in the buffer pool, rather than relying on indirect measures.

I often check these stats during warmup to see what is really getting warmed up first, as well as how the buffer pool is affected by batch jobs, ALTER TABLE, OPTIMIZE TABLE, etc. – the lasting impact these may have on system performance is often caused by their impact on the buffer pool, which may take hours to recover.

This tool can also be helpful for capacity planning and performance management. In many cases you will learn that you need a certain buffer pool fit for your tables/indexes for reasonable performance. You could try to calculate it instead, but that can be pretty hard, as there are a lot of variables, including page fill factors, etc.


Entry posted by Peter Zaitsev |
4 comments


Dec 09, 2010

Thinking about running OPTIMIZE on your InnoDB table? Stop!

InnoDB/XtraDB tables do benefit from being reorganized often. You get data physically laid out in primary key order, as well as a better fill factor for primary key and index pages, and so use less space.
It is just that OPTIMIZE TABLE might not be the best way to do it.

If you’re running the InnoDB plugin or Percona Server with XtraDB, you get the benefit of a great new feature – the ability to build indexes by sort instead of by insertion. This process can be a lot faster, especially for large indexes whose inserts arrive in very random order, such as indexes on a UUID column or something similar. It also produces a much better fill factor. The problem is that OPTIMIZE TABLE for InnoDB tables does not take advantage of it, for whatever reason.

Let's take a look at a little benchmark I did by running OPTIMIZE a second time on a table which is some 10 times larger than the amount of memory I allocated for the buffer pool:

SQL:

  CREATE TABLE `a` (
    `id` int(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    `c` char(64) DEFAULT NULL,
    PRIMARY KEY (`id`),
    KEY `c` (`c`)
  ) ENGINE=InnoDB AUTO_INCREMENT=12582913 DEFAULT CHARSET=latin1

  mysql> SELECT * FROM a ORDER BY id LIMIT 10;
  +----+------------------------------------------+
  | id | c                                        |
  +----+------------------------------------------+
  |  1 | 813cf02d7d65de2639014dd1fb574d4c481ecac7 |
  |  2 | 62960f5d5d50651e5a5983dacaedfa9a73a9ee87 |
  |  3 | cea33998792ffe28b16b9272b950102a9633439f |
  |  4 | 8346a7afa0a0791693338d96a07a944874340a1c |
  |  5 | b00faaa432f507a0d16d2940ca8ec36699f141c8 |
  |  6 | 8e00926cf6c9b13dc8e0664a744b7116c5c61036 |
  |  7 | f151fe34b66fd4d28521d5e7ccb68b0d5d81f21b |
  |  8 | 7fceb5afa200a27b81cab45f94903ce04d6f24db |
  |  9 | 0397562dc35b5242842d68de424aa9f0b409d60f |
  | 10 | af8efbaef7010a1a3bfdff6609e5c233c897e1d5 |
  +----+------------------------------------------+
  10 rows in set (0.04 sec)

  # These are just random SHA(1) hashes

  mysql> OPTIMIZE TABLE a;
  +--------+----------+----------+-------------------------------------------------------------------+
  | Table  | Op       | Msg_type | Msg_text                                                          |
  +--------+----------+----------+-------------------------------------------------------------------+
  | test.a | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
  | test.a | optimize | status   | OK                                                                |
  +--------+----------+----------+-------------------------------------------------------------------+
  2 rows in set (3 hours 3 min 35.15 sec)

  mysql> ALTER TABLE a DROP KEY c;
  Query OK, 0 rows affected (0.46 sec)
  Records: 0  Duplicates: 0  Warnings: 0

  mysql> OPTIMIZE TABLE a;
  +--------+----------+----------+-------------------------------------------------------------------+
  | Table  | Op       | Msg_type | Msg_text                                                          |
  +--------+----------+----------+-------------------------------------------------------------------+
  | test.a | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
  | test.a | optimize | status   | OK                                                                |
  +--------+----------+----------+-------------------------------------------------------------------+
  2 rows in set (4 min 5.52 sec)

  mysql> ALTER TABLE a ADD KEY(c);
  Query OK, 0 rows affected (5 min 51.83 sec)
  Records: 0  Duplicates: 0  Warnings: 0

That’s right! Optimizing the table straight away takes over 3 hours, while dropping all indexes besides the primary key, optimizing the table, and adding them back takes about 10 minutes – close to a 20x speed difference – and yields a more compact index in the end.

So if you’re considering running OPTIMIZE on your tables, consider using this trick instead. It is especially handy on a slave, where it is OK for the table to be exposed without indexes for some time.
Note, though, that nothing stops you from using LOCK TABLES on an InnoDB table to ensure a ton of queries don't start reading the table with no indexes and bring the box down.

You can also use this trick for ALTER TABLE operations that require a table rebuild. Dropping all indexes, doing the ALTER, and then adding them back can be a lot faster than a straight ALTER TABLE.
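As a sketch, with a hypothetical table `t` that has a secondary index `idx_c` on column `c` (names are illustrative, not from the benchmark above), the pattern looks like this:

```sql
-- Hypothetical table `t` with secondary index `idx_c` on column `c`;
-- adjust names to your schema.
ALTER TABLE t DROP KEY idx_c;     -- drop all secondary indexes first
OPTIMIZE TABLE t;                 -- rebuild with PK only (fast)
ALTER TABLE t ADD KEY idx_c (c);  -- secondary indexes rebuilt by sort
```

On a busy server you may want to wrap this in LOCK TABLES so queries cannot hammer the table while it has no secondary indexes.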

P.S. I do not know why this was not done when support for creating indexes by sorting was implemented. It looks very strange to me to have this feature implemented while the majority of high-level commands
and tools (like mysqldump) do not take advantage of it and still use the old, slow method of building indexes by insertion.


Entry posted by Peter Zaitsev |
7 comments


Nov 29, 2010

Speaking at the San Francisco Meetup 14-Dec: What’s new in XtraDB/InnoDB-plugin 5.1+

Erin and Mike, organizers of the SF Meetup, generously invited me to speak at the upcoming SF Meetup on Dec-14 about the new features in InnoDB in MySQL 5.1 and 5.5, and what to pay attention to when you upgrade from MySQL 5.0.
Although I am personally mostly in the 5.1->5.5 upgrade area, Erin assured me that upgrading from MySQL 5.0 is still an actual question for many users, so I accepted the invitation. Event details are on the Meetup.com page. So if you are up for free pizza and soda and do not mind listening to me, you are welcome to join!


Entry posted by Vadim |
No comment

Add to: delicious | digg | reddit | netscape | Google Bookmarks
