Jun
28
2013
--

Untethered Realms: Julie Flanders

A continuing series about each author in Untethered Realms (UR), a neat spec-fic publishing group. Check us out:  Facebook | Twitter | Website

 

Julie Flanders

Julie Flanders is a novelist and a freelance writer who has written for both online and print publications. She is an animal lover and shares her home in Cincinnati, Ohio with her dog and cat. Her debut novel Polar Night, a paranormal suspense thriller, is now available from Ink Smith Publishing.

Website | Facebook | Twitter

Tell us about your latest book?
Polar Night is my latest (and actually first) book. It’s a paranormal thriller and my take on a traditional detective story. The story takes place in Alaska and the title refers to the winter phenomenon of 24-hour darkness in the Arctic regions.

Polar Night

What genre(s) do you write/want to write in?
I love to write anything with a paranormal or supernatural twist.

What do you like most about being an author?
I love that I get to share the characters that exist in my head. And I love that writing gives me an opportunity to step out of everyday life and go somewhere else through my imagination.

What do you hate about being an author?
I can honestly say I don’t hate anything. Writing has totally changed my life and has been a gift to me in more ways than I ever could have imagined. But if I had to pick one negative, I would say it has been learning to deal with bad reviews. I am still working on developing a thick skin.

What does Untethered Realms mean to you and why should folks follow UR?
One of my favorite things about writing has been connecting with other authors. I’ve never been a “joiner” and in fact have always been pretty much a loner, but writing has changed that. I was very excited when I was asked to join UR and I’m proud to be part of such a talented group. I encourage readers to follow UR because I think there is something for everyone under the UR umbrella. Anyone who enjoys speculative fiction will find something to love among our books.

What do you enjoy doing when you’re not writing?
I love to read and I’m a total TV junkie. I also love walking and taking my dog on various adventures.

What’s the most fun you’ve had writing a scene or character and why?
I loved writing the character of Aleksei, who is the antagonist in Polar Night. He’s drop-dead gorgeous, so it was fun to imagine him in my mind!

If you could live as one of your characters, whom would you choose and why?
I don’t think I’d want to be any of them. I love to write dark stories and I don’t have any problem putting my characters through all kinds of pain and turmoil. George R.R. Martin is my writing idol so if you’re familiar at all with Game of Thrones or A Song of Ice and Fire you know that’s enough said.

Tell us a secret, something few people know about you!
I used to work in a psychiatric hospital and I was the on-call emergency clinician responsible for assessing suicide and homicide risks. I only did this for about 6 months but it’s safe to say it was quite an experience!

What’s your next book about? Go on, tease us!
My next book is called The Ghosts of Aquinnah and it is set on the island of Martha’s Vineyard, Massachusetts in both the 1880s and the present day. It’s hard to pick a genre for the book but I’d say it is a combination of ghost story, love story, historical fiction, and mystery.


Other Untethered Realms authors: Angela Brown | Christine Rains
 

Jun
28
2013
--

Percona Toolkit 2.2.3 released; bug fixes include pt-online-schema-change

Among all of the excellent events going on, like YAPC, Velocity, and Percona MySQL University, we recently released Percona Toolkit 2.2.3. It’s a small update that includes the following:

  • pt-online-schema-change did not handle the failure of DROP TRIGGER statements correctly
  • Created pt-agent
  • pt-query-digest --output json now includes more data

The pt-online-schema-change issue was of high importance. The fix is for bug 1188002: pt-online-schema-change causes “ERROR 1146 (42S02): Table ‘db._t_new’ doesn’t exist”. This happens when the tool’s triggers cannot be dropped. If that occurred, the tool still dropped the new table, which meant the triggers were referencing a missing table, causing errors on writes to the original table.

This is handled properly now: if triggers can’t be dropped, then the new table isn’t dropped either. This leaves a half-finished new table, but that’s better than write errors on the original table. Those using pt-online-schema-change should definitely upgrade.
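As a rough illustration, a deployment script could gate on the installed version before relying on the fix. This is only a sketch: the hard-coded INSTALLED value is a stand-in for whatever you parse out of `pt-online-schema-change --version`.

```shell
# Hypothetical sketch: check whether an installed Percona Toolkit predates
# the 2.2.3 fix. INSTALLED is hard-coded here for illustration; in practice
# it would come from parsing `pt-online-schema-change --version`.
INSTALLED="2.2.2"
REQUIRED="2.2.3"

# sort -V orders version strings naturally; if the installed version sorts
# first and differs from the required one, it is older than required.
lowest=$(printf '%s\n%s\n' "$INSTALLED" "$REQUIRED" | sort -V | head -n1)
if [ "$lowest" = "$INSTALLED" ] && [ "$INSTALLED" != "$REQUIRED" ]; then
  status="upgrade needed"
else
  status="up to date"
fi
echo "$status"
```

With the example value 2.2.2 this prints “upgrade needed”; `sort -V` handles multi-digit components correctly, which a plain string comparison would not.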

The pt-query-digest --output json change delivers more information. The previous JSON output was minimal; the output as of 2.2.3 is much more extensive and, consequently, has a new structure. This means the 2.2.3 --output json format is not backwards-compatible (but we warned in the first 2.2 release that --output json would be in flux).

See https://launchpad.net/percona-toolkit/+milestone/2.2.3 for the full list of bugs fixed in Percona Toolkit 2.2.3.

Visit http://www.percona.com/software/percona-toolkit to download Percona Toolkit 2.2.3; it’s also in our main repos.

The post Percona Toolkit 2.2.3 released; bug fixes include pt-online-schema-change appeared first on MySQL Performance Blog.

Jun
28
2013
--

2 new features added to Percona XtraDB Cluster (PXC) since 5.5.31

With the last Percona XtraDB Cluster (PXC) release, two major features were added:

* a new command to bootstrap the cluster was added to the init script
* SST via Xtrabackup now supports Xtrabackup 2.1 features

In this post, I want to explain the benefits of these added features and how to use them.

If you follow the MySQL Performance Blog regularly, you’ve already noticed that there are several ways to start a node, and that it’s not always easy to start the node that is supposed to bootstrap the entire cluster.

See:
  • http://www.mysqlperformanceblog.com/2013/01/29/how-to-start-a-percona-xtradb-cluster/
  • http://www.percona.com/doc/percona-xtradb-cluster/manual/bootstrap.html
  • http://www.lefred.be/node/167
  • the new undocumented option --wsrep-new-cluster

With the new option, bootstrap-pxc, there is no need to modify my.cnf or add parameters (when supported) to the init script. Now you can simply bootstrap (start the first node, which will initiate the cluster) using one of these commands:

/etc/init.d/mysql bootstrap-pxc

or

service mysql bootstrap-pxc

Much easier, isn’t it?

The second added feature is support for xbstream from XtraBackup 2.1.

This is useful because it speeds up SST operations: the backup can be performed in parallel, compressed, and compacted (skipping the copying of secondary indexes).

To configure this, you just need to add two sections to my.cnf, [sst] and [xtrabackup], like this:

[sst]
streamfmt=xbstream

[xtrabackup]
compress
compact
parallel=2
compress-threads=2
rebuild-threads=2
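
To sanity-check such a configuration, the settings can be written out and parsed back. This is only an illustration: the temp file below stands in for the real my.cnf, and the awk key lookup is a hypothetical helper, not part of any Percona tool.

```shell
# Hypothetical sketch: write the SST settings above to a scratch file and
# verify the stream format key that wsrep_sst_xtrabackup will look for.
# The temp file is a stand-in for the node's real my.cnf.
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[sst]
streamfmt=xbstream

[xtrabackup]
compress
compact
parallel=2
compress-threads=2
rebuild-threads=2
EOF

# Extract the value of streamfmt (everything after the '=').
streamfmt=$(awk -F= '/^streamfmt/ {print $2}' "$cnf")
echo "SST stream format: $streamfmt"
rm -f "$cnf"
```

If the extracted value is not `xbstream`, the tar-based default path would be used instead and the parallel/compact options would have no effect.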

Here is a small example comparing the effect of these parameters.

We have one database with four 480M InnoDB tables, and we tested XtraBackup SST with the following settings:

  • Blue: XtraBackup with tar (default)
  • Red: XtraBackup with xbstream and 2 threads (the configuration above)
  • Yellow: XtraBackup with xbstream and 4 threads

[Chart: SST duration with xbstream vs. tar]

This quick benchmark was run on a small dataset with tables that have no secondary indexes (so no benefit from compact). I will prepare a more detailed benchmark for a future blog post.

But we can see that the new settings help to “free” the donor faster, while the process on the joiner takes more time.
Which options fit best therefore depends on your constraints and your workload’s requirements.

Remember: to be able to use XtraBackup for SST, you need to add the following to my.cnf:
wsrep_sst_method=xtrabackup
wsrep_sst_auth=user:password

Important: to be able to use xbstream with compression, you need qpress installed on the system and in the mysql user’s path (/usr/bin, for example).

Also, the MySQL datadir must be empty before starting the SST process (see lp:1193240).

If you want to get familiar with these new options, I’ve updated my Puppet recipes, available on GitHub.

Note: for rebuild-threads, the default is $(grep -c processor /proc/cpuinfo), and for qpress the decompression is also done in parallel (with the number of qpress instances equal to $(grep -c processor /proc/cpuinfo)).
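
That default is easy to reproduce; this sketch assumes a Linux host with /proc/cpuinfo and falls back to nproc otherwise:

```shell
# Compute the rebuild-threads default the same way the note above describes:
# one thread per processor listed in /proc/cpuinfo.
nthreads=$(grep -c processor /proc/cpuinfo 2>/dev/null) || nthreads=$(nproc)
echo "rebuild-threads default: $nthreads"
```

On a multi-core joiner this means index rebuilds and qpress decompression already run in parallel out of the box, so overriding rebuild-threads mainly matters when you want to throttle the joiner.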

The post 2 new features added to Percona XtraDB Cluster (PXC) since 5.5.31 appeared first on MySQL Performance Blog.

Jun
27
2013
--

Percona Server 5.6.12-60.4 second Release Candidate now available

Percona Server for MySQL version 5.6.12-60.4

Percona is glad to announce the second Release Candidate of Percona Server 5.6.12-60.4 on June 27th, 2013 (downloads are available here and from the Percona Software Repositories).

Based on the recently released MySQL 5.6.12, and including all of its bug fixes, Percona Server 5.6.12-60.4 is the second RC release in the Percona Server 5.6 series. All of Percona’s software is open source and free. All the details of the release can be found in the 5.6.12-60.4 milestone at Launchpad.

Bugs Fixed:

  • mysql_install_db did not work properly in debug and Valgrind builds. Bug fixed #1179359.
  • Fixed yum dependencies that were causing conflicts in CentOS 6.3 during installation. Bug fixed #1051874.
  • The RPM installer script had the server’s datadir hardcoded to /var/lib/mysql instead of using my_print_defaults function to get the correct datadir info. Bug fixed #1181753.
  • Fixed the upstream bug #68354 that could cause the server to crash when performing an update or join on Federated and MyISAM tables with one row, due to incorrect interaction between the Federated storage engine and the optimizer. Bug fixed #1182572.
  • Fixed the compiler warnings caused by Atomic write support for Fusion-io devices feature when building Percona Server on non-Linux platforms. Bug fixed #1189429.

Other bugs fixed: bug fixed #1188516, bug fixed #1130731 and bug fixed #1133266.

Release notes for Percona Server for MySQL 5.6.12-60.4 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.6.12-60.4 second Release Candidate now available appeared first on MySQL Performance Blog.

Jun
26
2013
--

Talking Drupal #004 – DRUSH

Show Topics

– What is drush?

– Benefits of using drush

– Installing drush

– Security and Running at root

– Where to run drush from

– Useful drush commands

– Site setup with drush

– Scripts and Make files

Links

Drupal – Drush.org

Drupal project – https://drupal.org/project/drush

Hosts

Stephen Cross – www.ParallaxInfoTech.com @stephencross

Jason Pamental – www.designco.com @jpamental

John Picozzi – www.RubicDesign.com @johnpicozzi 

Nic Laflin – www.nLightened.net @nicxvan

Jay Lee – www.ParallaxInfoTech.com @jayrandolphlee


Jun
26
2013
--

Limited disk space? Compact backups with Percona Xtrabackup 2.1

Percona XtraBackup for MySQLOne very interesting feature, “Compact Backup,” is introduced in Percona XtraBackup 2.1. You can run “compact backups” with the  –compact option, which is very useful for those who have limited disk space to keep the database backup. Now let’s first understand how it works. When we are using –compact option with Innobackupex, it will omit the secondary index pages. This will make the backups more compact and this way they will take less space on disk but the downside of this is, the backup prepare process will take longer time because those secondary index pages will be recreated while preparing the backup. Here, we need to consider couple of things before implement it or use it.

  1. Compact backups are not supported for the system tablespace, so the innodb-file-per-table option should be enabled for them to work correctly.
  2. The difference in backup size will depend on the size of the secondary indexes, so you will not see drastic changes in backup size if there are few secondary indexes in the database.

Taking a Compact Backup: We can use the --compact option with innobackupex like this: sudo innobackupex --compact /home/X_Backup/. As it looked like an interesting feature, I tried to test it with some scenarios, which I would like to share. I created a table like this:

Create Table: CREATE TABLE `compact_test` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(50) DEFAULT NULL,
  `city` varchar(25) DEFAULT NULL,
  `pin` int(11) DEFAULT NULL,
  `phone` bigint(20) DEFAULT NULL,
  `mobile` bigint(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1

I added around 10M records and, without any index, took backups with and without the --compact option. I found the results below.

3.9G 2013-06-19_10-21-58 – without --compact, without index – Time: 3 minutes

3.9G 2013-06-19_10-48-44 – with --compact, without index – Time: 3 minutes

1.4G backup.xbstream – compressed backup with xbstream – Time: 2.5 minutes. Total time 5.5 minutes (backup + compress)

For further testing, I added indexes on the compact_test table for the name, city, pin, phone and mobile columns, and found the results below.

4.7G 2013-06-19_11-42-07 – without --compact, with indexes – Time: around 3 minutes

3.9G 2013-06-19_11-55-13 – with --compact, with indexes – Time: around 3 minutes

1.4G backup_with_indexes.xbstream – compressed the above backup with xbstream – Time: 3 minutes. Total time 6 minutes (backup + compress)

So it is now clear that we only benefit from --compact when tables have many indexes; otherwise it is not that useful. Compressing the backup with xbstream afterwards takes more time, but it looks like it’s worth it. One more thing: if you have many indexes and use the --compact option, you can only save the space occupied by the indexes. You can see in the example above that with --compact, the backup size (3.9G) is the same as the backup without indexes.

If you want to check from the backup directory whether a backup was taken with --compact, you can simply check the xtrabackup_checkpoints file; the compact value will be 0 if the --compact option was not used:

backup_type = full-backuped
from_lsn = 0
to_lsn = 9023692002
last_lsn = 9023692002
compact = 1
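
That check can be scripted; the sketch below mocks the xtrabackup_checkpoints contents shown above rather than reading a real backup directory, so the file path is purely illustrative.

```shell
# Hypothetical sketch: parse xtrabackup_checkpoints to see whether a backup
# was taken with --compact. The file contents are mocked from the example
# above; in practice you would point at <backup-dir>/xtrabackup_checkpoints.
ckpt=$(mktemp)
cat > "$ckpt" <<'EOF'
backup_type = full-backuped
from_lsn = 0
to_lsn = 9023692002
last_lsn = 9023692002
compact = 1
EOF

# Split on " = " and pick the value of the compact key.
compact=$(awk -F' = ' '/^compact/ {print $2}' "$ckpt")
if [ "$compact" = "1" ]; then
  echo "compact backup: remember --rebuild-indexes when preparing"
else
  echo "regular backup"
fi
rm -f "$ckpt"
```

This is handy in restore scripts, since a compact backup silently requires the much longer prepare step described below.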

Restoring Compact Backup:

  • Prepare Backup

While preparing both backups with --apply-log, I found that it took around 13 minutes for the compact backup, while it took around 15 seconds for the non-compact backup. I also tried the --use-memory option to give extra memory to the prepare operation, but it didn’t seem to affect the time (http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/innobackupex_option_reference.html#cmdoption-innobackupex--use-memory). There is also a --rebuild-threads option to specify the number of threads used to parallelize the process. I tried it with 3 tables but it didn’t make any difference to the processing time; it’s possible that more tables would make a difference (http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/innobackupex_option_reference.html#cmdoption-innobackupex--rebuild-threads).

4.7G 2013-06-19_11-42-07 – without --compact, prepared the backup – Time: 15 seconds

3.9G 2013-06-19_11-55-13 – with --compact, prepared the backup – Time: 13 minutes

3.9G 2013-06-19_13-08-05 – with --compact and --use-memory=1G, prepared the backup – Time: 13 minutes

As I said, this is the downside of compact backups: they take longer to prepare. The --apply-log output for a compact backup is also slightly different, i.e.:

nilnandan@nil:~$ date && sudo innobackupex --apply-log --rebuild-indexes --use-memory=1GB /home/nilnandan/X_Backup/2013-06-19_13-08-05 && date
Wed Jun 19 14:04:13 IST 2013
[sudo] password for nilnandan:

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona Ireland Ltd 2009-2012. All Rights Reserved.

130619 14:04:17 innobackupex: Starting ibbackup with command: xtrabackup_55 --defaults-file="/home/nilnandan/X_Backup/2013-06-19_13-08-05/backup-my.cnf" --defaults-group="mysqld" --prepare --target-dir=/home/nilnandan/X_Backup/2013-06-19_13-08-05 --use-memory=1GB --tmpdir=/tmp --rebuild-indexes

Starting to expand compacted .ibd files.
130619 14:04:18 InnoDB: Warning: allocated tablespace 14, old maximum was 9
Expanding ./test/compact_test.ibd

130619 14:05:00 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files

130619 14:05:02 InnoDB: Waiting for the background threads to start
[01] Checking if there are indexes to rebuild in table test/compact_test (space id: 14)
[01] Found index name
[01] Found index city
[01] Found index phone
[01] Found index mobile
[01] Rebuilding 4 index(es).
130619 14:17:28 Percona XtraDB (http://www.percona.com) 1.1.8-20.1 started; log sequence number 11016623144

xtrabackup: starting shutdown with innodb_fast_shutdown = 1
130619 14:17:37 InnoDB: Starting shutdown

130619 14:17:42 InnoDB: Shutdown completed; log sequence number 11764348940
130619 14:17:42 innobackupex: completed OK!
Wed Jun 19 14:17:42 IST 2013
nilnandan@nil:~$

  • Restore backup to Data dir

Restoring the compact backup is very simple, just like using the normal innobackupex utility: use the --copy-back option with innobackupex to restore the prepared backup into the database dir.

nilnandan@nil:~/X_Backup$ sudo innobackupex --copy-back /home/nilnandan/X_Backup/2013-06-19_13-08-05

It will copy all the data-related files back to the server’s datadir, as determined by the server’s my.cnf configuration file. I would suggest checking the last line of the output for a success message, i.e.:

130619 14:17:42 innobackupex: completed OK!

Conclusion: As we have seen, a compact backup helps save disk space, but it also slows the prepare process. For people who prioritize disk space over recovery time, compact + xbstream (archive) can be the best solution. It is really just a matter of need.

The post Limited disk space? Compact backups with Percona Xtrabackup 2.1 appeared first on MySQL Performance Blog.

Jun
25
2013
--

TokuMX is MongoDB on steroids

I am actually quite excited about Tokutek’s release of TokuMX. I think it is going to change the landscape of database systems, and it is finally something that made me look into NoSQL.

Why is TokuMX interesting? A few reasons:

  • It comes with transactions, and all the good stuff that transactions provide: concurrent access to documents (no more global write lock as in MongoDB), crash recovery, and atomicity
  • Good performance in IO-bound operations
  • A good compression rate, which is a money-saver if you use SSD/flash
  • It is also SSD/flash lifetime-friendly, which is a double money-saver

Given all these factors, it is just a no-brainer if you have a MongoDB component in your setup. I have actually started to use it as a queue service in a system I’m working on right now, and my experience so far has been quite smooth.

I do have one suggestion for Tokutek: please stop placing all files in a single directory (this goes for both TokuDB and TokuMX). If you have 1,000 databases with 100 tables or collections each, it gets quite messy with well over 100,000 files in a single datadir.

In general I agree with Baron that TokuMX is to MongoDB what InnoDB is to MySQL; the NoSQL world just got a level-up… and it is becoming interesting.

The post TokuMX is MongoDB on steroids appeared first on MySQL Performance Blog.

Jun
24
2013
--

Percona XtraDB Cluster 5.5.31-23.7.5 now available

Percona is glad to announce the release of Percona XtraDB Cluster 5.5.31-23.7.5 on June 25, 2013. Binaries are available from the downloads area or from our software repositories.

New Features:

  • Information about the wsrep sequence number has been added to INFORMATION_SCHEMA.INNODB_TRX table.
  • Percona XtraDB Cluster can now be bootstrapped with the new bootstrap-pxc option in the init script.
  • Percona XtraDB Cluster has implemented parallel copying for rsync State Snapshot Transfer method. Bug fixed #1167331 (Mrten).
  • Percona XtraDB Cluster has implemented new Python version of the clustercheck script.
  • Percona XtraDB Cluster now has better integration with Percona XtraBackup 2.1 by implementing new xbstream option for xtrabackup State Snapshot Transfer method.

Bugs Fixed:

  • Fixed the packaging issues that caused Percona XtraDB Cluster to conflict with MySQL libraries when performing fresh CentOS installation. Bug fixed #1183669.
  • The RPM installer script had the datadir hardcoded to /var/lib/mysql instead of using my_print_defaults to get the correct datadir info. Bug fixed #1172621.
  • Fixed the package conflict for percona-xtradb-cluster-client-5.5 package. Bug fixed #1172621.
  • Fixed the Percona-Server-shared-55 and Percona-XtraDB-Cluster-shared RPM package dependencies. Bug fixed #1050654.
  • mysql_tzinfo_to_sql script failed with Percona XtraDB Cluster since MyISAM DDL/DML statements were not consistently replicated. Fixed by enabling session variable wsrep_replicate_myisam in the generator file. Bug fixed #1161432.
  • Startup script on Debian and Ubuntu was failing prematurely during SST. Bug fixed #1099428.
  • Due to the Debian way of stopping/reloading/status checking, there was a failure after SST since the machine-specific Debian maintenance password didn’t work. Fixed by using signals instead of mysqladmin, as is done in CentOS. Bug fixed #1098360.
  • wsrep_recover would create ibdata1 and ib_logfile* files which the SST after that would remove. Bug fixed #1191767.
  • When running wsrep_recover to recover galera coordinates, LRU recover (and its blocking counterpart) were causing issues. Bug fixed #1108035.
  • Percona XtraDB Cluster improved status visibility in the processlist on other nodes when provider is locked with FLUSH TABLES WITH READ LOCK on one of them. Bug fixed #1109341.

Crash/stalling bugs:

  • Percona Server Crash-Resistant Replication was causing conflicts with wsrep_crash_recovery on a node which is a slave to an async master in standard MySQL replication. Bugs fixed #1182441 and #1180802.
  • Percona XtraDB Cluster node would hang on shutdown when thread_handling was set to pool-of-threads. Bug fixed #1185523.
  • FLUSH TABLES WITH READ LOCK didn’t behave as expected, it had different behavior than the upstream MySQL version. Bug fixed #1170821.
  • When rsync state snapshot transfer failed, the rsync daemon would remain running which caused pid errors on subsequent SST retries. Bug fixed #1169676.
  • When doing cascading Foreign Key processing Percona XtraDB Cluster was doing unnecessary Foreign Key checks, which are needed only for the purpose of getting Foreign Key parent reference for child table modifications. Bug fixed #1134892.
  • High priority threads (slave threads, replaying threads, total order isolation threads) were not protected from kill signal and attempt to kill high priority thread could cause server to hang. Bug fixed #1088267.
  • Galera provider may deadlock if applier thread is still executing apply_trx() while processing commit, it could cause corresponding transaction to be purged from cert index.

Regressions:

  • Previous Percona XtraDB Cluster release introduced the regression in the RPM packaging that prevented the server from restarting following an upgrade. Bug fixed #1168032.

Other bug fixes: bug fixed #1187526, bug fixed #1133266, bug fixed #1079688, bug fixed #1177211, bug fixed #1166423, bug fixed #1084573, bug fixed #1069829, bug fixed #1012138, bug fixed #1170066, bug fixed #1170706, bug fixed #1182187, bug fixed #1183530, bug fixed #1183997, bug fixed #1184034, bug fixed #1191763, bug fixed #1069829.

Based on Percona Server 5.5.31-30.3, including all the bug fixes in it, and on Codership wsrep API 5.5.31-23.7.5, Percona XtraDB Cluster 5.5.31-23.7.5 is now the current stable release. All of Percona’s software is open source and free. Release notes for Percona XtraDB Cluster 5.5.31-23.7.5 are available in our online documentation.

We did our best to eliminate bugs and problems, but this is software, so bugs are expected. If you encounter them, please report them to our bug tracking system.

The post Percona XtraDB Cluster 5.5.31-23.7.5 now available appeared first on MySQL Performance Blog.

Jun
24
2013
--

pt-online-schema-change and binlog_format

Statement-based, row-based, or mixed? We’ve all seen this discussed at length, so I’m not trying to rehash tired arguments. At a high level, the difference is simple:

  1. Statement based replication (SBR) replicates the SQL statements to the slave to be replayed
  2. Row based replication (RBR) replicates the actual rows changed to the slave to be replayed
  3. Mixed mode uses RBR in the event of a non-deterministic statement, otherwise uses SBR

Recently, I worked with a client to optimize their use of pt-online-schema-change and keep replication delay to a minimum. We found that using RBR in conjunction with a smaller chunk-time gave the best results in their environment, due to reduced IO on the slave, but I wanted to recreate the test locally as well to see how it looked in the generic sense (sysbench for data/load).

Here was my local setup:

  1. 3 VMs (CentOS 6.4, 512M RAM, Macbook Pro host)
  2. 1 VM was for sysbench and pt-osc
  3. 1 master, 1 slave (Percona Server 5.5.30, 256M buffer pool)

And here is the base test that I ran:

  1. Populate the db with four 1-million-row tables
  2. Restart MySQL to ensure an empty buffer pool (no LRU dump/restore)
  3. Run pt-osc against the master and capture diagnostics on slave
  4. Repeat with both SBR and RBR

Visually, the most telling difference between the two baselines comes from comparing the slave IOPs when using SBR vs RBR:

Statement Based – IOPs (iostat -mx 1)

Row Based – IOPs (iostat -mx 1)

 

While the write operations look similar in both, you can see the dramatic difference in the read operations: they are almost negligible when using row-based. I had assumed there would be high read IOPs since the buffer pool was empty, so I also verified Innodb_buffer_pool_reads (reads that missed the buffer pool and went to disk) on both:

SBR – Buffer Pool Reads (from disk)

RBR – Buffer Pool Reads (from disk)

 

Looking at how pt-osc and the buffer pool operate, these results make sense for this workload.  Here is the basic process for pt-osc:

  1. Create a new shadow table based on the original table
  2. Apply the alters to the new shadow table
  3. Add triggers to the original table to track changes
  4. Run INSERT … SELECT CHUNK against the original table to populate the shadow table
  5. Rename the tables after all records are copied
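
Step 4 is the part that matters for binlog_format. As a rough sketch, one chunked copy statement has the shape below; the table and column names are invented, and the real tool also adds chunk-index hints, lock-wait and retry handling around it.

```shell
# Illustrative only: the general shape of pt-osc's per-chunk copy in step 4.
# The real statement is generated by the tool; names here are made up.
chunk_sql='INSERT LOW_PRIORITY IGNORE INTO `_tbl_new` (id, payload)
SELECT id, payload FROM `tbl`
WHERE id >= 1 AND id <= 1000   /* one chunk, bounded on the chunk column */
LOCK IN SHARE MODE'
echo "$chunk_sql"
```

Under SBR this whole statement is written to the binlog and re-executed on the slave (so the slave must read the source chunk); under RBR only the resulting rows are logged, which is exactly the difference the IOPs charts above show.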

In the case of SBR, the actual INSERT … SELECT chunk statement is replayed on the slave verbatim, meaning the chunk must be read from disk if it is not in the buffer pool in order to insert it into the new table. However, when using RBR, you simply send the rows to the slave. As this is a new table, there is nothing to read from disk, so InnoDB simply writes the new pages, which eventually get flushed to disk.

This is where the --set-vars switch can come in handy with pt-online-schema-change. Picture this scenario:

  1. The buffer pool is undersized (due to lack of memory compared to data size)
  2. You are altering a table that is mostly archive data (i.e. not hot)
  3. The slave is nearly IO bound already as it is actively serving read traffic

In this case, adding extra read IOPs to the slave could be a performance killer. So, assuming you have binlog_format=mixed on the slave, you can use --set-vars like this to run the alter using RBR and save on IOPs:

pt-online-schema-change --alter="ENGINE=InnoDB" --set-vars="binlog_format=row" --execute h=master,D=db,t=tbl

Keep in mind, RBR isn’t going to give you the same results in the course of normal replication. When replicating changes to the slave, the page is requested from the buffer pool and read from disk if not present, so that the changes can be applied to it. RBR can still be a win (think of a very complicated query that returns only a few rows), but you won’t see as dramatic a difference as you do when using this approach with pt-osc.

The post pt-online-schema-change and binlog_format appeared first on MySQL Performance Blog.
