I recently ran a test backup of my “master-slave” setup in VirtualBox while migrating from Percona Server 5.6.12 to version 5.6.13-rel61.0, using Percona XtraBackup v2.2.0 rev. 4885. However, while taking the backup on my slave, I encountered this problem:
[04] Compressing and streaming ./test/checksum.ibd
[01] Compressing and streaming ./mysql/slave_master_info.ibd
Assertion "to_read % cursor->page_size == 0" failed at fil_cur.cc:293
innobackupex: Error: The xtrabackup child process has died at /usr/bin/innobackupex line 2641.
This is related to a bug posted by my colleague George: https://bugs.launchpad.net/percona-xtrabackup/+bug/1177201. Tracing the code to fil_cur.cc at line 293 shows that the failure happens when the assertion below evaluates to false, i.e. when the modulus is non-zero:
xb_a(to_read > 0 && to_read <= 0xFFFFFFFFLL);
xb_a(to_read % cursor->page_size == 0);
npages = (ulint) (to_read >> cursor->page_size_shift);
where xb_a() is defined as:
#define xb_a(expr)							\
	do {								\
		if (!(expr)) {						\
			msg("Assertion \"%s\" failed at %s:%lu\n",	\
			    #expr, __FILE__, (ulong) __LINE__);		\
			abort();					\
		}							\
	} while (0);
at line 29 of common.h. The problem lies in this part:
xb_a(to_read % cursor->page_size == 0);
I modified the code to add some verbosity for the values of to_read and cursor->page_size. Here, to_read corresponds to the size of the *.ibd file being read, and cursor->page_size is the page size XtraBackup determined based on the version of MySQL you are backing up. As of MySQL 5.6.4 (and in Percona Server 5.5), the innodb_page_size variable was introduced, which can be set to 16k, 8k, or 4k. The code I added was just a simple msg() call:
msg("\n\n<<<<<< to_read value is: \"%lld\", value of page size is: \"%llu\", and modulus is: %d\n\n",
    (long long) to_read, cursor->page_size, to_read % cursor->page_size);
This shows as:
<<<<<< to_read value is: "98318", value of page size is: "16384", and modulus is: 14
Assertion "to_read % cursor->page_size == 0" failed at fil_cur.cc:296
innobackupex: Error: The xtrabackup child process has died at /usr/bin/innobackupex line 2641.
This matches the file size of my *.ibd file:
-rw-rw---- 1 mysql mysql 98318 Nov 6 12:03 slave_master_info.ibd
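The arithmetic checks out. A minimal shell sketch, using the file size and page size from the output above, confirms where those 14 bytes come from:

```shell
# Values taken from the instrumented msg() output above:
# a 98318-byte file is not an exact multiple of the 16384-byte page size.
size=98318
page_size=16384
echo "full pages: $((size / page_size)), leftover bytes: $((size % page_size))"
# -> full pages: 6, leftover bytes: 14
```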
Still, I had no clue how those 14 bytes were added to the tablespace or what caused it (my bad: I was not able to keep a copy of the file for further investigation). I searched for a relevant bug report and could only find bug #67963, posted by Jeremy Cole, which might be related to the root cause of those 14 bytes. Over the past few days I made several attempts to corrupt the tablespaces of other tables in the same way, but still had no luck (any comments on this would be awesome!). My clues point to some random 14 bytes of garbage being inserted into the tablespace, and on that basis I could not point to Percona XtraBackup as the culprit. To fix the problem, I ran an ALTER TABLE statement:
mysql> alter table mysql.slave_master_info engine=InnoDB;
Query OK, 1 row affected (0.36 sec)
Records: 1  Duplicates: 0  Warnings: 0
Which then shows as…
-rw-rw---- 1 mysql mysql 98304 Nov 6 12:17 slave_master_info.ibd
where…
#> echo "scale=5;98304/16384"|bc
6.00000
…is evenly divisible by 16KiB, which shows that the tablespace is now aligned to the desired (default) page size. I discussed this with our developers, particularly George and Aleks, who confirmed that this can serve as a workaround when such a case is encountered. Basically, the ALTER TABLE statement (I presume OPTIMIZE TABLE might work as well) defragments the tablespace of the affected *.ibd file, which removes the unwanted bytes: ALTER TABLE creates a copy of the original slave_master_info table, destroys the garbage by deleting the old copy, and renames the new copy back to the table's original name. If your innodb_page_size value is set to 4KiB or 8KiB, XtraBackup will identify this via your my.cnf file. You can verify this by changing it from 4KiB to 8KiB and running:
# xtrabackup_56 --help
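As a quick way to see which value XtraBackup will pick up, you can also read innodb_page_size straight out of the configuration file. A small sketch (the /etc/my.cnf path is an assumption; adjust it to your setup):

```shell
# Extract innodb_page_size from my.cnf; the server defaults to 16384 when unset.
CNF=/etc/my.cnf   # assumed location; adjust to your environment
page_size=$(awk -F= '/^[[:space:]]*innodb_page_size[[:space:]]*=/ {gsub(/[[:space:]]/, "", $2); print $2}' "$CNF")
echo "innodb_page_size: ${page_size:-16384 (default)}"
```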
As a supplementary pre-check to see whether your *.ibd files are aligned with your page size, you can use awk:
find . -name "*.ibd" -exec ls -alt {} \; | awk '{print $9 ": " $5 " mod of 16384 is: " $5 % 16384}'
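Along the same lines, here is a small shell sketch that prints only the misaligned files; the 16KiB page size and the datadir argument are assumptions to adjust for your setup:

```shell
#!/bin/sh
# Print only the .ibd files whose size is NOT an exact multiple of
# the InnoDB page size (16KiB assumed; change PAGE_SIZE to match your server).
PAGE_SIZE=16384
DATADIR="${1:-.}"   # pass your datadir as the first argument

find "$DATADIR" -name '*.ibd' | while read -r f; do
    size=$(( $(wc -c < "$f") ))      # arithmetic strips any padding from wc
    if [ $((size % PAGE_SIZE)) -ne 0 ]; then
        echo "$f: $size bytes, remainder $((size % PAGE_SIZE))"
    fi
done
```

A clean run prints nothing; any line of output names a tablespace file that would trip the assertion.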
or you can check through INFORMATION_SCHEMA:
SELECT NAME, PAGE_SIZE, (PAGE_SIZE%16384) MOD_RES FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES;
However, there’s one thing I haven’t tried yet! Since XtraBackup has the ability to read from your *.cnf file, I am not sure whether the bug also occurs with prior versions, where innodb_page_size could only be changed in the codebase, i.e. through innobase/include/univ.i. If the backup fails, it could mean that one of your *.ibd files is not within space bounds; by running the pre-check above, you can get the list of tablespace files that are not aligned to your InnoDB page size.
Finally, if you find other workarounds to alleviate this failed-assertion bug, please post them in the comments below. I will investigate this issue further in the future, and will also try it with older versions of MySQL. Thank you!
The post Percona XtraBackup – A workaround to the failed assertion bug appeared first on MySQL Performance Blog.