Sep 04, 2019

Simulating InnoDB Secondary Index Corruption

Working as a support engineer here at Percona is amazing, as you get a variety of requests, ranging from the most trivial questions to ones that require source code review and an understanding of InnoDB internals.

In our world, where High Availability is a must, everything is about being proactive, and when we do need to be reactive we must act fast. To do so we need to ensure we have proper monitoring in place that covers all possible failure scenarios. Unfortunately, that is not always possible, so we are always improving and learning as we face new types of issues.

A few days ago one of our customers faced an issue where MySQL identified an InnoDB secondary index corruption and marked that table as corrupted instead of crashing the server. Even though one would think that a single table marked as corrupted is not as bad as an entire server crash, this particular table was key for the whole application, which in the end caused an outage anyway. They wanted to artificially simulate this situation in order to improve their QA / Monitoring.

The Error

2019-08-26T15:47:10.526458Z 2 [ERROR] InnoDB: Database page corruption on disk or a failed file read of tablespace test/tb1 page [page id: space=48, page number=8]. You may have to recover from a backup.
2019-08-26T15:47:10.526465Z 2 [Note] InnoDB: Page dump in ascii and hex (16384 bytes):
 len 16384; hex dc67f35500000008ffffffff000000090000000000 ... 
InnoDB: End of page dump
2019-08-26T15:47:10.594440Z 2 [Note] InnoDB: Uncompressed page, stored checksum in field1 3697800021, calculated checksums for field1: crc32 3697800020/786197600, innodb 812191569, none 3735928559, stored checksum in field2 3697800020, calculated checksums for field2: crc32 3697800020/786197600, innodb 1337770384, none 3735928559,  page LSN 0 3082690, low 4 bytes of LSN at page end 3082690, page number (if stored to page already) 8, space id (if created with >= MySQL-4.1.1 and stored already) 48
InnoDB: Page may be an index page where index id is 49
2019-08-26T15:47:10.594466Z 2 [Note] InnoDB: Index 49 is `b` in table `test`.`tb1`
2019-08-26T15:47:10.594474Z 2 [Note] InnoDB: It is also possible that your operating system has corrupted its own file cache and rebooting your computer removes the error. If the corrupt page is an index page. You can also try to fix the corruption by dumping, dropping, and reimporting the corrupt table. You can use CHECK TABLE to scan your table for corruption. Please refer to http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html for information about forcing recovery.
2019-08-26T15:47:10.594571Z 2 [ERROR] Got error 126 when reading table './test/tb1'

InnoDB will retry the read 100 times, and if it fails on all of the attempts, the server will assert; otherwise the behavior varies depending on the version you are using: 5.5 will consider the table corrupted, while 5.7 will mark the table as missing in the data dictionary. Either way, from this moment forward, all queries on that particular table will fail.

InnoDB Internals

InnoDB has a defined layout for its on-disk structures (redo log, .ibd files, …).

If you are interested in learning or poking around, Jeremy Cole has a project to diagram most of the internal InnoDB structures at https://github.com/jeremycole/innodb_diagrams, as well as a powerful tool to extract data from InnoDB at https://github.com/jeremycole/innodb_ruby. The intention of this post is to show you how to do it manually, so you can get more exposure to the internals.

To simulate the corruption we will need to read only a small part of each page header. Here is the list of fields and their locations as per the 5.7 source code:

  • Each page is defined by a type – FIL_PAGE_TYPE
  • An index page has a type value of 45bf (or 17855 in decimal) – FIL_PAGE_INDEX
  • The checksum is stored in the first 4 bytes of each page (offset 0) – FIL_PAGE_SPACE_OR_CHKSUM
  • The index ID is stored at byte/offset 28 of each page's data section – PAGE_INDEX_ID
#define FIL_PAGE_TYPE 24            /*!< file page type: FIL_PAGE_INDEX,...,
                                         2 bytes. */

#define FIL_PAGE_INDEX 17855        /*!< B-tree node */

#define FIL_PAGE_SPACE_OR_CHKSUM 0  /*!< in < MySQL-4.0.14 space id the
                                         page belongs to (== 0) but in later
                                         versions the 'new' checksum of the
                                         page */

/**************************************************************//**
Gets the index id field of a page.
@return index id */
UNIV_INLINE
index_id_t
btr_page_get_index_id(
/*==================*/
	const page_t*	page)	/*!< in: index page */
{
	return(mach_read_from_8(page + PAGE_HEADER + PAGE_INDEX_ID));
}

#define PAGE_HEADER FSEG_PAGE_DATA  /* index page header starts at this
                                       offset */

#define FSEG_PAGE_DATA FIL_PAGE_DATA

#define FIL_PAGE_DATA 38U           /*!< start of the data on the page */

#define PAGE_INDEX_ID 28            /* index id where the page belongs.
                                       This field should not be written to after
                                       page creation. */

Source code:
storage/innobase/include/fil0fil.h#L507
storage/innobase/include/fil0fil.h#L575 
storage/innobase/include/fil0fil.h#L483
storage/innobase/include/btr0btr.ic#L109
storage/innobase/include/page0page.h#L56
storage/innobase/include/fsp0types.h#L68
storage/innobase/include/fil0fil.h#L560
storage/innobase/include/page0page.h#L82
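
Putting those definitions together, the index ID sits 38 + 28 = 66 bytes into each page and the page type 24 bytes in. As a quick sanity check before scripting it, you can read a single field by hand with xxd. This is a minimal sketch: the file path and page number are just examples, and a default uncompressed 16KB page size is assumed.

# Quick manual read of two header fields of one page of an .ibd file
FILE=/var/lib/mysql/test/tb1.ibd   # example path, adjust to your datadir
PAGE=3
PAGE_SIZE=16384
# FIL_PAGE_TYPE: 2 bytes at offset 24 of the page; expect 45bf for an index page
xxd -p -l2 -s $((PAGE * PAGE_SIZE + 24)) ${FILE}
# PAGE_INDEX_ID: 8 bytes at offset FIL_PAGE_DATA (38) + PAGE_INDEX_ID (28) = 66
xxd -p -l8 -s $((PAGE * PAGE_SIZE + 38 + 28)) ${FILE}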

To extract the above information from an .ibd file, I will use a small bash script that reads each page from the file and prints the information we need:

#!/bin/bash
# Walk through an .ibd file page by page and print, for each page, its
# FIL_PAGE_TYPE (offset 24), FIL_PAGE_SPACE_OR_CHKSUM (offset 0) and
# PAGE_INDEX_ID (offset 38 + 28). Assumes the default 16KB page size.
FILE=$1

echo "Reading ${FILE}"
PAGE_SIZE=16384
SIZE=$(ls -l ${FILE} | awk '{print $5}')
PAGES=$((SIZE / PAGE_SIZE))
echo "FILE HAS ${PAGES} pages"

# Pages are numbered from 1 here, so page 0 (the FSP header) is skipped and the
# last iteration reads past EOF, printing empty values for the final page number.
for ((page=1; page<=PAGES; page++))
do
  OFFSET=$((page * PAGE_SIZE))
  PAGETYPE_OFFSET=$((OFFSET + 24))        # FIL_PAGE_TYPE
  PAGETYPE=$(xxd -p -l2 -s ${PAGETYPE_OFFSET} ${FILE})
  PAGECHECKSUM_OFFSET=$((OFFSET))         # FIL_PAGE_SPACE_OR_CHKSUM
  PAGECHECKSUM=$(xxd -p -l4 -s ${PAGECHECKSUM_OFFSET} ${FILE})
  INDEXID_OFFSET=$((OFFSET + 38 + 28))    # FIL_PAGE_DATA + PAGE_INDEX_ID
  INDEXID=$(xxd -p -l8 -s ${INDEXID_OFFSET} ${FILE})
  echo "${page} at offset ${OFFSET} has type ${PAGETYPE} and checksum ${PAGECHECKSUM} - ID ${INDEXID}"
done

The last piece of the puzzle is to translate the index ID to our secondary index. To do that we will make use of the information_schema tables:

SELECT i.NAME, HEX(i.INDEX_ID) FROM information_schema.INNODB_SYS_INDEXES as i JOIN information_schema.INNODB_SYS_TABLES t USING (TABLE_ID) WHERE t.NAME = 'DB/TABLE';

Reproducible test case

To reproduce the corruption we will use a test table with one primary key, one secondary index, and 1000 rows:

#create table
mysql -u root -psekret -e 'CREATE TABLE `tb1` (   `ID` int(11) NOT NULL,   `b` varchar(100) DEFAULT NULL,   PRIMARY KEY (`ID`),   KEY `b` (`b`) ) ENGINE=InnoDB' test

#populate table
for i in $(seq 1 1000); do mysql -psekret -e "INSERT INTO test.tb1 VALUES (${i}, 'marcelo_$i')"; done

At this point, InnoDB has all the table pages already loaded into the InnoDB Buffer Pool, so we need to force it to read the pages from disk again. The most efficient way is to add innodb_buffer_pool_load_at_startup=OFF to my.cnf and restart MySQL.
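
For reference, a minimal sketch of that change; the config file path and service name are assumptions that depend on your platform:

# Disable buffer pool reload at startup so the next reads come from disk
echo -e "[mysqld]\ninnodb_buffer_pool_load_at_startup=OFF" | sudo tee -a /etc/my.cnf
sudo systemctl restart mysqld   # or: sudo service mysql restart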

We would now like to check the list of index IDs of the test/tb1 table:

mysql -psekret -e "SELECT i.NAME, HEX(i.INDEX_ID) FROM information_schema.INNODB_SYS_INDEXES as i JOIN information_schema.INNODB_SYS_TABLES t USING (TABLE_ID) WHERE t.NAME = 'test/tb1';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------+-----------------+
| NAME    | HEX(i.INDEX_ID) |
+---------+-----------------+
| PRIMARY | 30              |
| b       | 31              |
+---------+-----------------+

We will be looking for pages which have type 45bf (FIL_PAGE_INDEX) and index ID 31 (hex for 49, the index MySQL named in the error log). It’s time to execute our bash script against the .ibd file:

./read.sh /var/lib/mysql/test/tb1.ibd
Reading /var/lib/mysql/test/tb1.ibd
FILE HAS 10 pages
1 at offset 16384 has type 0005 and checksum 50312eaf - ID 0000000000000000
2 at offset 32768 has type 0003 and checksum 563aeb4c - ID ffffffff0000ffff
3 at offset 49152 has type 45bf and checksum 30ad20b7 - ID 0000000000000030
4 at offset 65536 has type 45bf and checksum 96977741 - ID 0000000000000031
5 at offset 81920 has type 45bf and checksum 5aa506c9 - ID 0000000000000030
6 at offset 98304 has type 45bf and checksum 20f3fe3f - ID 0000000000000030
7 at offset 114688 has type 45bf and checksum 081fbb8f - ID 0000000000000030
8 at offset 131072 has type 45bf and checksum dc67f354 - ID 0000000000000031
9 at offset 147456 has type 45bf and checksum 370eb81c - ID 0000000000000031
10 at offset 163840 has type  and checksum  - ID

Let’s take page eight as the page we will manipulate. It starts at offset 131072 of tb1.ibd, has page type 45bf, its index ID field is 31, and its checksum is dc67f354. To corrupt the page, we will change the checksum field to dc67f355.

printf '\xdc\x67\xf3\x55' | dd of=/var/lib/mysql/test/tb1.ibd bs=1 seek=131072 count=4 conv=notrunc
4+0 records in
4+0 records out
4 bytes (4 B) copied, 0.000121983 s, 32.8 kB/s
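
If you want to confirm the write landed where intended, re-read the checksum field with the same xxd invocation the script uses:

# The first 4 bytes of page 8 (offset 131072) should now read dc67f355
xxd -p -l4 -s 131072 /var/lib/mysql/test/tb1.ibd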

Now, the next time we try to read this particular page, MySQL will detect the corruption because the checksum doesn’t match. We don’t want the server to assert, so we will change the checksum back to its original value right after we attempt to read the table. This way MySQL will only mark this table as corrupted, or as missing in the data dictionary:

mysql -psekret -e "SELECT * FROM test.tb1" & sleep 0.1 && printf '\xdc\x67\xf3\x54' | dd of=/var/lib/mysql/test/tb1.ibd bs=1 seek=131072 count=4 conv=notrunc

That is it! We have provoked an InnoDB secondary index corruption, and all subsequent queries on the table will now fail. Now you can make sure your monitoring tool / QA process covers this type of situation.
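
As a starting point for that monitoring, a minimal sketch could simply watch the error log for the corruption message shown earlier; the log path below is an assumption, adjust it to your log_error setting:

#!/bin/bash
# Alert when InnoDB has reported a page corruption in the error log
LOG=/var/log/mysqld.log
if grep -q "Database page corruption on disk" "${LOG}"; then
    echo "InnoDB page corruption found in ${LOG}" >&2
    exit 1
fi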

If your version of MySQL has only marked the table as corrupted, a simple CHECK TABLE will bring it back to a usable state. If your version of MySQL reports the table as missing from the data dictionary, you can just restart MySQL to force it to re-read the table pages; since we changed the checksum back to its original value, it won’t report any error this time.
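
For example, using the credentials from the test case above:

# Re-validate the table once the on-disk page reads back with a valid checksum
mysql -psekret -e "CHECK TABLE test.tb1"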

Never change any MySQL file manually in a production environment. This can cause unrecoverable corruption and data loss.

Dec 24, 2014

InnoDB crash recovery speed in MySQL 5.6

It has been a while since I have looked at InnoDB crash recovery. A lot has changed in the last few years: we have had serious crash recovery performance improvements in MySQL 5.5 and MySQL 5.6, we have solid state drives rising as the typical high-performance IO subsystem, and we also have the ability to set much larger log files, often with a much larger InnoDB Buffer Pool to work with.

First, let me revisit the challenge we have with InnoDB configuration. For write-intensive workloads it is extremely important to size innodb_log_file_size for good performance; however, the larger the log files, the longer you might have to wait for InnoDB to complete crash recovery, which impacts your recovery strategy.

How much can innodb_log_file_size impact performance? Massively! Doing intensive writes to a database that fits well in memory, I’d say there’s a 10x difference between a combined log file size of 32GB vs 64MB.

Before we look at some performance numbers let’s talk about what InnoDB Crash Recovery time depends on:

Combined InnoDB Log File Size – innodb_log_file_size * innodb_log_files_in_group is what really matters. It does not really matter which of the two you change; I prefer to keep innodb_log_files_in_group at its default and only work with innodb_log_file_size. The larger the size you have allocated, the longer recovery will take.
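
To see what you are actually working with on a running server, a quick check using the two variables just mentioned (a minimal sketch):

# Combined redo log capacity in MB = innodb_log_file_size * innodb_log_files_in_group
mysql -e "SELECT @@innodb_log_file_size * @@innodb_log_files_in_group / 1024 / 1024 AS combined_log_mb"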

innodb_checkpoint_age – The combined size of the InnoDB log files defines how many changes not yet reflected in the tablespace we may have, while innodb_checkpoint_age shows how many changes we actually have at the current moment, making it the actual driving factor of recovery time. If you have very large log files allocated but innodb_checkpoint_age stays low for your workload, chances are recovery will be quick. Be careful, however: intensive writes can cause innodb_checkpoint_age to go much higher than the average for your workload, causing recovery from crashes at that time to take much longer.
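
If your build does not expose a checkpoint-age counter directly, you can derive it from the LOG section of SHOW ENGINE INNODB STATUS ("Log sequence number" minus "Last checkpoint at"); a rough sketch:

# Approximate checkpoint age in bytes from SHOW ENGINE INNODB STATUS
mysql -e "SHOW ENGINE INNODB STATUS\G" | awk '
    /Log sequence number/ { lsn  = $NF }
    /Last checkpoint at/  { ckpt = $NF }
    END { print lsn - ckpt, "bytes of redo not yet checkpointed" }'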

InnoDB Buffer Pool Size – This is another very important factor. During recovery, InnoDB has to redo changes to the unflushed/dirty pages from the buffer pool, which are obviously limited by the buffer pool size. This also means innodb_max_dirty_pages_pct can be used to impact recovery speed, as the number of dirty pages is the true driving factor. With a small buffer pool, or a workload that keeps the number of dirty pages limited, innodb_checkpoint_age may not go high even if you have allowed for a large log space.
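
To see how many dirty pages your workload actually carries, the standard status counters are enough; a minimal sketch:

# Dirty vs. total buffer pool pages, to gauge how much redo work a crash would leave behind
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%'"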

Data Structure matters a lot for recovery speed. Generally, updates to shorter rows mean a longer recovery time for the same log size. This should make sense, as shorter row changes produce less log for the same number of page changes. If you do a lot of blob writes, InnoDB crash recovery can be short even with relatively large log files.

Access Pattern is another key factor: the more “random” the access is, the more distinct pages you will have touched for the same innodb_checkpoint_age, and the longer recovery can take.

Hardware – Better hardware means recovery goes faster, as is obvious. More specifically, you will be looking for storage performance at low concurrency (both reads and writes are important) as well as fast CPU cores; crash recovery at this point is not able to use multiple cores effectively.

Let’s now look at the test….
I am running Sysbench on an 11GB table, designed to fit in the 12GB buffer pool. Here is the exact command:

sysbench  --tx-rate=4000 --num-threads=64 --report-interval=10 --max-time=0 --max-requests=0 --rand-type=uniform --oltp-table-size=40000000 --mysql-user=root --mysql-password=password  --test=/usr/share/doc/sysbench/tests/db/update_index.lua run

The box is a rather low-end i3-4010U (4 threads) CPU with a Samsung EVO 840GB SSD, so numbers are expected to be higher on real server hardware.

In my first test I’m injecting 4000 updates/sec, which is about half of what the box can do at sustained load. I do this to illustrate a more common load scenario, as we rarely run systems at their saturation point in the real world. The uniform distribution should mean a worst-case scenario for in-memory workloads, though I think recovery would be slower still if you do random writes to a database much larger than the amount of memory.

At this workload I’m getting an innodb_checkpoint_age of 15GB, even though the total log file size is 32GB. Crashing the system results in about 40 minutes of recovery time, so the log was processed at about 6.25MB/sec.
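
That throughput figure is just the checkpoint age divided by the recovery time; a back-of-the-envelope check (treating 15GB as 15,000MB):

# 15GB of redo applied in ~40 minutes
echo "scale=2; 15000 / (40 * 60)" | bc   # => 6.25 MB/sec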

Here are some interesting graphs:

As you can see, recovery is essentially bound to a single core. It can also be CPU bound at times (and will be more so with faster storage): at certain points of recovery, when logs are being scanned, it can be completely CPU bound (see how IO wait essentially drops to zero at some points).

Over time, as recovery progresses, more and more blocks become cached, so they do not have to be read from disk for log records to be applied, meaning the workload becomes more and more write bound.

This is an unweighted IO utilization graph, where 1000 corresponds to 100% of the time with at least one IO request outstanding. As you can see from this and the previous graph, InnoDB does not keep the IO subsystem well saturated throughout crash recovery.

In addition to the first sysbench crash test, I did two more. One was running the system completely saturated with updates; this pushed innodb_checkpoint_age as high as 23.2GB, and crash recovery took 1h 20min, showing some 4.8MB/sec. The thing to consider in this case is that MySQL was not able to keep up with purging the history, so it was growing quickly, meaning crash recovery also had to cover a lot of undo space modifications.

Finally, I also did a run with a more skewed Pareto distribution, which resulted in a 9.8GB innodb_checkpoint_age, a 33min crash recovery time, and 4.94MB/sec of log processing speed.

As I explained above, there are a lot of moving parts, so your numbers are likely to be quite different, yet I hope this can provide a reasonable baseline you can use for calculations.

Note also that waiting for the server to recover from the crash is only one way to deal with recovery. Even if you size log files to be very small, you will likely need to deal with operating system boot and warmup, which will take a few minutes. It is often much better to use a different primary method of crash recovery, such as failing over to a MySQL replication slave or using Percona XtraDB Cluster. If you use these methods you can often use quite a large combined InnoDB log file size to optimize for performance.

Final Thoughts: Even though InnoDB crash recovery has improved in MySQL 5.5 and MySQL 5.6, there is still room to improve it even more. As we see from the resource usage graphs during recovery, there is an opportunity both to use multiple CPU cores more effectively and to drive the IO subsystem with higher concurrency and in a more sustained fashion.

